pika-0.11.0 top-level configuration files
=========================================

``.checkignore``::

    **/docs
    **/examples
    **/test
    **/utils
    setup.py

``.codeclimate.yml``::

    languages:
      - python
    exclude_paths:
      - docs/*
      - tests/*
      - utils/*
      - pika/examples/*
      - pika/spec.py

``.coveragerc``::

    [run]
    branch = True

    [report]
    omit = pika/spec.py

``.gitignore``::

    *.pyc
    *~
    .idea
    .coverage
    .tox
    .DS_Store
    .python-version
    pika.iml
    codegen
    pika.egg-info
    examples/pika
    examples/blocking/pika
    atlassian*xml
    build
    dist
    docs/_build
    *.conf.in
    venvs/

``.travis.yml``::

    language: python
    python:
      - 2.6
      - 2.7
      - 3.3
      - 3.4
      - 3.5
      - 3.6
    before_install:
      - sudo add-apt-repository "deb http://us.archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse"
      - sudo add-apt-repository "deb http://us.archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse"
      - sudo apt-get update -qq
      - sudo apt-get install libev-dev/trusty
    install:
      - which -a python
      - python --version
      - which pip
      - pip --version
      - if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]]; then pip install unittest2 ordereddict; fi
      - if [[ $TRAVIS_PYTHON_VERSION != '2.6' ]]; then pip install pyev; fi
      - pip install -r test-requirements.txt
      - pip freeze
    services:
      - rabbitmq
    before_script:
      - sudo rabbitmqctl status
    script:
      - nosetests
    after_success:
      - codecov
    deploy:
      distributions: sdist bdist_wheel
      provider: pypi
      user: crad
      on:
        python: 2.7
        tags: true
        all_branches: true
      password:
        secure: "V/JTU/X9C6uUUVGEAWmWWbmKW7NzVVlC/JWYpo05Ha9c0YV0vX4jOfov2EUAphM0WwkD/MRhz4dq3kCU5+cjHxR3aTSb+sbiElsCpaciaPkyrns+0wT5MCMO29Lpnq2qBLc1ePR1ey5aTWC/VibgFJOL7H/3wyvukL6ZaCnktYk="

CHANGELOG.rst
=============

0.11.0 2017-07-29
-----------------

`GitHub milestone `_

- Simplify Travis CI configuration for OS X.
- Add `asyncio` connection adapter for Python 3.4 and newer.
- Connection failures that occur after the socket is opened and before the AMQP connection is ready to go are now reported by calling the connection error callback. Previously these were not consistently reported.
- In BaseConnection.close, call _handle_ioloop_stop only if the connection is already closed to allow the asynchronous close operation to complete gracefully.
- Pass error information from failed socket connection to user callbacks on_open_error_callback and on_close_callback with result_code=-1.
- ValueError is raised when a completion callback is passed to an asynchronous (nowait) Channel operation. It's an application error to pass a non-None completion callback with an asynchronous request, because this callback can never be serviced in the asynchronous scenario.
- `Channel.basic_reject` fixed to allow `delivery_tag` to be of type `long` as well as `int`. (by quantum5)
- Implemented support for blocked connection timeouts in `pika.connection.Connection`. This feature is available to all pika adapters. See `pika.connection.ConnectionParameters` docstring to learn more about `blocked_connection_timeout` configuration.
- Deprecated the `heartbeat_interval` arg in `pika.ConnectionParameters` in favor of the `heartbeat` arg for consistency with the other connection parameters classes `pika.connection.Parameters` and `pika.URLParameters`.
- When the `port` arg is not set explicitly in `ConnectionParameters` constructor, but the `ssl` arg is set explicitly, then set the port value to the default AMQP SSL port if SSL is enabled, otherwise to the default AMQP plaintext port.
- `URLParameters` will raise ValueError if a non-empty URL scheme other than {amqp | amqps | http | https} is specified.
- `InvalidMinimumFrameSize` and `InvalidMaximumFrameSize` exceptions are deprecated. pika.connection.Parameters.frame_max property setter now raises the standard `ValueError` exception when the value is out of bounds.
- Removed deprecated parameter `type` in `Channel.exchange_declare` and `BlockingChannel.exchange_declare` in favor of the `exchange_type` arg that doesn't overshadow the builtin `type` keyword.
- Channel.close() on OPENING channel transitions it to CLOSING instead of raising ChannelClosed.
- Channel.close() on CLOSING channel raises `ChannelAlreadyClosing`; used to raise `ChannelClosed`.
- Connection.channel() raises `ConnectionClosed` if connection is not in OPEN state.
- When performing graceful close on a channel and `Channel.Close` from broker arrives while waiting for CloseOk, don't release the channel number until CloseOk arrives to avoid race condition that may lead to a new channel receiving the CloseOk that was destined for the closing channel.
- The `backpressure_detection` option of `ConnectionParameters` and `URLParameters` property is DEPRECATED in favor of `Connection.Blocked` and `Connection.Unblocked`. See `Connection.add_on_connection_blocked_callback`.
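A minimal sketch of the 0.11.0 connection-parameter changes listed above (host and timeout values are illustrative placeholders; the import is guarded so the snippet is harmless where pika is not installed):

```python
# Illustrative sketch only; host and timeout values are placeholders.
try:
    import pika  # assumes pika >= 0.11 is available

    # `heartbeat` supersedes the deprecated `heartbeat_interval` arg, and
    # `blocked_connection_timeout` caps how long the connection may remain
    # in the broker's Connection.Blocked state before being torn down.
    params = pika.ConnectionParameters(
        host="localhost",
        heartbeat=600,
        blocked_connection_timeout=300,
    )
    timeout = params.blocked_connection_timeout
except ImportError:
    params = None
    timeout = None  # pika not installed; nothing to demonstrate

print(timeout)
```

Because the timeout is implemented in `pika.connection.Connection`, passing these parameters to any adapter (e.g. `pika.BlockingConnection(params)`) behaves the same way.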
0.10.0 2015-09-02
-----------------

`0.10.0 `_

- a9bf96d - LibevConnection: Fixed dict chgd size during iteration (Michael Laing)
- 388c55d - SelectConnection: Fixed KeyError exceptions in IOLoop timeout executions (Shinji Suzuki)
- 4780de3 - BlockingConnection: Add support to make BlockingConnection a Context Manager (@reddec)

0.10.0b2 2015-07-15
-------------------

- f72b58f - Fixed failure to purge _ConsumerCancellationEvt from BlockingChannel._pending_events during basic_cancel. (Vitaly Kruglikov)

0.10.0b1 2015-07-10
-------------------

High-level summary of notable changes:

- Change to 3-Clause BSD License
- Python 3.x support
- Over 150 commits from 19 contributors
- Refactoring of SelectConnection ioloop
- This major release contains certain non-backward-compatible API changes as well as significant performance improvements in the `BlockingConnection` adapter.
- Non-backward-compatible changes in `Channel.add_on_return_callback` callback's signature.
- The `AsyncoreConnection` adapter was retired

**Details**

Python 3.x: this release introduces python 3.x support. Tested on Python 3.3 and 3.4.

`AsyncoreConnection`: Retired this legacy adapter to reduce maintenance burden; the recommended replacement is the `SelectConnection` adapter.

`SelectConnection`: ioloop was refactored for compatibility with other ioloops.

`Channel.add_on_return_callback`: The callback is now passed the individual parameters channel, method, properties, and body instead of a tuple of those values for congruence with other similar callbacks.

`BlockingConnection`: This adapter underwent a makeover under the hood and gained significant performance improvements as well as enhanced timer resolution. It is now implemented as a client of the `SelectConnection` adapter.
Below is an overview of the `BlockingConnection` and `BlockingChannel` API changes:

- Recursion: the new implementation eliminates callback recursion that sometimes blew out the stack in the legacy implementation (e.g., publish -> consumer_callback -> publish -> consumer_callback, etc.). While `BlockingConnection.process_data_events` and `BlockingConnection.sleep` may still be called from the scope of the blocking adapter's callbacks in order to process pending I/O, additional callbacks will be suppressed whenever `BlockingConnection.process_data_events` and `BlockingConnection.sleep` are nested in any combination; in that case, the callback information will be buffered and dispatched once nesting unwinds and control returns to the level-zero dispatcher.
- `BlockingConnection.connect`: this method was removed in favor of the constructor as the only way to establish connections; this reduces maintenance burden, while improving reliability of the adapter.
- `BlockingConnection.process_data_events`: added the optional parameter `time_limit`.
- `BlockingConnection.add_on_close_callback`: removed; legacy raised `NotImplementedError`.
- `BlockingConnection.add_on_open_callback`: removed; legacy raised `NotImplementedError`.
- `BlockingConnection.add_on_open_error_callback`: removed; legacy raised `NotImplementedError`.
- `BlockingConnection.add_backpressure_callback`: not supported
- `BlockingConnection.set_backpressure_multiplier`: not supported
- `BlockingChannel.add_on_flow_callback`: not supported; per docstring in channel.py: "Note that newer versions of RabbitMQ will not issue this but instead use TCP backpressure".
- `BlockingChannel.flow`: not supported
- `BlockingChannel.force_data_events`: removed as it is no longer necessary following redesign of the adapter.
- Removed the `nowait` parameter from `BlockingChannel` methods, forcing `nowait=False` (former API default) in the implementation; this is more suitable for the blocking nature of the adapter and its error-reporting strategy; this concerns the following methods: `basic_cancel`, `confirm_delivery`, `exchange_bind`, `exchange_declare`, `exchange_delete`, `exchange_unbind`, `queue_bind`, `queue_declare`, `queue_delete`, and `queue_purge`.
- `BlockingChannel.basic_cancel`: returns a sequence instead of None; for a `no_ack=True` consumer, `basic_cancel` returns a sequence of pending messages that arrived before broker confirmed the cancellation.
- `BlockingChannel.consume`: added new optional kwargs `arguments` and `inactivity_timeout`. Also, raises ValueError if the consumer creation parameters don't match those used to create the existing queue consumer generator, if any; this happens when you break out of the consume loop, then call `BlockingChannel.consume` again with different consumer-creation args without first cancelling the previous queue consumer generator via `BlockingChannel.cancel`. The legacy implementation would silently resume consuming from the existing queue consumer generator even if the subsequent `BlockingChannel.consume` was invoked with a different queue name, etc.
- `BlockingChannel.cancel`: returns 0; the legacy implementation tried to return the number of requeued messages, but this number was not accurate as it didn't include the messages returned by the Channel class; this count is not generally useful, so returning 0 is a reasonable replacement.
- `BlockingChannel.open`: removed in favor of having a single mechanism for creating a channel (`BlockingConnection.channel`); this reduces maintenance burden, while improving reliability of the adapter.
- `BlockingChannel.basic_publish`: always returns True when delivery confirmation is not enabled (publisher-acks = off); the legacy implementation returned a bool in this case if `mandatory=True` to indicate whether the message was delivered; however, this was non-deterministic, because Basic.Return is asynchronous and there is no way to know how long to wait for it or its absence. The legacy implementation returned None when publishing with publisher-acks = off and `mandatory=False`. The new implementation always returns True when publishing while publisher-acks = off.
- `BlockingChannel.publish`: a new alternate method (vs. `basic_publish`) for publishing a message with more detailed error reporting via UnroutableError and NackError exceptions.
- `BlockingChannel.start_consuming`: raises pika.exceptions.RecursionError if called from the scope of a `BlockingConnection` or `BlockingChannel` callback.
- `BlockingChannel.get_waiting_message_count`: new method; returns the number of messages that may be retrieved from the current queue consumer generator via `BasicChannel.consume` without blocking.

**Commits**

- 5aaa753 - Fixed SSL import and removed no_ack=True in favor of explicit AMQP message handling based on deferreds (skftn)
- 7f222c2 - Add checkignore for codeclimate (Gavin M. Roy)
- 4dec370 - Implemented BlockingChannel.flow; Implemented BlockingConnection.add_on_connection_blocked_callback; Implemented BlockingConnection.add_on_connection_unblocked_callback. (Vitaly Kruglikov)
- 4804200 - Implemented blocking adapter acceptance test for exchange-to-exchange binding. Added rudimentary validation of BasicProperties passthru in blocking adapter publish tests. Updated CHANGELOG. (Vitaly Kruglikov)
- 4ec07fd - Fixed sending of data in TwistedProtocolConnection (Vitaly Kruglikov)
- a747fb3 - Remove my copyright from forward_server.py test utility. (Vitaly Kruglikov)
- 94246d2 - Return True from basic_publish when pubacks is off. Implemented more blocking adapter accceptance tests. (Vitaly Kruglikov)
- 3ce013d - PIKA-609 Wait for broker to dispatch all messages to client before cancelling consumer in TestBasicCancelWithNonAckableConsumer and TestBasicCancelWithAckableConsumer (Vitaly Kruglikov)
- 293f778 - Created CHANGELOG entry for release 0.10.0. Fixed up callback documentation for basic_get, basic_consume, and add_on_return_callback. (Vitaly Kruglikov)
- 16d360a - Removed the legacy AsyncoreConnection adapter in favor of the recommended SelectConnection adapter. (Vitaly Kruglikov)
- 240a82c - Defer creation of poller's event loop interrupt socket pair until start is called, because some SelectConnection users (e.g., BlockingConnection adapter) don't use the event loop, and these sockets would just get reported as resource leaks. (Vitaly Kruglikov)
- aed5cae - Added EINTR loops in select_connection pollers. Addressed some pylint findings, including an error or two. Wrap socket.send and socket.recv calls in EINTR loops Use the correct exception for socket.error and select.error and get errno depending on python version. (Vitaly Kruglikov)
- 498f1be - Allow passing exchange, queue and routing_key as text, handle short strings as text in python3 (saarni)
- 9f7f243 - Restored basic_consume, basic_cancel, and add_on_cancel_callback (Vitaly Kruglikov)
- 18c9909 - Reintroduced BlockingConnection.process_data_events. (Vitaly Kruglikov)
- 4b25cb6 - Fixed BlockingConnection/BlockingChannel acceptance and unit tests (Vitaly Kruglikov)
- bfa932f - Facilitate proper connection state after BasicConnection._adapter_disconnect (Vitaly Kruglikov)
- 9a09268 - Fixed BlockingConnection test that was failing with ConnectionClosed error. (Vitaly Kruglikov)
- 5a36934 - Copied synchronous_connection.py from pika-synchronous branch Fixed pylint findings Integrated SynchronousConnection with the new ioloop in SelectConnection Defined dedicated message classes PolledMessage and ConsumerMessage and moved from BlockingChannel to module-global scope. Got rid of nowait args from BlockingChannel public API methods Signal unroutable messages via UnroutableError exception. Signal Nack'ed messages via NackError exception. These expose more information about the failure than legacy basic_publich API. Removed set_timeout and backpressure callback methods Restored legacy `is_open`, etc. property names (Vitaly Kruglikov)
- 6226dc0 - Remove deprecated --use-mirrors (Gavin M. Roy)
- 1a7112f - Raise ConnectionClosed when sending a frame with no connection (#439) (Gavin M. Roy)
- 9040a14 - Make delivery_tag non-optional (#498) (Gavin M. Roy)
- 86aabc2 - Bump version (Gavin M. Roy)
- 562075a - Update a few testing things (Gavin M. Roy)
- 4954d38 - use unicode_type in blocking_connection.py (Antti Haapala)
- 133d6bc - Let Travis install ordereddict for Python 2.6, and ttest 3.3, 3.4 too. (Antti Haapala)
- 0d2287d - Pika Python 3 support (Antti Haapala)
- 3125c79 - SSLWantRead is not supported before python 2.7.9 and 3.3 (Will)
- 9a9c46c - Fixed TestDisconnectDuringConnectionStart: it turns out that depending on callback order, it might get either ProbableAuthenticationError or ProbableAccessDeniedError. (Vitaly Kruglikov)
- cd8c9b0 - A fix the write starvation problem that we see with tornado and pika (Will)
- 8654fbc - SelectConnection - make interrupt socketpair non-blocking (Will)
- 4f3666d - Added copyright in forward_server.py and fixed NameError bug (Vitaly Kruglikov)
- f8ebbbc - ignore docs (Gavin M. Roy)
- a344f78 - Updated codeclimate config (Gavin M. Roy)
- 373c970 - Try and fix pathing issues in codeclimate (Gavin M. Roy)
- 228340d - Ignore codegen (Gavin M. Roy)
- 4db0740 - Add a codeclimate config (Gavin M. Roy)
- 7e989f9 - Slight code re-org, usage comment and better naming of test file. (Will)
- 287be36 - Set up _kqueue member of KQueuePoller before calling super constructor to avoid exception due to missing _kqueue member. Call `self._map_event(event)` instead of `self._map_event(event.filter)`, because `KQueuePoller._map_event()` assumes it's getting an event, not an event filter. (Vitaly Kruglikov)
- 62810fb - Fix issue #412: reset BlockingConnection._read_poller in BlockingConnection._adapter_disconnect() to guard against accidental access to old file descriptor. (Vitaly Kruglikov)
- 03400ce - Rationalise adapter acceptance tests (Will)
- 9414153 - Fix bug selecting non epoll poller (Will)
- 4f063df - Use user heartbeat setting if server proposes none (Pau Gargallo)
- 9d04d6e - Deactivate heartbeats when heartbeat_interval is 0 (Pau Gargallo)
- a52a608 - Bug fix and review comments. (Will)
- e3ebb6f - Fix incorrect x-expires argument in acceptance tests (Will)
- 294904e - Get BlockingConnection into consistent state upon loss of TCP/IP connection with broker and implement acceptance tests for those cases. (Vitaly Kruglikov)
- 7f91a68 - Make SelectConnection behave like an ioloop (Will)
- dc9db2b - Perhaps 5 seconds is too agressive for travis (Gavin M. Roy)
- c23e532 - Lower the stuck test timeout (Gavin M. Roy)
- 1053ebc - Late night bug (Gavin M. Roy)
- cd6c1bf - More BaseConnection._handle_error cleanup (Gavin M. Roy)
- a0ff21c - Fix the test to work with Python 2.6 (Gavin M. Roy)
- 748e8aa - Remove pypy for now (Gavin M. Roy)
- 1c921c1 - Socket close/shutdown cleanup (Gavin M. Roy)
- 5289125 - Formatting update from PR (Gavin M. Roy)
- d235989 - Be more specific when calling getaddrinfo (Gavin M. Roy)
- b5d1b31 - Reflect the method name change in pika.callback (Gavin M. Roy)
- df7d3b7 - Cleanup BlockingConnection in a few places (Gavin M. Roy)
- cd99e1c - Rename method due to use in BlockingConnection (Gavin M. Roy)
- 7e0d1b3 - Use google style with yapf instead of pep8 (Gavin M. Roy)
- 7dc9bab - Refactor socket writing to not use sendall #481 (Gavin M. Roy)
- 4838789 - Dont log the fd #521 (Gavin M. Roy)
- 765107d - Add Connection.Blocked callback registration methods #476 (Gavin M. Roy)
- c15b5c1 - Fix _blocking typo pointed out in #513 (Gavin M. Roy)
- 759ac2c - yapf of codegen (Gavin M. Roy)
- 9dadd77 - yapf cleanup of codegen and spec (Gavin M. Roy)
- ddba7ce - Do not reject consumers with no_ack=True #486 #530 (Gavin M. Roy)
- 4528a1a - yapf reformatting of tests (Gavin M. Roy)
- e7b6d73 - Remove catching AttributError (#531) (Gavin M. Roy)
- 41ea5ea - Update README badges [skip ci] (Gavin M. Roy)
- 6af987b - Add note on contributing (Gavin M. Roy)
- 161fc0d - yapf formatting cleanup (Gavin M. Roy)
- edcb619 - Add PYPY to travis testing (Gavin M. Roy)
- 2225771 - Change the coverage badge (Gavin M. Roy)
- 8f7d451 - Move to codecov from coveralls (Gavin M. Roy)
- b80407e - Add confirm_delivery to example (Andrew Smith)
- 6637212 - Update base_connection.py (bstemshorn)
- 1583537 - #544 get_waiting_message_count() (markcf)
- 0c9be99 - Fix #535: pass expected reply_code and reply_text from method frame to Connection._on_disconnect from Connection._on_connection_closed (Vitaly Kruglikov)
- d11e73f - Propagate ConnectionClosed exception out of BlockingChannel._send_method() and log ConnectionClosed in BlockingConnection._on_connection_closed() (Vitaly Kruglikov)
- 63d2951 - Fix #541 - make sure connection state is properly reset when BlockingConnection._check_state_on_disconnect raises ConnectionClosed. This supplements the previously-merged PR #450 by getting the connection into consistent state. (Vitaly Kruglikov)
- 71bc0eb - Remove unused self.fd attribute from BaseConnection (Vitaly Kruglikov)
- 8c08f93 - PIKA-532 Removed unnecessary params (Vitaly Kruglikov)
- 6052ecf - PIKA-532 Fix bug in BlockingConnection._handle_timeout that was preventing _on_connection_closed from being called when not closing. (Vitaly Kruglikov)
- 562aa15 - pika: callback: Display exception message when callback fails. (Stuart Longland)
- 452995c - Typo fix in connection.py (Andrew)
- 361c0ad - Added some missing yields (Robert Weidlich)
- 0ab5a60 - Added complete example for python twisted service (Robert Weidlich)
- 4429110 - Add deployment and webhooks (Gavin M. Roy)
- 7e50302 - Fix has_content style in codegen (Andrew Grigorev)
- 28c2214 - Fix the trove categorization (Gavin M. Roy)
- de8b545 - Ensure frames can not be interspersed on send (Gavin M. Roy)
- 8fe6bdd - Fix heartbeat behaviour after connection failure. (Kyösti Herrala)
- c123472 - Updating BlockingChannel.basic_get doc (it does not receive a callback like the rest of the adapters) (Roberto Decurnex)
- b5f52fb - Fix number of arguments passed to _on_return callback (Axel Eirola)
- 765139e - Lower default TIMEOUT to 0.01 (bra-fsn)
- 6cc22a5 - Fix confirmation on reconnects (bra-fsn)
- f4faf0a - asynchronous publisher and subscriber examples refactored to follow the StepDown rule (Riccardo Cirimelli)

0.9.14 - 2014-07-11
-------------------

`0.9.14 `_

- 57fe43e - fix test to generate a correct range of random ints (ml)
- 0d68dee - fix async watcher for libev_connection (ml)
- 01710ad - Use default username and password if not specified in URLParameters (Sean Dwyer)
- fae328e - documentation typo (Jeff Fein-Worton)
- afbc9e0 - libev_connection: reset_io_watcher (ml)
- 24332a2 - Fix the manifest (Gavin M. Roy)
- acdfdef - Remove useless test (Gavin M. Roy)
- 7918e1a - Skip libev tests if pyev is not installed or if they are being run in pypy (Gavin M. Roy)
- bb583bf - Remove the deprecated test (Gavin M. Roy)
- aecf3f2 - Don't reject a message if the channel is not open (Gavin M. Roy)
- e37f336 - Remove UTF-8 decoding in spec (Gavin M. Roy)
- ddc35a9 - Update the unittest to reflect removal of force binary (Gavin M. Roy)
- fea2476 - PEP8 cleanup (Gavin M. Roy)
- 9b97956 - Remove force_binary (Gavin M. Roy)
- a42dd90 - Whitespace required (Gavin M. Roy)
- 85867ea - Update the content_frame_dispatcher tests to reflect removal of auto-cast utf-8 (Gavin M. Roy)
- 5a4bd5d - Remove unicode casting (Gavin M. Roy)
- efea53d - Remove force binary and unicode casting (Gavin M. Roy)
- e918d15 - Add methods to remove deprecation warnings from asyncore (Gavin M. Roy)
- 117f62d - Add a coveragerc to ignore the auto generated pika.spec (Gavin M. Roy)
- 52f4485 - Remove pypy tests from travis for now (Gavin M. Roy)
- c3aa958 - Update README.rst (Gavin M. Roy)
- 3e2319f - Delete README.md (Gavin M. Roy)
- c12b0f1 - Move to RST (Gavin M. Roy)
- 704f5be - Badging updates (Gavin M. Roy)
- 7ae33ca - Update for coverage info (Gavin M. Roy)
- ae7ca86 - add libev_adapter_tests.py; modify .travis.yml to install libev and pyev (ml)
- f86aba5 - libev_connection: add **kwargs to _handle_event; suppress default_ioloop reuse warning (ml)
- 603f1cf - async_test_base: add necessary args to _on_cconn_closed (ml)
- 3422007 - add libev_adapter_tests.py (ml)
- 6cbab0c - removed relative imports and importing urlparse from urllib.parse for py3+ (a-tal)
- f808464 - libev_connection: add async watcher; add optional parameters to add_timeout (ml)
- c041c80 - Remove ev all together for now (Gavin M. Roy)
- 9408388 - Update the test descriptions and timeout (Gavin M. Roy)
- 1b552e0 - Increase timeout (Gavin M. Roy)
- 69a1f46 - Remove the pyev requirement for 2.6 testing (Gavin M. Roy)
- fe062d2 - Update package name (Gavin M. Roy)
- 611ad0e - Distribute the LICENSE and README.md (#350) (Gavin M. Roy)
- df5e1d8 - Ensure that the entire frame is written using socket.sendall (#349) (Gavin M. Roy)
- 69ec8cf - Move the libev install to before_install (Gavin M. Roy)
- a75f693 - Update test structure (Gavin M. Roy)
- 636b424 - Update things to ignore (Gavin M. Roy)
- b538c68 - Add tox, nose.cfg, update testing config (Gavin M. Roy)
- a0e7063 - add some tests to increase coverage of pika.connection (Charles Law)
- c76d9eb - Address issue #459 (Gavin M. Roy)
- 86ad2db - Raise exception if positional arg for parameters isn't an instance of Parameters (Gavin M. Roy)
- 14d08e1 - Fix for python 2.6 (Gavin M. Roy)
- bd388a3 - Use the first unused channel number addressing #404, #460 (Gavin M. Roy)
- e7676e6 - removing a debug that was left in last commit (James Mutton)
- 6c93b38 - Fixing connection-closed behavior to detect on attempt to publish (James Mutton)
- c3f0356 - Initialize bytes_written in _handle_write() (Jonathan Kirsch)
- 4510e95 - Fix _handle_write() may not send full frame (Jonathan Kirsch)
- 12b793f - fixed Tornado Consumer example to successfully reconnect (Yang Yang)
- f074444 - remove forgotten import of ordereddict (Pedro Abranches)
- 1ba0aea - fix last merge (Pedro Abranches)
- 10490a6 - change timeouts structure to list to maintain scheduling order (Pedro Abranches)
- 7958394 - save timeouts in ordered dict instead of dict (Pedro Abranches)
- d2746bf - URLParameters and ConnectionParameters accept unicode strings (Allard Hoeve)
- 596d145 - previous fix for AttributeError made parent and child class methods identical, remove duplication (James Mutton)
- 42940dd - UrlParameters Docs: fixed amqps scheme examples (Riccardo Cirimelli)
- 43904ff - Dont test this in PyPy due to sort order issue (Gavin M. Roy)
- d7d293e - Don't leave __repr__ sorting up to chance (Gavin M. Roy)
- 848c594 - Add integration test to travis and fix invocation (Gavin M. Roy)
- 2678275 - Add pypy to travis tests (Gavin M. Roy)
- 1877f3d - Also addresses issue #419 (Gavin M. Roy)
- 470c245 - Address issue #419 (Gavin M. Roy)
- ca3cb59 - Address issue #432 (Gavin M. Roy)
- a3ff6f2 - Default frame max should be AMQP FRAME_MAX (Gavin M. Roy)
- ff3d5cb - Remove max consumer tag test due to change in code. (Gavin M. Roy)
- 6045dda - Catch KeyError (#437) to ensure that an exception is not raised in a race condition (Gavin M. Roy)
- 0b4d53a - Address issue #441 (Gavin M. Roy)
- 180e7c4 - Update license and related files (Gavin M. Roy)
- 256ed3d - Added Jython support. (Erik Olof Gunnar Andersson)
- f73c141 - experimental work around for recursion issue. (Erik Olof Gunnar Andersson)
- a623f69 - Prevent #436 by iterating the keys and not the dict (Gavin M. Roy)
- 755fcae - Add support for authentication_failure_close, connection.blocked (Gavin M. Roy)
- c121243 - merge upstream master (Michael Laing)
- a08dc0d - add arg to channel.basic_consume (Pedro Abranches)
- 10b136d - Documentation fix (Anton Ryzhov)
- 9313307 - Fixed minor markup errors. (Jorge Puente Sarrín)
- fb3e3cf - Fix the spelling of UnsupportedAMQPFieldException (Garrett Cooper)
- 03d5da3 - connection.py: Propagate the force_channel keyword parameter to methods involved in channel creation (Michael Laing)
- 7bbcff5 - Documentation fix for basic_publish (JuhaS)
- 01dcea7 - Expose no_ack and exclusive to BlockingChannel.consume (Jeff Tang)
- d39b6aa - Fix BlockingChannel.basic_consume does not block on non-empty queues (Juhyeong Park)
- 6e1d295 - fix for issue 391 and issue 307 (Qi Fan)
- d9ffce9 - Update parameters.rst (cacovsky)
- 6afa41e - Add additional badges (Gavin M. Roy)
- a255925 - Fix return value on dns resolution issue (Laurent Eschenauer)
- 3f7466c - libev_connection: tweak docs (Michael Laing)
- 0aaed93 - libev_connection: Fix varable naming (Michael Laing)
- 0562d08 - libev_connection: Fix globals warning (Michael Laing)
- 22ada59 - libev_connection: use globals to track sigint and sigterm watchers as they are created globally within libev (Michael Laing)
- 2649b31 - Move badge [skip ci] (Gavin M. Roy)
- f70eea1 - Remove pypy and installation attempt of pyev (Gavin M. Roy)
- f32e522 - Conditionally skip external connection adapters if lib is not installed (Gavin M. Roy)
- cce97c5 - Only install pyev on python 2.7 (Gavin M. Roy)
- ff84462 - Add travis ci support (Gavin M. Roy)
- cf971da - lib_evconnection: improve signal handling; add callback (Michael Laing)
- 9adb269 - bugfix in returning a list in Py3k (Alex Chandel)
- c41d5b9 - update exception syntax for Py3k (Alex Chandel)
- c8506f1 - fix _adapter_connect (Michael Laing)
- 67cb660 - Add LibevConnection to README (Michael Laing)
- 1f9e72b - Propagate low-level connection errors to the AMQPConnectionError. (Bjorn Sandberg)
- e1da447 - Avoid race condition in _on_getok on successive basic_get() when clearing out callbacks (Jeff)
- 7a09979 - Add support for upcoming Connection.Blocked/Unblocked (Gavin M. Roy)
- 53cce88 - TwistedChannel correctly handles multi-argument deferreds. (eivanov)
- 66f8ace - Use uuid when creating unique consumer tag (Perttu Ranta-aho)
- 4ee2738 - Limit the growth of Channel._cancelled, use deque instead of list. (Perttu Ranta-aho)
- 0369aed - fix adapter references and tweak docs (Michael Laing)
- 1738c23 - retry select.select() on EINTR (Cenk Alti)
- 1e55357 - libev_connection: reset internal state on reconnect (Michael Laing)
- 708559e - libev adapter (Michael Laing)
- a6b7c8b - Prioritize EPollPoller and KQueuePoller over PollPoller and SelectPoller (Anton Ryzhov)
- 53400d3 - Handle socket errors in PollPoller and EPollPoller Correctly check 'select.poll' availability (Anton Ryzhov)
- a6dc969 - Use dict.keys & items instead of iterkeys & iteritems (Alex Chandel)
- 5c1b0d0 - Use print function syntax, in examples (Alex Chandel)
- ac9f87a - Fixed a typo in the name of the Asyncore Connection adapter (Guruprasad)
- dfbba50 - Fixed bug mentioned in Issue #357 (Erik Andersson)
- c906a2d - Drop additional flags when getting info for the hostnames, log errors (#352) (Gavin M. Roy)
- baf23dd - retry poll() on EINTR (Cenk Alti)
- 7cd8762 - Address ticket #352 catching an error when socket.getprotobyname fails (Gavin M. Roy)
- 6c3ec75 - Prep for 0.9.14 (Gavin M. Roy)
- dae7a99 - Bump to 0.9.14p0 (Gavin M. Roy)
- 620edc7 - Use default port and virtual host if omitted in URLParameters (Issue #342) (Gavin M. Roy)
- 42a8787 - Move the exception handling inside the while loop (Gavin M. Roy)
- 10e0264 - Fix connection back pressure detection issue #347 (Gavin M. Roy)
- 0bfd670 - Fixed mistake in commit 3a19d65. (Erik Andersson)
- da04bc0 - Fixed Unknown state on disconnect error message generated when closing connections. (Erik Andersson)
- 3a19d65 - Alternative solution to fix #345. (Erik Andersson)
- abf9fa8 - switch to sendall to send entire frame (Dustin Koupal)
- 9ce8ce4 - Fixed the async publisher example to work with reconnections (Raphaël De Giusti)
- 511028a - Fix typo in TwistedChannel docstring (cacovsky)
- 8b69e5a - calls self._adapter_disconnect() instead of self.disconnect() which doesn't actually exist #294 (Mark Unsworth)
- 06a5cf8 - add NullHandler to prevent logging warnings (Cenk Alti)
- f404a9a - Fix #337 cannot start ioloop after stop (Ralf Nyren)

0.9.13 - 2013-05-15
-------------------

`0.9.13 `_

**Major Changes**

- IPv6 Support with thanks to Alessandro Tagliapietra for initial prototype
- Officially remove support for <= Python 2.5 even though it was broken already
- Drop pika.simplebuffer.SimpleBuffer in favor of the Python stdlib collections.deque object
- New default object for receiving content is a "bytes" object which is a str wrapper in Python 2, but paves way for Python 3 support
- New "Raw" mode for frame decoding content frames (#334) addresses issues #331, #229 added by Garth Williamson
- Connection and Disconnection logic refactored, allowing for cleaner separation of protocol logic and socket handling logic as well as connection state management
- New "on_open_error_callback" argument in creating connection objects and new Connection.add_on_open_error_callback method
- New Connection.connect method to cleanly allow for reconnection code
- Support for all AMQP field types, using protocol specified signed/unsigned unpacking

**Backwards Incompatible Changes**

- Method signature for creating connection objects has new argument "on_open_error_callback" which is positionally before "on_close_callback"
- Internal callback variable names in connection.Connection have been renamed and constants used. If you relied on any of these callbacks outside of their internal use, make sure to check out the new constants.
- Connection._connect method, which was an internal only method, is now deprecated and will raise a DeprecationWarning. If you relied on this method, your code needs to change.
- pika.simplebuffer has been removed

**Bugfixes**

- BlockingConnection consumer generator does not free buffer when exited (#328)
- Unicode body payloads in the blocking adapter raises exception (#333)
- Support "b" short-short-int AMQP data type (#318)
- Docstring type fix in adapters/select_connection (#316) fix by Rikard Hultén
- IPv6 not supported (#309)
- Stop the HeartbeatChecker when connection is closed (#307)
- Unittest fix for SelectConnection (#336) fix by Erik Andersson
- Handle condition where no connection or socket exists but SelectConnection needs a timeout for retrying a connection (#322)
- TwistedAdapter lagging behind BaseConnection changes (#321) fix by Jan Urbański

**Other**

- Refactored documentation
- Added Twisted Adapter example (#314) by nolinksoft

0.9.12 - 2013-03-18
-------------------

`0.9.12 `_

**Bugfixes**

- New timeout id hashing was not unique

0.9.11 - 2013-03-17
-------------------

`0.9.11 `_

**Bugfixes**

- Address inconsistent channel close callback documentation and add the signature change to the TwistedChannel class (#305)
- Address a missed timeout related internal data structure name change introduced in the SelectConnection 0.9.10 release. Update all connection adapters to use same signature and docstring (#306).

0.9.10 - 2013-03-16
-------------------

`0.9.10 `_

**Bugfixes**

- Fix timeout in twisted adapter (Submitted by cellscape)
- Fix blocking_connection poll timer resolution to milliseconds (Submitted by cellscape)
- Fix channel._on_close() without a method frame (Submitted by Richard Boulton)
- Addressed exception on close (Issue #279 - fix by patcpsc)
- 'messages' not initialized in BlockingConnection.cancel() (Issue #289 - fix by Mik Kocikowski)
- Make queue_unbind behave like queue_bind (Issue #277)
- Address closing behavioral issues for connections and channels (Issue #275)
- Pass a Method frame to Channel._on_close in Connection._on_disconnect (Submitted by Jan Urbański)
- Fix channel closed callback signature in the Twisted adapter (Submitted by Jan Urbański)
- Don't stop the IOLoop on connection close in the Twisted adapter (Submitted by Jan Urbański)
- Update the asynchronous examples to fix reconnecting and have it work
- Warn if the socket was closed such as if RabbitMQ dies without a Close frame
- Fix URLParameters ssl_options (Issue #296)
- Add state to BlockingConnection addressing (Issue #301)
- Encode unicode body content prior to publishing (Issue #282)
- Fix an issue with unicode keys in BasicProperties headers key (Issue #280)
- Change how timeout ids are generated (Issue #254)
- Address post close state issues in Channel (Issue #302)

**Behavior changes**

- Change core connection communication behavior to prefer outbound writes over reads, addressing a recursion issue
- Update connection on close callbacks, changing callback method signature
- Update channel on close callbacks, changing callback method signature
- Give more info in the ChannelClosed exception
- Change the constructor signature for BlockingConnection, block open/close callbacks
- Disable the use of add_on_open_callback/add_on_close_callback methods in BlockingConnection

0.9.9 - 2013-01-29
------------------ `0.9.9 `_ **Bugfixes** - Only remove the tornado_connection.TornadoConnection file descriptor from the IOLoop if it's still open (Issue #221) - Allow messages with no body (Issue #227) - Allow for empty routing keys (Issue #224) - Don't raise an exception when trying to send a frame to a closed connection (Issue #229) - Only send a Connection.CloseOk if the connection is still open. (Issue #236 - Fix by noleaf) - Fix timeout threshold in blocking connection - (Issue #232 - Fix by Adam Flynn) - Fix closing connection while a channel is still open (Issue #230 - Fix by Adam Flynn) - Fixed misleading warning and exception messages in BaseConnection (Issue #237 - Fix by Tristan Penman) - Pluralised and altered the wording of the AMQPConnectionError exception (Issue #237 - Fix by Tristan Penman) - Fixed _adapter_disconnect in TornadoConnection class (Issue #237 - Fix by Tristan Penman) - Fixing hang when closing connection without any channel in BlockingConnection (Issue #244 - Fix by Ales Teska) - Remove the process_timeouts() call in SelectConnection (Issue #239) - Change the string validation to basestring for host connection parameters (Issue #231) - Add a poller to the BlockingConnection to address latency issues introduced in Pika 0.9.8 (Issue #242) - reply_code and reply_text is not set in ChannelException (Issue #250) - Add the missing constraint parameter for Channel._on_return callback processing (Issue #257 - Fix by patcpsc) - Channel callbacks not being removed from callback manager when channel is closed or deleted (Issue #261) 0.9.8 - 2012-11-18 ------------------ `0.9.8 `_ **Bugfixes** - Channel.queue_declare/BlockingChannel.queue_declare not setting up callbacks property for empty queue name (Issue #218) - Channel.queue_bind/BlockingChannel.queue_bind not allowing empty routing key - Connection._on_connection_closed calling wrong method in Channel (Issue #219) - Fix tx_commit and tx_rollback bugs in BlockingChannel (Issue #217) 0.9.7 - 
2012-11-11 ------------------ `0.9.7 `_ **New features** - generator based consumer in BlockingChannel (See :doc:`examples/blocking_consumer_generator` for example) **Changes** - BlockingChannel._send_method will only wait if explicitly told to **Bugfixes** - Added the exchange "type" parameter back but issue a DeprecationWarning - Dont require a queue name in Channel.queue_declare() - Fixed KeyError when processing timeouts (Issue # 215 - Fix by Raphael De Giusti) - Don't try and close channels when the connection is closed (Issue #216 - Fix by Charles Law) - Dont raise UnexpectedFrame exceptions, log them instead - Handle multiple synchronous RPC calls made without waiting for the call result (Issues #192, #204, #211) - Typo in docs (Issue #207 Fix by Luca Wehrstedt) - Only sleep on connection failure when retry attempts are > 0 (Issue #200) - Bypass _rpc method and just send frames for Basic.Ack, Basic.Nack, Basic.Reject (Issue #205) 0.9.6 - 2012-10-29 ------------------ `0.9.6 `_ **New features** - URLParameters - BlockingChannel.start_consuming() and BlockingChannel.stop_consuming() - Delivery Confirmations - Improved unittests **Major bugfix areas** - Connection handling - Blocking functionality in the BlockingConnection - SSL - UTF-8 Handling **Removals** - pika.reconnection_strategies - pika.channel.ChannelTransport - pika.log - pika.template - examples directory 0.9.5 - 2011-03-29 ------------------ `0.9.5 `_ **Changelog** - Scope changes with adapter IOLoops and CallbackManager allowing for cleaner, multi-threaded operation - Add support for Confirm.Select with channel.Channel.confirm_delivery() - Add examples of delivery confirmation to examples (demo_send_confirmed.py) - Update uses of log.warn with warning.warn for TCP Back-pressure alerting - License boilerplate updated to simplify license text in source files - Increment the timeout in select_connection.SelectPoller reducing CPU utilization - Bug fix in Heartbeat frame delivery addressing issue #35 - 
Remove abuse of pika.log.method_call through a majority of the code - Rename of key modules: table to data, frames to frame - Cleanup of frame module and related classes - Restructure of tests and test runner - Update functional tests to respect RABBITMQ_HOST, RABBITMQ_PORT environment variables - Bug fixes to reconnection_strategies module - Fix the scale of timeout for PollPoller to be specified in milliseconds - Remove mutable default arguments in RPC calls - Add data type validation to RPC calls - Move optional credentials erasing out of connection.Connection into credentials module - Add support to allow for additional external credential types - Add a NullHandler to prevent the 'No handlers could be found for logger "pika"' error message when not using pika.log in a client app at all. - Clean up all examples to make them easier to read and use - Move documentation into its own repository https://github.com/pika/documentation - channel.py - Move channel.MAX_CHANNELS constant from connection.CHANNEL_MAX - Add default value of None to ChannelTransport.rpc - Validate callback and acceptable replies parameters in ChannelTransport.RPC - Remove unused connection attribute from Channel - connection.py - Remove unused import of struct - Remove direct import of pika.credentials.PlainCredentials - Change to import pika.credentials - Move CHANNEL_MAX to channel.MAX_CHANNELS - Change ConnectionParameters initialization parameter heartbeat to boolean - Validate all inbound parameter types in ConnectionParameters - Remove the Connection._erase_credentials stub method in favor of letting the Credentials object deal with that itself. - Warn if the credentials object intends on erasing the credentials and a reconnection strategy other than NullReconnectionStrategy is specified. 
- Change the default types for callback and acceptable_replies in Connection._rpc - Validate the callback and acceptable_replies data types in Connection._rpc - adapters.blocking_connection.BlockingConnection - Addition of _adapter_disconnect to blocking_connection.BlockingConnection - Add timeout methods to BlockingConnection addressing issue #41 - BlockingConnection didn't allow you to register more than one consumer callback because basic_consume was overridden to block immediately. New behavior allows you to do so. - Removed overriding of base basic_consume and basic_cancel methods. Now uses underlying Channel versions of those methods. - Added start_consuming() method to BlockingChannel to start the consumption loop. - Updated stop_consuming() to iterate through all the registered consumers in self._consumers and issue a basic_cancel. pika-0.11.0/CONTRIBUTING.md # Contributing ## Test Coverage To contribute to Pika, please make sure that any new features or changes to existing functionality **include test coverage**. *Pull requests that add or change code without coverage have a much lower chance of being accepted.* ## Prerequisites The Pika test suite has a couple of requirements: * Dependencies from `test-requirements.txt` are installed * A RabbitMQ node with all defaults is running on `localhost:5672` ## Installing Dependencies To install the dependencies needed to run Pika tests, use pip install -r test-requirements.txt which on Python 3 might look like this pip3 install -r test-requirements.txt ## Running Tests To run all test suites, use nosetests Note that some tests are OS-specific (e.g. epoll on Linux or kqueue on MacOS and BSD). Those will be skipped automatically. ## Code Formatting Please format your code using [yapf](http://pypi.python.org/pypi/yapf) with ``google`` style prior to issuing your pull request.
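The contributing notes above assume a RabbitMQ node with default settings listening on `localhost:5672`. As a quick sanity check before running `nosetests`, a stdlib-only sketch like the following can confirm that something is listening there. The `broker_reachable` helper is our own illustration, not part of Pika or its test suite:

```python
import socket


def broker_reachable(host="localhost", port=5672, timeout=2.0):
    """Return True if something is accepting TCP connections on host:port.

    A successful connect only proves a listener exists; it does not
    verify that the listener actually speaks AMQP.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns `False`, start a local RabbitMQ node with its default configuration before invoking `nosetests`.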
pika-0.11.0/LICENSE Copyright (c) 2009-2017, Tony Garnock-Jones, Gavin M. Roy, Pivotal and others. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the Pika project nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. pika-0.11.0/MANIFEST.in include LICENSE include README.rst pika-0.11.0/README.rst Pika ==== Pika is a RabbitMQ (AMQP-0-9-1) client library for Python. 
|Version| |Status| |Coverage| |License| |Docs| Introduction ------------- Pika is a pure-Python implementation of the AMQP 0-9-1 protocol including RabbitMQ's extensions. - Python 2.6+ and 3.3+ are supported. - Since threads aren't appropriate to every situation, it doesn't require threads. It takes care not to forbid them, either. The same goes for greenlets, callbacks, continuations and generators. It is not necessarily thread-safe however, and your mileage will vary. - People may be using direct sockets, plain old `select()`, or any of the wide variety of ways of getting network events to and from a Python application. Pika tries to stay compatible with all of these, and to make adapting it to a new environment as simple as possible. Documentation ------------- Pika's documentation can be found at `https://pika.readthedocs.io <https://pika.readthedocs.io>`_ Example ------- Here is the simplest example of use, sending a message with the BlockingConnection adapter: .. code :: python import pika connection = pika.BlockingConnection() channel = connection.channel() channel.basic_publish(exchange='example', routing_key='test', body='Test Message') connection.close() And an example of writing a blocking consumer: .. 
code :: python import pika connection = pika.BlockingConnection() channel = connection.channel() for method_frame, properties, body in channel.consume('test'): # Display the message parts and ack the message print(method_frame, properties, body) channel.basic_ack(method_frame.delivery_tag) # Escape out of the loop after 10 messages if method_frame.delivery_tag == 10: break # Cancel the consumer and return any pending messages requeued_messages = channel.cancel() print('Requeued %i messages' % requeued_messages) connection.close() Pika provides the following adapters ------------------------------------ - AsyncioConnection - adapter for the Python3 AsyncIO event loop - BlockingConnection - enables blocking, synchronous operation on top of the library for simple uses - LibevConnection - adapter for use with the libev event loop http://libev.schmorp.de - SelectConnection - fast asynchronous adapter - TornadoConnection - adapter for use with the Tornado IO Loop http://tornadoweb.org - TwistedConnection - adapter for use with the Twisted asynchronous package http://twistedmatrix.com/ Contributing ------------ To contribute to pika, please make sure that any new features or changes to existing functionality **include test coverage**. *Pull requests that add or change code without coverage will most likely be rejected.* Additionally, please format your code using `yapf <http://pypi.python.org/pypi/yapf>`_ with ``google`` style prior to issuing your pull request. .. |Version| image:: https://img.shields.io/pypi/v/pika.svg? :target: http://badge.fury.io/py/pika .. |Status| image:: https://img.shields.io/travis/pika/pika.svg? :target: https://travis-ci.org/pika/pika .. |Coverage| image:: https://img.shields.io/codecov/c/github/pika/pika.svg? :target: https://codecov.io/github/pika/pika?branch=master .. |License| image:: https://img.shields.io/pypi/l/pika.svg? :target: https://pika.readthedocs.io .. 
|Docs| image:: https://readthedocs.org/projects/pika/badge/?version=stable :target: https://pika.readthedocs.io :alt: Documentation Status pika-0.11.0/appveyor.yml # Windows build and test of Pika environment: erlang_download_url: "http://erlang.org/download/otp_win64_18.3.exe" erlang_exe_path: "C:\\Users\\appveyor\\erlang.exe" erlang_home_dir: "C:\\Users\\appveyor\\erlang" rabbitmq_installer_download_url: "https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.1/rabbitmq-server-3.6.1.exe" rabbitmq_installer_path: "C:\\Users\\appveyor\\rabbitmq-server-3.6.1.exe" matrix: - PYTHON_ARCH: "32" PYTHONHOME: "C:\\Python27" cache: # RabbitMQ is a pretty big package, so caching it in hopes of expediting the # runtime - "%erlang_exe_path%" - "%rabbitmq_installer_path%" install: - SET PYTHONPATH=%PYTHONHOME% - SET PATH=%PYTHONHOME%\Scripts;%PYTHONHOME%;%PATH% # For diagnostics - ECHO %PYTHONPATH% - ECHO %PATH% - python --version - ECHO Upgrading pip... - python -m pip install --upgrade pip setuptools - pip --version - ECHO Installing wheel... - pip install wheel build_script: - ECHO Building distributions... - python setup.py sdist bdist bdist_wheel - DIR /s *.whl artifacts: - path: 'dist\*.whl' name: pika wheel before_test: # Install test requirements - ECHO Installing pika... - python setup.py install - ECHO Installing pika test requirements... - pip install -r test-requirements.txt # List contents of C:\ to help debug caching of rabbitmq artifacts - DIR C:\ - ps: $webclient=New-Object System.Net.WebClient - ECHO Downloading Erlang... - ps: if (-Not (Test-Path "$env:erlang_exe_path")) { $webclient.DownloadFile("$env:erlang_download_url", "$env:erlang_exe_path") } else { Write-Host "Found" $env:erlang_exe_path "in cache." } - ECHO Starting Erlang... - start /B /WAIT %erlang_exe_path% /S /D=%erlang_home_dir% - set ERLANG_HOME=%erlang_home_dir% - ECHO Downloading RabbitMQ... 
- ps: if (-Not (Test-Path "$env:rabbitmq_installer_path")) { $webclient.DownloadFile("$env:rabbitmq_installer_download_url", "$env:rabbitmq_installer_path") } else { Write-Host "Found" $env:rabbitmq_installer_path "in cache." } - ECHO Installing and starting RabbitMQ with default config... - start /B /WAIT %rabbitmq_installer_path% /S - ps: (Get-Service -Name RabbitMQ).Status test_script: - nosetests # Not deploying Windows builds yet TODO deploy: false pika-0.11.0/docs/Makefile # Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." 
htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/pika.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/pika.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/pika" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/pika" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." 
info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." pika-0.11.0/docs/conf.py # -*- coding: utf-8 -*- import sys sys.path.insert(0, '../') #needs_sphinx = '1.0' extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.intersphinx'] intersphinx_mapping = {'python': ('https://docs.python.org/3/', 'https://docs.python.org/3/objects.inv'), 'tornado': ('http://www.tornadoweb.org/en/stable/', 'http://www.tornadoweb.org/en/stable/objects.inv')} templates_path = ['_templates'] source_suffix = '.rst' master_doc = 'index' project = 'pika' copyright = '2009-2015, Tony Garnock-Jones, Gavin M. Roy, Pivotal and others.' 
import pika release = pika.__version__ version = '.'.join(release.split('.')[0:1]) exclude_patterns = ['_build'] add_function_parentheses = True add_module_names = True show_authors = True pygments_style = 'sphinx' modindex_common_prefix = ['pika'] html_theme = 'default' html_static_path = ['_static'] htmlhelp_basename = 'pikadoc' pika-0.11.0/docs/contributors.rst Contributors ============ The following people have directly contributed code by way of new features and/or bug fixes to Pika: - Gavin M. Roy - Tony Garnock-Jones - Vitaly Kruglikov - Michael Laing - Marek Majkowski - Jan Urbański - Brian K. Jones - Ask Solem - ml - Will - atatsu - Fredrik Svensson - Pedro Abranches - Kyösti Herrala - Erik Andersson - Charles Law - Alex Chandel - Tristan Penman - Raphaël De Giusti - Jozef Van Eenbergen - Josh Braegger - Jason J. W. Williams - James Mutton - Cenk Alti - Asko Soukka - Antti Haapala - Anton Ryzhov - cellscape - cacovsky - bra-fsn - ateska - Roey Berman - Robert Weidlich - Riccardo Cirimelli - Perttu Ranta-aho - Pau Gargallo - Kane - Kamil Kisiel - Jonty Wareing - Jonathan Kirsch - Jacek 'Forger' Całusiński - Garth Williamson - Erik Olof Gunnar Andersson - David Strauss - Anton V. 
Yanchenko - Alexey Myasnikov - Alessandro Tagliapietra - Adam Flynn - skftn - saarni - pavlobaron - nonleaf - markcf - george y - eivanov - bstemshorn - a-tal - Yang Yang - Stuart Longland - Sigurd Høgsbro - Sean Dwyer - Samuel Stauffer - Roberto Decurnex - Rikard Hultén - Richard Boulton - Ralf Nyren - Qi Fan - Peter Magnusson - Pankrat - Olivier Le Thanh Duong - Njal Karevoll - Milan Skuhra - Mik Kocikowski - Michael Kenney - Mark Unsworth - Luca Wehrstedt - Laurent Eschenauer - Lars van de Kerkhof - Kyösti Herrala - Juhyeong Park - JuhaS - Josh Hansen - Jorge Puente Sarrín - Jeff Tang - Jeff Fein-Worton - Jeff - Hunter Morris - Guruprasad - Garrett Cooper - Frank Slaughter - Dustin Koupal - Bjorn Sandberg - Axel Eirola - Andrew Smith - Andrew Grigorev - Andrew - Allard Hoeve - A.Shaposhnikov *Contributors listed by commit count.* pika-0.11.0/docs/examples.rst Usage Examples ============== Pika has various methods of use, between the synchronous BlockingConnection adapter and the various asynchronous connection adapters. The following examples illustrate the various ways that you can use Pika in your projects. .. 
toctree:: :glob: :maxdepth: 1 examples/using_urlparameters examples/connecting_async examples/blocking_basic_get examples/blocking_consume examples/blocking_consumer_generator examples/comparing_publishing_sync_async examples/blocking_delivery_confirmations examples/blocking_publish_mandatory examples/asynchronous_consumer_example examples/asynchronous_publisher_example examples/twisted_example examples/tornado_consumer examples/tls_mutual_authentication examples/tls_server_authentication pika-0.11.0/docs/examples/asynchronous_consumer_example.rst Asynchronous consumer example ============================= The following example implements a consumer that will respond to RPC commands sent from RabbitMQ. For example, it will reconnect if RabbitMQ closes the connection and will shut down if RabbitMQ cancels the consumer or closes the channel. While it may look intimidating, each method is very short and represents an individual action that a consumer can do. consumer.py:: # -*- coding: utf-8 -*- import logging import pika LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) class ExampleConsumer(object): """This is an example consumer that will handle unexpected interactions with RabbitMQ such as channel and connection closures. If RabbitMQ closes the connection, it will reopen it. You should look at the output, as there are limited reasons why the connection may be closed, which usually are tied to permission related issues or socket timeouts. If the channel is closed, it will indicate a problem with one of the commands that were issued and that should surface in the output as well. 
""" EXCHANGE = 'message' EXCHANGE_TYPE = 'topic' QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Create a new instance of the consumer class, passing in the AMQP URL used to connect to RabbitMQ. :param str amqp_url: The AMQP url to connect with """ self._connection = None self._channel = None self._closing = False self._consumer_tag = None self._url = amqp_url def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. :rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return pika.SelectConnection(pika.URLParameters(self._url), self.on_connection_open, stop_ioloop_on_close=False) def on_connection_open(self, unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :type unused_connection: pika.SelectConnection """ LOGGER.info('Connection opened') self.add_on_connection_close_callback() self.open_channel() def add_on_connection_close_callback(self): """This method adds an on close callback that will be invoked by pika when RabbitMQ closes the connection to the publisher unexpectedly. """ LOGGER.info('Adding connection close callback') self._connection.add_on_close_callback(self.on_connection_closed) def on_connection_closed(self, connection, reply_code, reply_text): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. 
:param pika.connection.Connection connection: The closed connection obj :param int reply_code: The server provided reply_code if given :param str reply_text: The server provided reply_text if given """ self._channel = None if self._closing: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reopening in 5 seconds: (%s) %s', reply_code, reply_text) self._connection.add_timeout(5, self.reconnect) def reconnect(self): """Will be invoked by the IOLoop timer if the connection is closed. See the on_connection_closed method. """ # This is the old connection IOLoop instance, stop its ioloop self._connection.ioloop.stop() if not self._closing: # Create a new connection self._connection = self.connect() # There is now a new connection, needs a new ioloop to run self._connection.ioloop.start() def open_channel(self): """Open a new channel with RabbitMQ by issuing the Channel.Open RPC command. When RabbitMQ responds that the channel is open, the on_channel_open callback will be invoked by pika. """ LOGGER.info('Creating a new channel') self._connection.channel(on_open_callback=self.on_channel_open) def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. :param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reply_code, reply_text): """Invoked by pika when RabbitMQ unexpectedly closes the channel. 
Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. :param pika.channel.Channel: The closed channel :param int reply_code: The numeric reason the channel was closed :param str reply_text: The text reason the channel was closed """ LOGGER.warning('Channel %i was closed: (%s) %s', channel, reply_code, reply_text) self._connection.close() def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. :param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) self._channel.exchange_declare(self.on_exchange_declareok, exchange_name, self.EXCHANGE_TYPE) def on_exchange_declareok(self, unused_frame): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame """ LOGGER.info('Exchange declared') self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. :param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) self._channel.queue_declare(self.on_queue_declareok, queue_name) def on_queue_declareok(self, method_frame): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. 
:param pika.frame.Method method_frame: The Queue.DeclareOk frame """ LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, self.QUEUE, self.ROUTING_KEY) self._channel.queue_bind(self.on_bindok, self.QUEUE, self.EXCHANGE, self.ROUTING_KEY) def on_bindok(self, unused_frame): """Invoked by pika when the Queue.Bind method has completed. At this point we will start consuming messages by calling start_consuming which will invoke the needed RPC commands to start the process. :param pika.frame.Method unused_frame: The Queue.BindOk response frame """ LOGGER.info('Queue bound') self.start_consuming() def start_consuming(self): """This method sets up the consumer by first calling add_on_cancel_callback so that the object is notified if RabbitMQ cancels the consumer. It then issues the Basic.Consume RPC command which returns the consumer tag that is used to uniquely identify the consumer with RabbitMQ. We keep the value to use it when we want to cancel consuming. The on_message method is passed in as a callback pika will invoke when a message is fully received. """ LOGGER.info('Issuing consumer related RPC commands') self.add_on_cancel_callback() self._consumer_tag = self._channel.basic_consume(self.on_message, self.QUEUE) def add_on_cancel_callback(self): """Add a callback that will be invoked if RabbitMQ cancels the consumer for some reason. If RabbitMQ does cancel the consumer, on_consumer_cancelled will be invoked by pika. """ LOGGER.info('Adding consumer cancellation callback') self._channel.add_on_cancel_callback(self.on_consumer_cancelled) def on_consumer_cancelled(self, method_frame): """Invoked by pika when RabbitMQ sends a Basic.Cancel for a consumer receiving messages. 
:param pika.frame.Method method_frame: The Basic.Cancel frame """ LOGGER.info('Consumer was cancelled remotely, shutting down: %r', method_frame) if self._channel: self._channel.close() def on_message(self, unused_channel, basic_deliver, properties, body): """Invoked by pika when a message is delivered from RabbitMQ. The channel is passed for your convenience. The basic_deliver object that is passed in carries the exchange, routing key, delivery tag and a redelivered flag for the message. The properties passed in is an instance of BasicProperties with the message properties and the body is the message that was sent. :param pika.channel.Channel unused_channel: The channel object :param pika.Spec.Basic.Deliver: basic_deliver method :param pika.Spec.BasicProperties: properties :param str|unicode body: The message body """ LOGGER.info('Received message # %s from %s: %s', basic_deliver.delivery_tag, properties.app_id, body) self.acknowledge_message(basic_deliver.delivery_tag) def acknowledge_message(self, delivery_tag): """Acknowledge the message delivery from RabbitMQ by sending a Basic.Ack RPC method for the delivery tag. :param int delivery_tag: The delivery tag from the Basic.Deliver frame """ LOGGER.info('Acknowledging message %s', delivery_tag) self._channel.basic_ack(delivery_tag) def stop_consuming(self): """Tell RabbitMQ that you would like to stop consuming by sending the Basic.Cancel RPC command. """ if self._channel: LOGGER.info('Sending a Basic.Cancel RPC command to RabbitMQ') self._channel.basic_cancel(self.on_cancelok, self._consumer_tag) def on_cancelok(self, unused_frame): """This method is invoked by pika when RabbitMQ acknowledges the cancellation of a consumer. At this point we will close the channel. This will invoke the on_channel_closed method once the channel has been closed, which will in-turn close the connection. 
            :param pika.frame.Method unused_frame: The Basic.CancelOk frame

            """
            LOGGER.info('RabbitMQ acknowledged the cancellation of the consumer')
            self.close_channel()

        def close_channel(self):
            """Call to close the channel with RabbitMQ cleanly by issuing the
            Channel.Close RPC command.

            """
            LOGGER.info('Closing the channel')
            self._channel.close()

        def run(self):
            """Run the example consumer by connecting to RabbitMQ and then
            starting the IOLoop to block and allow the SelectConnection to
            operate.

            """
            self._connection = self.connect()
            self._connection.ioloop.start()

        def stop(self):
            """Cleanly shutdown the connection to RabbitMQ by stopping the
            consumer with RabbitMQ. When RabbitMQ confirms the cancellation,
            on_cancelok will be invoked by pika, which will then close the
            channel and connection. The IOLoop is started again because this
            method is invoked when CTRL-C is pressed raising a
            KeyboardInterrupt exception. This exception stops the IOLoop
            which needs to be running for pika to communicate with RabbitMQ.
            All of the commands issued prior to starting the IOLoop will be
            buffered but not processed.

            """
            LOGGER.info('Stopping')
            self._closing = True
            self.stop_consuming()
            self._connection.ioloop.start()
            LOGGER.info('Stopped')

        def close_connection(self):
            """This method closes the connection to RabbitMQ."""
            LOGGER.info('Closing connection')
            self._connection.close()


    def main():
        logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
        example = ExampleConsumer('amqp://guest:guest@localhost:5672/%2F')
        try:
            example.run()
        except KeyboardInterrupt:
            example.stop()


    if __name__ == '__main__':
        main()

pika-0.11.0/docs/examples/asynchronous_publisher_example.rst

Asynchronous publisher example
==============================

The following example implements a publisher that will respond to RPC commands
sent from RabbitMQ and uses delivery confirmations.
It will reconnect if RabbitMQ closes the connection and will shut down if
RabbitMQ closes the channel. While it may look intimidating, each method is
very short and represents an individual action that a publisher can perform.

publisher.py::

    # -*- coding: utf-8 -*-
    import logging
    import pika
    import json

    LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) '
                  '-35s %(lineno) -5d: %(message)s')
    LOGGER = logging.getLogger(__name__)


    class ExamplePublisher(object):
        """This is an example publisher that will handle unexpected
        interactions with RabbitMQ such as channel and connection closures.

        If RabbitMQ closes the connection, it will reopen it. You should
        look at the output, as there are limited reasons why the connection
        may be closed, which usually are tied to permission related issues or
        socket timeouts.

        It uses delivery confirmations and illustrates one way to keep track
        of messages that have been sent and if they've been confirmed by
        RabbitMQ.

        """
        EXCHANGE = 'message'
        EXCHANGE_TYPE = 'topic'
        PUBLISH_INTERVAL = 1
        QUEUE = 'text'
        ROUTING_KEY = 'example.text'

        def __init__(self, amqp_url):
            """Setup the example publisher object, passing in the URL we will
            use to connect to RabbitMQ.

            :param str amqp_url: The URL for connecting to RabbitMQ

            """
            self._connection = None
            self._channel = None

            self._deliveries = None
            self._acked = None
            self._nacked = None
            self._message_number = None

            self._stopping = False
            self._url = amqp_url

        def connect(self):
            """This method connects to RabbitMQ, returning the connection
            handle. When the connection is established, the on_connection_open
            method will be invoked by pika. If you want the reconnection to
            work, make sure you set stop_ioloop_on_close to False, which is
            not the default behavior of this adapter.
:rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return pika.SelectConnection(pika.URLParameters(self._url), on_open_callback=self.on_connection_open, on_close_callback=self.on_connection_closed, stop_ioloop_on_close=False) def on_connection_open(self, unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :type unused_connection: pika.SelectConnection """ LOGGER.info('Connection opened') self.open_channel() def on_connection_closed(self, connection, reply_code, reply_text): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. :param pika.connection.Connection connection: The closed connection obj :param int reply_code: The server provided reply_code if given :param str reply_text: The server provided reply_text if given """ self._channel = None if self._stopping: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reopening in 5 seconds: (%s) %s', reply_code, reply_text) self._connection.add_timeout(5, self._connection.ioloop.stop) def open_channel(self): """This method will open a new channel with RabbitMQ by issuing the Channel.Open RPC command. When RabbitMQ confirms the channel is open by sending the Channel.OpenOK RPC reply, the on_channel_open method will be invoked. """ LOGGER.info('Creating a new channel') self._connection.channel(on_open_callback=self.on_channel_open) def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. 
:param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reply_code, reply_text): """Invoked by pika when RabbitMQ unexpectedly closes the channel. Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. :param pika.channel.Channel channel: The closed channel :param int reply_code: The numeric reason the channel was closed :param str reply_text: The text reason the channel was closed """ LOGGER.warning('Channel was closed: (%s) %s', reply_code, reply_text) self._channel = None if not self._stopping: self._connection.close() def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. :param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) self._channel.exchange_declare(self.on_exchange_declareok, exchange_name, self.EXCHANGE_TYPE) def on_exchange_declareok(self, unused_frame): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame """ LOGGER.info('Exchange declared') self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. 
:param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) self._channel.queue_declare(self.on_queue_declareok, queue_name) def on_queue_declareok(self, method_frame): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. :param pika.frame.Method method_frame: The Queue.DeclareOk frame """ LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, self.QUEUE, self.ROUTING_KEY) self._channel.queue_bind(self.on_bindok, self.QUEUE, self.EXCHANGE, self.ROUTING_KEY) def on_bindok(self, unused_frame): """This method is invoked by pika when it receives the Queue.BindOk response from RabbitMQ. Since we know we're now setup and bound, it's time to start publishing.""" LOGGER.info('Queue bound') self.start_publishing() def start_publishing(self): """This method will enable delivery confirmations and schedule the first message to be sent to RabbitMQ """ LOGGER.info('Issuing consumer related RPC commands') self.enable_delivery_confirmations() self.schedule_next_message() def enable_delivery_confirmations(self): """Send the Confirm.Select RPC method to RabbitMQ to enable delivery confirmations on the channel. The only way to turn this off is to close the channel and create a new one. When the message is confirmed from RabbitMQ, the on_delivery_confirmation method will be invoked passing in a Basic.Ack or Basic.Nack method from RabbitMQ that will indicate which messages it is confirming or rejecting. 
""" LOGGER.info('Issuing Confirm.Select RPC command') self._channel.confirm_delivery(self.on_delivery_confirmation) def on_delivery_confirmation(self, method_frame): """Invoked by pika when RabbitMQ responds to a Basic.Publish RPC command, passing in either a Basic.Ack or Basic.Nack frame with the delivery tag of the message that was published. The delivery tag is an integer counter indicating the message number that was sent on the channel via Basic.Publish. Here we're just doing house keeping to keep track of stats and remove message numbers that we expect a delivery confirmation of from the list used to keep track of messages that are pending confirmation. :param pika.frame.Method method_frame: Basic.Ack or Basic.Nack frame """ confirmation_type = method_frame.method.NAME.split('.')[1].lower() LOGGER.info('Received %s for delivery tag: %i', confirmation_type, method_frame.method.delivery_tag) if confirmation_type == 'ack': self._acked += 1 elif confirmation_type == 'nack': self._nacked += 1 self._deliveries.remove(method_frame.method.delivery_tag) LOGGER.info('Published %i messages, %i have yet to be confirmed, ' '%i were acked and %i were nacked', self._message_number, len(self._deliveries), self._acked, self._nacked) def schedule_next_message(self): """If we are not closing our connection to RabbitMQ, schedule another message to be delivered in PUBLISH_INTERVAL seconds. """ LOGGER.info('Scheduling next message for %0.1f seconds', self.PUBLISH_INTERVAL) self._connection.add_timeout(self.PUBLISH_INTERVAL, self.publish_message) def publish_message(self): """If the class is not stopping, publish a message to RabbitMQ, appending a list of deliveries with the message number that was sent. This list will be used to check for delivery confirmations in the on_delivery_confirmations method. Once the message has been sent, schedule another message to be sent. 
The main reason I put scheduling in was just so you can get a good idea of how the process is flowing by slowing down and speeding up the delivery intervals by changing the PUBLISH_INTERVAL constant in the class. """ if self._channel is None or not self._channel.is_open: return hdrs = {u'مفتاح': u' قيمة', u'键': u'值', u'キー': u'値'} properties = pika.BasicProperties(app_id='example-publisher', content_type='application/json', headers=hdrs) message = u'مفتاح قيمة 键 值 キー 値' self._channel.basic_publish(self.EXCHANGE, self.ROUTING_KEY, json.dumps(message, ensure_ascii=False), properties) self._message_number += 1 self._deliveries.append(self._message_number) LOGGER.info('Published message # %i', self._message_number) self.schedule_next_message() def run(self): """Run the example code by connecting and then starting the IOLoop. """ while not self._stopping: self._connection = None self._deliveries = [] self._acked = 0 self._nacked = 0 self._message_number = 0 try: self._connection = self.connect() self._connection.ioloop.start() except KeyboardInterrupt: self.stop() if (self._connection is not None and not self._connection.is_closed): # Finish closing self._connection.ioloop.start() LOGGER.info('Stopped') def stop(self): """Stop the example by closing the channel and connection. We set a flag here so that we stop scheduling new messages to be published. The IOLoop is started because this method is invoked by the Try/Catch below when KeyboardInterrupt is caught. Starting the IOLoop again will allow the publisher to cleanly disconnect from RabbitMQ. """ LOGGER.info('Stopping') self._stopping = True self.close_channel() self.close_connection() def close_channel(self): """Invoke this command to close the channel with RabbitMQ by sending the Channel.Close RPC command. 
""" if self._channel is not None: LOGGER.info('Closing the channel') self._channel.close() def close_connection(self): """This method closes the connection to RabbitMQ.""" if self._connection is not None: LOGGER.info('Closing connection') self._connection.close() def main(): logging.basicConfig(level=logging.DEBUG, format=LOG_FORMAT) # Connect to localhost:5672 as guest with the password guest and virtual host "/" (%2F) example = ExamplePublisher('amqp://guest:guest@localhost:5672/%2F?connection_attempts=3&heartbeat_interval=3600') example.run() if __name__ == '__main__': main() pika-0.11.0/docs/examples/asyncio_consumer.rst000066400000000000000000000352661315131611700214100ustar00rootroot00000000000000Asyncio Consumer ================ The following example implements a consumer using the :class:`Asyncio adapter ` for the `Asyncio library `_ that will respond to RPC commands sent from RabbitMQ. For example, it will reconnect if RabbitMQ closes the connection and will shutdown if RabbitMQ cancels the consumer or closes the channel. While it may look intimidating, each method is very short and represents a individual actions that a consumer can do. consumer.py:: from pika import adapters import pika import logging LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) class ExampleConsumer(object): """This is an example consumer that will handle unexpected interactions with RabbitMQ such as channel and connection closures. If RabbitMQ closes the connection, it will reopen it. You should look at the output, as there are limited reasons why the connection may be closed, which usually are tied to permission related issues or socket timeouts. If the channel is closed, it will indicate a problem with one of the commands that were issued and that should surface in the output as well. 
""" EXCHANGE = 'message' EXCHANGE_TYPE = 'topic' QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Create a new instance of the consumer class, passing in the AMQP URL used to connect to RabbitMQ. :param str amqp_url: The AMQP url to connect with """ self._connection = None self._channel = None self._closing = False self._consumer_tag = None self._url = amqp_url def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. :rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return adapters.AsyncioConnection(pika.URLParameters(self._url), self.on_connection_open) def close_connection(self): """This method closes the connection to RabbitMQ.""" LOGGER.info('Closing connection') self._connection.close() def add_on_connection_close_callback(self): """This method adds an on close callback that will be invoked by pika when RabbitMQ closes the connection to the publisher unexpectedly. """ LOGGER.info('Adding connection close callback') self._connection.add_on_close_callback(self.on_connection_closed) def on_connection_closed(self, connection, reply_code, reply_text): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. :param pika.connection.Connection connection: The closed connection obj :param int reply_code: The server provided reply_code if given :param str reply_text: The server provided reply_text if given """ self._channel = None if self._closing: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reopening in 5 seconds: (%s) %s', reply_code, reply_text) self._connection.add_timeout(5, self.reconnect) def on_connection_open(self, unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. 
It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :type unused_connection: pika.SelectConnection """ LOGGER.info('Connection opened') self.add_on_connection_close_callback() self.open_channel() def reconnect(self): """Will be invoked by the IOLoop timer if the connection is closed. See the on_connection_closed method. """ if not self._closing: # Create a new connection self._connection = self.connect() def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reply_code, reply_text): """Invoked by pika when RabbitMQ unexpectedly closes the channel. Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. :param pika.channel.Channel: The closed channel :param int reply_code: The numeric reason the channel was closed :param str reply_text: The text reason the channel was closed """ LOGGER.warning('Channel %i was closed: (%s) %s', channel, reply_code, reply_text) self._connection.close() def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. :param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. 
:param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) self._channel.exchange_declare(self.on_exchange_declareok, exchange_name, self.EXCHANGE_TYPE) def on_exchange_declareok(self, unused_frame): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame """ LOGGER.info('Exchange declared') self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. :param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) self._channel.queue_declare(self.on_queue_declareok, queue_name) def on_queue_declareok(self, method_frame): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. :param pika.frame.Method method_frame: The Queue.DeclareOk frame """ LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, self.QUEUE, self.ROUTING_KEY) self._channel.queue_bind(self.on_bindok, self.QUEUE, self.EXCHANGE, self.ROUTING_KEY) def add_on_cancel_callback(self): """Add a callback that will be invoked if RabbitMQ cancels the consumer for some reason. If RabbitMQ does cancel the consumer, on_consumer_cancelled will be invoked by pika. """ LOGGER.info('Adding consumer cancellation callback') self._channel.add_on_cancel_callback(self.on_consumer_cancelled) def on_consumer_cancelled(self, method_frame): """Invoked by pika when RabbitMQ sends a Basic.Cancel for a consumer receiving messages. 
:param pika.frame.Method method_frame: The Basic.Cancel frame """ LOGGER.info('Consumer was cancelled remotely, shutting down: %r', method_frame) if self._channel: self._channel.close() def acknowledge_message(self, delivery_tag): """Acknowledge the message delivery from RabbitMQ by sending a Basic.Ack RPC method for the delivery tag. :param int delivery_tag: The delivery tag from the Basic.Deliver frame """ LOGGER.info('Acknowledging message %s', delivery_tag) self._channel.basic_ack(delivery_tag) def on_message(self, unused_channel, basic_deliver, properties, body): """Invoked by pika when a message is delivered from RabbitMQ. The channel is passed for your convenience. The basic_deliver object that is passed in carries the exchange, routing key, delivery tag and a redelivered flag for the message. The properties passed in is an instance of BasicProperties with the message properties and the body is the message that was sent. :param pika.channel.Channel unused_channel: The channel object :param pika.Spec.Basic.Deliver: basic_deliver method :param pika.Spec.BasicProperties: properties :param str|unicode body: The message body """ LOGGER.info('Received message # %s from %s: %s', basic_deliver.delivery_tag, properties.app_id, body) self.acknowledge_message(basic_deliver.delivery_tag) def on_cancelok(self, unused_frame): """This method is invoked by pika when RabbitMQ acknowledges the cancellation of a consumer. At this point we will close the channel. This will invoke the on_channel_closed method once the channel has been closed, which will in-turn close the connection. :param pika.frame.Method unused_frame: The Basic.CancelOk frame """ LOGGER.info('RabbitMQ acknowledged the cancellation of the consumer') self.close_channel() def stop_consuming(self): """Tell RabbitMQ that you would like to stop consuming by sending the Basic.Cancel RPC command. 
""" if self._channel: LOGGER.info('Sending a Basic.Cancel RPC command to RabbitMQ') self._channel.basic_cancel(self.on_cancelok, self._consumer_tag) def start_consuming(self): """This method sets up the consumer by first calling add_on_cancel_callback so that the object is notified if RabbitMQ cancels the consumer. It then issues the Basic.Consume RPC command which returns the consumer tag that is used to uniquely identify the consumer with RabbitMQ. We keep the value to use it when we want to cancel consuming. The on_message method is passed in as a callback pika will invoke when a message is fully received. """ LOGGER.info('Issuing consumer related RPC commands') self.add_on_cancel_callback() self._consumer_tag = self._channel.basic_consume(self.on_message, self.QUEUE) def on_bindok(self, unused_frame): """Invoked by pika when the Queue.Bind method has completed. At this point we will start consuming messages by calling start_consuming which will invoke the needed RPC commands to start the process. :param pika.frame.Method unused_frame: The Queue.BindOk response frame """ LOGGER.info('Queue bound') self.start_consuming() def close_channel(self): """Call to close the channel with RabbitMQ cleanly by issuing the Channel.Close RPC command. """ LOGGER.info('Closing the channel') self._channel.close() def open_channel(self): """Open a new channel with RabbitMQ by issuing the Channel.Open RPC command. When RabbitMQ responds that the channel is open, the on_channel_open callback will be invoked by pika. """ LOGGER.info('Creating a new channel') self._connection.channel(on_open_callback=self.on_channel_open) def run(self): """Run the example consumer by connecting to RabbitMQ and then starting the IOLoop to block and allow the SelectConnection to operate. """ self._connection = self.connect() self._connection.ioloop.start() def stop(self): """Cleanly shutdown the connection to RabbitMQ by stopping the consumer with RabbitMQ. 
            When RabbitMQ confirms the cancellation, on_cancelok will be
            invoked by pika, which will then close the channel and connection.
            The IOLoop is started again because this method is invoked when
            CTRL-C is pressed raising a KeyboardInterrupt exception. This
            exception stops the IOLoop which needs to be running for pika to
            communicate with RabbitMQ. All of the commands issued prior to
            starting the IOLoop will be buffered but not processed.

            """
            LOGGER.info('Stopping')
            self._closing = True
            self.stop_consuming()
            self._connection.ioloop.start()
            LOGGER.info('Stopped')


    def main():
        logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
        example = ExampleConsumer('amqp://guest:guest@localhost:5672/%2F')
        try:
            example.run()
        except KeyboardInterrupt:
            example.stop()


    if __name__ == '__main__':
        main()

pika-0.11.0/docs/examples/blocking_basic_get.rst

Using the Blocking Connection to get a message from RabbitMQ
============================================================

.. _example_blocking_basic_get:

The :py:meth:`BlockingChannel.basic_get
<pika.adapters.blocking_connection.BlockingChannel.basic_get>` method returns
a three-element tuple. If the server returns a message, the first item in the
tuple will be a :class:`pika.spec.Basic.GetOk` object with the current
message count, the redelivered flag, the routing key that was used to put the
message in the queue, and the exchange the message was published to. The
second item will be a :py:class:`~pika.spec.BasicProperties` object and the
third will be the message body. If the server did not return a message, a
tuple of ``(None, None, None)`` will be returned.
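Because an empty queue is signaled by ``(None, None, None)`` rather than an
exception, a polling loop only needs a ``None`` check on the method frame.
The sketch below shows a hypothetical helper (``drain_queue`` is not part of
pika) built on that contract; it works against any channel-like object whose
``basic_get``/``basic_ack`` follow these semantics::

```python
def drain_queue(channel, queue, max_messages=100):
    """Fetch and acknowledge up to max_messages messages, returning their
    bodies. Relies on basic_get returning (None, None, None) when the queue
    has no messages ready, as documented for BlockingChannel.basic_get.
    """
    bodies = []
    for _ in range(max_messages):
        method_frame, header_frame, body = channel.basic_get(queue)
        if method_frame is None:
            break  # queue is empty; stop polling
        bodies.append(body)
        # Ack by delivery tag so the broker can discard the message
        channel.basic_ack(method_frame.delivery_tag)
    return bodies
```

With a real ``BlockingConnection`` this would be called as
``drain_queue(connection.channel(), 'test')``; the ``max_messages`` cap keeps
the loop bounded if producers are still publishing.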
Example of getting a message and acknowledging it::

    import pika

    connection = pika.BlockingConnection()
    channel = connection.channel()
    method_frame, header_frame, body = channel.basic_get('test')
    if method_frame:
        print(method_frame, header_frame, body)
        channel.basic_ack(method_frame.delivery_tag)
    else:
        print('No message returned')

pika-0.11.0/docs/examples/blocking_consume.rst

Using the Blocking Connection to consume messages from RabbitMQ
===============================================================

.. _example_blocking_basic_consume:

The :py:meth:`BlockingChannel.basic_consume
<pika.adapters.blocking_connection.BlockingChannel.basic_consume>` method
assigns a callback method to be called every time that RabbitMQ delivers
messages to your consuming application.

When pika calls your method, it will pass in the channel, a
:py:class:`pika.spec.Basic.Deliver` object with the delivery tag, the
redelivered flag, the routing key that was used to put the message in the
queue, and the exchange the message was published to. The third argument will
be a :py:class:`pika.spec.BasicProperties` object and the last will be the
message body.

Example of consuming messages and acknowledging them::

    import pika


    def on_message(channel, method_frame, header_frame, body):
        print(method_frame.delivery_tag)
        print(body)
        print()
        channel.basic_ack(delivery_tag=method_frame.delivery_tag)


    connection = pika.BlockingConnection()
    channel = connection.channel()
    channel.basic_consume(on_message, 'test')
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        channel.stop_consuming()
    connection.close()

pika-0.11.0/docs/examples/blocking_consumer_generator.rst

Using the BlockingChannel.consume generator to consume messages
===============================================================

..
.. _example_blocking_basic_get:

The :py:meth:`BlockingChannel.consume <pika.adapters.blocking_connection.BlockingChannel.consume>`
method is a generator that will return a tuple of method, properties and
body. When you escape out of the loop, be sure to call channel.cancel() to
return any unprocessed messages.

Example of consuming messages and acknowledging them::

    import pika

    connection = pika.BlockingConnection()
    channel = connection.channel()

    # Get ten messages and break out
    for method_frame, properties, body in channel.consume('test'):

        # Display the message parts
        print(method_frame)
        print(properties)
        print(body)

        # Acknowledge the message
        channel.basic_ack(method_frame.delivery_tag)

        # Escape out of the loop after 10 messages
        if method_frame.delivery_tag == 10:
            break

    # Cancel the consumer and return any pending messages
    requeued_messages = channel.cancel()
    print('Requeued %i messages' % requeued_messages)

    # Close the channel and the connection
    channel.close()
    connection.close()

If you have pending messages in the test queue, your output should look
something like::

    (pika)gmr-0x02:pika gmr$ python blocking_nack.py
    Hello World!
    Hello World!
    Hello World!
    Hello World!
    Hello World!
    Hello World!
    Hello World!
    Hello World!
    Hello World!
    Hello World!
    Requeued 1894 messages

pika-0.11.0/docs/examples/blocking_delivery_confirmations.rst

Using Delivery Confirmations with the BlockingConnection
========================================================

The following code demonstrates how to turn on delivery confirmations with
the BlockingConnection and how to check for confirmation from RabbitMQ::

    import pika

    # Open a connection to RabbitMQ on localhost using all default parameters
    connection = pika.BlockingConnection()

    # Open the channel
    channel = connection.channel()

    # Declare the queue
    channel.queue_declare(queue="test", durable=True, exclusive=False,
                          auto_delete=False)

    # Turn on delivery confirmations
    channel.confirm_delivery()

    # Send a message
    if channel.basic_publish(exchange='test',
                             routing_key='test',
                             body='Hello World!',
                             properties=pika.BasicProperties(content_type='text/plain',
                                                             delivery_mode=1)):
        print('Message publish was confirmed')
    else:
        print('Message could not be confirmed')

pika-0.11.0/docs/examples/blocking_publish_mandatory.rst

Ensuring message delivery with the mandatory flag
=================================================

The following example demonstrates how to check if a message is delivered
by setting the mandatory flag and checking the return result when using
the BlockingConnection::

    import pika

    # Open a connection to RabbitMQ on localhost using all default parameters
    connection = pika.BlockingConnection()

    # Open the channel
    channel = connection.channel()

    # Declare the queue
    channel.queue_declare(queue="test", durable=True, exclusive=False,
                          auto_delete=False)

    # Enable delivery confirmations
    channel.confirm_delivery()

    # Send a message
    if channel.basic_publish(exchange='test',
                             routing_key='test',
                             body='Hello World!',
                             properties=pika.BasicProperties(content_type='text/plain',
                                                             delivery_mode=1),
                             mandatory=True):
        print('Message was published')
    else:
        print('Message was returned')

pika-0.11.0/docs/examples/comparing_publishing_sync_async.rst

Comparing Message Publishing with BlockingConnection and SelectConnection
=========================================================================

For those doing simple, non-asynchronous programming,
:py:class:`pika.adapters.blocking_connection.BlockingConnection` proves to
be the easiest way to get up and running with Pika to publish messages.

In the following example, a connection is made to RabbitMQ listening on
port *5672* on *localhost* using the username *guest* and password *guest*
and virtual host */*. Once connected, a channel is opened and a message is
published to the *test_exchange* exchange using the *test_routing_key*
routing key. The BasicProperties value passed in sets the message to
delivery mode *1* (non-persisted) with a content-type of *text/plain*.
Once the message is published, the connection is closed::

    import pika

    parameters = pika.URLParameters('amqp://guest:guest@localhost:5672/%2F')

    connection = pika.BlockingConnection(parameters)

    channel = connection.channel()

    channel.basic_publish('test_exchange',
                          'test_routing_key',
                          'message body value',
                          pika.BasicProperties(content_type='text/plain',
                                               delivery_mode=1))

    connection.close()

In contrast, using
:py:class:`pika.adapters.select_connection.SelectConnection` and the other
asynchronous adapters is more complicated and less pythonic, but when used
with other asynchronous services can have tremendous performance
improvements.
In the following code example, all of the same parameters and values are
used as were used in the previous example::

    import pika

    # Step #3
    def on_open(connection):
        connection.channel(on_channel_open)

    # Step #4
    def on_channel_open(channel):
        channel.basic_publish('test_exchange',
                              'test_routing_key',
                              'message body value',
                              pika.BasicProperties(content_type='text/plain',
                                                   delivery_mode=1))
        connection.close()

    # Step #1: Connect to RabbitMQ
    parameters = pika.URLParameters('amqp://guest:guest@localhost:5672/%2F')

    connection = pika.SelectConnection(parameters=parameters,
                                       on_open_callback=on_open)

    try:
        # Step #2 - Block on the IOLoop
        connection.ioloop.start()

    # Catch a Keyboard Interrupt to make sure that the connection is closed cleanly
    except KeyboardInterrupt:

        # Gracefully close the connection
        connection.close()

        # Start the IOLoop again so Pika can communicate, it will stop on its
        # own when the connection is closed
        connection.ioloop.start()

pika-0.11.0/docs/examples/connecting_async.rst

Connecting to RabbitMQ with Callback-Passing Style
==================================================

When you connect to RabbitMQ with an asynchronous adapter, you are writing
event oriented code. The connection adapter will block on the IOLoop that
is watching to see when pika should read data from and write data to
RabbitMQ. Because you're now blocking on the IOLoop, you will receive
callback notifications when specific events happen.

Example Code
------------

In the example, there are four steps that take place:

1. Setup the connection to RabbitMQ
2. Start the IOLoop
3. Once connected, the on_open method will be called by Pika with a handle
   to the connection. In this method, a new channel will be opened on the
   connection.
4. Once the channel is opened, you can do your other actions, whether they
   be publishing messages, consuming messages or other RabbitMQ related
   activities::

    import pika

    # Step #3
    def on_open(connection):
        connection.channel(on_channel_open)

    # Step #4
    def on_channel_open(channel):
        channel.basic_publish('exchange_name',
                              'routing_key',
                              'Test Message',
                              pika.BasicProperties(content_type='text/plain',
                                                   type='example'))

    # Step #1: Connect to RabbitMQ
    connection = pika.SelectConnection(on_open_callback=on_open)

    try:
        # Step #2 - Block on the IOLoop
        connection.ioloop.start()

    # Catch a Keyboard Interrupt to make sure that the connection is closed cleanly
    except KeyboardInterrupt:

        # Gracefully close the connection
        connection.close()

        # Start the IOLoop again so Pika can communicate, it will stop on its
        # own when the connection is closed
        connection.ioloop.start()

pika-0.11.0/docs/examples/direct_reply_to.rst

Direct reply-to example
=======================

The following example demonstrates the use of the RabbitMQ "Direct
reply-to" feature via `pika.BlockingConnection`. See
https://www.rabbitmq.com/direct-reply-to.html for more info about this
feature.

direct_reply_to.py::

    # -*- coding: utf-8 -*-

    """
    This example demonstrates the RabbitMQ "Direct reply-to" usage via
    `pika.BlockingConnection`. See https://www.rabbitmq.com/direct-reply-to.html
    for more info about this feature.
    """
    import pika


    SERVER_QUEUE = 'rpc.server.queue'


    def main():
        """ Here, Client sends "Marco" to RPC Server, and RPC Server replies
        with "Polo".

        NOTE Normally, the server would be running separately from the client,
        but in this very simple example both are running in the same thread and
        sharing connection and channel.
""" with pika.BlockingConnection() as conn: channel = conn.channel() # Set up server channel.queue_declare(queue=SERVER_QUEUE, exclusive=True, auto_delete=True) channel.basic_consume(on_server_rx_rpc_request, queue=SERVER_QUEUE) # Set up client # NOTE Client must create its consumer and publish RPC requests on the # same channel to enable the RabbitMQ broker to make the necessary # associations. # # Also, client must create the consumer *before* starting to publish the # RPC requests. # # Client must create its consumer with no_ack=True, because the reply-to # queue isn't real. channel.basic_consume(on_client_rx_reply_from_server, queue='amq.rabbitmq.reply-to', no_ack=True) channel.basic_publish( exchange='', routing_key=SERVER_QUEUE, body='Marco', properties=pika.BasicProperties(reply_to='amq.rabbitmq.reply-to')) channel.start_consuming() def on_server_rx_rpc_request(ch, method_frame, properties, body): print 'RPC Server got request:', body ch.basic_publish('', routing_key=properties.reply_to, body='Polo') ch.basic_ack(delivery_tag=method_frame.delivery_tag) print 'RPC Server says good bye' def on_client_rx_reply_from_server(ch, method_frame, properties, body): print 'RPC Client got reply:', body # NOTE A real client might want to make additional RPC requests, but in this # simple example we're closing the channel after getting our first reply # to force control to return from channel.start_consuming() print 'RPC Client says bye' ch.close() pika-0.11.0/docs/examples/heartbeat_and_blocked_timeouts.rst000066400000000000000000000043071315131611700242150ustar00rootroot00000000000000Ensuring well-behaved connection with heartbeat and blocked-connection timeouts =============================================================================== This example demonstrates explicit setting of heartbeat and blocked connection timeouts. Starting with RabbitMQ 3.5.5, the broker's default hearbeat timeout decreased from 580 seconds to 60 seconds. 
As a result, applications that perform lengthy processing in the same
thread that also runs their Pika connection may experience unexpected
dropped connections due to heartbeat timeout. Here, we specify an explicit
lower bound for heartbeat timeout.

When RabbitMQ broker is running out of certain resources, such as memory
and disk space, it may block connections that are performing
resource-consuming operations, such as publishing messages. Once a
connection is blocked, RabbitMQ stops reading from that connection's
socket, so no commands from the client will get through to the broker on
that connection until the broker unblocks it. A blocked connection may
last for an indefinite period of time, stalling the connection and
possibly resulting in a hang (e.g., in BlockingConnection) until the
connection is unblocked. Blocked Connection Timeout is intended to
interrupt (i.e., drop) a connection that has been blocked longer than the
given timeout value.

Example of configuring heartbeat and blocked-connection timeouts::

    import pika


    def main():

        # NOTE: These parameters work with all Pika connection types
        params = pika.ConnectionParameters(heartbeat_interval=600,
                                           blocked_connection_timeout=300)

        conn = pika.BlockingConnection(params)

        chan = conn.channel()

        chan.basic_publish('', 'my-alphabet-queue', "abc")

        # If publish causes the connection to become blocked, then this
        # conn.close() would hang until the connection is unblocked, if ever.
        # However, the blocked_connection_timeout connection parameter would
        # interrupt the wait, resulting in ConnectionClosed exception from
        # BlockingConnection (or the on_connection_closed callback call in an
        # asynchronous adapter)
        conn.close()


    if __name__ == '__main__':
        main()

pika-0.11.0/docs/examples/tls_mutual_authentication.rst

TLS parameters example
======================

This example demonstrates a TLS session with RabbitMQ using mutual
authentication.
It was tested against RabbitMQ 3.6.10, using Python 3.6.1 and pre-release
Pika `0.11.0`

Note the use of `ssl_version=ssl.PROTOCOL_TLSv1`. The recent versions of
RabbitMQ disable older versions of SSL due to security vulnerabilities.

See https://www.rabbitmq.com/ssl.html for certificate creation and
rabbitmq SSL configuration instructions.

tls_example.py::

    import ssl
    import pika
    import logging

    logging.basicConfig(level=logging.INFO)

    cp = pika.ConnectionParameters(
        ssl=True,
        ssl_options=dict(
            ssl_version=ssl.PROTOCOL_TLSv1,
            ca_certs="/Users/me/tls-gen/basic/testca/cacert.pem",
            keyfile="/Users/me/tls-gen/basic/client/key.pem",
            certfile="/Users/me/tls-gen/basic/client/cert.pem",
            cert_reqs=ssl.CERT_REQUIRED))

    conn = pika.BlockingConnection(cp)
    ch = conn.channel()
    print(ch.queue_declare("sslq"))
    ch.publish("", "sslq", "abc")
    print(ch.basic_get("sslq"))

rabbitmq.config::

    %% Both the client and rabbitmq server were running on the same machine,
    %% a MacBookPro laptop.
    %%
    %% rabbitmq.config was created in its default location for OS X:
    %% /usr/local/etc/rabbitmq/rabbitmq.config.
    %%
    %% The contents of the example rabbitmq.config are for demonstration
    %% purposes only. See https://www.rabbitmq.com/ssl.html for instructions
    %% about creating the test certificates and the contents of
    %% rabbitmq.config.

    [
      {rabbit,
        [
          {ssl_listeners, [{"127.0.0.1", 5671}]},

          %% Configuring SSL.
          %% See http://www.rabbitmq.com/ssl.html for full documentation.
          %%
          {ssl_options, [{cacertfile, "/Users/me/tls-gen/basic/testca/cacert.pem"},
                         {certfile,   "/Users/me/tls-gen/basic/server/cert.pem"},
                         {keyfile,    "/Users/me/tls-gen/basic/server/key.pem"},
                         {verify,     verify_peer},
                         {fail_if_no_peer_cert, true}]}
        ]
      }
    ].

pika-0.11.0/docs/examples/tls_server_uathentication.rst

TLS parameters example
======================

This example demonstrates a TLS session with RabbitMQ using server
authentication.
It was tested against RabbitMQ 3.6.10, using Python 3.6.1 and pre-release
Pika `0.11.0`

Note the use of `ssl_version=ssl.PROTOCOL_TLSv1`. The recent versions of
RabbitMQ disable older versions of SSL due to security vulnerabilities.

See https://www.rabbitmq.com/ssl.html for certificate creation and
rabbitmq SSL configuration instructions.

tls_example.py::

    import ssl
    import pika
    import logging

    logging.basicConfig(level=logging.INFO)

    cp = pika.ConnectionParameters(
        ssl=True,
        ssl_options=dict(
            ssl_version=ssl.PROTOCOL_TLSv1,
            ca_certs="/Users/me/tls-gen/basic/testca/cacert.pem",
            cert_reqs=ssl.CERT_REQUIRED))

    conn = pika.BlockingConnection(cp)
    ch = conn.channel()
    print(ch.queue_declare("sslq"))
    ch.publish("", "sslq", "abc")
    print(ch.basic_get("sslq"))

rabbitmq.config::

    %% Both the client and rabbitmq server were running on the same machine,
    %% a MacBookPro laptop.
    %%
    %% rabbitmq.config was created in its default location for OS X:
    %% /usr/local/etc/rabbitmq/rabbitmq.config.
    %%
    %% The contents of the example rabbitmq.config are for demonstration
    %% purposes only. See https://www.rabbitmq.com/ssl.html for instructions
    %% about creating the test certificates and the contents of
    %% rabbitmq.config.
    %%
    %% Note that the {fail_if_no_peer_cert,false} option states that RabbitMQ
    %% should accept clients that don't have a certificate to send to the
    %% broker, but through the {verify,verify_peer} option, we state that if
    %% the client does send a certificate to the broker, the broker must be
    %% able to establish a chain of trust to it.

    [
      {rabbit,
        [
          {ssl_listeners, [{"127.0.0.1", 5671}]},

          %% Configuring SSL.
          %% See http://www.rabbitmq.com/ssl.html for full documentation.
          %%
          {ssl_options, [{cacertfile, "/Users/me/tls-gen/basic/testca/cacert.pem"},
                         {certfile,   "/Users/me/tls-gen/basic/server/cert.pem"},
                         {keyfile,    "/Users/me/tls-gen/basic/server/key.pem"},
                         {verify,     verify_peer},
                         {fail_if_no_peer_cert, false}]}
        ]
      }
    ].
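Because the `ssl_version` values accepted in `ssl_options` depend on how the local Python's `ssl` module was built, it can help to check which protocol constants are actually available before picking one. This stdlib-only sketch simply enumerates them and makes no pika calls:

```python
import ssl

# List the PROTOCOL_* constants this Python build exposes; one of these is
# what the ssl_version key in pika's ssl_options dict expects (the examples
# above use ssl.PROTOCOL_TLSv1).
available = sorted(name for name in dir(ssl) if name.startswith('PROTOCOL_'))
print(available)

# CERT_REQUIRED (used as cert_reqs above) asks the client to verify the
# broker's certificate against ca_certs.
print('CERT_REQUIRED' in dir(ssl))   # True
```

If `PROTOCOL_TLSv1` is missing from the list, your Python was built without that protocol and you will need to choose another value from the output.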
pika-0.11.0/docs/examples/tornado_consumer.rst

Tornado Consumer
================

The following example implements a consumer using the
:class:`Tornado adapter <pika.adapters.tornado_connection.TornadoConnection>`
for the `Tornado framework <http://tornadoweb.org>`_ that will respond to
RPC commands sent from RabbitMQ. For example, it will reconnect if RabbitMQ
closes the connection and will shutdown if RabbitMQ cancels the consumer or
closes the channel. While it may look intimidating, each method is very
short and represents an individual action that a consumer can do.

consumer.py::

    from pika import adapters
    import pika
    import logging

    LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) '
                  '-35s %(lineno) -5d: %(message)s')
    LOGGER = logging.getLogger(__name__)


    class ExampleConsumer(object):
        """This is an example consumer that will handle unexpected interactions
        with RabbitMQ such as channel and connection closures.

        If RabbitMQ closes the connection, it will reopen it. You should
        look at the output, as there are limited reasons why the connection may
        be closed, which usually are tied to permission related issues or
        socket timeouts.

        If the channel is closed, it will indicate a problem with one of the
        commands that were issued and that should surface in the output as well.

        """
        EXCHANGE = 'message'
        EXCHANGE_TYPE = 'topic'
        QUEUE = 'text'
        ROUTING_KEY = 'example.text'

        def __init__(self, amqp_url):
            """Create a new instance of the consumer class, passing in the AMQP
            URL used to connect to RabbitMQ.

            :param str amqp_url: The AMQP url to connect with

            """
            self._connection = None
            self._channel = None
            self._closing = False
            self._consumer_tag = None
            self._url = amqp_url

        def connect(self):
            """This method connects to RabbitMQ, returning the connection
            handle. When the connection is established, the on_connection_open
            method will be invoked by pika.
            :rtype: pika.SelectConnection

            """
            LOGGER.info('Connecting to %s', self._url)
            return adapters.TornadoConnection(pika.URLParameters(self._url),
                                              self.on_connection_open)

        def close_connection(self):
            """This method closes the connection to RabbitMQ."""
            LOGGER.info('Closing connection')
            self._connection.close()

        def add_on_connection_close_callback(self):
            """This method adds an on close callback that will be invoked by
            pika when RabbitMQ closes the connection to the publisher
            unexpectedly.

            """
            LOGGER.info('Adding connection close callback')
            self._connection.add_on_close_callback(self.on_connection_closed)

        def on_connection_closed(self, connection, reply_code, reply_text):
            """This method is invoked by pika when the connection to RabbitMQ is
            closed unexpectedly. Since it is unexpected, we will reconnect to
            RabbitMQ if it disconnects.

            :param pika.connection.Connection connection: The closed connection obj
            :param int reply_code: The server provided reply_code if given
            :param str reply_text: The server provided reply_text if given

            """
            self._channel = None
            if self._closing:
                self._connection.ioloop.stop()
            else:
                LOGGER.warning('Connection closed, reopening in 5 seconds: (%s) %s',
                               reply_code, reply_text)
                self._connection.add_timeout(5, self.reconnect)

        def on_connection_open(self, unused_connection):
            """This method is called by pika once the connection to RabbitMQ has
            been established. It passes the handle to the connection object in
            case we need it, but in this case, we'll just mark it unused.

            :type unused_connection: pika.SelectConnection

            """
            LOGGER.info('Connection opened')
            self.add_on_connection_close_callback()
            self.open_channel()

        def reconnect(self):
            """Will be invoked by the IOLoop timer if the connection is
            closed. See the on_connection_closed method.
""" if not self._closing: # Create a new connection self._connection = self.connect() def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reply_code, reply_text): """Invoked by pika when RabbitMQ unexpectedly closes the channel. Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. :param pika.channel.Channel: The closed channel :param int reply_code: The numeric reason the channel was closed :param str reply_text: The text reason the channel was closed """ LOGGER.warning('Channel %i was closed: (%s) %s', channel, reply_code, reply_text) self._connection.close() def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. :param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. :param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) self._channel.exchange_declare(self.on_exchange_declareok, exchange_name, self.EXCHANGE_TYPE) def on_exchange_declareok(self, unused_frame): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. 
            :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame

            """
            LOGGER.info('Exchange declared')
            self.setup_queue(self.QUEUE)

        def setup_queue(self, queue_name):
            """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC
            command. When it is complete, the on_queue_declareok method will
            be invoked by pika.

            :param str|unicode queue_name: The name of the queue to declare.

            """
            LOGGER.info('Declaring queue %s', queue_name)
            self._channel.queue_declare(self.on_queue_declareok, queue_name)

        def on_queue_declareok(self, method_frame):
            """Method invoked by pika when the Queue.Declare RPC call made in
            setup_queue has completed. In this method we will bind the queue
            and exchange together with the routing key by issuing the Queue.Bind
            RPC command. When this command is complete, the on_bindok method
            will be invoked by pika.

            :param pika.frame.Method method_frame: The Queue.DeclareOk frame

            """
            LOGGER.info('Binding %s to %s with %s',
                        self.EXCHANGE, self.QUEUE, self.ROUTING_KEY)
            self._channel.queue_bind(self.on_bindok, self.QUEUE,
                                     self.EXCHANGE, self.ROUTING_KEY)

        def add_on_cancel_callback(self):
            """Add a callback that will be invoked if RabbitMQ cancels the
            consumer for some reason. If RabbitMQ does cancel the consumer,
            on_consumer_cancelled will be invoked by pika.

            """
            LOGGER.info('Adding consumer cancellation callback')
            self._channel.add_on_cancel_callback(self.on_consumer_cancelled)

        def on_consumer_cancelled(self, method_frame):
            """Invoked by pika when RabbitMQ sends a Basic.Cancel for a consumer
            receiving messages.

            :param pika.frame.Method method_frame: The Basic.Cancel frame

            """
            LOGGER.info('Consumer was cancelled remotely, shutting down: %r',
                        method_frame)
            if self._channel:
                self._channel.close()

        def acknowledge_message(self, delivery_tag):
            """Acknowledge the message delivery from RabbitMQ by sending a
            Basic.Ack RPC method for the delivery tag.
            :param int delivery_tag: The delivery tag from the Basic.Deliver frame

            """
            LOGGER.info('Acknowledging message %s', delivery_tag)
            self._channel.basic_ack(delivery_tag)

        def on_message(self, unused_channel, basic_deliver, properties, body):
            """Invoked by pika when a message is delivered from RabbitMQ. The
            channel is passed for your convenience. The basic_deliver object
            that is passed in carries the exchange, routing key, delivery tag
            and a redelivered flag for the message. The properties passed in is
            an instance of BasicProperties with the message properties and the
            body is the message that was sent.

            :param pika.channel.Channel unused_channel: The channel object
            :param pika.Spec.Basic.Deliver: basic_deliver method
            :param pika.Spec.BasicProperties: properties
            :param str|unicode body: The message body

            """
            LOGGER.info('Received message # %s from %s: %s',
                        basic_deliver.delivery_tag, properties.app_id, body)
            self.acknowledge_message(basic_deliver.delivery_tag)

        def on_cancelok(self, unused_frame):
            """This method is invoked by pika when RabbitMQ acknowledges the
            cancellation of a consumer. At this point we will close the channel.
            This will invoke the on_channel_closed method once the channel has
            been closed, which will in-turn close the connection.

            :param pika.frame.Method unused_frame: The Basic.CancelOk frame

            """
            LOGGER.info('RabbitMQ acknowledged the cancellation of the consumer')
            self.close_channel()

        def stop_consuming(self):
            """Tell RabbitMQ that you would like to stop consuming by sending
            the Basic.Cancel RPC command.

            """
            if self._channel:
                LOGGER.info('Sending a Basic.Cancel RPC command to RabbitMQ')
                self._channel.basic_cancel(self.on_cancelok, self._consumer_tag)

        def start_consuming(self):
            """This method sets up the consumer by first calling
            add_on_cancel_callback so that the object is notified if RabbitMQ
            cancels the consumer. It then issues the Basic.Consume RPC command
            which returns the consumer tag that is used to uniquely identify
            the consumer with RabbitMQ.
            We keep the value to use it when we want to cancel consuming. The
            on_message method is passed in as a callback pika will invoke when
            a message is fully received.

            """
            LOGGER.info('Issuing consumer related RPC commands')
            self.add_on_cancel_callback()
            self._consumer_tag = self._channel.basic_consume(self.on_message,
                                                             self.QUEUE)

        def on_bindok(self, unused_frame):
            """Invoked by pika when the Queue.Bind method has completed. At this
            point we will start consuming messages by calling start_consuming
            which will invoke the needed RPC commands to start the process.

            :param pika.frame.Method unused_frame: The Queue.BindOk response frame

            """
            LOGGER.info('Queue bound')
            self.start_consuming()

        def close_channel(self):
            """Call to close the channel with RabbitMQ cleanly by issuing the
            Channel.Close RPC command.

            """
            LOGGER.info('Closing the channel')
            self._channel.close()

        def open_channel(self):
            """Open a new channel with RabbitMQ by issuing the Channel.Open RPC
            command. When RabbitMQ responds that the channel is open, the
            on_channel_open callback will be invoked by pika.

            """
            LOGGER.info('Creating a new channel')
            self._connection.channel(on_open_callback=self.on_channel_open)

        def run(self):
            """Run the example consumer by connecting to RabbitMQ and then
            starting the IOLoop to block and allow the SelectConnection to
            operate.

            """
            self._connection = self.connect()
            self._connection.ioloop.start()

        def stop(self):
            """Cleanly shutdown the connection to RabbitMQ by stopping the
            consumer with RabbitMQ. When RabbitMQ confirms the cancellation,
            on_cancelok will be invoked by pika, which will then close the
            channel and connection. The IOLoop is started again because this
            method is invoked when CTRL-C is pressed raising a
            KeyboardInterrupt exception. This exception stops the IOLoop which
            needs to be running for pika to communicate with RabbitMQ. All of
            the commands issued prior to starting the IOLoop will be buffered
            but not processed.
""" LOGGER.info('Stopping') self._closing = True self.stop_consuming() self._connection.ioloop.start() LOGGER.info('Stopped') def main(): logging.basicConfig(level=logging.INFO, format=LOG_FORMAT) example = ExampleConsumer('amqp://guest:guest@localhost:5672/%2F') try: example.run() except KeyboardInterrupt: example.stop() if __name__ == '__main__': main() pika-0.11.0/docs/examples/twisted_example.rst000066400000000000000000000027001315131611700212110ustar00rootroot00000000000000Twisted Consumer Example ======================== Example of writing a consumer using the :py:class:`Twisted connection adapter `:: # -*- coding:utf-8 -*- import pika from pika import exceptions from pika.adapters import twisted_connection from twisted.internet import defer, reactor, protocol,task @defer.inlineCallbacks def run(connection): channel = yield connection.channel() exchange = yield channel.exchange_declare(exchange='topic_link',type='topic') queue = yield channel.queue_declare(queue='hello', auto_delete=False, exclusive=False) yield channel.queue_bind(exchange='topic_link',queue='hello',routing_key='hello.world') yield channel.basic_qos(prefetch_count=1) queue_object, consumer_tag = yield channel.basic_consume(queue='hello',no_ack=False) l = task.LoopingCall(read, queue_object) l.start(0.01) @defer.inlineCallbacks def read(queue_object): ch,method,properties,body = yield queue_object.get() if body: print(body) yield ch.basic_ack(delivery_tag=method.delivery_tag) parameters = pika.ConnectionParameters() cc = protocol.ClientCreator(reactor, twisted_connection.TwistedProtocolConnection, parameters) d = cc.connectTCP('hostname', 5672) d.addCallback(lambda protocol: protocol.ready) d.addCallback(run) reactor.run() pika-0.11.0/docs/examples/using_urlparameters.rst000066400000000000000000000077001315131611700221130ustar00rootroot00000000000000Using URLParameters =================== Pika has two methods of encapsulating the data that lets it know how to connect to RabbitMQ, 
:py:class:`pika.connection.ConnectionParameters` and
:py:class:`pika.connection.URLParameters`.

.. note::
    If you're connecting to RabbitMQ on localhost on port 5672, with the
    default virtual host of */* and the default username and password of
    *guest* and *guest*, you do not need to specify connection parameters
    when connecting.

Using :py:class:`pika.connection.URLParameters` is an easy way to minimize
the variables required to connect to RabbitMQ and supports all of the
directives that :py:class:`pika.connection.ConnectionParameters` supports.

The following is the format for the URLParameters connection value::

    scheme://username:password@host:port/virtual_host?key=value&key=value

As you can see, by default, the scheme (amqp, amqps), username, password,
host, port and virtual host make up the core of the URL and any other
parameter is passed in as query string values.

Example Connection URLS
-----------------------

The default connection URL connects to the / virtual host as guest using
the guest password on localhost port 5672. Note the forward slash in the
URL is encoded to %2F::

    amqp://guest:guest@localhost:5672/%2F

Connect to a host *rabbit1* as the user *www-data* using the password
*rabbit_pwd* on the virtual host *web_messages*::

    amqp://www-data:rabbit_pwd@rabbit1/web_messages

Connecting via SSL is pretty easy too. To connect via SSL for the previous
example, simply change the scheme to *amqps*. If you do not specify a port,
Pika will use the default SSL port of 5671::

    amqps://www-data:rabbit_pwd@rabbit1/web_messages

If you're looking to tweak other parameters, such as enabling heartbeats,
simply add the key/value pair as a query string value. The following builds
upon the SSL connection, enabling heartbeats every 30 seconds::

    amqps://www-data:rabbit_pwd@rabbit1/web_messages?heartbeat=30

Options that are available as query string values:

- backpressure_detection: Pass in a value of *t* to enable backpressure
  detection, it is disabled by default.
- channel_max: Alter the default channel maximum by passing in a 32-bit
  integer value here.
- connection_attempts: Alter the default of 1 connection attempt by passing
  in an integer value here.
- frame_max: Alter the default frame maximum size value by passing in a
  long integer value [#f1]_.
- heartbeat: Pass a value greater than zero to enable heartbeats between
  the server and your application. The integer value you pass here will be
  the number of seconds between heartbeats.
- locale: Set the locale of the client using underscore delimited posix
  Locale code in ll_CC format (en_US, pt_BR, de_DE).
- retry_delay: The number of seconds to wait before attempting to reconnect
  on a failed connection, if connection_attempts is > 0.
- socket_timeout: Change the default socket timeout duration from 0.25
  seconds to another integer or float value. Adjust with caution.
- ssl_options: A url encoded dict of values for the SSL connection. The
  available keys are:

  - ca_certs
  - cert_reqs
  - certfile
  - keyfile
  - ssl_version

For information on what the ssl_options can be set to, reference the
`official Python documentation <https://docs.python.org/3/library/ssl.html>`_.
Here is an example of setting the client certificate and key::

    amqp://www-data:rabbit_pwd@rabbit1/web_messages?heartbeat=30&ssl_options=%7B%27keyfile%27%3A+%27%2Fetc%2Fssl%2Fmykey.pem%27%2C+%27certfile%27%3A+%27%2Fetc%2Fssl%2Fmycert.pem%27%7D

The following example demonstrates how to generate the ssl_options string
with `Python's urllib <https://docs.python.org/2/library/urllib.html>`_::

    import urllib
    urllib.urlencode({'ssl_options': {'certfile': '/etc/ssl/mycert.pem',
                                      'keyfile': '/etc/ssl/mykey.pem'}})

.. rubric:: Footnotes

.. [#f1] The AMQP specification states that a server can reject a request
         for a frame size larger than the value it passes during content
         negotiation.

pika-0.11.0/docs/faq.rst

Frequently Asked Questions
--------------------------

- Is Pika thread safe?

    Pika does not have any notion of threading in the code.
If you want to use Pika with threading, make sure you have a Pika connection
  per thread, created in that thread. It is not safe to share one Pika
  connection across threads.

- How do I report a bug with Pika?

  The `main Pika repository `_ is hosted on `GitHub `_ and we use the Issue
  tracker at `https://github.com/pika/pika/issues `_.

- Is there a mailing list for Pika?

  Yes, Pika's mailing list is available `on Google Groups `_ and the email
  address is pika-python@googlegroups.com, though traditionally questions
  about Pika have been asked on the `RabbitMQ-Discuss mailing list `_.

- How can I contribute to Pika?

  You can `fork the project on GitHub `_ and issue `Pull Requests `_ when you
  believe you have something solid to be added to the main repository.

pika-0.11.0/docs/index.rst

Introduction to Pika
====================

Pika is a pure-Python implementation of the AMQP 0-9-1 protocol that tries to
stay fairly independent of the underlying network support library. If you have
not developed with Pika or RabbitMQ before, the :doc:`intro` documentation is
a good place to get started.

Installing Pika
---------------

Pika is available for download via PyPI and may be installed using
easy_install or pip::

    pip install pika

or::

    easy_install pika

To install from source, run "python setup.py install" in the root source
directory.

Using Pika
----------

..
toctree::
   :glob:
   :maxdepth: 1

   intro
   modules/index
   examples
   faq
   contributors
   version_history

Indices and tables
------------------

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

pika-0.11.0/docs/intro.rst

Introduction to Pika
====================

IO and Event Looping
--------------------

As AMQP is a two-way RPC protocol where the client can send requests to the
server and the server can send requests to a client, Pika implements or
extends IO loops in each of its asynchronous connection adapters. These IO
loops are blocking methods which loop and listen for events. Each asynchronous
adapter follows the same standard for invoking the IO loop. The IO loop is
created when the connection adapter is created. To start an IO loop for any
given adapter, call the ``connection.ioloop.start()`` method.

If you are using an external IO loop such as Tornado's
:class:`~tornado.ioloop.IOLoop`, you invoke it normally and then add the Pika
Tornado adapter to it.

Example::

    import pika

    def on_open(connection):
        # Invoked when the connection is open
        pass

    # Create our connection object, passing in the on_open method
    connection = pika.SelectConnection(on_open_callback=on_open)

    try:
        # Loop so we can communicate with RabbitMQ
        connection.ioloop.start()
    except KeyboardInterrupt:
        # Gracefully close the connection
        connection.close()
        # Loop until we're fully closed, will stop on its own
        connection.ioloop.start()

.. _intro_to_cps:

Continuation-Passing Style
--------------------------

Interfacing with Pika asynchronously is done by passing in callback methods
you would like to have invoked when a certain event completes. For example, if
you are going to declare a queue, you pass in a method that will be called
when the RabbitMQ server returns a `Queue.DeclareOk `_ response.

In our example below we use the following five easy steps:

#. We start by creating our connection object, then starting our event loop.
#.
When we are connected, the *on_connected* method is called. In that method
   we create a channel.
#. When the channel is created, the *on_channel_open* method is called. In
   that method we declare a queue.
#. When the queue is declared successfully, *on_queue_declared* is called. In
   that method we call :py:meth:`channel.basic_consume `, telling it to call
   handle_delivery for each message RabbitMQ delivers to us.
#. When RabbitMQ has a message to send us, it calls the handle_delivery
   method, passing the AMQP Method frame, Header frame, and Body.

.. NOTE::
   Step #1 is on line #28 and Step #2 is on line #6. This is so that Python
   knows about the functions we'll call in Steps #2 through #5.

.. _cps_example:

Example::

    import pika

    # Create a global channel variable to hold our channel object in
    channel = None

    # Step #2
    def on_connected(connection):
        """Called when we are fully connected to RabbitMQ"""
        # Open a channel
        connection.channel(on_channel_open)

    # Step #3
    def on_channel_open(new_channel):
        """Called when our channel has opened"""
        global channel
        channel = new_channel
        channel.queue_declare(queue="test", durable=True, exclusive=False,
                              auto_delete=False, callback=on_queue_declared)

    # Step #4
    def on_queue_declared(frame):
        """Called when RabbitMQ has told us our Queue has been declared,
        frame is the response from RabbitMQ"""
        channel.basic_consume(handle_delivery, queue='test')

    # Step #5
    def handle_delivery(channel, method, header, body):
        """Called when we receive a message from RabbitMQ"""
        print(body)

    # Step #1: Connect to RabbitMQ using the default parameters
    parameters = pika.ConnectionParameters()
    connection = pika.SelectConnection(parameters, on_connected)

    try:
        # Loop so we can communicate with RabbitMQ
        connection.ioloop.start()
    except KeyboardInterrupt:
        # Gracefully close the connection
        connection.close()
        # Loop until we're fully closed, will stop on its own
        connection.ioloop.start()

Credentials
-----------

The :mod:`pika.credentials` module provides the mechanism by
which you pass the username and password to the
:py:class:`ConnectionParameters ` class when it is created.

Example::

    import pika

    credentials = pika.PlainCredentials('username', 'password')
    parameters = pika.ConnectionParameters(credentials=credentials)

.. _connection_parameters:

Connection Parameters
---------------------

There are two types of connection parameter classes in Pika to allow you to
pass the connection information into a connection adapter,
:class:`ConnectionParameters ` and :class:`URLParameters `. Both classes share
the same default connection values.

.. _intro_to_backpressure:

TCP Backpressure
----------------

As of RabbitMQ 2.0, client side `Channel.Flow `_ has been removed [#f1]_.
Instead, the RabbitMQ broker uses TCP Backpressure to slow your client if it
is delivering messages too fast. If you pass backpressure_detection into your
connection parameters, Pika attempts to help you handle this situation by
providing a mechanism by which you may be notified if Pika has noticed too
many frames have yet to be delivered. By registering a callback function with
the :py:meth:`add_backpressure_callback ` method of any connection adapter,
your function will be called when Pika sees that a backlog of 10 times the
average frame size you have been sending has been exceeded. You may tweak the
notification multiplier value by calling the
:py:meth:`set_backpressure_multiplier ` method, passing any integer value.

Example::

    import pika

    parameters = pika.URLParameters('amqp://guest:guest@rabbit-server1:5672/%2F?backpressure_detection=t')

.. rubric:: Footnotes

..
[#f1] "more effective flow control mechanism that does not require
   cooperation from clients and reacts quickly to prevent the broker from
   exhausting memory - see http://www.rabbitmq.com/extensions.html#memsup"
   from
   http://lists.rabbitmq.com/pipermail/rabbitmq-announce/attachments/20100825/2c672695/attachment.txt

pika-0.11.0/docs/modules/adapters/asyncio.rst

asyncio Connection Adapter
==========================

.. automodule:: pika.adapters.asyncio_connection

Be sure to check out the :doc:`asynchronous examples ` including the asyncio
specific :doc:`consumer ` example.

.. autoclass:: pika.adapters.asyncio_connection.AsyncioConnection
  :members:
  :inherited-members:

pika-0.11.0/docs/modules/adapters/blocking.rst

BlockingConnection
------------------

.. automodule:: pika.adapters.blocking_connection

Be sure to check out examples in :doc:`/examples`.

.. autoclass:: pika.adapters.blocking_connection.BlockingConnection
  :members:
  :inherited-members:

.. autoclass:: pika.adapters.blocking_connection.BlockingChannel
  :members:
  :inherited-members:

pika-0.11.0/docs/modules/adapters/index.rst

Connection Adapters
===================

Pika uses connection adapters to provide a flexible method for adapting pika's
core communication to different IOLoop implementations. In addition to
asynchronous adapters, there is the :class:`BlockingConnection ` adapter that
provides a more idiomatic procedural approach to using Pika.

Adapters
--------

..
toctree::
   :glob:
   :maxdepth: 1

   blocking
   select
   tornado
   twisted

pika-0.11.0/docs/modules/adapters/select.rst

Select Connection Adapter
=========================

.. automodule:: pika.adapters.select_connection

.. autoclass:: pika.adapters.select_connection.SelectConnection
  :members:
  :inherited-members:

pika-0.11.0/docs/modules/adapters/tornado.rst

Tornado Connection Adapter
==========================

.. automodule:: pika.adapters.tornado_connection

Be sure to check out the :doc:`asynchronous examples ` including the Tornado
specific :doc:`consumer ` example.

.. autoclass:: pika.adapters.tornado_connection.TornadoConnection
  :members:
  :inherited-members:

pika-0.11.0/docs/modules/adapters/twisted.rst

Twisted Connection Adapter
==========================

.. automodule:: pika.adapters.twisted_connection

.. autoclass:: pika.adapters.twisted_connection.TwistedConnection
  :members:
  :inherited-members:

.. autoclass:: pika.adapters.twisted_connection.TwistedProtocolConnection
  :members:
  :inherited-members:

.. autoclass:: pika.adapters.twisted_connection.TwistedChannel
  :members:
  :inherited-members:

pika-0.11.0/docs/modules/channel.rst

Channel
=======

.. automodule:: pika.channel

Channel
-------

.. autoclass:: Channel
  :members:
  :inherited-members:
  :member-order: bysource

pika-0.11.0/docs/modules/connection.rst

Connection
----------

The :class:`~pika.connection.Connection` class implements the base behavior
that all connection adapters extend.

..
autoclass:: pika.connection.Connection
  :members:

pika-0.11.0/docs/modules/credentials.rst

Authentication Credentials
==========================

.. automodule:: pika.credentials

PlainCredentials
----------------

.. autoclass:: PlainCredentials
  :members:
  :inherited-members:
  :noindex:

ExternalCredentials
-------------------

.. autoclass:: ExternalCredentials
  :members:
  :inherited-members:
  :noindex:

pika-0.11.0/docs/modules/exceptions.rst

Exceptions
==========

.. automodule:: pika.exceptions
  :members:
  :undoc-members:

pika-0.11.0/docs/modules/index.rst

Core Class and Module Documentation
===================================

For the end user, Pika is organized into a small set of objects for all
communication with RabbitMQ.

- A :doc:`connection adapter ` is used to connect to RabbitMQ and manages the
  connection.
- :doc:`Connection parameters ` are used to instruct the
  :class:`~pika.connection.Connection` object how to connect to RabbitMQ.
- :doc:`credentials` are used to encapsulate all authentication information
  for the :class:`~pika.connection.ConnectionParameters` class.
- A :class:`~pika.channel.Channel` object is used to communicate with RabbitMQ
  via the AMQP RPC methods.
- :doc:`exceptions` are raised at various points when using Pika when
  something goes wrong.

..
toctree::
   :hidden:
   :maxdepth: 1

   adapters/index
   channel
   connection
   credentials
   exceptions
   parameters
   spec

pika-0.11.0/docs/modules/parameters.rst

Connection Parameters
=====================

To maintain flexibility in how you specify the connection information required
for your applications to properly connect to RabbitMQ, pika implements two
classes for encapsulating the information,
:class:`~pika.connection.ConnectionParameters` and
:class:`~pika.connection.URLParameters`.

ConnectionParameters
--------------------

The classic object for specifying all of the connection parameters required to
connect to RabbitMQ, :class:`~pika.connection.ConnectionParameters` provides
attributes for tweaking every possible connection option.

Example::

    import pika

    # Set the connection parameters to connect to rabbit-server1 on port 5672
    # on the / virtual host using the username "guest" and password "guest"
    credentials = pika.PlainCredentials('guest', 'guest')
    parameters = pika.ConnectionParameters('rabbit-server1', 5672, '/', credentials)

.. autoclass:: pika.connection.ConnectionParameters
  :members:
  :inherited-members:
  :member-order: bysource

URLParameters
-------------

The :class:`~pika.connection.URLParameters` class allows you to pass in an
AMQP URL when creating the object and supports the host, port, virtual host,
ssl, username and password in the base URL; other options are passed in via
query parameters.

Example::

    import pika

    # Set the connection parameters to connect to rabbit-server1 on port 5672
    # on the / virtual host using the username "guest" and password "guest"
    parameters = pika.URLParameters('amqp://guest:guest@rabbit-server1:5672/%2F')

.. autoclass:: pika.connection.URLParameters
  :members:
  :inherited-members:
  :member-order: bysource

pika-0.11.0/docs/modules/spec.rst

pika.spec
=========

..
automodule:: pika.spec
  :members:
  :inherited-members:
  :member-order: bysource
  :undoc-members:

pika-0.11.0/docs/version_history.rst

Version History
===============

Next Release
------------

0.11.0 2017-07-29
-----------------

`0.11.0 `_

`GitHub milestone `_

- Simplify Travis CI configuration for OS X.
- Add `asyncio` connection adapter for Python 3.4 and newer.
- Connection failures that occur after the socket is opened and before the
  AMQP connection is ready to go are now reported by calling the connection
  error callback. Previously these were not consistently reported.
- In BaseConnection.close, call _handle_ioloop_stop only if the connection is
  already closed to allow the asynchronous close operation to complete
  gracefully.
- Pass error information from failed socket connection to user callbacks
  on_open_error_callback and on_close_callback with result_code=-1.
- ValueError is raised when a completion callback is passed to an asynchronous
  (nowait) Channel operation. It's an application error to pass a non-None
  completion callback with an asynchronous request, because this callback can
  never be serviced in the asynchronous scenario.
- `Channel.basic_reject` fixed to allow `delivery_tag` to be of type `long` as
  well as `int`. (by quantum5)
- Implemented support for blocked connection timeouts in
  `pika.connection.Connection`. This feature is available to all pika
  adapters. See `pika.connection.ConnectionParameters` docstring to learn more
  about `blocked_connection_timeout` configuration.
- Deprecated the `heartbeat_interval` arg in `pika.ConnectionParameters` in
  favor of the `heartbeat` arg for consistency with the other connection
  parameters classes `pika.connection.Parameters` and `pika.URLParameters`.
- When the `port` arg is not set explicitly in `ConnectionParameters`
  constructor, but the `ssl` arg is set explicitly, then set the port value to
  the default AMQP SSL port if SSL is enabled, otherwise to the default AMQP
  plaintext port.
- `URLParameters` will raise ValueError if a non-empty URL scheme other than
  {amqp | amqps | http | https} is specified.
- `InvalidMinimumFrameSize` and `InvalidMaximumFrameSize` exceptions are
  deprecated. pika.connection.Parameters.frame_max property setter now raises
  the standard `ValueError` exception when the value is out of bounds.
- Removed deprecated parameter `type` in `Channel.exchange_declare` and
  `BlockingChannel.exchange_declare` in favor of the `exchange_type` arg that
  doesn't overshadow the builtin `type` keyword.
- Channel.close() on OPENING channel transitions it to CLOSING instead of
  raising ChannelClosed.
- Channel.close() on CLOSING channel raises `ChannelAlreadyClosing`; used to
  raise `ChannelClosed`.
- Connection.channel() raises `ConnectionClosed` if connection is not in OPEN
  state.
- When performing graceful close on a channel and `Channel.Close` from broker
  arrives while waiting for CloseOk, don't release the channel number until
  CloseOk arrives to avoid race condition that may lead to a new channel
  receiving the CloseOk that was destined for the closing channel.
- The `backpressure_detection` option of `ConnectionParameters` and
  `URLParameters` property is DEPRECATED in favor of `Connection.Blocked` and
  `Connection.Unblocked`. See `Connection.add_on_connection_blocked_callback`.
0.10.0 2015-09-02 ----------------- `0.10.0 `_ - LibevConnection: Fixed dict chgd size during iteration (Michael Laing) - SelectConnection: Fixed KeyError exceptions in IOLoop timeout executions (Shinji Suzuki) - BlockingConnection: Add support to make BlockingConnection a Context Manager (@reddec) 0.10.0b2 2015-07-15 ------------------- - f72b58f - Fixed failure to purge _ConsumerCancellationEvt from BlockingChannel._pending_events during basic_cancel. (Vitaly Kruglikov) 0.10.0b1 2015-07-10 ------------------- High-level summary of notable changes: - Change to 3-Clause BSD License - Python 3.x support - Over 150 commits from 19 contributors - Refactoring of SelectConnection ioloop - This major release contains certain non-backward-compatible API changes as well as significant performance improvements in the `BlockingConnection` adapter. - Non-backward-compatible changes in `Channel.add_on_return_callback` callback's signature. - The `AsynchoreConnection` adapter was retired **Details** Python 3.x: this release introduces python 3.x support. Tested on Python 3.3 and 3.4. `AsynchoreConnection`: Retired this legacy adapter to reduce maintenance burden; the recommended replacement is the `SelectConnection` adapter. `SelectConnection`: ioloop was refactored for compatibility with other ioloops. `Channel.add_on_return_callback`: The callback is now passed the individual parameters channel, method, properties, and body instead of a tuple of those values for congruence with other similar callbacks. `BlockingConnection`: This adapter underwent a makeover under the hood and gained significant performance improvements as well as ehnanced timer resolution. It is now implemented as a client of the `SelectConnection` adapter. 
Below is an overview of the `BlockingConnection` and `BlockingChannel` API changes: - Recursion: the new implementation eliminates callback recursion that sometimes blew out the stack in the legacy implementation (e.g., publish -> consumer_callback -> publish -> consumer_callback, etc.). While `BlockingConnection.process_data_events` and `BlockingConnection.sleep` may still be called from the scope of the blocking adapter's callbacks in order to process pending I/O, additional callbacks will be suppressed whenever `BlockingConnection.process_data_events` and `BlockingConnection.sleep` are nested in any combination; in that case, the callback information will be bufferred and dispatched once nesting unwinds and control returns to the level-zero dispatcher. - `BlockingConnection.connect`: this method was removed in favor of the constructor as the only way to establish connections; this reduces maintenance burden, while improving reliability of the adapter. - `BlockingConnection.process_data_events`: added the optional parameter `time_limit`. - `BlockingConnection.add_on_close_callback`: removed; legacy raised `NotImplementedError`. - `BlockingConnection.add_on_open_callback`: removed; legacy raised `NotImplementedError`. - `BlockingConnection.add_on_open_error_callback`: removed; legacy raised `NotImplementedError`. - `BlockingConnection.add_backpressure_callback`: not supported - `BlockingConnection.set_backpressure_multiplier`: not supported - `BlockingChannel.add_on_flow_callback`: not supported; per docstring in channel.py: "Note that newer versions of RabbitMQ will not issue this but instead use TCP backpressure". - `BlockingChannel.flow`: not supported - `BlockingChannel.force_data_events`: removed as it is no longer necessary following redesign of the adapter. 
- Removed the `nowait` parameter from `BlockingChannel` methods, forcing `nowait=False` (former API default) in the implementation; this is more suitable for the blocking nature of the adapter and its error-reporting strategy; this concerns the following methods: `basic_cancel`, `confirm_delivery`, `exchange_bind`, `exchange_declare`, `exchange_delete`, `exchange_unbind`, `queue_bind`, `queue_declare`, `queue_delete`, and `queue_purge`. - `BlockingChannel.basic_cancel`: returns a sequence instead of None; for a `no_ack=True` consumer, `basic_cancel` returns a sequence of pending messages that arrived before broker confirmed the cancellation. - `BlockingChannel.consume`: added new optional kwargs `arguments` and `inactivity_timeout`. Also, raises ValueError if the consumer creation parameters don't match those used to create the existing queue consumer generator, if any; this happens when you break out of the consume loop, then call `BlockingChannel.consume` again with different consumer-creation args without first cancelling the previous queue consumer generator via `BlockingChannel.cancel`. The legacy implementation would silently resume consuming from the existing queue consumer generator even if the subsequent `BlockingChannel.consume` was invoked with a different queue name, etc. - `BlockingChannel.cancel`: returns 0; the legacy implementation tried to return the number of requeued messages, but this number was not accurate as it didn't include the messages returned by the Channel class; this count is not generally useful, so returning 0 is a reasonable replacement. - `BlockingChannel.open`: removed in favor of having a single mechanism for creating a channel (`BlockingConnection.channel`); this reduces maintenance burden, while improving reliability of the adapter. - `BlockingChannel.confirm_delivery`: raises UnroutableError when unroutable messages that were sent prior to this call are returned before we receive Confirm.Select-ok. 
- `BlockingChannel.basic_publish: always returns True when delivery confirmation is not enabled (publisher-acks = off); the legacy implementation returned a bool in this case if `mandatory=True` to indicate whether the message was delivered; however, this was non-deterministic, because Basic.Return is asynchronous and there is no way to know how long to wait for it or its absence. The legacy implementation returned None when publishing with publisher-acks = off and `mandatory=False`. The new implementation always returns True when publishing while publisher-acks = off. - `BlockingChannel.publish`: a new alternate method (vs. `basic_publish`) for publishing a message with more detailed error reporting via UnroutableError and NackError exceptions. - `BlockingChannel.start_consuming`: raises pika.exceptions.RecursionError if called from the scope of a `BlockingConnection` or `BlockingChannel` callback. - `BlockingChannel.get_waiting_message_count`: new method; returns the number of messages that may be retrieved from the current queue consumer generator via `BasicChannel.consume` without blocking. **Commits** - 5aaa753 - Fixed SSL import and removed no_ack=True in favor of explicit AMQP message handling based on deferreds (skftn) - 7f222c2 - Add checkignore for codeclimate (Gavin M. Roy) - 4dec370 - Implemented BlockingChannel.flow; Implemented BlockingConnection.add_on_connection_blocked_callback; Implemented BlockingConnection.add_on_connection_unblocked_callback. (Vitaly Kruglikov) - 4804200 - Implemented blocking adapter acceptance test for exchange-to-exchange binding. Added rudimentary validation of BasicProperties passthru in blocking adapter publish tests. Updated CHANGELOG. (Vitaly Kruglikov) - 4ec07fd - Fixed sending of data in TwistedProtocolConnection (Vitaly Kruglikov) - a747fb3 - Remove my copyright from forward_server.py test utility. (Vitaly Kruglikov) - 94246d2 - Return True from basic_publish when pubacks is off. 
Implemented more blocking adapter accceptance tests. (Vitaly Kruglikov) - 3ce013d - PIKA-609 Wait for broker to dispatch all messages to client before cancelling consumer in TestBasicCancelWithNonAckableConsumer and TestBasicCancelWithAckableConsumer (Vitaly Kruglikov) - 293f778 - Created CHANGELOG entry for release 0.10.0. Fixed up callback documentation for basic_get, basic_consume, and add_on_return_callback. (Vitaly Kruglikov) - 16d360a - Removed the legacy AsyncoreConnection adapter in favor of the recommended SelectConnection adapter. (Vitaly Kruglikov) - 240a82c - Defer creation of poller's event loop interrupt socket pair until start is called, because some SelectConnection users (e.g., BlockingConnection adapter) don't use the event loop, and these sockets would just get reported as resource leaks. (Vitaly Kruglikov) - aed5cae - Added EINTR loops in select_connection pollers. Addressed some pylint findings, including an error or two. Wrap socket.send and socket.recv calls in EINTR loops Use the correct exception for socket.error and select.error and get errno depending on python version. (Vitaly Kruglikov) - 498f1be - Allow passing exchange, queue and routing_key as text, handle short strings as text in python3 (saarni) - 9f7f243 - Restored basic_consume, basic_cancel, and add_on_cancel_callback (Vitaly Kruglikov) - 18c9909 - Reintroduced BlockingConnection.process_data_events. (Vitaly Kruglikov) - 4b25cb6 - Fixed BlockingConnection/BlockingChannel acceptance and unit tests (Vitaly Kruglikov) - bfa932f - Facilitate proper connection state after BasicConnection._adapter_disconnect (Vitaly Kruglikov) - 9a09268 - Fixed BlockingConnection test that was failing with ConnectionClosed error. 
(Vitaly Kruglikov) - 5a36934 - Copied synchronous_connection.py from pika-synchronous branch Fixed pylint findings Integrated SynchronousConnection with the new ioloop in SelectConnection Defined dedicated message classes PolledMessage and ConsumerMessage and moved from BlockingChannel to module-global scope. Got rid of nowait args from BlockingChannel public API methods Signal unroutable messages via UnroutableError exception. Signal Nack'ed messages via NackError exception. These expose more information about the failure than legacy basic_publich API. Removed set_timeout and backpressure callback methods Restored legacy `is_open`, etc. property names (Vitaly Kruglikov) - 6226dc0 - Remove deprecated --use-mirrors (Gavin M. Roy) - 1a7112f - Raise ConnectionClosed when sending a frame with no connection (#439) (Gavin M. Roy) - 9040a14 - Make delivery_tag non-optional (#498) (Gavin M. Roy) - 86aabc2 - Bump version (Gavin M. Roy) - 562075a - Update a few testing things (Gavin M. Roy) - 4954d38 - use unicode_type in blocking_connection.py (Antti Haapala) - 133d6bc - Let Travis install ordereddict for Python 2.6, and ttest 3.3, 3.4 too. (Antti Haapala) - 0d2287d - Pika Python 3 support (Antti Haapala) - 3125c79 - SSLWantRead is not supported before python 2.7.9 and 3.3 (Will) - 9a9c46c - Fixed TestDisconnectDuringConnectionStart: it turns out that depending on callback order, it might get either ProbableAuthenticationError or ProbableAccessDeniedError. (Vitaly Kruglikov) - cd8c9b0 - A fix the write starvation problem that we see with tornado and pika (Will) - 8654fbc - SelectConnection - make interrupt socketpair non-blocking (Will) - 4f3666d - Added copyright in forward_server.py and fixed NameError bug (Vitaly Kruglikov) - f8ebbbc - ignore docs (Gavin M. Roy) - a344f78 - Updated codeclimate config (Gavin M. Roy) - 373c970 - Try and fix pathing issues in codeclimate (Gavin M. Roy) - 228340d - Ignore codegen (Gavin M. Roy) - 4db0740 - Add a codeclimate config (Gavin M. 
Roy) - 7e989f9 - Slight code re-org, usage comment and better naming of test file. (Will) - 287be36 - Set up _kqueue member of KQueuePoller before calling super constructor to avoid exception due to missing _kqueue member. Call `self._map_event(event)` instead of `self._map_event(event.filter)`, because `KQueuePoller._map_event()` assumes it's getting an event, not an event filter. (Vitaly Kruglikov) - 62810fb - Fix issue #412: reset BlockingConnection._read_poller in BlockingConnection._adapter_disconnect() to guard against accidental access to old file descriptor. (Vitaly Kruglikov) - 03400ce - Rationalise adapter acceptance tests (Will) - 9414153 - Fix bug selecting non epoll poller (Will) - 4f063df - Use user heartbeat setting if server proposes none (Pau Gargallo) - 9d04d6e - Deactivate heartbeats when heartbeat_interval is 0 (Pau Gargallo) - a52a608 - Bug fix and review comments. (Will) - e3ebb6f - Fix incorrect x-expires argument in acceptance tests (Will) - 294904e - Get BlockingConnection into consistent state upon loss of TCP/IP connection with broker and implement acceptance tests for those cases. (Vitaly Kruglikov) - 7f91a68 - Make SelectConnection behave like an ioloop (Will) - dc9db2b - Perhaps 5 seconds is too agressive for travis (Gavin M. Roy) - c23e532 - Lower the stuck test timeout (Gavin M. Roy) - 1053ebc - Late night bug (Gavin M. Roy) - cd6c1bf - More BaseConnection._handle_error cleanup (Gavin M. Roy) - a0ff21c - Fix the test to work with Python 2.6 (Gavin M. Roy) - 748e8aa - Remove pypy for now (Gavin M. Roy) - 1c921c1 - Socket close/shutdown cleanup (Gavin M. Roy) - 5289125 - Formatting update from PR (Gavin M. Roy) - d235989 - Be more specific when calling getaddrinfo (Gavin M. Roy) - b5d1b31 - Reflect the method name change in pika.callback (Gavin M. Roy) - df7d3b7 - Cleanup BlockingConnection in a few places (Gavin M. Roy) - cd98e1c - Rename method due to use in BlockingConnection (Gavin M. 
Roy) - 7e0d1b3 - Use google style with yapf instead of pep8 (Gavin M. Roy) - 7dc9bab - Refactor socket writing to not use sendall #481 (Gavin M. Roy) - 4838789 - Dont log the fd #521 (Gavin M. Roy) - 765107d - Add Connection.Blocked callback registration methods #476 (Gavin M. Roy) - c15b5c1 - Fix _blocking typo pointed out in #513 (Gavin M. Roy) - 759ac2c - yapf of codegen (Gavin M. Roy) - 9dadd77 - yapf cleanup of codegen and spec (Gavin M. Roy) - ddba7ce - Do not reject consumers with no_ack=True #486 #530 (Gavin M. Roy) - 4528a1a - yapf reformatting of tests (Gavin M. Roy) - e7b6d73 - Remove catching AttributError (#531) (Gavin M. Roy) - 41ea5ea - Update README badges [skip ci] (Gavin M. Roy) - 6af987b - Add note on contributing (Gavin M. Roy) - 161fc0d - yapf formatting cleanup (Gavin M. Roy) - edcb619 - Add PYPY to travis testing (Gavin M. Roy) - 2225771 - Change the coverage badge (Gavin M. Roy) - 8f7d451 - Move to codecov from coveralls (Gavin M. Roy) - b80407e - Add confirm_delivery to example (Andrew Smith) - 6637212 - Update base_connection.py (bstemshorn) - 1583537 - #544 get_waiting_message_count() (markcf) - 0c9be99 - Fix #535: pass expected reply_code and reply_text from method frame to Connection._on_disconnect from Connection._on_connection_closed (Vitaly Kruglikov) - d11e73f - Propagate ConnectionClosed exception out of BlockingChannel._send_method() and log ConnectionClosed in BlockingConnection._on_connection_closed() (Vitaly Kruglikov) - 63d2951 - Fix #541 - make sure connection state is properly reset when BlockingConnection._check_state_on_disconnect raises ConnectionClosed. This supplements the previously-merged PR #450 by getting the connection into consistent state. 
(Vitaly Kruglikov) - 71bc0eb - Remove unused self.fd attribute from BaseConnection (Vitaly Kruglikov) - 8c08f93 - PIKA-532 Removed unnecessary params (Vitaly Kruglikov) - 6052ecf - PIKA-532 Fix bug in BlockingConnection._handle_timeout that was preventing _on_connection_closed from being called when not closing. (Vitaly Kruglikov) - 562aa15 - pika: callback: Display exception message when callback fails. (Stuart Longland) - 452995c - Typo fix in connection.py (Andrew) - 361c0ad - Added some missing yields (Robert Weidlich) - 0ab5a60 - Added complete example for python twisted service (Robert Weidlich) - 4429110 - Add deployment and webhooks (Gavin M. Roy) - 7e50302 - Fix has_content style in codegen (Andrew Grigorev) - 28c2214 - Fix the trove categorization (Gavin M. Roy) - de8b545 - Ensure frames can not be interspersed on send (Gavin M. Roy) - 8fe6bdd - Fix heartbeat behaviour after connection failure. (Kyösti Herrala) - c123472 - Updating BlockingChannel.basic_get doc (it does not receive a callback like the rest of the adapters) (Roberto Decurnex) - b5f52fb - Fix number of arguments passed to _on_return callback (Axel Eirola) - 765139e - Lower default TIMEOUT to 0.01 (bra-fsn) - 6cc22a5 - Fix confirmation on reconnects (bra-fsn) - f4faf0a - asynchronous publisher and subscriber examples refactored to follow the StepDown rule (Riccardo Cirimelli) 0.9.14 - 2014-07-11 ------------------- `0.9.14 `_ - 57fe43e - fix test to generate a correct range of random ints (ml) - 0d68dee - fix async watcher for libev_connection (ml) - 01710ad - Use default username and password if not specified in URLParameters (Sean Dwyer) - fae328e - documentation typo (Jeff Fein-Worton) - afbc9e0 - libev_connection: reset_io_watcher (ml) - 24332a2 - Fix the manifest (Gavin M. Roy) - acdfdef - Remove useless test (Gavin M. Roy) - 7918e1a - Skip libev tests if pyev is not installed or if they are being run in pypy (Gavin M. Roy) - bb583bf - Remove the deprecated test (Gavin M. 
Roy) - aecf3f2 - Don't reject a message if the channel is not open (Gavin M. Roy) - e37f336 - Remove UTF-8 decoding in spec (Gavin M. Roy) - ddc35a9 - Update the unittest to reflect removal of force binary (Gavin M. Roy) - fea2476 - PEP8 cleanup (Gavin M. Roy) - 9b97956 - Remove force_binary (Gavin M. Roy) - a42dd90 - Whitespace required (Gavin M. Roy) - 85867ea - Update the content_frame_dispatcher tests to reflect removal of auto-cast utf-8 (Gavin M. Roy) - 5a4bd5d - Remove unicode casting (Gavin M. Roy) - efea53d - Remove force binary and unicode casting (Gavin M. Roy) - e918d15 - Add methods to remove deprecation warnings from asyncore (Gavin M. Roy) - 117f62d - Add a coveragerc to ignore the auto generated pika.spec (Gavin M. Roy) - 52f4485 - Remove pypy tests from travis for now (Gavin M. Roy) - c3aa958 - Update README.rst (Gavin M. Roy) - 3e2319f - Delete README.md (Gavin M. Roy) - c12b0f1 - Move to RST (Gavin M. Roy) - 704f5be - Badging updates (Gavin M. Roy) - 7ae33ca - Update for coverage info (Gavin M. Roy) - ae7ca86 - add libev_adapter_tests.py; modify .travis.yml to install libev and pyev (ml) - f86aba5 - libev_connection: add **kwargs to _handle_event; suppress default_ioloop reuse warning (ml) - 603f1cf - async_test_base: add necessary args to _on_cconn_closed (ml) - 3422007 - add libev_adapter_tests.py (ml) - 6cbab0c - removed relative imports and importing urlparse from urllib.parse for py3+ (a-tal) - f808464 - libev_connection: add async watcher; add optional parameters to add_timeout (ml) - c041c80 - Remove ev all together for now (Gavin M. Roy) - 9408388 - Update the test descriptions and timeout (Gavin M. Roy) - 1b552e0 - Increase timeout (Gavin M. Roy) - 69a1f46 - Remove the pyev requirement for 2.6 testing (Gavin M. Roy) - fe062d2 - Update package name (Gavin M. Roy) - 611ad0e - Distribute the LICENSE and README.md (#350) (Gavin M. Roy) - df5e1d8 - Ensure that the entire frame is written using socket.sendall (#349) (Gavin M. 
Roy) - 69ec8cf - Move the libev install to before_install (Gavin M. Roy) - a75f693 - Update test structure (Gavin M. Roy) - 636b424 - Update things to ignore (Gavin M. Roy) - b538c68 - Add tox, nose.cfg, update testing config (Gavin M. Roy) - a0e7063 - add some tests to increase coverage of pika.connection (Charles Law) - c76d9eb - Address issue #459 (Gavin M. Roy) - 86ad2db - Raise exception if positional arg for parameters isn't an instance of Parameters (Gavin M. Roy) - 14d08e1 - Fix for python 2.6 (Gavin M. Roy) - bd388a3 - Use the first unused channel number addressing #404, #460 (Gavin M. Roy) - e7676e6 - removing a debug that was left in last commit (James Mutton) - 6c93b38 - Fixing connection-closed behavior to detect on attempt to publish (James Mutton) - c3f0356 - Initialize bytes_written in _handle_write() (Jonathan Kirsch) - 4510e95 - Fix _handle_write() may not send full frame (Jonathan Kirsch) - 12b793f - fixed Tornado Consumer example to successfully reconnect (Yang Yang) - f074444 - remove forgotten import of ordereddict (Pedro Abranches) - 1ba0aea - fix last merge (Pedro Abranches) - 10490a6 - change timeouts structure to list to maintain scheduling order (Pedro Abranches) - 7958394 - save timeouts in ordered dict instead of dict (Pedro Abranches) - d2746bf - URLParameters and ConnectionParameters accept unicode strings (Allard Hoeve) - 596d145 - previous fix for AttributeError made parent and child class methods identical, remove duplication (James Mutton) - 42940dd - UrlParameters Docs: fixed amqps scheme examples (Riccardo Cirimelli) - 43904ff - Dont test this in PyPy due to sort order issue (Gavin M. Roy) - d7d293e - Don't leave __repr__ sorting up to chance (Gavin M. Roy) - 848c594 - Add integration test to travis and fix invocation (Gavin M. Roy) - 2678275 - Add pypy to travis tests (Gavin M. Roy) - 1877f3d - Also addresses issue #419 (Gavin M. Roy) - 470c245 - Address issue #419 (Gavin M. Roy) - ca3cb59 - Address issue #432 (Gavin M. 
Roy) - a3ff6f2 - Default frame max should be AMQP FRAME_MAX (Gavin M. Roy) - ff3d5cb - Remove max consumer tag test due to change in code. (Gavin M. Roy) - 6045dda - Catch KeyError (#437) to ensure that an exception is not raised in a race condition (Gavin M. Roy) - 0b4d53a - Address issue #441 (Gavin M. Roy) - 180e7c4 - Update license and related files (Gavin M. Roy) - 256ed3d - Added Jython support. (Erik Olof Gunnar Andersson) - f73c141 - experimental work around for recursion issue. (Erik Olof Gunnar Andersson) - a623f69 - Prevent #436 by iterating the keys and not the dict (Gavin M. Roy) - 755fcae - Add support for authentication_failure_close, connection.blocked (Gavin M. Roy) - c121243 - merge upstream master (Michael Laing) - a08dc0d - add arg to channel.basic_consume (Pedro Abranches) - 10b136d - Documentation fix (Anton Ryzhov) - 9313307 - Fixed minor markup errors. (Jorge Puente Sarrín) - fb3e3cf - Fix the spelling of UnsupportedAMQPFieldException (Garrett Cooper) - 03d5da3 - connection.py: Propagate the force_channel keyword parameter to methods involved in channel creation (Michael Laing) - 7bbcff5 - Documentation fix for basic_publish (JuhaS) - 01dcea7 - Expose no_ack and exclusive to BlockingChannel.consume (Jeff Tang) - d39b6aa - Fix BlockingChannel.basic_consume does not block on non-empty queues (Juhyeong Park) - 6e1d295 - fix for issue 391 and issue 307 (Qi Fan) - d9ffce9 - Update parameters.rst (cacovsky) - 6afa41e - Add additional badges (Gavin M. Roy) - a255925 - Fix return value on dns resolution issue (Laurent Eschenauer) - 3f7466c - libev_connection: tweak docs (Michael Laing) - 0aaed93 - libev_connection: Fix varable naming (Michael Laing) - 0562d08 - libev_connection: Fix globals warning (Michael Laing) - 22ada59 - libev_connection: use globals to track sigint and sigterm watchers as they are created globally within libev (Michael Laing) - 2649b31 - Move badge [skip ci] (Gavin M. 
Roy) - f70eea1 - Remove pypy and installation attempt of pyev (Gavin M. Roy) - f32e522 - Conditionally skip external connection adapters if lib is not installed (Gavin M. Roy) - cce97c5 - Only install pyev on python 2.7 (Gavin M. Roy) - ff84462 - Add travis ci support (Gavin M. Roy) - cf971da - lib_evconnection: improve signal handling; add callback (Michael Laing) - 9adb269 - bugfix in returning a list in Py3k (Alex Chandel) - c41d5b9 - update exception syntax for Py3k (Alex Chandel) - c8506f1 - fix _adapter_connect (Michael Laing) - 67cb660 - Add LibevConnection to README (Michael Laing) - 1f9e72b - Propagate low-level connection errors to the AMQPConnectionError. (Bjorn Sandberg) - e1da447 - Avoid race condition in _on_getok on successive basic_get() when clearing out callbacks (Jeff) - 7a09979 - Add support for upcoming Connection.Blocked/Unblocked (Gavin M. Roy) - 53cce88 - TwistedChannel correctly handles multi-argument deferreds. (eivanov) - 66f8ace - Use uuid when creating unique consumer tag (Perttu Ranta-aho) - 4ee2738 - Limit the growth of Channel._cancelled, use deque instead of list. (Perttu Ranta-aho) - 0369aed - fix adapter references and tweak docs (Michael Laing) - 1738c23 - retry select.select() on EINTR (Cenk Alti) - 1e55357 - libev_connection: reset internal state on reconnect (Michael Laing) - 708559e - libev adapter (Michael Laing) - a6b7c8b - Prioritize EPollPoller and KQueuePoller over PollPoller and SelectPoller (Anton Ryzhov) - 53400d3 - Handle socket errors in PollPoller and EPollPoller Correctly check 'select.poll' availability (Anton Ryzhov) - a6dc969 - Use dict.keys & items instead of iterkeys & iteritems (Alex Chandel) - 5c1b0d0 - Use print function syntax, in examples (Alex Chandel) - ac9f87a - Fixed a typo in the name of the Asyncore Connection adapter (Guruprasad) - dfbba50 - Fixed bug mentioned in Issue #357 (Erik Andersson) - c906a2d - Drop additional flags when getting info for the hostnames, log errors (#352) (Gavin M. 
Roy) - baf23dd - retry poll() on EINTR (Cenk Alti) - 7cd8762 - Address ticket #352 catching an error when socket.getprotobyname fails (Gavin M. Roy) - 6c3ec75 - Prep for 0.9.14 (Gavin M. Roy) - dae7a99 - Bump to 0.9.14p0 (Gavin M. Roy) - 620edc7 - Use default port and virtual host if omitted in URLParameters (Issue #342) (Gavin M. Roy) - 42a8787 - Move the exception handling inside the while loop (Gavin M. Roy) - 10e0264 - Fix connection back pressure detection issue #347 (Gavin M. Roy) - 0bfd670 - Fixed mistake in commit 3a19d65. (Erik Andersson) - da04bc0 - Fixed Unknown state on disconnect error message generated when closing connections. (Erik Andersson) - 3a19d65 - Alternative solution to fix #345. (Erik Andersson) - abf9fa8 - switch to sendall to send entire frame (Dustin Koupal) - 9ce8ce4 - Fixed the async publisher example to work with reconnections (Raphaël De Giusti) - 511028a - Fix typo in TwistedChannel docstring (cacovsky) - 8b69e5a - calls self._adapter_disconnect() instead of self.disconnect() which doesn't actually exist #294 (Mark Unsworth) - 06a5cf8 - add NullHandler to prevent logging warnings (Cenk Alti) - f404a9a - Fix #337 cannot start ioloop after stop (Ralf Nyren) 0.9.13 - 2013-05-15 ------------------- `0.9.13 `_ **Major Changes** - IPv6 Support with thanks to Alessandro Tagliapietra for initial prototype - Officially remove support for <= Python 2.5 even though it was broken already - Drop pika.simplebuffer.SimpleBuffer in favor of the Python stdlib collections.deque object - New default object for receiving content is a "bytes" object which is a str wrapper in Python 2, but paves way for Python 3 support - New "Raw" mode for frame decoding content frames (#334) addresses issues #331, #229 added by Garth Williamson - Connection and Disconnection logic refactored, allowing for cleaner separation of protocol logic and socket handling logic as well as connection state management - New "on_open_error_callback" argument in creating connection 
objects and new Connection.add_on_open_error_callback method - New Connection.connect method to cleanly allow for reconnection code - Support for all AMQP field types, using protocol specified signed/unsigned unpacking **Backwards Incompatible Changes** - Method signature for creating connection objects has new argument "on_open_error_callback" which is positionally before "on_close_callback" - Internal callback variable names in connection.Connection have been renamed and constants used. If you relied on any of these callbacks outside of their internal use, make sure to check out the new constants. - Connection._connect method, which was an internal only method is now deprecated and will raise a DeprecationWarning. If you relied on this method, your code needs to change. - pika.simplebuffer has been removed **Bugfixes** - BlockingConnection consumer generator does not free buffer when exited (#328) - Unicode body payloads in the blocking adapter raises exception (#333) - Support "b" short-short-int AMQP data type (#318) - Docstring type fix in adapters/select_connection (#316) fix by Rikard Hultén - IPv6 not supported (#309) - Stop the HeartbeatChecker when connection is closed (#307) - Unittest fix for SelectConnection (#336) fix by Erik Andersson - Handle condition where no connection or socket exists but SelectConnection needs a timeout for retrying a connection (#322) - TwistedAdapter lagging behind BaseConnection changes (#321) fix by Jan Urbański **Other** - Refactored documentation - Added Twisted Adapter example (#314) by nolinksoft 0.9.12 - 2013-03-18 ------------------- `0.9.12 `_ **Bugfixes** - New timeout id hashing was not unique 0.9.11 - 2013-03-17 ------------------- `0.9.11 `_ **Bugfixes** - Address inconsistent channel close callback documentation and add the signature change to the TwistedChannel class (#305) - Address a missed timeout related internal data structure name change introduced in the SelectConnection 0.9.10 release. 
Update all connection adapters to use same signature and docstring (#306). 0.9.10 - 2013-03-16 ------------------- `0.9.10 `_ **Bugfixes** - Fix timeout in twisted adapter (Submitted by cellscape) - Fix blocking_connection poll timer resolution to milliseconds (Submitted by cellscape) - Fix channel._on_close() without a method frame (Submitted by Richard Boulton) - Addressed exception on close (Issue #279 - fix by patcpsc) - 'messages' not initialized in BlockingConnection.cancel() (Issue #289 - fix by Mik Kocikowski) - Make queue_unbind behave like queue_bind (Issue #277) - Address closing behavioral issues for connections and channels (Issue #275) - Pass a Method frame to Channel._on_close in Connection._on_disconnect (Submitted by Jan Urbański) - Fix channel closed callback signature in the Twisted adapter (Submitted by Jan Urbański) - Don't stop the IOLoop on connection close for in the Twisted adapter (Submitted by Jan Urbański) - Update the asynchronous examples to fix reconnecting and have it work - Warn if the socket was closed such as if RabbitMQ dies without a Close frame - Fix URLParameters ssl_options (Issue #296) - Add state to BlockingConnection addressing (Issue #301) - Encode unicode body content prior to publishing (Issue #282) - Fix an issue with unicode keys in BasicProperties headers key (Issue #280) - Change how timeout ids are generated (Issue #254) - Address post close state issues in Channel (Issue #302) ** Behavior changes ** - Change core connection communication behavior to prefer outbound writes over reads, addressing a recursion issue - Update connection on close callbacks, changing callback method signature - Update channel on close callbacks, changing callback method signature - Give more info in the ChannelClosed exception - Change the constructor signature for BlockingConnection, block open/close callbacks - Disable the use of add_on_open_callback/add_on_close_callback methods in BlockingConnection 0.9.9 - 2013-01-29 
------------------ `0.9.9 `_ **Bugfixes** - Only remove the tornado_connection.TornadoConnection file descriptor from the IOLoop if it's still open (Issue #221) - Allow messages with no body (Issue #227) - Allow for empty routing keys (Issue #224) - Don't raise an exception when trying to send a frame to a closed connection (Issue #229) - Only send a Connection.CloseOk if the connection is still open. (Issue #236 - Fix by noleaf) - Fix timeout threshold in blocking connection - (Issue #232 - Fix by Adam Flynn) - Fix closing connection while a channel is still open (Issue #230 - Fix by Adam Flynn) - Fixed misleading warning and exception messages in BaseConnection (Issue #237 - Fix by Tristan Penman) - Pluralised and altered the wording of the AMQPConnectionError exception (Issue #237 - Fix by Tristan Penman) - Fixed _adapter_disconnect in TornadoConnection class (Issue #237 - Fix by Tristan Penman) - Fixing hang when closing connection without any channel in BlockingConnection (Issue #244 - Fix by Ales Teska) - Remove the process_timeouts() call in SelectConnection (Issue #239) - Change the string validation to basestring for host connection parameters (Issue #231) - Add a poller to the BlockingConnection to address latency issues introduced in Pika 0.9.8 (Issue #242) - reply_code and reply_text is not set in ChannelException (Issue #250) - Add the missing constraint parameter for Channel._on_return callback processing (Issue #257 - Fix by patcpsc) - Channel callbacks not being removed from callback manager when channel is closed or deleted (Issue #261) 0.9.8 - 2012-11-18 ------------------ `0.9.8 `_ **Bugfixes** - Channel.queue_declare/BlockingChannel.queue_declare not setting up callbacks property for empty queue name (Issue #218) - Channel.queue_bind/BlockingChannel.queue_bind not allowing empty routing key - Connection._on_connection_closed calling wrong method in Channel (Issue #219) - Fix tx_commit and tx_rollback bugs in BlockingChannel (Issue #217) 0.9.7 - 
2012-11-11 ------------------ `0.9.7 `_ **New features** - generator based consumer in BlockingChannel (See :doc:`examples/blocking_consumer_generator` for example) **Changes** - BlockingChannel._send_method will only wait if explicitly told to **Bugfixes** - Added the exchange "type" parameter back but issue a DeprecationWarning - Dont require a queue name in Channel.queue_declare() - Fixed KeyError when processing timeouts (Issue # 215 - Fix by Raphael De Giusti) - Don't try and close channels when the connection is closed (Issue #216 - Fix by Charles Law) - Dont raise UnexpectedFrame exceptions, log them instead - Handle multiple synchronous RPC calls made without waiting for the call result (Issues #192, #204, #211) - Typo in docs (Issue #207 Fix by Luca Wehrstedt) - Only sleep on connection failure when retry attempts are > 0 (Issue #200) - Bypass _rpc method and just send frames for Basic.Ack, Basic.Nack, Basic.Reject (Issue #205) 0.9.6 - 2012-10-29 ------------------ `0.9.6 `_ **New features** - URLParameters - BlockingChannel.start_consuming() and BlockingChannel.stop_consuming() - Delivery Confirmations - Improved unittests **Major bugfix areas** - Connection handling - Blocking functionality in the BlockingConnection - SSL - UTF-8 Handling **Removals** - pika.reconnection_strategies - pika.channel.ChannelTransport - pika.log - pika.template - examples directory 0.9.5 - 2011-03-29 ------------------ `0.9.5 `_ **Changelog** - Scope changes with adapter IOLoops and CallbackManager allowing for cleaner, multi-threaded operation - Add support for Confirm.Select with channel.Channel.confirm_delivery() - Add examples of delivery confirmation to examples (demo_send_confirmed.py) - Update uses of log.warn with warning.warn for TCP Back-pressure alerting - License boilerplate updated to simplify license text in source files - Increment the timeout in select_connection.SelectPoller reducing CPU utilization - Bug fix in Heartbeat frame delivery addressing issue #35 - 
Remove abuse of pika.log.method_call through a majority of the code - Rename of key modules: table to data, frames to frame - Cleanup of frame module and related classes - Restructure of tests and test runner - Update functional tests to respect RABBITMQ_HOST, RABBITMQ_PORT environment variables - Bug fixes to reconnection_strategies module - Fix the scale of timeout for PollPoller to be specified in milliseconds - Remove mutable default arguments in RPC calls - Add data type validation to RPC calls - Move optional credentials erasing out of connection.Connection into credentials module - Add support to allow for additional external credential types - Add a NullHandler to prevent the 'No handlers could be found for logger "pika"' error message when not using pika.log in a client app at all. - Clean up all examples to make them easier to read and use - Move documentation into its own repository https://github.com/pika/documentation - channel.py - Move channel.MAX_CHANNELS constant from connection.CHANNEL_MAX - Add default value of None to ChannelTransport.rpc - Validate callback and acceptable replies parameters in ChannelTransport.RPC - Remove unused connection attribute from Channel - connection.py - Remove unused import of struct - Remove direct import of pika.credentials.PlainCredentials - Change to import pika.credentials - Move CHANNEL_MAX to channel.MAX_CHANNELS - Change ConnectionParameters initialization parameter heartbeat to boolean - Validate all inbound parameter types in ConnectionParameters - Remove the Connection._erase_credentials stub method in favor of letting the Credentials object deal with that itself. - Warn if the credentials object intends on erasing the credentials and a reconnection strategy other than NullReconnectionStrategy is specified. 
- Change the default types for callback and acceptable_replies in Connection._rpc
- Validate the callback and acceptable_replies data types in Connection._rpc
- adapters.blocking_connection.BlockingConnection
- Addition of _adapter_disconnect to blocking_connection.BlockingConnection
- Add timeout methods to BlockingConnection addressing issue #41
- BlockingConnection didn't allow you to register more than one consumer callback because basic_consume was overridden to block immediately. New behavior allows you to do so.
- Removed overriding of base basic_consume and basic_cancel methods. Now uses underlying Channel versions of those methods.
- Added start_consuming() method to BlockingChannel to start the consumption loop.
- Updated stop_consuming() to iterate through all the registered consumers in self._consumers and issue a basic_cancel.
pika-0.11.0/examples/000077500000000000000000000000001315131611700143325ustar00rootroot00000000000000pika-0.11.0/examples/asynchronous_publisher_example.py000066400000000000000000000333111315131611700232300ustar00rootroot00000000000000
# -*- coding: utf-8 -*-

import logging
import pika
import json

LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) '
              '-35s %(lineno) -5d: %(message)s')
LOGGER = logging.getLogger(__name__)


class ExamplePublisher(object):
    """This is an example publisher that will handle unexpected interactions
    with RabbitMQ such as channel and connection closures.

    If RabbitMQ closes the connection, it will reopen it. You should
    look at the output, as there are limited reasons why the connection may
    be closed, which usually are tied to permission related issues or
    socket timeouts.

    It uses delivery confirmations and illustrates one way to keep track of
    messages that have been sent and if they've been confirmed by RabbitMQ.
""" EXCHANGE = 'message' EXCHANGE_TYPE = 'topic' PUBLISH_INTERVAL = 1 QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Setup the example publisher object, passing in the URL we will use to connect to RabbitMQ. :param str amqp_url: The URL for connecting to RabbitMQ """ self._connection = None self._channel = None self._deliveries = None self._acked = None self._nacked = None self._message_number = None self._stopping = False self._url = amqp_url def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. If you want the reconnection to work, make sure you set stop_ioloop_on_close to False, which is not the default behavior of this adapter. :rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return pika.SelectConnection(pika.URLParameters(self._url), on_open_callback=self.on_connection_open, on_close_callback=self.on_connection_closed, stop_ioloop_on_close=False) def on_connection_open(self, unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :type unused_connection: pika.SelectConnection """ LOGGER.info('Connection opened') self.open_channel() def on_connection_closed(self, connection, reply_code, reply_text): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. 
        :param pika.connection.Connection connection: The closed connection obj
        :param int reply_code: The server provided reply_code if given
        :param str reply_text: The server provided reply_text if given

        """
        self._channel = None
        if self._stopping:
            self._connection.ioloop.stop()
        else:
            LOGGER.warning('Connection closed, reopening in 5 seconds: (%s) %s',
                           reply_code, reply_text)
            self._connection.add_timeout(5, self._connection.ioloop.stop)

    def open_channel(self):
        """This method will open a new channel with RabbitMQ by issuing the
        Channel.Open RPC command. When RabbitMQ confirms the channel is open
        by sending the Channel.OpenOK RPC reply, the on_channel_open method
        will be invoked.

        """
        LOGGER.info('Creating a new channel')
        self._connection.channel(on_open_callback=self.on_channel_open)

    def on_channel_open(self, channel):
        """This method is invoked by pika when the channel has been opened.
        The channel object is passed in so we can make use of it.

        Since the channel is now open, we'll declare the exchange to use.

        :param pika.channel.Channel channel: The channel object

        """
        LOGGER.info('Channel opened')
        self._channel = channel
        self.add_on_channel_close_callback()
        self.setup_exchange(self.EXCHANGE)

    def add_on_channel_close_callback(self):
        """This method tells pika to call the on_channel_closed method if
        RabbitMQ unexpectedly closes the channel.

        """
        LOGGER.info('Adding channel close callback')
        self._channel.add_on_close_callback(self.on_channel_closed)

    def on_channel_closed(self, channel, reply_code, reply_text):
        """Invoked by pika when RabbitMQ unexpectedly closes the channel.
        Channels are usually closed if you attempt to do something that
        violates the protocol, such as re-declare an exchange or queue with
        different parameters. In this case, we'll close the connection
        to shutdown the object.
        :param pika.channel.Channel channel: The closed channel
        :param int reply_code: The numeric reason the channel was closed
        :param str reply_text: The text reason the channel was closed

        """
        LOGGER.warning('Channel was closed: (%s) %s', reply_code, reply_text)
        self._channel = None
        if not self._stopping:
            self._connection.close()

    def setup_exchange(self, exchange_name):
        """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC
        command. When it is complete, the on_exchange_declareok method will
        be invoked by pika.

        :param str|unicode exchange_name: The name of the exchange to declare

        """
        LOGGER.info('Declaring exchange %s', exchange_name)
        self._channel.exchange_declare(self.on_exchange_declareok,
                                       exchange_name,
                                       self.EXCHANGE_TYPE)

    def on_exchange_declareok(self, unused_frame):
        """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC
        command.

        :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame

        """
        LOGGER.info('Exchange declared')
        self.setup_queue(self.QUEUE)

    def setup_queue(self, queue_name):
        """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC
        command. When it is complete, the on_queue_declareok method will
        be invoked by pika.

        :param str|unicode queue_name: The name of the queue to declare.

        """
        LOGGER.info('Declaring queue %s', queue_name)
        self._channel.queue_declare(self.on_queue_declareok, queue_name)

    def on_queue_declareok(self, method_frame):
        """Method invoked by pika when the Queue.Declare RPC call made in
        setup_queue has completed. In this method we will bind the queue
        and exchange together with the routing key by issuing the Queue.Bind
        RPC command. When this command is complete, the on_bindok method will
        be invoked by pika.
        :param pika.frame.Method method_frame: The Queue.DeclareOk frame

        """
        LOGGER.info('Binding %s to %s with %s',
                    self.EXCHANGE, self.QUEUE, self.ROUTING_KEY)
        self._channel.queue_bind(self.on_bindok, self.QUEUE,
                                 self.EXCHANGE, self.ROUTING_KEY)

    def on_bindok(self, unused_frame):
        """This method is invoked by pika when it receives the Queue.BindOk
        response from RabbitMQ. Since we know we're now setup and bound, it's
        time to start publishing."""
        LOGGER.info('Queue bound')
        self.start_publishing()

    def start_publishing(self):
        """This method will enable delivery confirmations and schedule the
        first message to be sent to RabbitMQ

        """
        LOGGER.info('Issuing consumer related RPC commands')
        self.enable_delivery_confirmations()
        self.schedule_next_message()

    def enable_delivery_confirmations(self):
        """Send the Confirm.Select RPC method to RabbitMQ to enable delivery
        confirmations on the channel. The only way to turn this off is to
        close the channel and create a new one.

        When the message is confirmed from RabbitMQ, the
        on_delivery_confirmation method will be invoked passing in a Basic.Ack
        or Basic.Nack method from RabbitMQ that will indicate which messages
        it is confirming or rejecting.

        """
        LOGGER.info('Issuing Confirm.Select RPC command')
        self._channel.confirm_delivery(self.on_delivery_confirmation)

    def on_delivery_confirmation(self, method_frame):
        """Invoked by pika when RabbitMQ responds to a Basic.Publish RPC
        command, passing in either a Basic.Ack or Basic.Nack frame with
        the delivery tag of the message that was published. The delivery tag
        is an integer counter indicating the message number that was sent
        on the channel via Basic.Publish. Here we're just doing house keeping
        to keep track of stats and remove message numbers that we expect
        a delivery confirmation of from the list used to keep track of
        messages that are pending confirmation.
        :param pika.frame.Method method_frame: Basic.Ack or Basic.Nack frame

        """
        confirmation_type = method_frame.method.NAME.split('.')[1].lower()
        LOGGER.info('Received %s for delivery tag: %i',
                    confirmation_type,
                    method_frame.method.delivery_tag)
        if confirmation_type == 'ack':
            self._acked += 1
        elif confirmation_type == 'nack':
            self._nacked += 1
        self._deliveries.remove(method_frame.method.delivery_tag)
        LOGGER.info('Published %i messages, %i have yet to be confirmed, '
                    '%i were acked and %i were nacked',
                    self._message_number, len(self._deliveries),
                    self._acked, self._nacked)

    def schedule_next_message(self):
        """If we are not closing our connection to RabbitMQ, schedule another
        message to be delivered in PUBLISH_INTERVAL seconds.

        """
        LOGGER.info('Scheduling next message for %0.1f seconds',
                    self.PUBLISH_INTERVAL)
        self._connection.add_timeout(self.PUBLISH_INTERVAL,
                                     self.publish_message)

    def publish_message(self):
        """If the class is not stopping, publish a message to RabbitMQ,
        appending a list of deliveries with the message number that was sent.
        This list will be used to check for delivery confirmations in the
        on_delivery_confirmations method.

        Once the message has been sent, schedule another message to be sent.
        The main reason I put scheduling in was just so you can get a good
        idea of how the process is flowing by slowing down and speeding up
        the delivery intervals by changing the PUBLISH_INTERVAL constant in
        the class.
""" if self._channel is None or not self._channel.is_open: return hdrs = {u'مفتاح': u' قيمة', u'键': u'值', u'キー': u'値'} properties = pika.BasicProperties(app_id='example-publisher', content_type='application/json', headers=hdrs) message = u'مفتاح قيمة 键 值 キー 値' self._channel.basic_publish(self.EXCHANGE, self.ROUTING_KEY, json.dumps(message, ensure_ascii=False), properties) self._message_number += 1 self._deliveries.append(self._message_number) LOGGER.info('Published message # %i', self._message_number) self.schedule_next_message() def run(self): """Run the example code by connecting and then starting the IOLoop. """ while not self._stopping: self._connection = None self._deliveries = [] self._acked = 0 self._nacked = 0 self._message_number = 0 try: self._connection = self.connect() self._connection.ioloop.start() except KeyboardInterrupt: self.stop() if (self._connection is not None and not self._connection.is_closed): # Finish closing self._connection.ioloop.start() LOGGER.info('Stopped') def stop(self): """Stop the example by closing the channel and connection. We set a flag here so that we stop scheduling new messages to be published. The IOLoop is started because this method is invoked by the Try/Catch below when KeyboardInterrupt is caught. Starting the IOLoop again will allow the publisher to cleanly disconnect from RabbitMQ. """ LOGGER.info('Stopping') self._stopping = True self.close_channel() self.close_connection() def close_channel(self): """Invoke this command to close the channel with RabbitMQ by sending the Channel.Close RPC command. 
""" if self._channel is not None: LOGGER.info('Closing the channel') self._channel.close() def close_connection(self): """This method closes the connection to RabbitMQ.""" if self._connection is not None: LOGGER.info('Closing connection') self._connection.close() def main(): logging.basicConfig(level=logging.DEBUG, format=LOG_FORMAT) # Connect to localhost:5672 as guest with the password guest and virtual host "/" (%2F) example = ExamplePublisher('amqp://guest:guest@localhost:5672/%2F?connection_attempts=3&heartbeat_interval=3600') example.run() if __name__ == '__main__': main() pika-0.11.0/examples/confirmation.py000066400000000000000000000027041315131611700173770ustar00rootroot00000000000000import pika from pika import spec import logging ITERATIONS = 100 logging.basicConfig(level=logging.INFO) confirmed = 0 errors = 0 published = 0 def on_open(connection): connection.channel(on_channel_open) def on_channel_open(channel): global published channel.confirm_delivery(on_delivery_confirmation) for iteration in xrange(0, ITERATIONS): channel.basic_publish('test', 'test.confirm', 'message body value', pika.BasicProperties(content_type='text/plain', delivery_mode=1)) published += 1 def on_delivery_confirmation(frame): global confirmed, errors if isinstance(frame.method, spec.Basic.Ack): confirmed += 1 logging.info('Received confirmation: %r', frame.method) else: logging.error('Received negative confirmation: %r', frame.method) errors += 1 if (confirmed + errors) == ITERATIONS: logging.info('All confirmations received, published %i, confirmed %i with %i errors', published, confirmed, errors) connection.close() parameters = pika.URLParameters('amqp://guest:guest@localhost:5672/%2F?connection_attempts=50') connection = pika.SelectConnection(parameters=parameters, on_open_callback=on_open) try: connection.ioloop.start() except KeyboardInterrupt: connection.close() connection.ioloop.start() 
pika-0.11.0/examples/consume.py000066400000000000000000000023231315131611700163550ustar00rootroot00000000000000import pika def on_message(channel, method_frame, header_frame, body): channel.queue_declare(queue=body, auto_delete=True) if body.startswith("queue:"): queue = body.replace("queue:", "") key = body + "_key" print("Declaring queue %s bound with key %s" %(queue, key)) channel.queue_declare(queue=queue, auto_delete=True) channel.queue_bind(queue=queue, exchange="test_exchange", routing_key=key) else: print("Message body", body) channel.basic_ack(delivery_tag=method_frame.delivery_tag) credentials = pika.PlainCredentials('guest', 'guest') parameters = pika.ConnectionParameters('localhost', credentials=credentials) connection = pika.BlockingConnection(parameters) channel = connection.channel() channel.exchange_declare(exchange="test_exchange", exchange_type="direct", passive=False, durable=True, auto_delete=False) channel.queue_declare(queue="standard", auto_delete=True) channel.queue_bind(queue="standard", exchange="test_exchange", routing_key="standard_key") channel.basic_qos(prefetch_count=1) channel.basic_consume(on_message, 'standard') try: channel.start_consuming() except KeyboardInterrupt: channel.stop_consuming() connection.close() pika-0.11.0/examples/consumer_queued.py000066400000000000000000000036101315131611700201070ustar00rootroot00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- import pika import json import threading buffer = [] lock = threading.Lock() print('pika version: %s' % pika.__version__) connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost')) main_channel = connection.channel() consumer_channel = connection.channel() bind_channel = connection.channel() if pika.__version__=='0.9.5': main_channel.exchange_declare(exchange='com.micex.sten', type='direct') main_channel.exchange_declare(exchange='com.micex.lasttrades', type='direct') else: main_channel.exchange_declare(exchange='com.micex.sten', 
exchange_type='direct') main_channel.exchange_declare(exchange='com.micex.lasttrades', exchange_type='direct') queue = main_channel.queue_declare(exclusive=True).method.queue queue_tickers = main_channel.queue_declare(exclusive=True).method.queue main_channel.queue_bind(exchange='com.micex.sten', queue=queue, routing_key='order.stop.create') def process_buffer(): if not lock.acquire(False): print('locked!') return try: while len(buffer): body = buffer.pop(0) ticker = None if 'ticker' in body['data']['params']['condition']: ticker = body['data']['params']['condition']['ticker'] if not ticker: continue print('got ticker %s, gonna bind it...' % ticker) bind_channel.queue_bind(exchange='com.micex.lasttrades', queue=queue_tickers, routing_key=str(ticker)) print('ticker %s binded ok' % ticker) finally: lock.release() def callback(ch, method, properties, body): body = json.loads(body)['order.stop.create'] buffer.append(body) process_buffer() consumer_channel.basic_consume(callback, queue=queue, no_ack=True) try: consumer_channel.start_consuming() finally: connection.close() pika-0.11.0/examples/consumer_simple.py000066400000000000000000000032551315131611700201150ustar00rootroot00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- import pika import json print(('pika version: %s') % pika.__version__) connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost')) main_channel = connection.channel() consumer_channel = connection.channel() bind_channel = connection.channel() if pika.__version__=='0.9.5': main_channel.exchange_declare(exchange='com.micex.sten', type='direct') main_channel.exchange_declare(exchange='com.micex.lasttrades', type='direct') else: main_channel.exchange_declare(exchange='com.micex.sten', exchange_type='direct') main_channel.exchange_declare(exchange='com.micex.lasttrades', exchange_type='direct') queue = main_channel.queue_declare(exclusive=True).method.queue queue_tickers = main_channel.queue_declare(exclusive=True).method.queue 
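The consumer_queued.py example above buffers incoming orders in a shared list and drains it under a non-blocking lock, so a re-entrant call skips the drain ("locked!") instead of waiting. A broker-free sketch of that pattern, with illustrative names (`work_buffer`, `drain`) that are not part of pika:

```python
import threading

# Sketch of the drain pattern from consumer_queued.py above: callbacks append
# work items to a shared buffer and call drain(). lock.acquire(False) returns
# immediately, so a caller that loses the race simply leaves the work to the
# thread that is already draining.
work_buffer = []
drained = []
drain_lock = threading.Lock()


def drain():
    if not drain_lock.acquire(False):  # non-blocking: don't wait, just skip
        return False
    try:
        while len(work_buffer):
            drained.append(work_buffer.pop(0))
        return True
    finally:
        drain_lock.release()
```

The non-blocking acquire keeps the consumer callback from stalling the connection's I/O loop while another thread holds the lock.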
main_channel.queue_bind(exchange='com.micex.sten', queue=queue, routing_key='order.stop.create') def hello(): print('Hello world') connection.add_timeout(5, hello) def callback(ch, method, properties, body): body = json.loads(body)['order.stop.create'] ticker = None if 'ticker' in body['data']['params']['condition']: ticker = body['data']['params']['condition']['ticker'] if not ticker: return print('got ticker %s, gonna bind it...' % ticker) bind_channel.queue_bind(exchange='com.micex.lasttrades', queue=queue_tickers, routing_key=str(ticker)) print('ticker %s binded ok' % ticker) import logging logging.basicConfig(level=logging.INFO) consumer_channel.basic_consume(callback, queue=queue, no_ack=True) try: consumer_channel.start_consuming() finally: connection.close() pika-0.11.0/examples/direct_reply_to.py000066400000000000000000000045701315131611700201010ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ This example demonstrates RabbitMQ's "Direct reply-to" usage via `pika.BlockingConnection`. See https://www.rabbitmq.com/direct-reply-to.html for more info about this feature. """ import pika SERVER_QUEUE = 'rpc.server.queue' def main(): """ Here, Client sends "Marco" to RPC Server, and RPC Server replies with "Polo". NOTE Normally, the server would be running separately from the client, but in this very simple example both are running in the same thread and sharing connection and channel. """ with pika.BlockingConnection() as conn: channel = conn.channel() # Set up server channel.queue_declare(queue=SERVER_QUEUE, exclusive=True, auto_delete=True) channel.basic_consume(on_server_rx_rpc_request, queue=SERVER_QUEUE) # Set up client # NOTE Client must create its consumer and publish RPC requests on the # same channel to enable the RabbitMQ broker to make the necessary # associations. # # Also, client must create the consumer *before* starting to publish the # RPC requests. 
        #
        # Client must create its consumer with no_ack=True, because the
        # reply-to queue isn't real.
        channel.basic_consume(on_client_rx_reply_from_server,
                              queue='amq.rabbitmq.reply-to',
                              no_ack=True)

        channel.basic_publish(
            exchange='',
            routing_key=SERVER_QUEUE,
            body='Marco',
            properties=pika.BasicProperties(reply_to='amq.rabbitmq.reply-to'))

        channel.start_consuming()


def on_server_rx_rpc_request(ch, method_frame, properties, body):
    print('RPC Server got request:', body)

    ch.basic_publish('', routing_key=properties.reply_to, body='Polo')

    ch.basic_ack(delivery_tag=method_frame.delivery_tag)

    print('RPC Server says good bye')


def on_client_rx_reply_from_server(ch, method_frame, properties, body):
    print('RPC Client got reply:', body)

    # NOTE A real client might want to make additional RPC requests, but in
    # this simple example we're closing the channel after getting our first
    # reply to force control to return from channel.start_consuming()
    print('RPC Client says bye')
    ch.close()


if __name__ == '__main__':
    main()

pika-0.11.0/examples/heatbeat_and_blocked_timeouts.py
"""
This example demonstrates explicit setting of heartbeat and blocked connection
timeouts.

Starting with RabbitMQ 3.5.5, the broker's default heartbeat timeout decreased
from 580 seconds to 60 seconds. As a result, applications that perform lengthy
processing in the same thread that also runs their Pika connection may
experience unexpected dropped connections due to heartbeat timeout. Here, we
specify an explicit lower bound for heartbeat timeout.

When RabbitMQ broker is running out of certain resources, such as memory and
disk space, it may block connections that are performing resource-consuming
operations, such as publishing messages. Once a connection is blocked, RabbitMQ
stops reading from that connection's socket, so no commands from the client
will get through to the broker on that connection until the broker unblocks it.
A blocked connection may last for an indefinite period of time, stalling the
connection and possibly resulting in a hang (e.g., in BlockingConnection) until
the connection is unblocked. Blocked Connection Timeout is intended to
interrupt (i.e., drop) a connection that has been blocked longer than the given
timeout value.
"""

import pika


def main():

    # NOTE: These parameters work with all Pika connection types
    params = pika.ConnectionParameters(heartbeat_interval=600,
                                       blocked_connection_timeout=300)

    conn = pika.BlockingConnection(params)

    chan = conn.channel()

    chan.basic_publish('', 'my-alphabet-queue', "abc")

    # If publish causes the connection to become blocked, then this conn.close()
    # would hang until the connection is unblocked, if ever. However, the
    # blocked_connection_timeout connection parameter would interrupt the wait,
    # resulting in ConnectionClosed exception from BlockingConnection (or the
    # on_connection_closed callback call in an asynchronous adapter)
    conn.close()


if __name__ == '__main__':
    main()

pika-0.11.0/examples/producer.py
#!/usr/bin/python
# -*- coding: utf-8 -*-
import pika
import json
import random

print(('pika version: %s') % pika.__version__)

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))

main_channel = connection.channel()

if pika.__version__ == '0.9.5':
    main_channel.exchange_declare(exchange='com.micex.sten', type='direct')
    main_channel.exchange_declare(exchange='com.micex.lasttrades',
                                  type='direct')
else:
    main_channel.exchange_declare(exchange='com.micex.sten',
                                  exchange_type='direct')
    main_channel.exchange_declare(exchange='com.micex.lasttrades',
                                  exchange_type='direct')

tickers = {}
tickers['MXSE.EQBR.LKOH'] = (1933, 1940)
tickers['MXSE.EQBR.MSNG'] = (1.35, 1.45)
tickers['MXSE.EQBR.SBER'] = (90, 92)
tickers['MXSE.EQNE.GAZP'] = (156, 162)
tickers['MXSE.EQNE.PLZL'] = (1025, 1040)
tickers['MXSE.EQNL.VTBR'] = (0.05, 0.06)


def getticker():
    # randrange's stop bound is exclusive, so use len(tickers) to allow
    # every ticker to be chosen
    return list(tickers.keys())[random.randrange(0, len(tickers))]

_COUNT_ = 10

for i in range(0, _COUNT_):
    ticker = getticker()
    msg = {'order.stop.create':
           {'data': {'params': {'condition': {'ticker': ticker}}}}}
    main_channel.basic_publish(exchange='com.micex.sten',
                               routing_key='order.stop.create',
                               body=json.dumps(msg),
                               properties=pika.BasicProperties(
                                   content_type='application/json'))
    print('send ticker %s' % ticker)

connection.close()

pika-0.11.0/examples/publish.py
import pika
import logging
logging.basicConfig(level=logging.DEBUG)

credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters('localhost', credentials=credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.exchange_declare(exchange="test_exchange",
                         exchange_type="direct",
                         passive=False,
                         durable=True,
                         auto_delete=False)

print("Sending message to create a queue")
channel.basic_publish('test_exchange', 'standard_key', 'queue:group',
                      pika.BasicProperties(content_type='text/plain',
                                           delivery_mode=1))

connection.sleep(5)

print("Sending text message to group")
channel.basic_publish('test_exchange', 'group_key', 'Message to group_key',
                      pika.BasicProperties(content_type='text/plain',
                                           delivery_mode=1))

connection.sleep(5)

print("Sending text message")
channel.basic_publish('test_exchange', 'standard_key', 'Message to standard_key',
                      pika.BasicProperties(content_type='text/plain',
                                           delivery_mode=1))

connection.close()

pika-0.11.0/examples/send.py
import pika
import time
import logging

logging.basicConfig(level=logging.DEBUG)

ITERATIONS = 100

connection = pika.BlockingConnection(
    pika.URLParameters('amqp://guest:guest@localhost:5672/%2F?heartbeat_interval=1'))

channel = connection.channel()


def closeit():
    print('Close it')
    connection.close()

connection.add_timeout(5, closeit)
connection.sleep(100) """ channel.confirm_delivery() start_time = time.time() for x in range(0, ITERATIONS): if not channel.basic_publish(exchange='test', routing_key='', body='Test 123', properties=pika.BasicProperties(content_type='text/plain', app_id='test', delivery_mode=1)): print('Delivery not confirmed') else: print('Confirmed delivery') channel.close() connection.close() duration = time.time() - start_time print("Published %i messages in %.4f seconds (%.2f messages per second)" % (ITERATIONS, duration, (ITERATIONS/duration))) """ pika-0.11.0/examples/twisted_service.py000066400000000000000000000173731315131611700201220ustar00rootroot00000000000000""" # -*- coding:utf-8 -*- # based on: # - txamqp-helpers by Dan Siemon (March 2010) # http://git.coverfire.com/?p=txamqp-twistd.git;a=tree # - Post by Brian Chandler # https://groups.google.com/forum/#!topic/pika-python/o_deVmGondk # - Pika Documentation # https://pika.readthedocs.io/en/latest/examples/twisted_example.html Fire up this test application via `twistd -ny twisted_service.py` The application will answer to requests to exchange "foobar" and any of the routing_key values: "request1", "request2", or "request3" with messages to the same exchange, but with routing_key "response" When a routing_key of "task" is used on the exchange "foobar", the application can asynchronously run a maximum of 2 tasks at once as defined by PREFETCH_COUNT """ import pika from pika import spec from pika import exceptions from pika.adapters import twisted_connection from twisted.internet import protocol from twisted.application import internet from twisted.application import service from twisted.internet.defer import inlineCallbacks from twisted.internet import ssl, defer, task from twisted.python import log from twisted.internet import reactor PREFETCH_COUNT = 2 class PikaService(service.MultiService): name = 'amqp' def __init__(self, parameter): service.MultiService.__init__(self) self.parameters = parameter def 
startService(self):
        self.connect()
        service.MultiService.startService(self)

    def getFactory(self):
        if len(self.services) > 0:
            return self.services[0].factory

    def connect(self):
        f = PikaFactory(self.parameters)
        if self.parameters.ssl:
            s = ssl.ClientContextFactory()
            serv = internet.SSLClient(host=self.parameters.host,
                                      port=self.parameters.port,
                                      factory=f,
                                      contextFactory=s)
        else:
            serv = internet.TCPClient(host=self.parameters.host,
                                      port=self.parameters.port,
                                      factory=f)
        serv.factory = f
        f.service = serv
        name = '%s%s:%d' % ('ssl:' if self.parameters.ssl else '',
                            self.parameters.host, self.parameters.port)
        serv.__repr__ = lambda: '<%s>' % name
        serv.setName(name)
        serv.parent = self
        self.addService(serv)


class PikaProtocol(twisted_connection.TwistedProtocolConnection):
    connected = False
    name = 'AMQP:Protocol'

    @inlineCallbacks
    def connected(self, connection):
        self.channel = yield connection.channel()
        yield self.channel.basic_qos(prefetch_count=PREFETCH_COUNT)
        self.connected = True
        for (exchange, routing_key, callback,) in self.factory.read_list:
            yield self.setup_read(exchange, routing_key, callback)
        self.send()

    @inlineCallbacks
    def read(self, exchange, routing_key, callback):
        """Add an exchange to the list of exchanges to read from."""
        if self.connected:
            yield self.setup_read(exchange, routing_key, callback)

    @inlineCallbacks
    def setup_read(self, exchange, routing_key, callback):
        """This function does the work to read from an exchange."""
        if not exchange == '':
            yield self.channel.exchange_declare(exchange=exchange,
                                                type='topic',
                                                durable=True,
                                                auto_delete=False)

        self.channel.queue_declare(queue=routing_key, durable=True)

        (queue, consumer_tag,) = yield self.channel.basic_consume(
            queue=routing_key, no_ack=False)
        d = queue.get()
        d.addCallback(self._read_item, queue, callback)
        d.addErrback(self._read_item_err)

    def _read_item(self, item, queue, callback):
        """Callback function which is called when an item is read."""
        d = queue.get()
        d.addCallback(self._read_item, queue, callback)
d.addErrback(self._read_item_err) (channel, deliver, props, msg,) = item log.msg('%s (%s): %s' % (deliver.exchange, deliver.routing_key, repr(msg)), system='Pika:<=') d = defer.maybeDeferred(callback, item) d.addCallbacks( lambda _: channel.basic_ack(deliver.delivery_tag), lambda _: channel.basic_nack(deliver.delivery_tag) ) def _read_item_err(self, error): print(error) def send(self): """If connected, send all waiting messages.""" if self.connected: while len(self.factory.queued_messages) > 0: (exchange, r_key, message,) = self.factory.queued_messages.pop(0) self.send_message(exchange, r_key, message) @inlineCallbacks def send_message(self, exchange, routing_key, msg): """Send a single message.""" log.msg('%s (%s): %s' % (exchange, routing_key, repr(msg)), system='Pika:=>') yield self.channel.exchange_declare(exchange=exchange, type='topic', durable=True, auto_delete=False) prop = spec.BasicProperties(delivery_mode=2) try: yield self.channel.basic_publish(exchange=exchange, routing_key=routing_key, body=msg, properties=prop) except Exception as error: log.msg('Error while sending message: %s' % error, system=self.name) class PikaFactory(protocol.ReconnectingClientFactory): name = 'AMQP:Factory' def __init__(self, parameters): self.parameters = parameters self.client = None self.queued_messages = [] self.read_list = [] def startedConnecting(self, connector): log.msg('Started to connect.', system=self.name) def buildProtocol(self, addr): self.resetDelay() log.msg('Connected', system=self.name) self.client = PikaProtocol(self.parameters) self.client.factory = self self.client.ready.addCallback(self.client.connected) return self.client def clientConnectionLost(self, connector, reason): log.msg('Lost connection. Reason: %s' % reason, system=self.name) protocol.ReconnectingClientFactory.clientConnectionLost(self, connector, reason) def clientConnectionFailed(self, connector, reason): log.msg('Connection failed. 
Reason: %s' % reason, system=self.name) protocol.ReconnectingClientFactory.clientConnectionFailed(self, connector, reason) def send_message(self, exchange = None, routing_key = None, message = None): self.queued_messages.append((exchange, routing_key, message)) if self.client is not None: self.client.send() def read_messages(self, exchange, routing_key, callback): """Configure an exchange to be read from.""" self.read_list.append((exchange, routing_key, callback)) if self.client is not None: self.client.read(exchange, routing_key, callback) application = service.Application("pikaapplication") ps = PikaService(pika.ConnectionParameters(host="localhost", virtual_host="/", credentials=pika.PlainCredentials("guest", "guest"))) ps.setServiceParent(application) class TestService(service.Service): def task(self, msg): """ Method for a time consuming task. This function must return a deferred. If it is successfull, a `basic.ack` will be sent to AMQP. If the task was not completed a `basic.nack` will be sent. In this example it will always return successfully after a 2 second pause. 
""" return task.deferLater(reactor, 2, lambda: log.msg("task completed")) def respond(self, msg): self.amqp.send_message('foobar', 'response', msg[3]) def startService(self): self.amqp = self.parent.getServiceNamed("amqp").getFactory() self.amqp.read_messages("foobar", "request1", self.respond) self.amqp.read_messages("foobar", "request2", self.respond) self.amqp.read_messages("foobar", "request3", self.respond) self.amqp.read_messages("foobar", "task", self.task) ts = TestService() ts.setServiceParent(application) pika-0.11.0/pika/000077500000000000000000000000001315131611700134405ustar00rootroot00000000000000pika-0.11.0/pika/__init__.py000066400000000000000000000013721315131611700155540ustar00rootroot00000000000000__version__ = '0.11.0' import logging try: # not available in python 2.6 from logging import NullHandler except ImportError: class NullHandler(logging.Handler): def emit(self, record): pass # Add NullHandler to prevent logging warnings logging.getLogger(__name__).addHandler(NullHandler()) from pika.connection import ConnectionParameters from pika.connection import URLParameters from pika.credentials import PlainCredentials from pika.spec import BasicProperties from pika.adapters import BaseConnection from pika.adapters import BlockingConnection from pika.adapters import SelectConnection from pika.adapters import TornadoConnection from pika.adapters import TwistedConnection from pika.adapters import LibevConnection pika-0.11.0/pika/adapters/000077500000000000000000000000001315131611700152435ustar00rootroot00000000000000pika-0.11.0/pika/adapters/__init__.py000066400000000000000000000031221315131611700173520ustar00rootroot00000000000000""" Connection Adapters =================== Pika provides multiple adapters to connect to RabbitMQ: - adapters.select_connection.SelectConnection: A native event based connection adapter that implements select, kqueue, poll and epoll. 
- adapters.tornado_connection.TornadoConnection: Connection adapter for use with the Tornado web framework. - adapters.blocking_connection.BlockingConnection: Enables blocking, synchronous operation on top of library for simple uses. - adapters.twisted_connection.TwistedConnection: Connection adapter for use with the Twisted framework - adapters.libev_connection.LibevConnection: Connection adapter for use with the libev event loop and employing nonblocking IO """ from pika.adapters.base_connection import BaseConnection from pika.adapters.blocking_connection import BlockingConnection from pika.adapters.select_connection import SelectConnection from pika.adapters.select_connection import IOLoop # Dynamically handle 3rd party library dependencies for optional imports try: from pika.adapters.tornado_connection import TornadoConnection except ImportError: TornadoConnection = None try: from pika.adapters.twisted_connection import TwistedConnection from pika.adapters.twisted_connection import TwistedProtocolConnection except ImportError: TwistedConnection = None TwistedProtocolConnection = None try: from pika.adapters.libev_connection import LibevConnection except ImportError: LibevConnection = None try: from pika.adapters.asyncio_connection import AsyncioConnection except ImportError: AsyncioConnection = None pika-0.11.0/pika/adapters/asyncio_connection.py000066400000000000000000000150731315131611700215070ustar00rootroot00000000000000"""Use pika with the Asyncio EventLoop""" import asyncio from functools import partial from pika.adapters import base_connection class IOLoopAdapter: def __init__(self, loop): """ Basic adapter for asyncio event loop :type loop: asyncio.AbstractEventLoop :param loop: Asyncio Loop """ self.loop = loop self.handlers = {} self.readers = set() self.writers = set() def add_timeout(self, deadline, callback_method): """Add the callback_method to the EventLoop timer to fire after deadline seconds. Returns a Handle to the timeout. 
:param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :rtype: asyncio.Handle """ return self.loop.call_later(deadline, callback_method) @staticmethod def remove_timeout(handle): """ Cancel asyncio.Handle :type handle: asyncio.Handle :rtype: bool """ return handle.cancel() def add_handler(self, fd, cb, event_state): """ Registers the given handler to receive the given events for ``fd``. The ``fd`` argument is an integer file descriptor. The ``event_state`` argument is a bitwise or of the constants ``base_connection.BaseConnection.READ``, ``base_connection.BaseConnection.WRITE``, and ``base_connection.BaseConnection.ERROR``. """ if fd in self.handlers: raise ValueError("fd {} added twice".format(fd)) self.handlers[fd] = cb if event_state & base_connection.BaseConnection.READ: self.loop.add_reader( fd, partial( cb, fd=fd, events=base_connection.BaseConnection.READ ) ) self.readers.add(fd) if event_state & base_connection.BaseConnection.WRITE: self.loop.add_writer( fd, partial( cb, fd=fd, events=base_connection.BaseConnection.WRITE ) ) self.writers.add(fd) def remove_handler(self, fd): """ Stop listening for events on ``fd``. 
""" if fd not in self.handlers: return if fd in self.readers: self.loop.remove_reader(fd) self.readers.remove(fd) if fd in self.writers: self.loop.remove_writer(fd) self.writers.remove(fd) del self.handlers[fd] def update_handler(self, fd, event_state): if event_state & base_connection.BaseConnection.READ: if fd not in self.readers: self.loop.add_reader( fd, partial( self.handlers[fd], fd=fd, events=base_connection.BaseConnection.READ ) ) self.readers.add(fd) else: if fd in self.readers: self.loop.remove_reader(fd) self.readers.remove(fd) if event_state & base_connection.BaseConnection.WRITE: if fd not in self.writers: self.loop.add_writer( fd, partial( self.handlers[fd], fd=fd, events=base_connection.BaseConnection.WRITE ) ) self.writers.add(fd) else: if fd in self.writers: self.loop.remove_writer(fd) self.writers.remove(fd) def start(self): """ Start Event Loop """ if self.loop.is_running(): return self.loop.run_forever() def stop(self): """ Stop Event Loop """ if self.loop.is_closed(): return self.loop.stop() class AsyncioConnection(base_connection.BaseConnection): """ The AsyncioConnection runs on the Asyncio EventLoop. 
    :param pika.connection.Parameters parameters: Connection parameters
    :param on_open_callback: The method to call when the connection is open
    :type on_open_callback: method
    :param on_open_error_callback: Method to call if the connection can't
        be opened
    :type on_open_error_callback: method
    :param asyncio.AbstractEventLoop loop: By default asyncio.get_event_loop()

    """

    def __init__(self,
                 parameters=None,
                 on_open_callback=None,
                 on_open_error_callback=None,
                 on_close_callback=None,
                 stop_ioloop_on_close=False,
                 custom_ioloop=None):
        """Create a new instance of the AsyncioConnection class, connecting
        to RabbitMQ automatically

        :param pika.connection.Parameters parameters: Connection parameters
        :param on_open_callback: The method to call when the connection is open
        :type on_open_callback: method
        :param on_open_error_callback: Method to call if the connection can't
            be opened
        :type on_open_error_callback: method
        :param asyncio.AbstractEventLoop loop: By default asyncio.get_event_loop()

        """
        self.sleep_counter = 0
        self.loop = custom_ioloop or asyncio.get_event_loop()
        self.ioloop = IOLoopAdapter(self.loop)

        super().__init__(
            parameters,
            on_open_callback,
            on_open_error_callback,
            on_close_callback,
            self.ioloop,
            stop_ioloop_on_close=stop_ioloop_on_close,
        )

    def _adapter_connect(self):
        """Connect to the remote socket, adding the socket to the EventLoop if
        connected.
:rtype: bool """ error = super()._adapter_connect() if not error: self.ioloop.add_handler( self.socket.fileno(), self._handle_events, self.event_state, ) return error def _adapter_disconnect(self): """Disconnect from the RabbitMQ broker""" if self.socket: self.ioloop.remove_handler( self.socket.fileno() ) super()._adapter_disconnect() def _handle_disconnect(self): # No other way to handle exceptions.ProbableAuthenticationError try: super()._handle_disconnect() super()._handle_write() except Exception as e: # FIXME: Pass None or other constant instead "-1" self._on_disconnect(-1, e) pika-0.11.0/pika/adapters/base_connection.py000066400000000000000000000463211315131611700207540ustar00rootroot00000000000000"""Base class extended by connection adapters. This extends the connection.Connection class to encapsulate connection behavior but still isolate socket and low level communication. """ import errno import logging import socket import ssl import pika.compat from pika import connection try: SOL_TCP = socket.SOL_TCP except AttributeError: SOL_TCP = 6 if pika.compat.PY2: _SOCKET_ERROR = socket.error else: # socket.error was deprecated and replaced by OSError in python 3.3 _SOCKET_ERROR = OSError LOGGER = logging.getLogger(__name__) class BaseConnection(connection.Connection): """BaseConnection class that should be extended by connection adapters""" # Use epoll's constants to keep life easy READ = 0x0001 WRITE = 0x0004 ERROR = 0x0008 ERRORS_TO_ABORT = [errno.EBADF, errno.ECONNABORTED, errno.EPIPE, errno.ETIMEDOUT] ERRORS_TO_IGNORE = [errno.EWOULDBLOCK, errno.EAGAIN, errno.EINTR] DO_HANDSHAKE = True WARN_ABOUT_IOLOOP = False def __init__(self, parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, ioloop=None, stop_ioloop_on_close=True): """Create a new instance of the Connection object. 
:param pika.connection.Parameters parameters: Connection parameters :param method on_open_callback: Method to call on connection open :param method on_open_error_callback: Called if the connection can't be established: on_open_error_callback(connection, str|exception) :param method on_close_callback: Called when the connection is closed: on_close_callback(connection, reason_code, reason_text) :param object ioloop: IOLoop object to use :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected :raises: RuntimeError :raises: ValueError """ if parameters and not isinstance(parameters, connection.Parameters): raise ValueError('Expected instance of Parameters, not %r' % parameters) # Let the developer know we could not import SSL if parameters and parameters.ssl and not ssl: raise RuntimeError("SSL specified but it is not available") self.base_events = self.READ | self.ERROR self.event_state = self.base_events self.ioloop = ioloop self.socket = None self.stop_ioloop_on_close = stop_ioloop_on_close self.write_buffer = None super(BaseConnection, self).__init__(parameters, on_open_callback, on_open_error_callback, on_close_callback) def __repr__(self): def get_socket_repr(sock): """Return socket info suitable for use in repr""" if sock is None: return None sockname = None peername = None try: sockname = sock.getsockname() except socket.error: # closed? pass else: try: peername = sock.getpeername() except socket.error: # not connected? pass return '%s->%s' % (sockname, peername) return ( '<%s %s socket=%s params=%s>' % (self.__class__.__name__, self._STATE_NAMES[self.connection_state], get_socket_repr(self.socket), self.params)) def add_timeout(self, deadline, callback_method): """Add the callback_method to the IOLoop timer to fire after deadline seconds. 
Returns a handle to the timeout :param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :rtype: str """ return self.ioloop.add_timeout(deadline, callback_method) def close(self, reply_code=200, reply_text='Normal shutdown'): """Disconnect from RabbitMQ. If there are any open channels, it will attempt to close them prior to fully disconnecting. Channels which have active consumers will attempt to send a Basic.Cancel to RabbitMQ to cleanly stop the delivery of messages prior to closing the channel. :param int reply_code: The code number for the close :param str reply_text: The text reason for the close """ try: super(BaseConnection, self).close(reply_code, reply_text) finally: if self.is_closed: self._handle_ioloop_stop() def remove_timeout(self, timeout_id): """Remove the timeout from the IOLoop by the ID returned from add_timeout. :rtype: str """ self.ioloop.remove_timeout(timeout_id) def _adapter_connect(self): """Connect to the RabbitMQ broker, returning True if connected. 
:returns: error string or exception instance on error; None on success """ # Get the addresses for the socket, supporting IPv4 & IPv6 while True: try: addresses = self._getaddrinfo(self.params.host, self.params.port, 0, socket.SOCK_STREAM, socket.IPPROTO_TCP) break except _SOCKET_ERROR as error: if error.errno == errno.EINTR: continue LOGGER.critical('Could not get addresses to use: %s (%s)', error, self.params.host) return error # If the socket is created and connected, continue on error = "No socket addresses available" for sock_addr in addresses: error = self._create_and_connect_to_socket(sock_addr) if not error: # Make the socket non-blocking after the connect self.socket.setblocking(0) return None self._cleanup_socket() # Failed to connect return error def _adapter_disconnect(self): """Invoked if the connection is being told to disconnect""" try: self._cleanup_socket() finally: self._handle_ioloop_stop() def _cleanup_socket(self): """Close the socket cleanly""" if self.socket: try: self.socket.shutdown(socket.SHUT_RDWR) except _SOCKET_ERROR: pass self.socket.close() self.socket = None def _create_and_connect_to_socket(self, sock_addr_tuple): """Create socket and connect to it, using SSL if enabled. 
:returns: error string on failure; None on success """ self.socket = self._create_tcp_connection_socket( sock_addr_tuple[0], sock_addr_tuple[1], sock_addr_tuple[2]) self.socket.setsockopt(SOL_TCP, socket.TCP_NODELAY, 1) self.socket.settimeout(self.params.socket_timeout) # Wrap socket if using SSL if self.params.ssl: self.socket = self._wrap_socket(self.socket) ssl_text = " with SSL" else: ssl_text = "" LOGGER.info('Connecting to %s:%s%s', sock_addr_tuple[4][0], sock_addr_tuple[4][1], ssl_text) # Connect to the socket try: self.socket.connect(sock_addr_tuple[4]) except socket.timeout: error = 'Connection to %s:%s failed: timeout' % ( sock_addr_tuple[4][0], sock_addr_tuple[4][1] ) LOGGER.error(error) return error except _SOCKET_ERROR as error: error = 'Connection to %s:%s failed: %s' % (sock_addr_tuple[4][0], sock_addr_tuple[4][1], error) LOGGER.warning(error) return error # Handle SSL Connection Negotiation if self.params.ssl and self.DO_HANDSHAKE: try: self._do_ssl_handshake() except ssl.SSLError as error: error = 'SSL connection to %s:%s failed: %s' % ( sock_addr_tuple[4][0], sock_addr_tuple[4][1], error ) LOGGER.error(error) return error # Made it this far return None @staticmethod def _create_tcp_connection_socket(sock_family, sock_type, sock_proto): """ Create TCP/IP stream socket for AMQP connection :param int sock_family: socket family :param int sock_type: socket type :param int sock_proto: socket protocol number NOTE We break this out to make it easier to patch in mock tests """ return socket.socket(sock_family, sock_type, sock_proto) def _do_ssl_handshake(self): """Perform SSL handshaking, copied from python stdlib test_ssl.py. """ if not self.DO_HANDSHAKE: return while True: try: self.socket.do_handshake() break # TODO should be using SSLWantReadError, etc. directly except ssl.SSLError as err: # TODO these exc are for non-blocking sockets, but ours isn't # at this stage, so it's not clear why we have this. 
if err.args[0] == ssl.SSL_ERROR_WANT_READ: self.event_state = self.READ elif err.args[0] == ssl.SSL_ERROR_WANT_WRITE: self.event_state = self.WRITE else: raise self._manage_event_state() @staticmethod def _getaddrinfo(host, port, family, socktype, proto): """Wrap `socket.getaddrinfo` to make it easier to patch for unit tests """ return socket.getaddrinfo(host, port, family, socktype, proto) @staticmethod def _get_error_code(error_value): """Get the error code from the error_value accounting for Python version differences. :rtype: int """ if not error_value: return None if hasattr(error_value, 'errno'): # Python >= 2.6 return error_value.errno else: # TODO this doesn't look right; error_value.args[0] ??? Could # probably remove this code path since pika doesn't test against # Python 2.5 return error_value[0] # Python <= 2.5 def _flush_outbound(self): """Have the state manager schedule the necessary I/O. """ # NOTE: We don't call _handle_write() from this context, because pika # code was not designed to be writing to (or reading from) the socket # from any methods, except from ioloop handler callbacks. Many methods # in pika core and adapters do not deal gracefully with connection # errors occurring in their context; e.g., Connection.channel (pika # issue #659), Connection._on_connection_tune (if connection loss is # detected in _send_connection_tune_ok, before _send_connection_open is # called), etc., etc., etc. self._manage_event_state() def _handle_ioloop_stop(self): """Invoked when the connection is closed to determine if the IOLoop should be stopped or not. """ if self.stop_ioloop_on_close and self.ioloop: self.ioloop.stop() elif self.WARN_ABOUT_IOLOOP: LOGGER.warning('Connection is closed but not stopping IOLoop') def _handle_error(self, error_value): """Internal error handling method. Here we expect a socket.error coming in and will handle different socket errors differently. 
        :param int|object error_value: The inbound error

        """
        # TODO doesn't seem right: docstring defines error_value as int|object,
        # but _get_error_code expects a falsie or an exception-like object
        error_code = self._get_error_code(error_value)

        if not error_code:
            LOGGER.critical("Tried to handle an error where no error existed")
            return

        # Ok errors, just continue what we were doing before
        if error_code in self.ERRORS_TO_IGNORE:
            LOGGER.debug("Ignoring %s", error_code)
            return

        # Socket is no longer connected, abort
        elif error_code in self.ERRORS_TO_ABORT:
            LOGGER.error("Fatal Socket Error: %r", error_value)

        elif self.params.ssl and isinstance(error_value, ssl.SSLError):

            if error_value.args[0] == ssl.SSL_ERROR_WANT_READ:
                # TODO doesn't seem right: this logic updates event state, but
                # the logic at the bottom unconditionally disconnects anyway.
                self.event_state = self.READ
            elif error_value.args[0] == ssl.SSL_ERROR_WANT_WRITE:
                self.event_state = self.WRITE
            else:
                LOGGER.error("SSL Socket error: %r", error_value)

        else:
            # Haven't run into this one yet, log it.
            LOGGER.error("Socket Error: %s", error_code)

        # Disconnect from our IOLoop and let Connection know what's up
        self._on_terminate(connection.InternalCloseReasons.SOCKET_ERROR,
                           repr(error_value))

    def _handle_timeout(self):
        """Handle a socket timeout in read or write.
        We don't do anything in the non-blocking handlers because we
        only have the socket in a blocking state during connect."""
        LOGGER.warning("Unexpected socket timeout")

    def _handle_events(self, fd, events, error=None, write_only=False):
        """Handle IO/Event loop events, processing them.

        :param int fd: The file descriptor for the events
        :param int events: Events from the IO/Event loop
        :param int error: Was an error specified; TODO none of the current
            adapters appear to be able to pass the `error` arg - is it needed?
:param bool write_only: Only handle write events """ if not self.socket: LOGGER.error('Received events on closed socket: %r', fd) return if self.socket and (events & self.WRITE): self._handle_write() self._manage_event_state() if self.socket and not write_only and (events & self.READ): self._handle_read() if (self.socket and write_only and (events & self.READ) and (events & self.ERROR)): error_msg = ('BAD libc: Write-Only but Read+Error. ' 'Assume socket disconnected.') LOGGER.error(error_msg) self._on_terminate(connection.InternalCloseReasons.SOCKET_ERROR, error_msg) if self.socket and (events & self.ERROR): LOGGER.error('Error event %r, %r', events, error) self._handle_error(error) def _handle_read(self): """Read from the socket and call our on_data_available with the data.""" try: while True: try: if self.params.ssl: data = self.socket.read(self._buffer_size) else: data = self.socket.recv(self._buffer_size) break except _SOCKET_ERROR as error: if error.errno == errno.EINTR: continue else: raise except socket.timeout: self._handle_timeout() return 0 except ssl.SSLError as error: if error.args[0] == ssl.SSL_ERROR_WANT_READ: # ssl wants more data but there is nothing currently # available in the socket, wait for it to become readable. 
                return 0
            return self._handle_error(error)

        except _SOCKET_ERROR as error:
            if error.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
                return 0
            return self._handle_error(error)

        # Empty data, should disconnect
        if not data or data == 0:
            LOGGER.error('Read empty data, calling disconnect')
            return self._on_terminate(
                connection.InternalCloseReasons.SOCKET_ERROR, "EOF")

        # Pass the data into our top level frame dispatching method
        self._on_data_available(data)
        return len(data)

    def _handle_write(self):
        """Try and write as much as we can, if we get blocked requeue
        what's left"""
        total_bytes_sent = 0
        try:
            while self.outbound_buffer:
                frame = self.outbound_buffer.popleft()
                while True:
                    try:
                        num_bytes_sent = self.socket.send(frame)
                        break
                    except _SOCKET_ERROR as error:
                        if error.errno == errno.EINTR:
                            continue
                        else:
                            raise

                total_bytes_sent += num_bytes_sent
                if num_bytes_sent < len(frame):
                    LOGGER.debug("Partial write, requeuing remaining data")
                    self.outbound_buffer.appendleft(frame[num_bytes_sent:])
                    break

        except socket.timeout:
            # Will only come here if the socket is blocking
            LOGGER.debug("socket timeout, requeuing frame")
            self.outbound_buffer.appendleft(frame)
            self._handle_timeout()

        except _SOCKET_ERROR as error:
            if error.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
                LOGGER.debug("Would block, requeuing frame")
                self.outbound_buffer.appendleft(frame)
            else:
                return self._handle_error(error)

        return total_bytes_sent

    def _init_connection_state(self):
        """Initialize or reset all of our internal state variables for a given
        connection. If we disconnect and reconnect, all of our state needs to
        be wiped.

        """
        super(BaseConnection, self)._init_connection_state()
        self.base_events = self.READ | self.ERROR
        self.event_state = self.base_events
        self.socket = None

    def _manage_event_state(self):
        """Manage the bitmask for reading/writing/error which is used by the
        io/event handler to specify when there is an event such as a read or
        write.
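        As a rough illustration, the mask transitions described above can be
        sketched in isolation (a simplified stand-in, not this method's actual
        code; constants mirror the class attributes):

        ```python
        # WRITE is OR'ed into the mask while output is pending; the mask
        # reverts to the base READ|ERROR mask once the buffer drains.
        READ, WRITE, ERROR = 0x0001, 0x0004, 0x0008
        BASE_EVENTS = READ | ERROR

        def next_event_state(event_state, outbound_buffer):
            if outbound_buffer:
                return event_state | WRITE
            return BASE_EVENTS

        assert next_event_state(BASE_EVENTS, [b'frame']) == READ | WRITE | ERROR
        assert next_event_state(READ | WRITE | ERROR, []) == BASE_EVENTS
        ```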
""" if self.outbound_buffer: if not self.event_state & self.WRITE: self.event_state |= self.WRITE self.ioloop.update_handler(self.socket.fileno(), self.event_state) elif self.event_state & self.WRITE: self.event_state = self.base_events self.ioloop.update_handler(self.socket.fileno(), self.event_state) def _wrap_socket(self, sock): """Wrap the socket for connecting over SSL. :rtype: ssl.SSLSocket """ ssl_options = self.params.ssl_options or {} return ssl.wrap_socket(sock, do_handshake_on_connect=self.DO_HANDSHAKE, **ssl_options) pika-0.11.0/pika/adapters/blocking_connection.py000066400000000000000000003134651315131611700216400ustar00rootroot00000000000000"""The blocking connection adapter module implements blocking semantics on top of Pika's core AMQP driver. While most of the asynchronous expectations are removed when using the blocking connection adapter, it attempts to remain true to the asynchronous RPC nature of the AMQP protocol, supporting server sent RPC commands. The user facing classes in the module consist of the :py:class:`~pika.adapters.blocking_connection.BlockingConnection` and the :class:`~pika.adapters.blocking_connection.BlockingChannel` classes. """ # Disable "access to protected member warnings: this wrapper implementation is # a friend of those instances # pylint: disable=W0212 from collections import namedtuple, deque import contextlib import functools import logging import time import pika.channel from pika import compat from pika import exceptions import pika.spec # NOTE: import SelectConnection after others to avoid circular depenency from pika.adapters.select_connection import SelectConnection LOGGER = logging.getLogger(__name__) class _CallbackResult(object): """ CallbackResult is a non-thread-safe implementation for receiving callback results; INTERNAL USE ONLY! 
""" __slots__ = ('_value_class', '_ready', '_values') def __init__(self, value_class=None): """ :param callable value_class: only needed if the CallbackResult instance will be used with `set_value_once` and `append_element`. *args and **kwargs of the value setter methods will be passed to this class. """ self._value_class = value_class self._ready = None self._values = None self.reset() def reset(self): """Reset value, but not _value_class""" self._ready = False self._values = None def __bool__(self): """ Called by python runtime to implement truth value testing and the built-in operation bool(); NOTE: python 3.x """ return self.is_ready() # python 2.x version of __bool__ __nonzero__ = __bool__ def __enter__(self): """ Entry into context manager that automatically resets the object on exit; this usage pattern helps garbage-collection by eliminating potential circular references. """ return self def __exit__(self, *args, **kwargs): """Reset value""" self.reset() def is_ready(self): """ :returns: True if the object is in a signaled state """ return self._ready @property def ready(self): """True if the object is in a signaled state""" return self._ready def signal_once(self, *_args, **_kwargs): """ Set as ready :raises AssertionError: if result was already signalled """ assert not self._ready, '_CallbackResult was already set' self._ready = True def set_value_once(self, *args, **kwargs): """ Set as ready with value; the value may be retrieved via the `value` property getter :raises AssertionError: if result was already set """ self.signal_once() try: self._values = (self._value_class(*args, **kwargs),) except Exception: LOGGER.error( "set_value_once failed: value_class=%r; args=%r; kwargs=%r", self._value_class, args, kwargs) raise def append_element(self, *args, **kwargs): """Append an element to values""" assert not self._ready or isinstance(self._values, list), ( '_CallbackResult state is incompatible with append_element: ' 'ready=%r; values=%r' % (self._ready, 
self._values)) try: value = self._value_class(*args, **kwargs) except Exception: LOGGER.error( "append_element failed: value_class=%r; args=%r; kwargs=%r", self._value_class, args, kwargs) raise if self._values is None: self._values = [value] else: self._values.append(value) self._ready = True @property def value(self): """ :returns: a reference to the value that was set via `set_value_once` :raises AssertionError: if result was not set or value is incompatible with `set_value_once` """ assert self._ready, '_CallbackResult was not set' assert isinstance(self._values, tuple) and len(self._values) == 1, ( '_CallbackResult value is incompatible with set_value_once: %r' % (self._values,)) return self._values[0] @property def elements(self): """ :returns: a reference to the list containing one or more elements that were added via `append_element` :raises AssertionError: if result was not set or value is incompatible with `append_element` """ assert self._ready, '_CallbackResult was not set' assert isinstance(self._values, list) and len(self._values) > 0, ( '_CallbackResult value is incompatible with append_element: %r' % (self._values,)) return self._values class _IoloopTimerContext(object): """Context manager for registering and safely unregistering a SelectConnection ioloop-based timer """ def __init__(self, duration, connection): """ :param float duration: non-negative timer duration in seconds :param SelectConnection connection: """ assert hasattr(connection, 'add_timeout'), connection self._duration = duration self._connection = connection self._callback_result = _CallbackResult() self._timer_id = None def __enter__(self): """Register a timer""" self._timer_id = self._connection.add_timeout( self._duration, self._callback_result.signal_once) return self def __exit__(self, *_args, **_kwargs): """Unregister timer if it hasn't fired yet""" if not self._callback_result: self._connection.remove_timeout(self._timer_id) def is_ready(self): """ :returns: True if timer has 
fired, False otherwise """ return self._callback_result.is_ready() class _TimerEvt(object): """Represents a timer created via `BlockingConnection.add_timeout`""" __slots__ = ('timer_id', '_callback') def __init__(self, callback): """ :param callback: see callback_method in `BlockingConnection.add_timeout` """ self._callback = callback # Will be set to timer id returned from the underlying implementation's # `add_timeout` method self.timer_id = None def __repr__(self): return '<%s timer_id=%s callback=%s>' % (self.__class__.__name__, self.timer_id, self._callback) def dispatch(self): """Dispatch the user's callback method""" self._callback() class _ConnectionBlockedUnblockedEvtBase(object): """Base class for `_ConnectionBlockedEvt` and `_ConnectionUnblockedEvt`""" __slots__ = ('_callback', '_method_frame') def __init__(self, callback, method_frame): """ :param callback: see callback_method parameter in `BlockingConnection.add_on_connection_blocked_callback` and `BlockingConnection.add_on_connection_unblocked_callback` :param pika.frame.Method method_frame: with method_frame.method of type `pika.spec.Connection.Blocked` or `pika.spec.Connection.Unblocked` """ self._callback = callback self._method_frame = method_frame def __repr__(self): return '<%s callback=%s, frame=%s>' % (self.__class__.__name__, self._callback, self._method_frame) def dispatch(self): """Dispatch the user's callback method""" self._callback(self._method_frame) class _ConnectionBlockedEvt(_ConnectionBlockedUnblockedEvtBase): """Represents a Connection.Blocked notification from RabbitMQ broker`""" pass class _ConnectionUnblockedEvt(_ConnectionBlockedUnblockedEvtBase): """Represents a Connection.Unblocked notification from RabbitMQ broker`""" pass class BlockingConnection(object): """The BlockingConnection creates a layer on top of Pika's asynchronous core providing methods that will block until their expected response has returned. 
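    The block-until-reply idea can be illustrated in miniature, independent
    of AMQP (a toy sketch, not this adapter's actual machinery):

    ```python
    from collections import deque

    # Toy model of "blocking on top of async": run queued I/O events until
    # the callback for the outstanding request fires, then return the reply
    # synchronously to the caller.
    def blocking_call(start_async_op):
        result = {}
        events = deque()
        start_async_op(events, lambda value: result.setdefault('value', value))
        while 'value' not in result:   # analogous to flushing output / polling
            events.popleft()()         # dispatch one ready event
        return result['value']

    reply = blocking_call(
        lambda events, on_done: events.append(lambda: on_done(42)))
    assert reply == 42
    ```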
    Due to the asynchronous nature of the `Basic.Deliver` and `Basic.Return`
    calls from RabbitMQ to your application, you can still implement
    continuation-passing style asynchronous methods if you'd like to receive
    messages from RabbitMQ using
    :meth:`basic_consume <BlockingChannel.basic_consume>` or if you want to be
    notified of a delivery failure when using
    :meth:`basic_publish <BlockingChannel.basic_publish>`.

    For more information about communicating with the blocking_connection
    adapter, be sure to check out the
    :class:`BlockingChannel <BlockingChannel>` class which implements the
    :class:`Channel <pika.channel.Channel>` based communication for the
    blocking_connection adapter.

    To prevent recursion/reentrancy, the blocking connection and channel
    implementations queue asynchronously-delivered events received
    in nested context (e.g., while waiting for `BlockingConnection.channel` or
    `BlockingChannel.queue_declare` to complete), dispatching them synchronously
    once nesting returns to the desired context. This concerns all callbacks,
    such as those registered via `BlockingConnection.add_timeout`,
    `BlockingConnection.add_on_connection_blocked_callback`,
    `BlockingConnection.add_on_connection_unblocked_callback`,
    `BlockingChannel.basic_consume`, etc.

    Blocked Connection deadlock avoidance: when RabbitMQ becomes low on
    resources, it emits Connection.Blocked (AMQP extension) to the client
    connection when client makes a resource-consuming request on that connection
    or its channel (e.g., `Basic.Publish`); subsequently, RabbitMQ suspends
    processing requests from that connection until the affected resources are
    restored. See http://www.rabbitmq.com/connection-blocked.html. This
    may impact `BlockingConnection` and `BlockingChannel` operations in a
    way that users might not be expecting.
For example, if the user dispatches `BlockingChannel.basic_publish` in non-publisher-confirmation mode while RabbitMQ is in this low-resource state followed by a synchronous request (e.g., `BlockingConnection.channel`, `BlockingChannel.consume`, `BlockingChannel.basic_consume`, etc.), the synchronous request will block indefinitely (until Connection.Unblocked) waiting for RabbitMQ to reply. If the blocked state persists for a long time, the blocking operation will appear to hang. In this state, `BlockingConnection` instance and its channels will not dispatch user callbacks. SOLUTION: To break this potential deadlock, applications may configure the `blocked_connection_timeout` connection parameter when instantiating `BlockingConnection`. Upon blocked connection timeout, this adapter will raise ConnectionClosed exception with first exception arg of `pika.connection.InternalCloseReasons.BLOCKED_CONNECTION_TIMEOUT`. See `pika.connection.ConnectionParameters` documentation to learn more about `blocked_connection_timeout` configuration. """ # Connection-opened callback args _OnOpenedArgs = namedtuple('BlockingConnection__OnOpenedArgs', 'connection') # Connection-establishment error callback args _OnOpenErrorArgs = namedtuple('BlockingConnection__OnOpenErrorArgs', 'connection error') # Connection-closing callback args _OnClosedArgs = namedtuple('BlockingConnection__OnClosedArgs', 'connection reason_code reason_text') # Channel-opened callback args _OnChannelOpenedArgs = namedtuple( 'BlockingConnection__OnChannelOpenedArgs', 'channel') def __init__(self, parameters=None, _impl_class=None): """Create a new instance of the Connection object. 
        :param pika.connection.Parameters parameters: Connection parameters
        :param _impl_class: for tests/debugging only; implementation class;
            None=default

        :raises RuntimeError:

        """
        # Used by the _acquire_event_dispatch decorator; when already greater
        # than 0, event dispatch is already acquired higher up the call stack
        self._event_dispatch_suspend_depth = 0

        # Connection-specific events that are ready for dispatch: _TimerEvt,
        # _ConnectionBlockedEvt, _ConnectionUnblockedEvt
        self._ready_events = deque()

        # Channel numbers of channels that are requesting a call to their
        # BlockingChannel._dispatch_events method; See
        # `_request_channel_dispatch`
        self._channels_pending_dispatch = set()

        # Receives on_open_callback args from Connection
        self._opened_result = _CallbackResult(self._OnOpenedArgs)

        # Receives on_open_error_callback args from Connection
        self._open_error_result = _CallbackResult(self._OnOpenErrorArgs)

        # Receives on_close_callback args from Connection
        self._closed_result = _CallbackResult(self._OnClosedArgs)

        # Set to True when user calls close() on the connection
        # NOTE: this is a workaround to detect socket error because
        # on_close_callback passes reason_code=0 when called due to socket error
        self._user_initiated_close = False

        impl_class = _impl_class or SelectConnection
        self._impl = impl_class(
            parameters=parameters,
            on_open_callback=self._opened_result.set_value_once,
            on_open_error_callback=self._open_error_result.set_value_once,
            on_close_callback=self._closed_result.set_value_once,
            stop_ioloop_on_close=False)

        self._impl.ioloop.activate_poller()

        self._process_io_for_connection_setup()

    def __repr__(self):
        return '<%s impl=%r>' % (self.__class__.__name__, self._impl)

    def _cleanup(self):
        """Clean up members that might inhibit garbage collection"""
        self._impl.ioloop.deactivate_poller()
        self._ready_events.clear()
        self._opened_result.reset()
        self._open_error_result.reset()
        self._closed_result.reset()

    @contextlib.contextmanager
    def _acquire_event_dispatch(self):
        """
Context manager that controls access to event dispatcher for preventing reentrancy. The "as" value is True if the managed code block owns the event dispatcher and False if caller higher up in the call stack already owns it. Only managed code that gets ownership (got True) is permitted to dispatch """ try: # __enter__ part self._event_dispatch_suspend_depth += 1 yield self._event_dispatch_suspend_depth == 1 finally: # __exit__ part self._event_dispatch_suspend_depth -= 1 def _process_io_for_connection_setup(self): """ Perform follow-up processing for connection setup request: flush connection output and process input while waiting for connection-open or connection-error. :raises AMQPConnectionError: on connection open error """ if not self._open_error_result.ready: self._flush_output(self._opened_result.is_ready, self._open_error_result.is_ready) if self._open_error_result.ready: try: exception_or_message = self._open_error_result.value.error if isinstance(exception_or_message, Exception): raise exception_or_message raise exceptions.AMQPConnectionError(exception_or_message) finally: self._cleanup() assert self._opened_result.ready assert self._opened_result.value.connection is self._impl def _flush_output(self, *waiters): """ Flush output and process input while waiting for any of the given callbacks to return true. The wait is aborted upon connection-close. Otherwise, processing continues until the output is flushed AND at least one of the callbacks returns true. If there are no callbacks, then processing ends when all output is flushed. :param waiters: sequence of zero or more callables taking no args and returning true when it's time to stop processing. Their results are OR'ed together. 
""" if self.is_closed: raise exceptions.ConnectionClosed() # Conditions for terminating the processing loop: # connection closed # OR # empty outbound buffer and no waiters # OR # empty outbound buffer and any waiter is ready is_done = (lambda: self._closed_result.ready or (not self._impl.outbound_buffer and (not waiters or any(ready() for ready in waiters)))) # Process I/O until our completion condition is satisified while not is_done(): self._impl.ioloop.poll() self._impl.ioloop.process_timeouts() if self._open_error_result.ready or self._closed_result.ready: try: if not self._user_initiated_close: if self._open_error_result.ready: maybe_exception = self._open_error_result.value.error LOGGER.error('Connection open failed - %r', maybe_exception) if isinstance(maybe_exception, Exception): raise maybe_exception else: raise exceptions.ConnectionClosed(maybe_exception) else: result = self._closed_result.value LOGGER.error('Connection close detected; result=%r', result) raise exceptions.ConnectionClosed(result.reason_code, result.reason_text) else: LOGGER.info('Connection closed; result=%r', self._closed_result.value) finally: self._cleanup() def _request_channel_dispatch(self, channel_number): """Called by BlockingChannel instances to request a call to their _dispatch_events method or to terminate `process_data_events`; BlockingConnection will honor these requests from a safe context. 
:param int channel_number: positive channel number to request a call to the channel's `_dispatch_events`; a negative channel number to request termination of `process_data_events` """ self._channels_pending_dispatch.add(channel_number) def _dispatch_channel_events(self): """Invoke the `_dispatch_events` method on open channels that requested it """ if not self._channels_pending_dispatch: return with self._acquire_event_dispatch() as dispatch_acquired: if not dispatch_acquired: # Nested dispatch or dispatch blocked higher in call stack return candidates = list(self._channels_pending_dispatch) self._channels_pending_dispatch.clear() for channel_number in candidates: if channel_number < 0: # This was meant to terminate process_data_events continue try: impl_channel = self._impl._channels[channel_number] except KeyError: continue if impl_channel.is_open: impl_channel._get_cookie()._dispatch_events() def _on_timer_ready(self, evt): """Handle expiry of a timer that was registered via `add_timeout` :param _TimerEvt evt: """ self._ready_events.append(evt) def _on_connection_blocked(self, user_callback, method_frame): """Handle Connection.Blocked notification from RabbitMQ broker :param callable user_callback: callback_method passed to `add_on_connection_blocked_callback` :param pika.frame.Method method_frame: method frame having `method` member of type `pika.spec.Connection.Blocked` """ self._ready_events.append( _ConnectionBlockedEvt(user_callback, method_frame)) def _on_connection_unblocked(self, user_callback, method_frame): """Handle Connection.Unblocked notification from RabbitMQ broker :param callable user_callback: callback_method passed to `add_on_connection_unblocked_callback` :param pika.frame.Method method_frame: method frame having `method` member of type `pika.spec.Connection.Blocked` """ self._ready_events.append( _ConnectionUnblockedEvt(user_callback, method_frame)) def _dispatch_connection_events(self): """Dispatch ready connection events""" if not 
self._ready_events: return with self._acquire_event_dispatch() as dispatch_acquired: if not dispatch_acquired: # Nested dispatch or dispatch blocked higher in call stack return # Limit dispatch to the number of currently ready events to avoid # getting stuck in this loop for _ in compat.xrange(len(self._ready_events)): try: evt = self._ready_events.popleft() except IndexError: # Some events (e.g., timers) must have been cancelled break evt.dispatch() def add_on_connection_blocked_callback(self, callback_method): """Add a callback to be notified when RabbitMQ has sent a `Connection.Blocked` frame indicating that RabbitMQ is low on resources. Publishers can use this to voluntarily suspend publishing, instead of relying on back pressure throttling. The callback will be passed the `Connection.Blocked` method frame. See also `ConnectionParameters.blocked_connection_timeout`. :param method callback_method: Callback to call on `Connection.Blocked`, having the signature `callback_method(pika.frame.Method)`, where the method frame's `method` member is of type `pika.spec.Connection.Blocked` """ self._impl.add_on_connection_blocked_callback( functools.partial(self._on_connection_blocked, callback_method)) def add_on_connection_unblocked_callback(self, callback_method): """Add a callback to be notified when RabbitMQ has sent a `Connection.Unblocked` frame letting publishers know it's ok to start publishing again. The callback will be passed the `Connection.Unblocked` method frame. :param method callback_method: Callback to call on `Connection.Unblocked`, having the signature `callback_method(pika.frame.Method)`, where the method frame's `method` member is of type `pika.spec.Connection.Unblocked` """ self._impl.add_on_connection_unblocked_callback( functools.partial(self._on_connection_unblocked, callback_method)) def add_timeout(self, deadline, callback_method): """Create a single-shot timer to fire after deadline seconds. 
Do not confuse with Tornado's timeout where you pass in the time you want to have your callback called. Only pass in the seconds until it's to be called. NOTE: the timer callbacks are dispatched only in the scope of specially-designated methods: see `BlockingConnection.process_data_events` and `BlockingChannel.start_consuming`. :param float deadline: The number of seconds to wait to call callback :param callable callback_method: The callback method with the signature callback_method() :returns: opaque timer id """ if not callable(callback_method): raise ValueError( 'callback_method parameter must be callable, but got %r' % (callback_method,)) evt = _TimerEvt(callback=callback_method) timer_id = self._impl.add_timeout( deadline, functools.partial(self._on_timer_ready, evt)) evt.timer_id = timer_id return timer_id def remove_timeout(self, timeout_id): """Remove a timer if it's still in the timeout stack :param timeout_id: The opaque timer id to remove """ # Remove from the impl's timeout stack self._impl.remove_timeout(timeout_id) # Remove from ready events, if the timer fired already for i, evt in enumerate(self._ready_events): if isinstance(evt, _TimerEvt) and evt.timer_id == timeout_id: index_to_remove = i break else: # Not found return del self._ready_events[index_to_remove] def close(self, reply_code=200, reply_text='Normal shutdown'): """Disconnect from RabbitMQ. If there are any open channels, it will attempt to close them prior to fully disconnecting. Channels which have active consumers will attempt to send a Basic.Cancel to RabbitMQ to cleanly stop the delivery of messages prior to closing the channel. 
:param int reply_code: The code number for the close :param str reply_text: The text reason for the close """ if self.is_closed: LOGGER.debug('Close called on closed connection (%s): %s', reply_code, reply_text) return LOGGER.info('Closing connection (%s): %s', reply_code, reply_text) self._user_initiated_close = True # Close channels that remain opened for impl_channel in pika.compat.dictvalues(self._impl._channels): channel = impl_channel._get_cookie() if channel.is_open: try: channel.close(reply_code, reply_text) except exceptions.ChannelClosed as exc: # Log and suppress broker-closed channel LOGGER.warning('Got ChannelClosed while closing channel ' 'from connection.close: %r', exc) # Close the connection self._impl.close(reply_code, reply_text) self._flush_output(self._closed_result.is_ready) def process_data_events(self, time_limit=0): """Will make sure that data events are processed. Dispatches timer and channel callbacks if not called from the scope of BlockingConnection or BlockingChannel callback. Your app can block on this method. :param float time_limit: suggested upper bound on processing time in seconds. The actual blocking time depends on the granularity of the underlying ioloop. Zero means return as soon as possible. None means there is no limit on processing time and the function will block until I/O produces actionable events. Defaults to 0 for backward compatibility. This parameter is NEW in pika 0.10.0. 
""" with self._acquire_event_dispatch() as dispatch_acquired: # Check if we can actually process pending events common_terminator = lambda: bool(dispatch_acquired and (self._channels_pending_dispatch or self._ready_events)) if time_limit is None: self._flush_output(common_terminator) else: with _IoloopTimerContext(time_limit, self._impl) as timer: self._flush_output(timer.is_ready, common_terminator) if self._ready_events: self._dispatch_connection_events() if self._channels_pending_dispatch: self._dispatch_channel_events() def sleep(self, duration): """A safer way to sleep than calling time.sleep() directly that would keep the adapter from ignoring frames sent from the broker. The connection will "sleep" or block the number of seconds specified in duration in small intervals. :param float duration: The time to sleep in seconds """ assert duration >= 0, duration deadline = time.time() + duration time_limit = duration # Process events at least once while True: self.process_data_events(time_limit) time_limit = deadline - time.time() if time_limit <= 0: break def channel(self, channel_number=None): """Create a new channel with the next available channel number or pass in a channel number to use. Must be non-zero if you would like to specify but it is recommended that you let Pika manage the channel numbers. 
:rtype: pika.adapters.blocking_connection.BlockingChannel """ with _CallbackResult(self._OnChannelOpenedArgs) as opened_args: impl_channel = self._impl.channel( on_open_callback=opened_args.set_value_once, channel_number=channel_number) # Create our proxy channel channel = BlockingChannel(impl_channel, self) # Link implementation channel with our proxy channel impl_channel._set_cookie(channel) # Drive I/O until Channel.Open-ok channel._flush_output(opened_args.is_ready) return channel def __enter__(self): # Prepare `with` context return self def __exit__(self, exc_type, value, traceback): # Close connection after `with` context self.close() # # Connection state properties # @property def is_closed(self): """ Returns True if the connection is closed. """ return self._impl.is_closed @property def is_closing(self): """ Returns True if connection is in the process of closing due to client-initiated `close` request, but closing is not yet complete. """ return self._impl.is_closing @property def is_open(self): """ Returns True if the connection is open. """ return self._impl.is_open # # Properties that reflect server capabilities for the current connection # @property def basic_nack_supported(self): """Specifies if the server supports basic.nack on the active connection. :rtype: bool """ return self._impl.basic_nack @property def consumer_cancel_notify_supported(self): """Specifies if the server supports consumer cancel notification on the active connection. :rtype: bool """ return self._impl.consumer_cancel_notify @property def exchange_exchange_bindings_supported(self): """Specifies if the active connection supports exchange to exchange bindings. :rtype: bool """ return self._impl.exchange_exchange_bindings @property def publisher_confirms_supported(self): """Specifies if the active connection can use publisher confirmations.
:rtype: bool """ return self._impl.publisher_confirms # Legacy property names for backward compatibility basic_nack = basic_nack_supported consumer_cancel_notify = consumer_cancel_notify_supported exchange_exchange_bindings = exchange_exchange_bindings_supported publisher_confirms = publisher_confirms_supported class _ChannelPendingEvt(object): """Base class for BlockingChannel pending events""" pass class _ConsumerDeliveryEvt(_ChannelPendingEvt): """This event represents consumer message delivery `Basic.Deliver`; it contains method, properties, and body of the delivered message. """ __slots__ = ('method', 'properties', 'body') def __init__(self, method, properties, body): """ :param spec.Basic.Deliver method: NOTE: consumer_tag and delivery_tag are valid only within source channel :param spec.BasicProperties properties: message properties :param body: message body; empty string if no body :type body: str or unicode """ self.method = method self.properties = properties self.body = body class _ConsumerCancellationEvt(_ChannelPendingEvt): """This event represents server-initiated consumer cancellation delivered to client via Basic.Cancel. 
After receiving Basic.Cancel, there will be no further deliveries for the consumer identified by `consumer_tag` in `Basic.Cancel` """ __slots__ = ('method_frame') def __init__(self, method_frame): """ :param pika.frame.Method method_frame: method frame with method of type `spec.Basic.Cancel` """ self.method_frame = method_frame def __repr__(self): return '<%s method_frame=%r>' % (self.__class__.__name__, self.method_frame) @property def method(self): """method of type spec.Basic.Cancel""" return self.method_frame.method class _ReturnedMessageEvt(_ChannelPendingEvt): """This event represents a message returned by broker via `Basic.Return`""" __slots__ = ('callback', 'channel', 'method', 'properties', 'body') def __init__(self, callback, channel, method, properties, body): """ :param callable callback: user's callback, having the signature callback(channel, method, properties, body), where channel: pika.Channel method: pika.spec.Basic.Return properties: pika.spec.BasicProperties body: str, unicode, or bytes (python 3.x) :param pika.Channel channel: :param pika.spec.Basic.Return method: :param pika.spec.BasicProperties properties: :param body: str, unicode, or bytes (python 3.x) """ self.callback = callback self.channel = channel self.method = method self.properties = properties self.body = body def __repr__(self): return ('<%s callback=%r channel=%r method=%r properties=%r ' 'body=%.300r>') % (self.__class__.__name__, self.callback, self.channel, self.method, self.properties, self.body) def dispatch(self): """Dispatch user's callback""" self.callback(self.channel, self.method, self.properties, self.body) class ReturnedMessage(object): """Represents a message returned via Basic.Return in publish-acknowledgments mode """ __slots__ = ('method', 'properties', 'body') def __init__(self, method, properties, body): """ :param spec.Basic.Return method: :param spec.BasicProperties properties: message properties :param body: message body; empty string if no body :type body: 
str or unicode """ self.method = method self.properties = properties self.body = body class _ConsumerInfo(object): """Information about an active consumer""" __slots__ = ('consumer_tag', 'no_ack', 'consumer_cb', 'alternate_event_sink', 'state') # Consumer states SETTING_UP = 1 ACTIVE = 2 TEARING_DOWN = 3 CANCELLED_BY_BROKER = 4 def __init__(self, consumer_tag, no_ack, consumer_cb=None, alternate_event_sink=None): """ NOTE: exactly one of consumer_cb/alternate_event_sink must be non-None. :param str consumer_tag: :param bool no_ack: the no-ack value for the consumer :param callable consumer_cb: The function for dispatching messages to user, having the signature: consumer_callback(channel, method, properties, body) channel: BlockingChannel method: spec.Basic.Deliver properties: spec.BasicProperties body: str or unicode :param callable alternate_event_sink: if specified, _ConsumerDeliveryEvt and _ConsumerCancellationEvt objects will be diverted to this callback instead of being deposited in the channel's `_pending_events` container.
Signature: alternate_event_sink(evt) """ assert (consumer_cb is None) != (alternate_event_sink is None), ( 'exactly one of consumer_cb/alternate_event_sink must be non-None', consumer_cb, alternate_event_sink) self.consumer_tag = consumer_tag self.no_ack = no_ack self.consumer_cb = consumer_cb self.alternate_event_sink = alternate_event_sink self.state = self.SETTING_UP @property def setting_up(self): """True if in SETTING_UP state""" return self.state == self.SETTING_UP @property def active(self): """True if in ACTIVE state""" return self.state == self.ACTIVE @property def tearing_down(self): """True if in TEARING_DOWN state""" return self.state == self.TEARING_DOWN @property def cancelled_by_broker(self): """True if in CANCELLED_BY_BROKER state""" return self.state == self.CANCELLED_BY_BROKER class _QueueConsumerGeneratorInfo(object): """Container for information about the active queue consumer generator """ __slots__ = ('params', 'consumer_tag', 'pending_events') def __init__(self, params, consumer_tag): """ :params tuple params: a three-tuple (queue, no_ack, exclusive) that were used to create the queue consumer :param str consumer_tag: consumer tag """ self.params = params self.consumer_tag = consumer_tag #self.messages = deque() # Holds pending events of types _ConsumerDeliveryEvt and # _ConsumerCancellationEvt self.pending_events = deque() def __repr__(self): return '<%s params=%r consumer_tag=%r>' % ( self.__class__.__name__, self.params, self.consumer_tag) class BlockingChannel(object): """The BlockingChannel implements blocking semantics for most things that one would use callback-passing-style for with the :py:class:`~pika.channel.Channel` class. In addition, the `BlockingChannel` class implements a :term:`generator` that allows you to :doc:`consume messages ` without using callbacks. 
Example of creating a BlockingChannel:: import pika # Create our connection object connection = pika.BlockingConnection() # The returned object will be a synchronous channel channel = connection.channel() """ # Used as value_class with _CallbackResult for receiving Basic.GetOk args _RxMessageArgs = namedtuple( 'BlockingChannel__RxMessageArgs', [ 'channel', # implementation pika.Channel instance 'method', # Basic.GetOk 'properties', # pika.spec.BasicProperties 'body' # str, unicode, or bytes (python 3.x) ]) # For use as value_class with any _CallbackResult that expects method_frame # as the only arg _MethodFrameCallbackResultArgs = namedtuple( 'BlockingChannel__MethodFrameCallbackResultArgs', 'method_frame') # Broker's basic-ack/basic-nack args when delivery confirmation is enabled; # may concern a single or multiple messages _OnMessageConfirmationReportArgs = namedtuple( 'BlockingChannel__OnMessageConfirmationReportArgs', 'method_frame') # Parameters for broker-initiated Channel.Close request: reply_code # holds the broker's non-zero error code and reply_text holds the # corresponding error message text. _OnChannelClosedByBrokerArgs = namedtuple( 'BlockingChannel__OnChannelClosedByBrokerArgs', 'method_frame') # For use as value_class with _CallbackResult expecting Channel.Flow # confirmation. 
_FlowOkCallbackResultArgs = namedtuple( 'BlockingChannel__FlowOkCallbackResultArgs', 'active' # True if broker will start or continue sending; False if not ) _CONSUMER_CANCELLED_CB_KEY = 'blocking_channel_consumer_cancelled' def __init__(self, channel_impl, connection): """Create a new instance of the Channel :param channel_impl: Channel implementation object as returned from SelectConnection.channel() :param BlockingConnection connection: The connection object """ self._impl = channel_impl self._connection = connection # A mapping of consumer tags to _ConsumerInfo for active consumers self._consumer_infos = dict() # Queue consumer generator generator info of type # _QueueConsumerGeneratorInfo created by BlockingChannel.consume self._queue_consumer_generator = None # Whether RabbitMQ delivery confirmation has been enabled self._delivery_confirmation = False # Receives message delivery confirmation report (Basic.ack or # Basic.nack) from broker when delivery confirmations are enabled self._message_confirmation_result = _CallbackResult( self._OnMessageConfirmationReportArgs) # deque of pending events: _ConsumerDeliveryEvt and # _ConsumerCancellationEvt objects that will be returned by # `BlockingChannel.get_event()` self._pending_events = deque() # Holds a ReturnedMessage object representing a message received via # Basic.Return in publisher-acknowledgments mode. 
self._puback_return = None # Receives Basic.ConsumeOk reply from server self._basic_consume_ok_result = _CallbackResult() # Receives the broker-initiated Channel.Close parameters self._channel_closed_by_broker_result = _CallbackResult( self._OnChannelClosedByBrokerArgs) # Receives args from Basic.GetEmpty response # http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.get self._basic_getempty_result = _CallbackResult( self._MethodFrameCallbackResultArgs) self._impl.add_on_cancel_callback(self._on_consumer_cancelled_by_broker) self._impl.add_callback( self._basic_consume_ok_result.signal_once, replies=[pika.spec.Basic.ConsumeOk], one_shot=False) self._impl.add_callback( self._channel_closed_by_broker_result.set_value_once, replies=[pika.spec.Channel.Close], one_shot=True) self._impl.add_callback( self._basic_getempty_result.set_value_once, replies=[pika.spec.Basic.GetEmpty], one_shot=False) LOGGER.info("Created channel=%s", self.channel_number) def __int__(self): """Return the channel object as its channel number NOTE: inherited from legacy BlockingConnection; might be error-prone; use `channel_number` property instead. :rtype: int """ return self.channel_number def __repr__(self): return '<%s impl=%r>' % (self.__class__.__name__, self._impl) def __enter__(self): return self def __exit__(self, exc_type, value, traceback): try: self.close() except exceptions.ChannelClosed: pass def _cleanup(self): """Clean up members that might inhibit garbage collection""" self._message_confirmation_result.reset() self._pending_events = deque() self._consumer_infos = dict() @property def channel_number(self): """Channel number""" return self._impl.channel_number @property def connection(self): """The channel's BlockingConnection instance""" return self._connection @property def is_closed(self): """Returns True if the channel is closed.
:rtype: bool """ return self._impl.is_closed @property def is_closing(self): """Returns True if client-initiated closing of the channel is in progress. :rtype: bool """ return self._impl.is_closing @property def is_open(self): """Returns True if the channel is open. :rtype: bool """ return self._impl.is_open _ALWAYS_READY_WAITERS = ((lambda: True), ) def _flush_output(self, *waiters): """ Flush output and process input while waiting for any of the given callbacks to return true. The wait is aborted upon channel-close or connection-close. Otherwise, processing continues until the output is flushed AND at least one of the callbacks returns true. If there are no callbacks, then processing ends when all output is flushed. :param waiters: sequence of zero or more callables taking no args and returning true when it's time to stop processing. Their results are OR'ed together. """ if self.is_closed: raise exceptions.ChannelClosed() if not waiters: waiters = self._ALWAYS_READY_WAITERS self._connection._flush_output( self._channel_closed_by_broker_result.is_ready, *waiters) if self._channel_closed_by_broker_result: # Channel was force-closed by broker self._cleanup() method = ( self._channel_closed_by_broker_result.value.method_frame.method) raise exceptions.ChannelClosed(method.reply_code, method.reply_text) def _on_puback_message_returned(self, channel, method, properties, body): """Called as the result of Basic.Return from broker in publisher-acknowledgements mode. Saves the info as a ReturnedMessage instance in self._puback_return. 
:param pika.Channel channel: our self._impl channel :param pika.spec.Basic.Return method: :param pika.spec.BasicProperties properties: message properties :param body: returned message body; empty string if no body :type body: str, unicode """ assert channel is self._impl, ( channel.channel_number, self.channel_number) assert isinstance(method, pika.spec.Basic.Return), method assert isinstance(properties, pika.spec.BasicProperties), ( properties) LOGGER.warning( "Published message was returned: _delivery_confirmation=%s; " "channel=%s; method=%r; properties=%r; body_size=%d; " "body_prefix=%.255r", self._delivery_confirmation, channel.channel_number, method, properties, len(body) if body is not None else None, body) self._puback_return = ReturnedMessage(method, properties, body) def _add_pending_event(self, evt): """Append an event to the channel's list of events that are ready for dispatch to user and signal our connection that this channel is ready for event dispatch :param _ChannelPendingEvt evt: an event derived from _ChannelPendingEvt """ self._pending_events.append(evt) self.connection._request_channel_dispatch(self.channel_number) def _on_consumer_cancelled_by_broker(self, method_frame): """Called by impl when broker cancels consumer via Basic.Cancel. This is a RabbitMQ-specific feature. The circumstances include deletion of queue being consumed as well as failure of a HA node responsible for the queue being consumed.
:param pika.frame.Method method_frame: method frame with the `spec.Basic.Cancel` method """ evt = _ConsumerCancellationEvt(method_frame) consumer = self._consumer_infos[method_frame.method.consumer_tag] # Don't interfere with client-initiated cancellation flow if not consumer.tearing_down: consumer.state = _ConsumerInfo.CANCELLED_BY_BROKER if consumer.alternate_event_sink is not None: consumer.alternate_event_sink(evt) else: self._add_pending_event(evt) def _on_consumer_message_delivery(self, _channel, method, properties, body): """Called by impl when a message is delivered for a consumer :param Channel channel: The implementation channel object :param spec.Basic.Deliver method: :param pika.spec.BasicProperties properties: message properties :param body: delivered message body; empty string if no body :type body: str, unicode, or bytes (python 3.x) """ evt = _ConsumerDeliveryEvt(method, properties, body) consumer = self._consumer_infos[method.consumer_tag] if consumer.alternate_event_sink is not None: consumer.alternate_event_sink(evt) else: self._add_pending_event(evt) def _on_consumer_generator_event(self, evt): """Sink for the queue consumer generator's consumer events; append the event to queue consumer generator's pending events buffer. :param evt: an object of type _ConsumerDeliveryEvt or _ConsumerCancellationEvt """ self._queue_consumer_generator.pending_events.append(evt) # Schedule termination of connection.process_data_events using a # negative channel number self.connection._request_channel_dispatch(-self.channel_number) def _cancel_all_consumers(self): """Cancel all consumers. NOTE: pending non-ackable messages will be lost; pending ackable messages will be rejected. 
""" if self._consumer_infos: LOGGER.debug('Cancelling %i consumers', len(self._consumer_infos)) if self._queue_consumer_generator is not None: # Cancel queue consumer generator self.cancel() # Cancel consumers created via basic_consume for consumer_tag in pika.compat.dictkeys(self._consumer_infos): self.basic_cancel(consumer_tag) def _dispatch_events(self): """Called by BlockingConnection to dispatch pending events. `BlockingChannel` schedules this callback via `BlockingConnection._request_channel_dispatch` """ while self._pending_events: evt = self._pending_events.popleft() if type(evt) is _ConsumerDeliveryEvt: consumer_info = self._consumer_infos[evt.method.consumer_tag] consumer_info.consumer_cb(self, evt.method, evt.properties, evt.body) elif type(evt) is _ConsumerCancellationEvt: del self._consumer_infos[evt.method_frame.method.consumer_tag] self._impl.callbacks.process(self.channel_number, self._CONSUMER_CANCELLED_CB_KEY, self, evt.method_frame) else: evt.dispatch() def close(self, reply_code=0, reply_text="Normal shutdown"): """Will invoke a clean shutdown of the channel with the AMQP Broker. :param int reply_code: The reply code to close the channel with :param str reply_text: The reply text to close the channel with """ LOGGER.debug('Channel.close(%s, %s)', reply_code, reply_text) # Cancel remaining consumers self._cancel_all_consumers() # Close the channel try: with _CallbackResult() as close_ok_result: self._impl.add_callback(callback=close_ok_result.signal_once, replies=[pika.spec.Channel.CloseOk], one_shot=True) self._impl.close(reply_code=reply_code, reply_text=reply_text) self._flush_output(close_ok_result.is_ready) finally: self._cleanup() def flow(self, active): """Turn Channel flow control off and on. NOTE: RabbitMQ doesn't support active=False; per https://www.rabbitmq.com/specification.html: "active=false is not supported by the server. 
Limiting prefetch with basic.qos provides much better control" For more information, please reference: http://www.rabbitmq.com/amqp-0-9-1-reference.html#channel.flow :param bool active: Turn flow on (True) or off (False) :returns: True if broker will start or continue sending; False if not :rtype: bool """ with _CallbackResult(self._FlowOkCallbackResultArgs) as flow_ok_result: self._impl.flow(callback=flow_ok_result.set_value_once, active=active) self._flush_output(flow_ok_result.is_ready) return flow_ok_result.value.active def add_on_cancel_callback(self, callback): """Pass a callback function that will be called when Basic.Cancel is sent by the broker. The callback function should receive a method frame parameter. :param callable callback: a callable for handling broker's Basic.Cancel notification with the call signature: callback(method_frame) where method_frame is of type `pika.frame.Method` with method of type `spec.Basic.Cancel` """ self._impl.callbacks.add(self.channel_number, self._CONSUMER_CANCELLED_CB_KEY, callback, one_shot=False) def add_on_return_callback(self, callback): """Pass a callback function that will be called when a published message is rejected and returned by the server via `Basic.Return`. :param callable callback: The method to call on callback with the signature callback(channel, method, properties, body), where channel: pika.Channel method: pika.spec.Basic.Return properties: pika.spec.BasicProperties body: str, unicode, or bytes (python 3.x) """ self._impl.add_on_return_callback( lambda _channel, method, properties, body: ( self._add_pending_event( _ReturnedMessageEvt( callback, self, method, properties, body)))) def basic_consume(self, consumer_callback, queue, no_ack=False, exclusive=False, consumer_tag=None, arguments=None): """Sends the AMQP command Basic.Consume to the broker and binds messages for the consumer_tag to the consumer callback. If you do not pass in a consumer_tag, one will be automatically generated for you. 
Returns the consumer tag. NOTE: the consumer callbacks are dispatched only in the scope of specially-designated methods: see `BlockingConnection.process_data_events` and `BlockingChannel.start_consuming`. For more information about Basic.Consume, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.consume :param callable consumer_callback: The function for dispatching messages to user, having the signature: consumer_callback(channel, method, properties, body) channel: BlockingChannel method: spec.Basic.Deliver properties: spec.BasicProperties body: str or unicode :param queue: The queue to consume from :type queue: str or unicode :param bool no_ack: Tell the broker to not expect a response (i.e., no ack/nack) :param bool exclusive: Don't allow other consumers on the queue :param consumer_tag: You may specify your own consumer tag; if left empty, a consumer tag will be generated automatically :type consumer_tag: str or unicode :param dict arguments: Custom key/value pair arguments for the consumer :returns: consumer tag :rtype: str :raises pika.exceptions.DuplicateConsumerTag: if consumer with given consumer_tag is already present. """ if not callable(consumer_callback): raise ValueError('consumer callback must be callable; got %r' % consumer_callback) return self._basic_consume_impl( queue=queue, no_ack=no_ack, exclusive=exclusive, consumer_tag=consumer_tag, arguments=arguments, consumer_callback=consumer_callback) def _basic_consume_impl(self, queue, no_ack, exclusive, consumer_tag, arguments=None, consumer_callback=None, alternate_event_sink=None): """The low-level implementation used by `basic_consume` and `consume`. See `basic_consume` docstring for more info. NOTE: exactly one of consumer_callback/alternate_event_sink must be non-None. This method has one additional parameter alternate_event_sink over the args described in `basic_consume`.
:param callable alternate_event_sink: if specified, _ConsumerDeliveryEvt and _ConsumerCancellationEvt objects will be diverted to this callback instead of being deposited in the channel's `_pending_events` container. Signature: alternate_event_sink(evt) :raises pika.exceptions.DuplicateConsumerTag: if consumer with given consumer_tag is already present. """ if (consumer_callback is None) == (alternate_event_sink is None): raise ValueError( ('exactly one of consumer_callback/alternate_event_sink must ' 'be non-None', consumer_callback, alternate_event_sink)) if not consumer_tag: # Need a consumer tag to register consumer info before sending # request to broker, because I/O might dispatch incoming messages # immediately following Basic.Consume-ok before _flush_output # returns consumer_tag = self._impl._generate_consumer_tag() if consumer_tag in self._consumer_infos: raise exceptions.DuplicateConsumerTag(consumer_tag) # Create new consumer self._consumer_infos[consumer_tag] = _ConsumerInfo( consumer_tag, no_ack=no_ack, consumer_cb=consumer_callback, alternate_event_sink=alternate_event_sink) try: with self._basic_consume_ok_result as ok_result: tag = self._impl.basic_consume( consumer_callback=self._on_consumer_message_delivery, queue=queue, no_ack=no_ack, exclusive=exclusive, consumer_tag=consumer_tag, arguments=arguments) assert tag == consumer_tag, (tag, consumer_tag) self._flush_output(ok_result.is_ready) except Exception: # If channel was closed, self._consumer_infos will be empty if consumer_tag in self._consumer_infos: del self._consumer_infos[consumer_tag] raise # NOTE: Consumer could get cancelled by broker immediately after opening # (e.g., queue getting deleted externally) if self._consumer_infos[consumer_tag].setting_up: self._consumer_infos[consumer_tag].state = _ConsumerInfo.ACTIVE return consumer_tag def basic_cancel(self, consumer_tag): """This method cancels a consumer. 
This does not affect already delivered messages, but it does mean the server will not send any more messages for that consumer. The client may receive an arbitrary number of messages in between sending the cancel method and receiving the cancel-ok reply. NOTE: When cancelling a no_ack=False consumer, this implementation automatically Nacks and suppresses any incoming messages that have not yet been dispatched to the consumer's callback. However, when cancelling a no_ack=True consumer, this method will return any pending messages that arrived before broker confirmed the cancellation. :param str consumer_tag: Identifier for the consumer; the result of passing a consumer_tag that was created on another channel is undefined (bad things will happen) :returns: (NEW IN pika 0.10.0) empty sequence for a no_ack=False consumer; for a no_ack=True consumer, returns a (possibly empty) sequence of pending messages that arrived before broker confirmed the cancellation (this is done instead of via consumer's callback in order to prevent reentrancy/recursion).
Each message is a three-tuple: (method, properties, body) method: spec.Basic.Deliver properties: spec.BasicProperties body: str or unicode """ try: consumer_info = self._consumer_infos[consumer_tag] except KeyError: LOGGER.warning("User is attempting to cancel an unknown consumer=%s; " "already cancelled by user or broker?", consumer_tag) return [] try: # Assertion failure here is most likely due to reentrance assert consumer_info.active or consumer_info.cancelled_by_broker, ( consumer_info.state) # Assertion failure here signals disconnect between consumer state # in BlockingChannel and Channel assert (consumer_info.cancelled_by_broker or consumer_tag in self._impl._consumers), consumer_tag no_ack = consumer_info.no_ack consumer_info.state = _ConsumerInfo.TEARING_DOWN with _CallbackResult() as cancel_ok_result: # Nack pending messages for no_ack=False consumer if not no_ack: pending_messages = self._remove_pending_deliveries( consumer_tag) if pending_messages: # NOTE: we use impl's basic_reject to avoid the # possibility of redelivery before basic_cancel takes # control of nacking. # NOTE: we can't use basic_nack with the multiple option # to avoid nacking messages already held by our client.
for message in pending_messages: self._impl.basic_reject(message.method.delivery_tag, requeue=True) # Cancel the consumer; impl takes care of rejecting any # additional deliveries that arrive for a no_ack=False # consumer self._impl.basic_cancel( callback=cancel_ok_result.signal_once, consumer_tag=consumer_tag, nowait=False) # Flush output and wait for Basic.Cancel-ok or # broker-initiated Basic.Cancel self._flush_output( cancel_ok_result.is_ready, lambda: consumer_tag not in self._impl._consumers) if no_ack: # Return pending messages for no_ack=True consumer return [ (evt.method, evt.properties, evt.body) for evt in self._remove_pending_deliveries(consumer_tag)] else: # impl takes care of rejecting any incoming deliveries during # cancellation messages = self._remove_pending_deliveries(consumer_tag) assert not messages, messages return [] finally: # NOTE: The entry could be purged if channel or connection closes if consumer_tag in self._consumer_infos: del self._consumer_infos[consumer_tag] def _remove_pending_deliveries(self, consumer_tag): """Extract _ConsumerDeliveryEvt objects destined for the given consumer from pending events, discarding the _ConsumerCancellationEvt, if any :param str consumer_tag: :returns: a (possibly empty) sequence of _ConsumerDeliveryEvt destined for the given consumer tag """ remaining_events = deque() unprocessed_messages = [] while self._pending_events: evt = self._pending_events.popleft() if type(evt) is _ConsumerDeliveryEvt: if evt.method.consumer_tag == consumer_tag: unprocessed_messages.append(evt) continue if type(evt) is _ConsumerCancellationEvt: if evt.method_frame.method.consumer_tag == consumer_tag: # A broker-initiated Basic.Cancel must have arrived # before our cancel request completed continue remaining_events.append(evt) self._pending_events = remaining_events return unprocessed_messages def start_consuming(self): """Processes I/O events and dispatches timers and `basic_consume` callbacks until all consumers are 
cancelled. NOTE: this blocking function may not be called from the scope of a pika callback, because dispatching `basic_consume` callbacks from this context would constitute recursion. :raises pika.exceptions.RecursionError: if called from the scope of a `BlockingConnection` or `BlockingChannel` callback """ # Check if called from the scope of an event dispatch callback with self.connection._acquire_event_dispatch() as dispatch_allowed: if not dispatch_allowed: raise exceptions.RecursionError( 'start_consuming may not be called from the scope of ' 'another BlockingConnection or BlockingChannel callback') # Process events as long as consumers exist on this channel while self._consumer_infos: self.connection.process_data_events(time_limit=None) def stop_consuming(self, consumer_tag=None): """ Cancels all consumers, signalling the `start_consuming` loop to exit. NOTE: pending non-ackable messages will be lost; pending ackable messages will be rejected. """ if consumer_tag: self.basic_cancel(consumer_tag) else: self._cancel_all_consumers() def consume(self, queue, no_ack=False, exclusive=False, arguments=None, inactivity_timeout=None): """Blocking consumption of a queue instead of via a callback. This method is a generator that yields each message as a tuple of method, properties, and body. The active generator iterator terminates when the consumer is cancelled by client or broker. Example: for method, properties, body in channel.consume('queue'): print body channel.basic_ack(method.delivery_tag) You should call `BlockingChannel.cancel()` when you escape out of the generator loop. If you don't cancel this consumer, then next call on the same channel to `consume()` with the exact same (queue, no_ack, exclusive) parameters will resume the existing consumer generator; however, calling with different parameters will result in an exception. 
        :param queue: The queue name to consume
        :type queue: str or unicode
        :param bool no_ack: Tell the broker to not expect an ack/nack response
        :param bool exclusive: Don't allow other consumers on the queue
        :param dict arguments: Custom key/value pair arguments for the consumer
        :param float inactivity_timeout: if a number is given (in seconds), will
            cause the method to yield None after the given period of
            inactivity; this permits pseudo-regular maintenance activities to
            be carried out by the user while waiting for messages to arrive.
            If None is given (default), then the method blocks until the next
            event arrives. NOTE that timing granularity is limited by the timer
            resolution of the underlying implementation.
            NEW in pika 0.10.0.

        :yields: tuple(spec.Basic.Deliver, spec.BasicProperties, str or unicode)

        :raises ValueError: if consumer-creation parameters don't match those
            of the existing queue consumer generator, if any.
            NEW in pika 0.10.0
        """
        params = (queue, no_ack, exclusive)

        if self._queue_consumer_generator is not None:
            if params != self._queue_consumer_generator.params:
                raise ValueError(
                    'Consume with different params not allowed on existing '
                    'queue consumer generator; previous params: %r; '
                    'new params: %r'
                    % (self._queue_consumer_generator.params,
                       (queue, no_ack, exclusive)))
        else:
            LOGGER.debug('Creating new queue consumer generator; params: %r',
                         params)
            # Need a consumer tag to register consumer info before sending
            # request to broker, because I/O might pick up incoming messages
            # in addition to Basic.Consume-ok
            consumer_tag = self._impl._generate_consumer_tag()

            self._queue_consumer_generator = _QueueConsumerGeneratorInfo(
                params,
                consumer_tag)

            try:
                self._basic_consume_impl(
                    queue=queue,
                    no_ack=no_ack,
                    exclusive=exclusive,
                    consumer_tag=consumer_tag,
                    arguments=arguments,
                    alternate_event_sink=self._on_consumer_generator_event)
            except Exception:
                self._queue_consumer_generator = None
                raise

            LOGGER.info('Created new queue consumer generator %r',
self._queue_consumer_generator) while self._queue_consumer_generator is not None: if self._queue_consumer_generator.pending_events: evt = self._queue_consumer_generator.pending_events.popleft() if type(evt) is _ConsumerCancellationEvt: # Consumer was cancelled by broker self._queue_consumer_generator = None break else: yield (evt.method, evt.properties, evt.body) continue # Wait for a message to arrive if inactivity_timeout is None: self.connection.process_data_events(time_limit=None) continue # Wait with inactivity timeout wait_start_time = time.time() wait_deadline = wait_start_time + inactivity_timeout delta = inactivity_timeout while (self._queue_consumer_generator is not None and not self._queue_consumer_generator.pending_events): self.connection.process_data_events(time_limit=delta) if not self._queue_consumer_generator: # Consumer was cancelled by client break if self._queue_consumer_generator.pending_events: # Got message(s) break delta = wait_deadline - time.time() if delta <= 0.0: # Signal inactivity timeout yield None break def get_waiting_message_count(self): """Returns the number of messages that may be retrieved from the current queue consumer generator via `BlockingChannel.consume` without blocking. NEW in pika 0.10.0 :rtype: int """ if self._queue_consumer_generator is not None: pending_events = self._queue_consumer_generator.pending_events count = len(pending_events) if count and type(pending_events[-1]) is _ConsumerCancellationEvt: count -= 1 else: count = 0 return count def cancel(self): """Cancel the queue consumer created by `BlockingChannel.consume`, rejecting all pending ackable messages. NOTE: If you're looking to cancel a consumer issued with BlockingChannel.basic_consume then you should call BlockingChannel.basic_cancel. :return int: The number of messages requeued by Basic.Nack. 
            NEW in 0.10.0: returns 0

        """
        if self._queue_consumer_generator is None:
            LOGGER.warning('cancel: queue consumer generator is inactive '
                           '(already cancelled by client or broker?)')
            return 0

        try:
            _, no_ack, _ = self._queue_consumer_generator.params
            if not no_ack:
                # Reject messages held by queue consumer generator; NOTE: we
                # can't use basic_nack with the multiple option to avoid
                # nacking messages already held by our client.
                pending_events = self._queue_consumer_generator.pending_events
                for _ in compat.xrange(self.get_waiting_message_count()):
                    evt = pending_events.popleft()
                    self._impl.basic_reject(evt.method.delivery_tag,
                                            requeue=True)

            self.basic_cancel(self._queue_consumer_generator.consumer_tag)
        finally:
            self._queue_consumer_generator = None

        # Return 0 for compatibility with the legacy implementation; the
        # number of nacked messages is not meaningful, since only messages
        # consumed with no_ack=False may be nacked, and those arriving after
        # calling basic_cancel will be rejected automatically by the impl
        # channel, so we'll never know how many of those were nacked.
        return 0

    def basic_ack(self, delivery_tag=0, multiple=False):
        """Acknowledge one or more messages. When sent by the client, this
        method acknowledges one or more messages delivered via the Deliver or
        Get-Ok methods. When sent by server, this method acknowledges one or
        more messages published with the Publish method on a channel in
        confirm mode. The acknowledgement can be for a single message or a set
        of messages up to and including a specific message.

        :param int delivery_tag: The server-assigned delivery tag
        :param bool multiple: If set to True, the delivery tag is treated as
                              "up to and including", so that multiple messages
                              can be acknowledged with a single method. If set
                              to False, the delivery tag refers to a single
                              message. If the multiple field is True, and the
                              delivery tag is zero, this indicates
                              acknowledgement of all outstanding messages.
""" self._impl.basic_ack(delivery_tag=delivery_tag, multiple=multiple) self._flush_output() def basic_nack(self, delivery_tag=None, multiple=False, requeue=True): """This method allows a client to reject one or more incoming messages. It can be used to interrupt and cancel large incoming messages, or return untreatable messages to their original queue. :param int delivery-tag: The server-assigned delivery tag :param bool multiple: If set to True, the delivery tag is treated as "up to and including", so that multiple messages can be acknowledged with a single method. If set to False, the delivery tag refers to a single message. If the multiple field is 1, and the delivery tag is zero, this indicates acknowledgement of all outstanding messages. :param bool requeue: If requeue is true, the server will attempt to requeue the message. If requeue is false or the requeue attempt fails the messages are discarded or dead-lettered. """ self._impl.basic_nack(delivery_tag=delivery_tag, multiple=multiple, requeue=requeue) self._flush_output() def basic_get(self, queue=None, no_ack=False): """Get a single message from the AMQP broker. Returns a sequence with the method frame, message properties, and body. 
:param queue: Name of queue to get a message from :type queue: str or unicode :param bool no_ack: Tell the broker to not expect a reply :returns: a three-tuple; (None, None, None) if the queue was empty; otherwise (method, properties, body); NOTE: body may be None :rtype: (None, None, None)|(spec.Basic.GetOk, spec.BasicProperties, str or unicode or None) """ assert not self._basic_getempty_result # NOTE: nested with for python 2.6 compatibility with _CallbackResult(self._RxMessageArgs) as get_ok_result: with self._basic_getempty_result: self._impl.basic_get(callback=get_ok_result.set_value_once, queue=queue, no_ack=no_ack) self._flush_output(get_ok_result.is_ready, self._basic_getempty_result.is_ready) if get_ok_result: evt = get_ok_result.value return (evt.method, evt.properties, evt.body) else: assert self._basic_getempty_result, ( "wait completed without GetOk and GetEmpty") return None, None, None def basic_publish(self, exchange, routing_key, body, properties=None, mandatory=False, immediate=False): """Publish to the channel with the given exchange, routing key and body. Returns a boolean value indicating the success of the operation. This is the legacy BlockingChannel method for publishing. See also `BlockingChannel.publish` that provides more information about failures. For more information on basic_publish and what the parameters do, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.publish NOTE: mandatory and immediate may be enabled even without delivery confirmation, but in the absence of delivery confirmation the synchronous implementation has no way to know how long to wait for the Basic.Return or lack thereof. 
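
        Example (an illustrative sketch; the exchange and routing key are
        hypothetical and a reachable broker is assumed; the boolean return
        value is only meaningful after `confirm_delivery` has been called):

            ok = channel.basic_publish(
                exchange='',
                routing_key='my-queue',
                body='hello',
                properties=pika.BasicProperties(delivery_mode=2))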
:param exchange: The exchange to publish to :type exchange: str or unicode :param routing_key: The routing key to bind on :type routing_key: str or unicode :param body: The message body; empty string if no body :type body: str or unicode :param pika.spec.BasicProperties properties: message properties :param bool mandatory: The mandatory flag :param bool immediate: The immediate flag :returns: True if delivery confirmation is not enabled (NEW in pika 0.10.0); otherwise returns False if the message could not be delivered (Basic.nack and/or Basic.Return) and True if the message was delivered (Basic.ack and no Basic.Return) """ try: self.publish(exchange, routing_key, body, properties, mandatory, immediate) except (exceptions.NackError, exceptions.UnroutableError): return False else: return True def publish(self, exchange, routing_key, body, properties=None, mandatory=False, immediate=False): """Publish to the channel with the given exchange, routing key, and body. Unlike the legacy `BlockingChannel.basic_publish`, this method provides more information about failures via exceptions. For more information on basic_publish and what the parameters do, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.publish NOTE: mandatory and immediate may be enabled even without delivery confirmation, but in the absence of delivery confirmation the synchronous implementation has no way to know how long to wait for the Basic.Return. 
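
        Example with delivery confirmation (an illustrative sketch; the queue
        name is hypothetical and a reachable broker is assumed):

            channel.confirm_delivery()
            try:
                channel.publish(exchange='',
                                routing_key='my-queue',
                                body='hello',
                                mandatory=True)
            except pika.exceptions.UnroutableError:
                # The mandatory message could not be routed to any queue
                pass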
        :param exchange: The exchange to publish to
        :type exchange: str or unicode
        :param routing_key: The routing key to bind on
        :type routing_key: str or unicode
        :param body: The message body; empty string if no body
        :type body: str or unicode
        :param pika.spec.BasicProperties properties: message properties
        :param bool mandatory: The mandatory flag
        :param bool immediate: The immediate flag

        :raises UnroutableError: raised when a message published in
            publisher-acknowledgements mode (see
            `BlockingChannel.confirm_delivery`) is returned via `Basic.Return`
            followed by `Basic.Ack`.
        :raises NackError: raised when a message published in
            publisher-acknowledgements mode is Nack'ed by the broker. See
            `BlockingChannel.confirm_delivery`.

        """
        if self._delivery_confirmation:
            # In publisher-acknowledgements mode
            with self._message_confirmation_result:
                self._impl.basic_publish(exchange=exchange,
                                         routing_key=routing_key,
                                         body=body,
                                         properties=properties,
                                         mandatory=mandatory,
                                         immediate=immediate)

                self._flush_output(self._message_confirmation_result.is_ready)
                conf_method = (self._message_confirmation_result.value
                               .method_frame
                               .method)

                if isinstance(conf_method, pika.spec.Basic.Nack):
                    # Broker was unable to process the message due to internal
                    # error
                    LOGGER.warning(
                        "Message was Nack'ed by broker: nack=%r; channel=%s; "
                        "exchange=%s; routing_key=%s; mandatory=%r; "
                        "immediate=%r", conf_method, self.channel_number,
                        exchange, routing_key, mandatory, immediate)
                    if self._puback_return is not None:
                        returned_messages = [self._puback_return]
                        self._puback_return = None
                    else:
                        returned_messages = []
                    raise exceptions.NackError(returned_messages)
                else:
                    assert isinstance(conf_method, pika.spec.Basic.Ack), (
                        conf_method)

                    if self._puback_return is not None:
                        # Unroutable message was returned
                        messages = [self._puback_return]
                        self._puback_return = None
                        raise exceptions.UnroutableError(messages)
        else:
            # In non-publisher-acknowledgements mode
            self._impl.basic_publish(exchange=exchange,
                                     routing_key=routing_key,
body=body, properties=properties, mandatory=mandatory, immediate=immediate) self._flush_output() def basic_qos(self, prefetch_size=0, prefetch_count=0, all_channels=False): """Specify quality of service. This method requests a specific quality of service. The QoS can be specified for the current channel or for all channels on the connection. The client can request that messages be sent in advance so that when the client finishes processing a message, the following message is already held locally, rather than needing to be sent down the channel. Prefetching gives a performance improvement. :param int prefetch_size: This field specifies the prefetch window size. The server will send a message in advance if it is equal to or smaller in size than the available prefetch size (and also falls into other prefetch limits). May be set to zero, meaning "no specific limit", although other prefetch limits may still apply. The prefetch-size is ignored if the no-ack option is set in the consumer. :param int prefetch_count: Specifies a prefetch window in terms of whole messages. This field may be used in combination with the prefetch-size field; a message will only be sent in advance if both prefetch windows (and those at the channel and connection level) allow it. The prefetch-count is ignored if the no-ack option is set in the consumer. :param bool all_channels: Should the QoS apply to all channels """ with _CallbackResult() as qos_ok_result: self._impl.basic_qos(callback=qos_ok_result.signal_once, prefetch_size=prefetch_size, prefetch_count=prefetch_count, all_channels=all_channels) self._flush_output(qos_ok_result.is_ready) def basic_recover(self, requeue=False): """This method asks the server to redeliver all unacknowledged messages on a specified channel. Zero or more messages may be redelivered. This method replaces the asynchronous Recover. :param bool requeue: If False, the message will be redelivered to the original recipient. 
                             If True, the server will attempt to requeue the
                             message, potentially then delivering it to an
                             alternative subscriber.

        """
        with _CallbackResult() as recover_ok_result:
            self._impl.basic_recover(callback=recover_ok_result.signal_once,
                                     requeue=requeue)

            self._flush_output(recover_ok_result.is_ready)

    def basic_reject(self, delivery_tag=None, requeue=True):
        """Reject an incoming message. This method allows a client to reject
        a message. It can be used to interrupt and cancel large incoming
        messages, or return untreatable messages to their original queue.

        :param int delivery_tag: The server-assigned delivery tag
        :param bool requeue: If requeue is true, the server will attempt to
                             requeue the message. If requeue is false or the
                             requeue attempt fails the messages are discarded
                             or dead-lettered.

        """
        self._impl.basic_reject(delivery_tag=delivery_tag, requeue=requeue)
        self._flush_output()

    def confirm_delivery(self):
        """Turn on RabbitMQ-proprietary Confirm mode in the channel.

        For more information see:
            http://www.rabbitmq.com/extensions.html#confirms
        """
        if self._delivery_confirmation:
            LOGGER.error('confirm_delivery: confirmation was already enabled '
                         'on channel=%s', self.channel_number)
            return

        with _CallbackResult() as select_ok_result:
            self._impl.add_callback(callback=select_ok_result.signal_once,
                                    replies=[pika.spec.Confirm.SelectOk],
                                    one_shot=True)

            self._impl.confirm_delivery(
                callback=self._message_confirmation_result.set_value_once,
                nowait=False)

            self._flush_output(select_ok_result.is_ready)

        self._delivery_confirmation = True

        # Unroutable messages returned after this point will be in the context
        # of publisher acknowledgements
        self._impl.add_on_return_callback(self._on_puback_message_returned)

    def exchange_declare(self, exchange=None,
                         exchange_type='direct', passive=False, durable=False,
                         auto_delete=False, internal=False,
                         arguments=None):
        """This method creates an exchange if it does not already exist, and
        if the exchange exists, verifies that it is of the correct and
        expected class.

        If passive is set, the server will reply with Declare-Ok if the
        exchange already exists with the same name, and raise an error if it
        does not; if the exchange does not already exist, the server MUST
        raise a channel exception with reply code 404 (not found).

        :param exchange: The exchange name consists of a non-empty sequence of
                         these characters: letters, digits, hyphen, underscore,
                         period, or colon.
        :type exchange: str or unicode
        :param str exchange_type: The exchange type to use
        :param bool passive: Perform a declare or just check to see if it
            exists
        :param bool durable: Survive a reboot of RabbitMQ
        :param bool auto_delete: Remove when no more queues are bound to it
        :param bool internal: Can only be published to by other exchanges
        :param dict arguments: Custom key/value pair arguments for the exchange

        :returns: Method frame from the Exchange.Declare-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Exchange.DeclareOk`

        """
        with _CallbackResult(
                self._MethodFrameCallbackResultArgs) as declare_ok_result:
            self._impl.exchange_declare(
                callback=declare_ok_result.set_value_once,
                exchange=exchange,
                exchange_type=exchange_type,
                passive=passive,
                durable=durable,
                auto_delete=auto_delete,
                internal=internal,
                nowait=False,
                arguments=arguments)

            self._flush_output(declare_ok_result.is_ready)

        return declare_ok_result.value.method_frame

    def exchange_delete(self, exchange=None, if_unused=False):
        """Delete the exchange.
:param exchange: The exchange name :type exchange: str or unicode :param bool if_unused: only delete if the exchange is unused :returns: Method frame from the Exchange.Delete-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.DeleteOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as delete_ok_result: self._impl.exchange_delete( callback=delete_ok_result.set_value_once, exchange=exchange, if_unused=if_unused, nowait=False) self._flush_output(delete_ok_result.is_ready) return delete_ok_result.value.method_frame def exchange_bind(self, destination=None, source=None, routing_key='', arguments=None): """Bind an exchange to another exchange. :param destination: The destination exchange to bind :type destination: str or unicode :param source: The source exchange to bind to :type source: str or unicode :param routing_key: The routing key to bind on :type routing_key: str or unicode :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Exchange.Bind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.BindOk` """ with _CallbackResult(self._MethodFrameCallbackResultArgs) as \ bind_ok_result: self._impl.exchange_bind( callback=bind_ok_result.set_value_once, destination=destination, source=source, routing_key=routing_key, nowait=False, arguments=arguments) self._flush_output(bind_ok_result.is_ready) return bind_ok_result.value.method_frame def exchange_unbind(self, destination=None, source=None, routing_key='', arguments=None): """Unbind an exchange from another exchange. 
:param destination: The destination exchange to unbind :type destination: str or unicode :param source: The source exchange to unbind from :type source: str or unicode :param routing_key: The routing key to unbind :type routing_key: str or unicode :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Exchange.Unbind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.UnbindOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as unbind_ok_result: self._impl.exchange_unbind( callback=unbind_ok_result.set_value_once, destination=destination, source=source, routing_key=routing_key, nowait=False, arguments=arguments) self._flush_output(unbind_ok_result.is_ready) return unbind_ok_result.value.method_frame def queue_declare(self, queue='', passive=False, durable=False, exclusive=False, auto_delete=False, arguments=None): """Declare queue, create if needed. This method creates or checks a queue. When creating a new queue the client can specify various properties that control the durability of the queue and its contents, and the level of sharing for the queue. 
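
        Example of declaring a broker-named, exclusive queue (an illustrative
        sketch; a reachable broker is assumed):

            result = channel.queue_declare(queue='', exclusive=True)
            queue_name = result.method.queue  # the broker-generated name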
        Leave the queue name empty for an auto-named queue in RabbitMQ

        :param queue: The queue name
        :type queue: str or unicode; if empty string, the broker will create a
            unique queue name
        :param bool passive: Only check to see if the queue exists
        :param bool durable: Survive reboots of the broker
        :param bool exclusive: Only allow access by the current connection
        :param bool auto_delete: Delete after consumer cancels or disconnects
        :param dict arguments: Custom key/value arguments for the queue

        :returns: Method frame from the Queue.Declare-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Queue.DeclareOk`

        """
        with _CallbackResult(self._MethodFrameCallbackResultArgs) as \
                declare_ok_result:
            self._impl.queue_declare(
                callback=declare_ok_result.set_value_once,
                queue=queue,
                passive=passive,
                durable=durable,
                exclusive=exclusive,
                auto_delete=auto_delete,
                nowait=False,
                arguments=arguments)

            self._flush_output(declare_ok_result.is_ready)

        return declare_ok_result.value.method_frame

    def queue_delete(self, queue='', if_unused=False, if_empty=False):
        """Delete a queue from the broker.
:param queue: The queue to delete :type queue: str or unicode :param bool if_unused: only delete if it's unused :param bool if_empty: only delete if the queue is empty :returns: Method frame from the Queue.Delete-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.DeleteOk` """ with _CallbackResult(self._MethodFrameCallbackResultArgs) as \ delete_ok_result: self._impl.queue_delete(callback=delete_ok_result.set_value_once, queue=queue, if_unused=if_unused, if_empty=if_empty, nowait=False) self._flush_output(delete_ok_result.is_ready) return delete_ok_result.value.method_frame def queue_purge(self, queue=''): """Purge all of the messages from the specified queue :param queue: The queue to purge :type queue: str or unicode :returns: Method frame from the Queue.Purge-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.PurgeOk` """ with _CallbackResult(self._MethodFrameCallbackResultArgs) as \ purge_ok_result: self._impl.queue_purge(callback=purge_ok_result.set_value_once, queue=queue, nowait=False) self._flush_output(purge_ok_result.is_ready) return purge_ok_result.value.method_frame def queue_bind(self, queue, exchange, routing_key=None, arguments=None): """Bind the queue to the specified exchange :param queue: The queue to bind to the exchange :type queue: str or unicode :param exchange: The source exchange to bind to :type exchange: str or unicode :param routing_key: The routing key to bind on :type routing_key: str or unicode :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Queue.Bind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.BindOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as bind_ok_result: self._impl.queue_bind(callback=bind_ok_result.set_value_once, queue=queue, exchange=exchange, routing_key=routing_key, nowait=False, arguments=arguments) 
self._flush_output(bind_ok_result.is_ready) return bind_ok_result.value.method_frame def queue_unbind(self, queue='', exchange=None, routing_key=None, arguments=None): """Unbind a queue from an exchange. :param queue: The queue to unbind from the exchange :type queue: str or unicode :param exchange: The source exchange to bind from :type exchange: str or unicode :param routing_key: The routing key to unbind :type routing_key: str or unicode :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Queue.Unbind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.UnbindOk` """ with _CallbackResult(self._MethodFrameCallbackResultArgs) as \ unbind_ok_result: self._impl.queue_unbind(callback=unbind_ok_result.set_value_once, queue=queue, exchange=exchange, routing_key=routing_key, arguments=arguments) self._flush_output(unbind_ok_result.is_ready) return unbind_ok_result.value.method_frame def tx_select(self): """Select standard transaction mode. This method sets the channel to use standard transactions. The client must use this method at least once on a channel before using the Commit or Rollback methods. :returns: Method frame from the Tx.Select-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Tx.SelectOk` """ with _CallbackResult(self._MethodFrameCallbackResultArgs) as \ select_ok_result: self._impl.tx_select(select_ok_result.set_value_once) self._flush_output(select_ok_result.is_ready) return select_ok_result.value.method_frame def tx_commit(self): """Commit a transaction. 
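
        Example of a simple transaction (an illustrative sketch; the queue
        name is hypothetical and a reachable broker is assumed):

            channel.tx_select()
            channel.basic_publish(exchange='', routing_key='my-queue',
                                  body='hello')
            channel.tx_commit()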
        :returns: Method frame from the Tx.Commit-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Tx.CommitOk`

        """
        with _CallbackResult(self._MethodFrameCallbackResultArgs) as \
                commit_ok_result:
            self._impl.tx_commit(commit_ok_result.set_value_once)
            self._flush_output(commit_ok_result.is_ready)

        return commit_ok_result.value.method_frame

    def tx_rollback(self):
        """Rollback a transaction.

        :returns: Method frame from the Tx.Rollback-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Tx.RollbackOk`

        """
        with _CallbackResult(self._MethodFrameCallbackResultArgs) as \
                rollback_ok_result:
            self._impl.tx_rollback(rollback_ok_result.set_value_once)
            self._flush_output(rollback_ok_result.is_ready)

        return rollback_ok_result.value.method_frame


# ---- pika-0.11.0/pika/adapters/libev_connection.py ----

"""Use pika with the libev IOLoop via pyev"""
import pyev
import signal
import array
import logging
import warnings
from collections import deque

from pika.adapters.base_connection import BaseConnection

LOGGER = logging.getLogger(__name__)

global_sigint_watcher, global_sigterm_watcher = None, None


class LibevConnection(BaseConnection):
    """The LibevConnection runs on the libev IOLoop. If you're running the
    connection in a web app, make sure you set stop_ioloop_on_close to False,
    which is the default behavior for this adapter; otherwise the web app will
    stop taking requests.

    You should be familiar with pyev and libev to use this adapter, especially
    with regard to the use of libev ioloops.

    If an on_signal_callback method is provided, the adapter creates signal
    watchers the first time; subsequent instantiations with a provided method
    reuse the same watchers but will call the new method upon receiving a
    signal. See pyev/libev signal handling to understand why this is done.
:param pika.connection.Parameters parameters: Connection parameters :param on_open_callback: The method to call when the connection is open :type on_open_callback: method :param on_open_error_callback: Method to call if the connection can't be opened :type on_open_error_callback: method :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected :param custom_ioloop: Override using the default_loop in libev :param on_signal_callback: Method to call if SIGINT or SIGTERM occur :type on_signal_callback: method """ WARN_ABOUT_IOLOOP = True # use static arrays to translate masks between pika and libev _PIKA_TO_LIBEV_ARRAY = array.array('i', [0] * ( (BaseConnection.READ | BaseConnection.WRITE | BaseConnection.ERROR) + 1 )) _PIKA_TO_LIBEV_ARRAY[BaseConnection.READ] = pyev.EV_READ _PIKA_TO_LIBEV_ARRAY[BaseConnection.WRITE] = pyev.EV_WRITE _PIKA_TO_LIBEV_ARRAY[BaseConnection.READ | BaseConnection.WRITE] = pyev.EV_READ | pyev.EV_WRITE _PIKA_TO_LIBEV_ARRAY[BaseConnection.READ | BaseConnection.ERROR] = pyev.EV_READ _PIKA_TO_LIBEV_ARRAY[BaseConnection.WRITE | BaseConnection.ERROR] = pyev.EV_WRITE _PIKA_TO_LIBEV_ARRAY[BaseConnection.READ | BaseConnection.WRITE | BaseConnection.ERROR] = pyev.EV_READ | pyev.EV_WRITE _LIBEV_TO_PIKA_ARRAY = array.array('i', [0] * ((pyev.EV_READ | pyev.EV_WRITE) + 1)) _LIBEV_TO_PIKA_ARRAY[pyev.EV_READ] = BaseConnection.READ _LIBEV_TO_PIKA_ARRAY[pyev.EV_WRITE] = BaseConnection.WRITE _LIBEV_TO_PIKA_ARRAY[pyev.EV_READ | pyev.EV_WRITE] = \ BaseConnection.READ | BaseConnection.WRITE def __init__(self, parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, stop_ioloop_on_close=False, custom_ioloop=None, on_signal_callback=None): """Create a new instance of the LibevConnection class, connecting to RabbitMQ automatically :param pika.connection.Parameters parameters: Connection parameters :param on_open_callback: The method to call when the connection is open :type on_open_callback: method :param method 
on_open_error_callback: Called if the connection can't be established: on_open_error_callback(connection, str|exception) :param method on_close_callback: Called when the connection is closed: on_close_callback(connection, reason_code, reason_text) :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected :param custom_ioloop: Override using the default IOLoop in libev :param on_signal_callback: Method to call if SIGINT or SIGTERM occur :type on_signal_callback: method """ if custom_ioloop: self.ioloop = custom_ioloop else: with warnings.catch_warnings(): warnings.simplefilter("ignore", RuntimeWarning) self.ioloop = pyev.default_loop() self.ioloop.update() self.async = None self._on_signal_callback = on_signal_callback self._io_watcher = None self._active_timers = {} self._stopped_timers = deque() super(LibevConnection, self).__init__(parameters, on_open_callback, on_open_error_callback, on_close_callback, self.ioloop, stop_ioloop_on_close) def _adapter_connect(self): """Connect to the remote socket, adding the socket to the IOLoop if connected :rtype: bool """ LOGGER.debug('init io and signal watchers if any') # reuse existing signal watchers, can only be declared for 1 ioloop global global_sigint_watcher, global_sigterm_watcher error = super(LibevConnection, self)._adapter_connect() if not error: if self._on_signal_callback and not global_sigterm_watcher: global_sigterm_watcher = \ self.ioloop.signal(signal.SIGTERM, self._handle_sigterm) if self._on_signal_callback and not global_sigint_watcher: global_sigint_watcher = self.ioloop.signal(signal.SIGINT, self._handle_sigint) if not self._io_watcher: self._io_watcher = \ self.ioloop.io(self.socket.fileno(), self._PIKA_TO_LIBEV_ARRAY[self.event_state], self._handle_events) # NOTE: if someone knows why this async is needed here, please add # a comment in the code that explains it. 
self.async = pyev.Async(self.ioloop, self._noop_callable) self.async.start() if self._on_signal_callback: global_sigterm_watcher.start() if self._on_signal_callback: global_sigint_watcher.start() self._io_watcher.start() return error def _noop_callable(self, *args, **kwargs): pass def _init_connection_state(self): """Initialize or reset all of our internal state variables for a given connection. If we disconnect and reconnect, all of our state needs to be wiped. """ active_timers = list(self._active_timers.keys()) for timer in active_timers: self.remove_timeout(timer) if global_sigint_watcher: global_sigint_watcher.stop() if global_sigterm_watcher: global_sigterm_watcher.stop() if self._io_watcher: self._io_watcher.stop() super(LibevConnection, self)._init_connection_state() def _handle_sigint(self, signal_watcher, libev_events): """If an on_signal_callback has been defined, call it returning the string 'SIGINT'. """ LOGGER.debug('SIGINT') self._on_signal_callback('SIGINT') def _handle_sigterm(self, signal_watcher, libev_events): """If an on_signal_callback has been defined, call it returning the string 'SIGTERM'. """ LOGGER.debug('SIGTERM') self._on_signal_callback('SIGTERM') def _handle_events(self, io_watcher, libev_events, **kwargs): """Handle IO events by efficiently translating to BaseConnection events and calling super. 
""" super(LibevConnection, self)._handle_events(io_watcher.fd, self._LIBEV_TO_PIKA_ARRAY[libev_events], **kwargs) def _reset_io_watcher(self): """Reset the IO watcher; retry as necessary """ self._io_watcher.stop() retries = 0 while True: try: self._io_watcher.set( self._io_watcher.fd, self._PIKA_TO_LIBEV_ARRAY[self.event_state]) break except Exception: # sometimes the stop() doesn't complete in time if retries > 5: raise self._io_watcher.stop() # so try it again retries += 1 self._io_watcher.start() def _manage_event_state(self): """Manage the bitmask for reading/writing/error which is used by the io/event handler to specify when there is an event such as a read or write. """ if self.outbound_buffer: if not self.event_state & self.WRITE: self.event_state |= self.WRITE self._reset_io_watcher() elif self.event_state & self.WRITE: self.event_state = self.base_events self._reset_io_watcher() def _timer_callback(self, timer, libev_events): """Manage timer callbacks indirectly.""" if timer in self._active_timers: (callback_method, callback_timeout, kwargs) = self._active_timers[timer] self.remove_timeout(timer) if callback_timeout: callback_method(timeout=timer, **kwargs) else: callback_method(**kwargs) else: LOGGER.warning('Timer callback_method not found') def _get_timer(self, deadline): """Get a timer from the pool or allocate a new one.""" if self._stopped_timers: timer = self._stopped_timers.pop() timer.set(deadline, 0.0) else: timer = self.ioloop.timer(deadline, 0.0, self._timer_callback) return timer def add_timeout(self, deadline, callback_method, callback_timeout=False, **callback_kwargs): """Add the callback_method indirectly to the IOLoop timer to fire after deadline seconds. Returns the timer handle. 
:param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :param callback_timeout: Whether timeout kwarg is passed on callback :type callback_timeout: boolean :param kwargs callback_kwargs: additional kwargs to pass on callback :rtype: timer instance handle. """ LOGGER.debug('deadline: %s', deadline) timer = self._get_timer(deadline) self._active_timers[timer] = (callback_method, callback_timeout, callback_kwargs) timer.start() return timer def remove_timeout(self, timer): """Remove the timer from the IOLoop using the handle returned from add_timeout. param: timer instance handle """ LOGGER.debug('stop') try: self._active_timers.pop(timer) except KeyError: LOGGER.warning("Attempted to remove inactive timer %s", timer) else: timer.stop() self._stopped_timers.append(timer) def _create_and_connect_to_socket(self, sock_addr_tuple): """Call super and then set the socket to nonblocking.""" result = super(LibevConnection, self)._create_and_connect_to_socket(sock_addr_tuple) if result: self.socket.setblocking(0) return result pika-0.11.0/pika/adapters/select_connection.py000066400000000000000000001061361315131611700213220ustar00rootroot00000000000000"""A connection adapter that tries to use the best polling method for the platform pika is running on. """ import abc import os import logging import socket import select import errno import time from collections import defaultdict import threading import pika.compat from pika.compat import dictkeys from pika.adapters.base_connection import BaseConnection LOGGER = logging.getLogger(__name__) # One of select, epoll, kqueue or poll SELECT_TYPE = None # Use epoll's constants to keep life easy READ = 0x0001 WRITE = 0x0004 ERROR = 0x0008 # Reason for this unconventional dict initialization is the fact that on some # platforms select.error is an aliases for OSError. We don't want the lambda # for select.error to win over one for OSError. 
_SELECT_ERROR_CHECKERS = {} if pika.compat.PY3: #InterruptedError is undefined in PY2 #pylint: disable=E0602 _SELECT_ERROR_CHECKERS[InterruptedError] = lambda e: True _SELECT_ERROR_CHECKERS[select.error] = lambda e: e.args[0] == errno.EINTR _SELECT_ERROR_CHECKERS[IOError] = lambda e: e.errno == errno.EINTR _SELECT_ERROR_CHECKERS[OSError] = lambda e: e.errno == errno.EINTR # We can reduce the number of elements in the list by looking at super-sub # class relationship because only the most generic ones needs to be caught. # For now the optimization is left out. # Following is better but still incomplete. #_SELECT_ERRORS = tuple(filter(lambda e: not isinstance(e, OSError), # _SELECT_ERROR_CHECKERS.keys()) # + [OSError]) _SELECT_ERRORS = tuple(_SELECT_ERROR_CHECKERS.keys()) def _is_resumable(exc): ''' Check if caught exception represents EINTR error. :param exc: exception; must be one of classes in _SELECT_ERRORS ''' checker = _SELECT_ERROR_CHECKERS.get(exc.__class__, None) if checker is not None: return checker(exc) else: return False class SelectConnection(BaseConnection): """An asynchronous connection adapter that attempts to use the fastest event loop adapter for the given platform. """ def __init__(self, # pylint: disable=R0913 parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, stop_ioloop_on_close=True, custom_ioloop=None): """Create a new instance of the Connection object. 
:param pika.connection.Parameters parameters: Connection parameters :param method on_open_callback: Method to call on connection open :param method on_open_error_callback: Called if the connection can't be established: on_open_error_callback(connection, str|exception) :param method on_close_callback: Called when the connection is closed: on_close_callback(connection, reason_code, reason_text) :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected :param custom_ioloop: Override using the global IOLoop in Tornado :raises: RuntimeError """ ioloop = custom_ioloop or IOLoop() super(SelectConnection, self).__init__(parameters, on_open_callback, on_open_error_callback, on_close_callback, ioloop, stop_ioloop_on_close) def _adapter_connect(self): """Connect to the RabbitMQ broker, returning True on success, False on failure. :rtype: bool """ error = super(SelectConnection, self)._adapter_connect() if not error: self.ioloop.add_handler(self.socket.fileno(), self._handle_events, self.event_state) return error def _adapter_disconnect(self): """Disconnect from the RabbitMQ broker""" if self.socket: self.ioloop.remove_handler(self.socket.fileno()) super(SelectConnection, self)._adapter_disconnect() class IOLoop(object): """Singleton wrapper that decides which type of poller to use, creates an instance of it in start_poller and keeps the invoking application in a blocking state by calling the pollers start method. Poller should keep looping until IOLoop.instance().stop() is called or there is a socket error. Passes through all operations to the loaded poller object. 
""" def __init__(self): self._poller = self._get_poller() @staticmethod def _get_poller(): """Determine the best poller to use for this enviroment.""" poller = None if hasattr(select, 'epoll'): if not SELECT_TYPE or SELECT_TYPE == 'epoll': LOGGER.debug('Using EPollPoller') poller = EPollPoller() if not poller and hasattr(select, 'kqueue'): if not SELECT_TYPE or SELECT_TYPE == 'kqueue': LOGGER.debug('Using KQueuePoller') poller = KQueuePoller() if (not poller and hasattr(select, 'poll') and hasattr(select.poll(), 'modify')): # pylint: disable=E1101 if not SELECT_TYPE or SELECT_TYPE == 'poll': LOGGER.debug('Using PollPoller') poller = PollPoller() if not poller: LOGGER.debug('Using SelectPoller') poller = SelectPoller() return poller def add_timeout(self, deadline, callback_method): """[API] Add the callback_method to the IOLoop timer to fire after deadline seconds. Returns a handle to the timeout. Do not confuse with Tornado's timeout where you pass in the time you want to have your callback called. Only pass in the seconds until it's to be called. 
:param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :rtype: str """ return self._poller.add_timeout(deadline, callback_method) def remove_timeout(self, timeout_id): """[API] Remove a timeout :param str timeout_id: The timeout id to remove """ self._poller.remove_timeout(timeout_id) def add_handler(self, fileno, handler, events): """[API] Add a new fileno to the set to be monitored :param int fileno: The file descriptor :param method handler: What is called when an event happens :param int events: The event mask using READ, WRITE, ERROR """ self._poller.add_handler(fileno, handler, events) def update_handler(self, fileno, events): """[API] Set the events to the current events :param int fileno: The file descriptor :param int events: The event mask using READ, WRITE, ERROR """ self._poller.update_handler(fileno, events) def remove_handler(self, fileno): """[API] Remove a file descriptor from the set :param int fileno: The file descriptor """ self._poller.remove_handler(fileno) def start(self): """[API] Start the main poller loop. It will loop until requested to exit. See `IOLoop.stop`. """ self._poller.start() def stop(self): """[API] Request exit from the ioloop. The loop is NOT guaranteed to stop before this method returns. This is the only method that may be called from another thread. 
""" self._poller.stop() def process_timeouts(self): """[Extension] Process pending timeouts, invoking callbacks for those whose time has come """ self._poller.process_timeouts() def activate_poller(self): """[Extension] Activate the poller """ self._poller.activate_poller() def deactivate_poller(self): """[Extension] Deactivate the poller """ self._poller.deactivate_poller() def poll(self): """[Extension] Wait for events of interest on registered file descriptors until an event of interest occurs or next timer deadline or `_PollerBase._MAX_POLL_TIMEOUT`, whichever is sooner, and dispatch the corresponding event handlers. """ self._poller.poll() _AbstractBase = abc.ABCMeta('_AbstractBase', (object,), {}) class _PollerBase(_AbstractBase): # pylint: disable=R0902 """Base class for select-based IOLoop implementations""" # Drop out of the poll loop every _MAX_POLL_TIMEOUT secs as a worst case; # this is only a backstop value; we will run timeouts when they are # scheduled. _MAX_POLL_TIMEOUT = 5 # if the poller uses MS override with 1000 POLL_TIMEOUT_MULT = 1 def __init__(self): # fd-to-handler function mappings self._fd_handlers = dict() # event-to-fdset mappings self._fd_events = {READ: set(), WRITE: set(), ERROR: set()} self._processing_fd_event_map = {} # Reentrancy tracker of the `start` method self._start_nesting_levels = 0 self._timeouts = {} self._next_timeout = None self._stopping = False # Mutex for controlling critical sections where ioloop-interrupt sockets # are created, used, and destroyed. Needed in case `stop()` is called # from a thread. self._mutex = threading.Lock() # ioloop-interrupt socket pair; initialized in start() self._r_interrupt = None self._w_interrupt = None def add_timeout(self, deadline, callback_method): """Add the callback_method to the IOLoop timer to fire after deadline seconds. Returns a handle to the timeout. Do not confuse with Tornado's timeout where you pass in the time you want to have your callback called. 
Only pass in the seconds until it's to be called. :param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :rtype: str """ timeout_at = time.time() + deadline value = {'deadline': timeout_at, 'callback': callback_method} # TODO when timer resolution is low (e.g., windows), we get id collision # when retrying failing connection with tiny (e.g., 0) retry interval timeout_id = hash(frozenset(value.items())) self._timeouts[timeout_id] = value if not self._next_timeout or timeout_at < self._next_timeout: self._next_timeout = timeout_at LOGGER.debug('add_timeout: added timeout %s; deadline=%s at %s', timeout_id, deadline, timeout_at) return timeout_id def remove_timeout(self, timeout_id): """Remove a timeout if it's still in the timeout stack :param str timeout_id: The timeout id to remove """ try: timeout = self._timeouts.pop(timeout_id) except KeyError: LOGGER.warning('remove_timeout: %s not found', timeout_id) else: if timeout['deadline'] == self._next_timeout: self._next_timeout = None LOGGER.debug('remove_timeout: removed %s', timeout_id) def _get_next_deadline(self): """Get the interval to the next timeout event, or a default interval """ if self._next_timeout: timeout = max(self._next_timeout - time.time(), 0) elif self._timeouts: deadlines = [t['deadline'] for t in self._timeouts.values()] self._next_timeout = min(deadlines) timeout = max((self._next_timeout - time.time(), 0)) else: timeout = self._MAX_POLL_TIMEOUT timeout = min(timeout, self._MAX_POLL_TIMEOUT) return timeout * self.POLL_TIMEOUT_MULT def process_timeouts(self): """Process pending timeouts, invoking callbacks for those whose time has come """ now = time.time() # Run the timeouts in order of deadlines. Although this shouldn't # be strictly necessary it preserves old behaviour when timeouts # were only run periodically. 
to_run = sorted([(k, timer) for (k, timer) in self._timeouts.items() if timer['deadline'] <= now], key=lambda item: item[1]['deadline']) for k, timer in to_run: if k not in self._timeouts: # Previous invocation(s) should have deleted the timer. continue try: timer['callback']() finally: # Don't do 'del self._timeout[k]' as the key might # have been deleted just now. if self._timeouts.pop(k, None) is not None: self._next_timeout = None def add_handler(self, fileno, handler, events): """Add a new fileno to the set to be monitored :param int fileno: The file descriptor :param method handler: What is called when an event happens :param int events: The event mask using READ, WRITE, ERROR """ self._fd_handlers[fileno] = handler self._set_handler_events(fileno, events) # Inform the derived class self._register_fd(fileno, events) def update_handler(self, fileno, events): """Set the events to the current events :param int fileno: The file descriptor :param int events: The event mask using READ, WRITE, ERROR """ # Record the change events_cleared, events_set = self._set_handler_events(fileno, events) # Inform the derived class self._modify_fd_events(fileno, events=events, events_to_clear=events_cleared, events_to_set=events_set) def remove_handler(self, fileno): """Remove a file descriptor from the set :param int fileno: The file descriptor """ try: del self._processing_fd_event_map[fileno] except KeyError: pass events_cleared, _ = self._set_handler_events(fileno, 0) del self._fd_handlers[fileno] # Inform the derived class self._unregister_fd(fileno, events_to_clear=events_cleared) def _set_handler_events(self, fileno, events): """Set the handler's events to the given events; internal to `_PollerBase`. 
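        As a self-contained illustration of the cleared/set bookkeeping
        described above (the mask values mirror the module-level
        READ/WRITE/ERROR constants; ``old_events``/``new_events`` are made-up
        names for this sketch):

        ```python
        # Mirror the module-level event masks.
        READ, WRITE, ERROR = 0x0001, 0x0004, 0x0008

        old_events = READ            # fd currently registered for reads only
        new_events = READ | WRITE    # caller now wants reads and writes

        events_set = new_events & ~old_events      # newly requested bits
        events_cleared = old_events & ~new_events  # bits being dropped
        ```

        Here ``events_set`` comes out as WRITE and ``events_cleared`` as 0,
        which is exactly what the derived poller is told via
        ``_modify_fd_events``.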
:param int fileno: The file descriptor :param int events: The event mask (READ, WRITE, ERROR) :returns: a 2-tuple (events_cleared, events_set) """ events_cleared = 0 events_set = 0 for evt in (READ, WRITE, ERROR): if events & evt: if fileno not in self._fd_events[evt]: self._fd_events[evt].add(fileno) events_set |= evt else: if fileno in self._fd_events[evt]: self._fd_events[evt].discard(fileno) events_cleared |= evt return events_cleared, events_set def activate_poller(self): """Activate the poller """ # Activate the underlying poller and register current events self._init_poller() fd_to_events = defaultdict(int) for event, file_descriptors in self._fd_events.items(): for fileno in file_descriptors: fd_to_events[fileno] |= event for fileno, events in fd_to_events.items(): self._register_fd(fileno, events) def deactivate_poller(self): """Deactivate the poller """ self._uninit_poller() def start(self): """Start the main poller loop. It will loop until requested to exit """ self._start_nesting_levels += 1 if self._start_nesting_levels == 1: LOGGER.debug('Entering IOLoop') self._stopping = False # Activate the underlying poller and register current events self.activate_poller() # Create ioloop-interrupt socket pair and register read handler. 
            # NOTE: we defer their creation because some users (e.g.,
            # BlockingConnection adapter) don't use the event loop and these
            # sockets would get reported as leaks
            with self._mutex:
                assert self._r_interrupt is None
                self._r_interrupt, self._w_interrupt = self._get_interrupt_pair()
                self.add_handler(self._r_interrupt.fileno(),
                                 self._read_interrupt,
                                 READ)
        else:
            LOGGER.debug('Reentering IOLoop at nesting level=%s',
                         self._start_nesting_levels)

        try:
            # Run event loop
            while not self._stopping:
                self.poll()
                self.process_timeouts()
        finally:
            self._start_nesting_levels -= 1

            if self._start_nesting_levels == 0:
                LOGGER.debug('Cleaning up IOLoop')

                # Unregister and close ioloop-interrupt socket pair
                with self._mutex:
                    self.remove_handler(self._r_interrupt.fileno())
                    self._r_interrupt.close()
                    self._r_interrupt = None
                    self._w_interrupt.close()
                    self._w_interrupt = None

                # Deactivate the underlying poller
                self.deactivate_poller()
            else:
                LOGGER.debug('Leaving IOLoop with %s nesting levels remaining',
                             self._start_nesting_levels)

    def stop(self):
        """Request exit from the ioloop. The loop is NOT guaranteed to
        stop before this method returns.

        This is the only method that may be called from another thread.

        """
        LOGGER.debug('Stopping IOLoop')
        self._stopping = True

        with self._mutex:
            if self._w_interrupt is None:
                return

            try:
                # Send byte to interrupt the poll loop, use send() instead of
                # os.write for Windows compatibility
                self._w_interrupt.send(b'X')
            except OSError as err:
                if err.errno != errno.EWOULDBLOCK:
                    raise
            except Exception as err:
                # There's nothing sensible to do here, we'll exit the interrupt
                # loop after POLL_TIMEOUT secs in worst case anyway.
                LOGGER.warning("Failed to send ioloop interrupt: %s", err)
                raise

    @abc.abstractmethod
    def poll(self):
        """Wait for events on interested file descriptors.
""" raise NotImplementedError @abc.abstractmethod def _init_poller(self): """Notify the implementation to allocate the poller resource""" raise NotImplementedError @abc.abstractmethod def _uninit_poller(self): """Notify the implementation to release the poller resource""" raise NotImplementedError @abc.abstractmethod def _register_fd(self, fileno, events): """The base class invokes this method to notify the implementation to register the file descriptor with the polling object. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events: The event mask (READ, WRITE, ERROR) """ raise NotImplementedError @abc.abstractmethod def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set): """The base class invoikes this method to notify the implementation to modify an already registered file descriptor. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events: absolute events (READ, WRITE, ERROR) :param int events_to_clear: The events to clear (READ, WRITE, ERROR) :param int events_to_set: The events to set (READ, WRITE, ERROR) """ raise NotImplementedError @abc.abstractmethod def _unregister_fd(self, fileno, events_to_clear): """The base class invokes this method to notify the implementation to unregister the file descriptor being tracked by the polling object. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events_to_clear: The events to clear (READ, WRITE, ERROR) """ raise NotImplementedError def _dispatch_fd_events(self, fd_event_map): """ Helper to dispatch callbacks for file descriptors that received events. Before doing so we re-calculate the event mask based on what is currently set in case it has been changed under our feet by a previous callback. 
        We also store a reference to the fd_event_map so that we can detect
        removal of a fileno during processing of another callback and not
        generate spurious callbacks on it.

        :param dict fd_event_map: Map of fds to events received on them.
        """
        # Reset the prior map; if the call is nested, this will suppress the
        # remaining dispatch in the earlier call.
        self._processing_fd_event_map.clear()
        self._processing_fd_event_map = fd_event_map

        for fileno in dictkeys(fd_event_map):
            if fileno not in fd_event_map:
                # the fileno has been removed from the map under our feet.
                continue

            events = fd_event_map[fileno]
            for evt in [READ, WRITE, ERROR]:
                if fileno not in self._fd_events[evt]:
                    events &= ~evt

            if events:
                handler = self._fd_handlers[fileno]
                handler(fileno, events)

    @staticmethod
    def _get_interrupt_pair():
        """ Use a socketpair to be able to interrupt the ioloop if called from
        another thread. Socketpair() is not supported on some OS (Win) so use
        a pair of simple UDP sockets instead. The sockets will be closed and
        garbage collected by python when the ioloop itself is.
        """
        try:
            read_sock, write_sock = socket.socketpair()
        except AttributeError:
            LOGGER.debug("Using custom socketpair for interrupt")
            read_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            read_sock.bind(('localhost', 0))
            write_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            write_sock.connect(read_sock.getsockname())

        read_sock.setblocking(0)
        write_sock.setblocking(0)
        return read_sock, write_sock

    def _read_interrupt(self, interrupt_fd, events):  # pylint: disable=W0613
        """ Read the interrupt byte(s). We ignore the event mask as we can
        only get here if there's data to be read on our fd.
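        The UDP fallback in ``_get_interrupt_pair`` above can be exercised in
        isolation. This is a simplified sketch, not the adapter's actual code
        path (it keeps the sockets blocking and closes them eagerly):

        ```python
        import socket

        # Emulate the Windows fallback of _get_interrupt_pair: a pair of
        # loopback UDP sockets standing in for socket.socketpair().
        read_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        read_sock.bind(('localhost', 0))
        write_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        write_sock.connect(read_sock.getsockname())

        write_sock.send(b'X')        # what stop() does to wake the poller
        data = read_sock.recv(512)   # what _read_interrupt drains
        read_sock.close()
        write_sock.close()
        ```

        The byte's value is irrelevant; its arrival is what makes the
        poll call return so the loop can observe ``_stopping``.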
:param int interrupt_fd: The file descriptor to read from :param int events: (unused) The events generated for this fd """ try: # NOTE Use recv instead of os.read for windows compatibility # TODO _r_interrupt is a DGRAM sock, so attempted reading of 512 # bytes will not have the desired effect in case stop was called # multiple times self._r_interrupt.recv(512) except OSError as err: if err.errno != errno.EAGAIN: raise class SelectPoller(_PollerBase): """Default behavior is to use Select since it's the widest supported and has all of the methods we need for child classes as well. One should only need to override the update_handler and start methods for additional types. """ # if the poller uses MS specify 1000 POLL_TIMEOUT_MULT = 1 def __init__(self): """Create an instance of the SelectPoller """ super(SelectPoller, self).__init__() def poll(self): """Wait for events of interest on registered file descriptors until an event of interest occurs or next timer deadline or _MAX_POLL_TIMEOUT, whichever is sooner, and dispatch the corresponding event handlers. """ while True: try: if (self._fd_events[READ] or self._fd_events[WRITE] or self._fd_events[ERROR]): read, write, error = select.select( self._fd_events[READ], self._fd_events[WRITE], self._fd_events[ERROR], self._get_next_deadline()) else: # NOTE When called without any FDs, select fails on # Windows with error 10022, 'An invalid argument was # supplied'. 
                    time.sleep(self._get_next_deadline())
                    read, write, error = [], [], []
                break
            except _SELECT_ERRORS as error:
                if _is_resumable(error):
                    continue
                else:
                    raise

        # Build an event bit mask for each fileno we've received an event for
        fd_event_map = defaultdict(int)
        for fd_set, evt in zip((read, write, error), (READ, WRITE, ERROR)):
            for fileno in fd_set:
                fd_event_map[fileno] |= evt

        self._dispatch_fd_events(fd_event_map)

    def _init_poller(self):
        """Notify the implementation to allocate the poller resource"""
        # It's a no op in SelectPoller
        pass

    def _uninit_poller(self):
        """Notify the implementation to release the poller resource"""
        # It's a no op in SelectPoller
        pass

    def _register_fd(self, fileno, events):
        """The base class invokes this method to notify the implementation to
        register the file descriptor with the polling object. The request must
        be ignored if the poller is not activated.

        :param int fileno: The file descriptor
        :param int events: The event mask using READ, WRITE, ERROR
        """
        # It's a no op in SelectPoller
        pass

    def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set):
        """The base class invokes this method to notify the implementation to
        modify an already registered file descriptor. The request must be
        ignored if the poller is not activated.

        :param int fileno: The file descriptor
        :param int events: absolute events (READ, WRITE, ERROR)
        :param int events_to_clear: The events to clear (READ, WRITE, ERROR)
        :param int events_to_set: The events to set (READ, WRITE, ERROR)
        """
        # It's a no op in SelectPoller
        pass

    def _unregister_fd(self, fileno, events_to_clear):
        """The base class invokes this method to notify the implementation to
        unregister the file descriptor being tracked by the polling object. The
        request must be ignored if the poller is not activated.
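        The bit-mask aggregation done in ``poll`` above can be sketched
        standalone; the fd numbers here are hypothetical:

        ```python
        from collections import defaultdict

        # Mirror the module-level event masks.
        READ, WRITE, ERROR = 0x0001, 0x0004, 0x0008

        # Hypothetical select() result: fds 4 and 5 readable, fd 5 also
        # writable, no error fds.
        read, write, error = [4, 5], [5], []

        fd_event_map = defaultdict(int)
        for fd_set, evt in zip((read, write, error), (READ, WRITE, ERROR)):
            for fileno in fd_set:
                fd_event_map[fileno] |= evt
        ```

        Each fd ends up mapped to the OR of all events reported for it, so
        the dispatcher makes a single handler call per fd.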
:param int fileno: The file descriptor :param int events_to_clear: The events to clear (READ, WRITE, ERROR) """ # It's a no op in SelectPoller pass class KQueuePoller(_PollerBase): """KQueuePoller works on BSD based systems and is faster than select""" def __init__(self): """Create an instance of the KQueuePoller :param int fileno: The file descriptor to check events for :param method handler: What is called when an event happens :param int events: The events to look for """ super(KQueuePoller, self).__init__() self._kqueue = None @staticmethod def _map_event(kevent): """return the event type associated with a kevent object :param kevent kevent: a kevent object as returned by kqueue.control() """ if kevent.filter == select.KQ_FILTER_READ: return READ elif kevent.filter == select.KQ_FILTER_WRITE: return WRITE elif kevent.flags & select.KQ_EV_ERROR: return ERROR def poll(self): """Wait for events of interest on registered file descriptors until an event of interest occurs or next timer deadline or _MAX_POLL_TIMEOUT, whichever is sooner, and dispatch the corresponding event handlers. """ while True: try: kevents = self._kqueue.control(None, 1000, self._get_next_deadline()) break except _SELECT_ERRORS as error: if _is_resumable(error): continue else: raise fd_event_map = defaultdict(int) for event in kevents: fd_event_map[event.ident] |= self._map_event(event) self._dispatch_fd_events(fd_event_map) def _init_poller(self): """Notify the implementation to allocate the poller resource""" assert self._kqueue is None self._kqueue = select.kqueue() def _uninit_poller(self): """Notify the implementation to release the poller resource""" self._kqueue.close() self._kqueue = None def _register_fd(self, fileno, events): """The base class invokes this method to notify the implementation to register the file descriptor with the polling object. The request must be ignored if the poller is not activated. 
        :param int fileno: The file descriptor
        :param int events: The event mask using READ, WRITE, ERROR
        """
        self._modify_fd_events(fileno,
                               events=events,
                               events_to_clear=0,
                               events_to_set=events)

    def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set):
        """The base class invokes this method to notify the implementation to
        modify an already registered file descriptor. The request must be
        ignored if the poller is not activated.

        :param int fileno: The file descriptor
        :param int events: absolute events (READ, WRITE, ERROR)
        :param int events_to_clear: The events to clear (READ, WRITE, ERROR)
        :param int events_to_set: The events to set (READ, WRITE, ERROR)
        """
        if self._kqueue is None:
            return

        kevents = list()
        if events_to_clear & READ:
            kevents.append(select.kevent(fileno,
                                         filter=select.KQ_FILTER_READ,
                                         flags=select.KQ_EV_DELETE))
        if events_to_set & READ:
            kevents.append(select.kevent(fileno,
                                         filter=select.KQ_FILTER_READ,
                                         flags=select.KQ_EV_ADD))
        if events_to_clear & WRITE:
            kevents.append(select.kevent(fileno,
                                         filter=select.KQ_FILTER_WRITE,
                                         flags=select.KQ_EV_DELETE))
        if events_to_set & WRITE:
            kevents.append(select.kevent(fileno,
                                         filter=select.KQ_FILTER_WRITE,
                                         flags=select.KQ_EV_ADD))

        self._kqueue.control(kevents, 0)

    def _unregister_fd(self, fileno, events_to_clear):
        """The base class invokes this method to notify the implementation to
        unregister the file descriptor being tracked by the polling object. The
        request must be ignored if the poller is not activated.

        :param int fileno: The file descriptor
        :param int events_to_clear: The events to clear (READ, WRITE, ERROR)
        """
        self._modify_fd_events(fileno,
                               events=0,
                               events_to_clear=events_to_clear,
                               events_to_set=0)


class PollPoller(_PollerBase):
    """Poll works on Linux and can have better performance than EPoll in
    certain scenarios. Both are faster than select.
""" POLL_TIMEOUT_MULT = 1000 def __init__(self): """Create an instance of the KQueuePoller :param int fileno: The file descriptor to check events for :param method handler: What is called when an event happens :param int events: The events to look for """ self._poll = None super(PollPoller, self).__init__() @staticmethod def _create_poller(): """ :rtype: `select.poll` """ return select.poll() # pylint: disable=E1101 def poll(self): """Wait for events of interest on registered file descriptors until an event of interest occurs or next timer deadline or _MAX_POLL_TIMEOUT, whichever is sooner, and dispatch the corresponding event handlers. """ while True: try: events = self._poll.poll(self._get_next_deadline()) break except _SELECT_ERRORS as error: if _is_resumable(error): continue else: raise fd_event_map = defaultdict(int) for fileno, event in events: fd_event_map[fileno] |= event self._dispatch_fd_events(fd_event_map) def _init_poller(self): """Notify the implementation to allocate the poller resource""" assert self._poll is None self._poll = self._create_poller() def _uninit_poller(self): """Notify the implementation to release the poller resource""" if hasattr(self._poll, "close"): self._poll.close() self._poll = None def _register_fd(self, fileno, events): """The base class invokes this method to notify the implementation to register the file descriptor with the polling object. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events: The event mask using READ, WRITE, ERROR """ if self._poll is not None: self._poll.register(fileno, events) def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set): """The base class invoikes this method to notify the implementation to modify an already registered file descriptor. The request must be ignored if the poller is not activated. 
:param int fileno: The file descriptor :param int events: absolute events (READ, WRITE, ERROR) :param int events_to_clear: The events to clear (READ, WRITE, ERROR) :param int events_to_set: The events to set (READ, WRITE, ERROR) """ if self._poll is not None: self._poll.modify(fileno, events) def _unregister_fd(self, fileno, events_to_clear): """The base class invokes this method to notify the implementation to unregister the file descriptor being tracked by the polling object. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events_to_clear: The events to clear (READ, WRITE, ERROR) """ if self._poll is not None: self._poll.unregister(fileno) class EPollPoller(PollPoller): """EPoll works on Linux and can have better performance than Poll in certain scenarios. Both are faster than select. """ POLL_TIMEOUT_MULT = 1 @staticmethod def _create_poller(): """ :rtype: `select.poll` """ return select.epoll() # pylint: disable=E1101 pika-0.11.0/pika/adapters/tornado_connection.py000066400000000000000000000075131315131611700215100ustar00rootroot00000000000000"""Use pika with the Tornado IOLoop""" from tornado import ioloop import logging import time from pika.adapters import base_connection LOGGER = logging.getLogger(__name__) class TornadoConnection(base_connection.BaseConnection): """The TornadoConnection runs on the Tornado IOLoop. If you're running the connection in a web app, make sure you set stop_ioloop_on_close to False, which is the default behavior for this adapter, otherwise the web app will stop taking requests. 
    :param pika.connection.Parameters parameters: Connection parameters
    :param on_open_callback: The method to call when the connection is open
    :type on_open_callback: method
    :param on_open_error_callback: Method to call if the connection can't be
        opened
    :type on_open_error_callback: method
    :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected
    :param custom_ioloop: Override using the global IOLoop in Tornado

    """
    WARN_ABOUT_IOLOOP = True

    def __init__(self,
                 parameters=None,
                 on_open_callback=None,
                 on_open_error_callback=None,
                 on_close_callback=None,
                 stop_ioloop_on_close=False,
                 custom_ioloop=None):
        """Create a new instance of the TornadoConnection class, connecting
        to RabbitMQ automatically

        :param pika.connection.Parameters parameters: Connection parameters
        :param on_open_callback: The method to call when the connection is open
        :type on_open_callback: method
        :param method on_open_error_callback: Called if the connection can't
            be established: on_open_error_callback(connection, str|exception)
        :param method on_close_callback: Called when the connection is closed:
            on_close_callback(connection, reason_code, reason_text)
        :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected
        :param custom_ioloop: Override using the global IOLoop in Tornado

        """
        self.sleep_counter = 0
        self.ioloop = custom_ioloop or ioloop.IOLoop.instance()
        super(TornadoConnection, self).__init__(parameters,
                                                on_open_callback,
                                                on_open_error_callback,
                                                on_close_callback,
                                                self.ioloop,
                                                stop_ioloop_on_close)

    def _adapter_connect(self):
        """Connect to the remote socket, adding the socket to the IOLoop if
        connected.
:rtype: bool """ error = super(TornadoConnection, self)._adapter_connect() if not error: self.ioloop.add_handler(self.socket.fileno(), self._handle_events, self.event_state) return error def _adapter_disconnect(self): """Disconnect from the RabbitMQ broker""" if self.socket: self.ioloop.remove_handler(self.socket.fileno()) super(TornadoConnection, self)._adapter_disconnect() def add_timeout(self, deadline, callback_method): """Add the callback_method to the IOLoop timer to fire after deadline seconds. Returns a handle to the timeout. Do not confuse with Tornado's timeout where you pass in the time you want to have your callback called. Only pass in the seconds until it's to be called. :param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :rtype: str """ return self.ioloop.add_timeout(time.time() + deadline, callback_method) def remove_timeout(self, timeout_id): """Remove the timeout from the IOLoop by the ID returned from add_timeout. :rtype: str """ return self.ioloop.remove_timeout(timeout_id) pika-0.11.0/pika/adapters/twisted_connection.py000066400000000000000000000405651315131611700215310ustar00rootroot00000000000000"""Using Pika with a Twisted reactor. Supports two methods of establishing the connection, using TwistedConnection or TwistedProtocolConnection. For details about each method, see the docstrings of the corresponding classes. The interfaces in this module are Deferred-based when possible. This means that the connection.channel() method and most of the channel methods return Deferreds instead of taking a callback argument and that basic_consume() returns a Twisted DeferredQueue where messages from the server will be stored. Refer to the docstrings for TwistedConnection.channel() and the TwistedChannel class for details. 
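A stdlib sketch of the Deferred-based wrapping described above: a method that reports completion through a ``callback`` keyword argument is turned into one that returns a future-like object instead. All names below are illustrative stand-ins, not part of Pika's or Twisted's API.

```python
# Sketch only: concurrent.futures.Future stands in for a Twisted Deferred.
from concurrent.futures import Future


def wrap_callback_method(method):
    # Call `method(..., callback=...)` and resolve a Future with whatever
    # the callback receives (several arguments collapse into a tuple).
    def wrapped(*args, **kwargs):
        fut = Future()

        def on_done(*cb_args):
            fut.set_result(cb_args[0] if len(cb_args) == 1 else cb_args)

        kwargs['callback'] = on_done
        method(*args, **kwargs)
        return fut

    return wrapped


def declare_queue(name, callback):
    # Stand-in for a callback-style channel method
    callback('%s-ok' % name)


declare = wrap_callback_method(declare_queue)
result = declare('logs')
```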
""" import functools from twisted.internet import defer, error, reactor from twisted.python import log from pika import connection from pika import exceptions from pika.adapters import base_connection class ClosableDeferredQueue(defer.DeferredQueue): """ Like the normal Twisted DeferredQueue, but after close() is called with an Exception instance all pending Deferreds are errbacked and further attempts to call get() or put() return a Failure wrapping that exception. """ def __init__(self, size=None, backlog=None): self.closed = None super(ClosableDeferredQueue, self).__init__(size, backlog) def put(self, obj): if self.closed: return defer.fail(self.closed) return defer.DeferredQueue.put(self, obj) def get(self): if self.closed: return defer.fail(self.closed) return defer.DeferredQueue.get(self) def close(self, reason): self.closed = reason while self.waiting: self.waiting.pop().errback(reason) self.pending = [] class TwistedChannel(object): """A wrapper wround Pika's Channel. Channel methods that normally take a callback argument are wrapped to return a Deferred that fires with whatever would be passed to the callback. If the channel gets closed, all pending Deferreds are errbacked with a ChannelClosed exception. The returned Deferreds fire with whatever arguments the callback to the original method would receive. The basic_consume method is wrapped in a special way, see its docstring for details. 
""" WRAPPED_METHODS = ('exchange_declare', 'exchange_delete', 'queue_declare', 'queue_bind', 'queue_purge', 'queue_unbind', 'basic_qos', 'basic_get', 'basic_recover', 'tx_select', 'tx_commit', 'tx_rollback', 'flow', 'basic_cancel') def __init__(self, channel): self.__channel = channel self.__closed = None self.__calls = set() self.__consumers = {} channel.add_on_close_callback(self.channel_closed) def channel_closed(self, channel, reply_code, reply_text): # enter the closed state self.__closed = exceptions.ChannelClosed(reply_code, reply_text) # errback all pending calls for d in self.__calls: d.errback(self.__closed) # close all open queues for consumers in self.__consumers.values(): for c in consumers: c.close(self.__closed) # release references to stored objects self.__calls = set() self.__consumers = {} def basic_consume(self, *args, **kwargs): """Consume from a server queue. Returns a Deferred that fires with a tuple: (queue_object, consumer_tag). The queue object is an instance of ClosableDeferredQueue, where data received from the queue will be stored. Clients should use its get() method to fetch individual message. """ if self.__closed: return defer.fail(self.__closed) queue = ClosableDeferredQueue() queue_name = kwargs['queue'] kwargs['consumer_callback'] = lambda *args: queue.put(args) self.__consumers.setdefault(queue_name, set()).add(queue) try: consumer_tag = self.__channel.basic_consume(*args, **kwargs) # TODO this except without types would suppress system-exiting # exceptions, such as SystemExit and KeyboardInterrupt. It should be at # least `except Exception` and preferably more specific. except: return defer.fail() return defer.succeed((queue, consumer_tag)) def queue_delete(self, *args, **kwargs): """Wraps the method the same way all the others are wrapped, but removes the reference to the queue object after it gets deleted on the server. 
""" wrapped = self.__wrap_channel_method('queue_delete') queue_name = kwargs['queue'] d = wrapped(*args, **kwargs) return d.addCallback(self.__clear_consumer, queue_name) def basic_publish(self, *args, **kwargs): """Make sure the channel is not closed and then publish. Return a Deferred that fires with the result of the channel's basic_publish. """ if self.__closed: return defer.fail(self.__closed) return defer.succeed(self.__channel.basic_publish(*args, **kwargs)) def __wrap_channel_method(self, name): """Wrap Pika's Channel method to make it return a Deferred that fires when the method completes and errbacks if the channel gets closed. If the original method's callback would receive more than one argument, the Deferred fires with a tuple of argument values. """ method = getattr(self.__channel, name) @functools.wraps(method) def wrapped(*args, **kwargs): if self.__closed: return defer.fail(self.__closed) d = defer.Deferred() self.__calls.add(d) d.addCallback(self.__clear_call, d) def single_argument(*args): """ Make sure that the deferred is called with a single argument. In case the original callback fires with more than one, convert to a tuple. """ if len(args) > 1: d.callback(tuple(args)) else: d.callback(*args) kwargs['callback'] = single_argument try: method(*args, **kwargs) # TODO this except without types would suppress system-exiting # exceptions, such as SystemExit and KeyboardInterrupt. It should be # at least `except Exception` and preferably more specific. except: return defer.fail() return d return wrapped def __clear_consumer(self, ret, queue_name): self.__consumers.pop(queue_name, None) return ret def __clear_call(self, ret, d): self.__calls.discard(d) return ret def __getattr__(self, name): # Wrap methods defined in WRAPPED_METHODS, forward the rest of accesses # to the channel. 
if name in self.WRAPPED_METHODS: return self.__wrap_channel_method(name) return getattr(self.__channel, name) class IOLoopReactorAdapter(object): """An adapter providing Pika's IOLoop interface using a Twisted reactor. Accepts a TwistedConnection object and a Twisted reactor object. """ def __init__(self, connection, reactor): self.connection = connection self.reactor = reactor self.started = False def add_timeout(self, deadline, callback_method): """Add the callback_method to the IOLoop timer to fire after deadline seconds. Returns a handle to the timeout. Do not confuse with Tornado's timeout where you pass in the time you want to have your callback called. Only pass in the seconds until it's to be called. :param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :rtype: twisted.internet.interfaces.IDelayedCall """ return self.reactor.callLater(deadline, callback_method) def remove_timeout(self, call): """Remove a call :param twisted.internet.interfaces.IDelayedCall call: The call to cancel """ call.cancel() def stop(self): # Guard against stopping the reactor multiple times if not self.started: return self.started = False self.reactor.stop() def start(self): # Guard against starting the reactor multiple times if self.started: return self.started = True self.reactor.run() def remove_handler(self, _): # The fileno is irrelevant, as it's the connection's job to provide it # to the reactor when asked to do so. Removing the handler from the # ioloop is removing it from the reactor in Twisted's parlance. self.reactor.removeReader(self.connection) self.reactor.removeWriter(self.connection) def update_handler(self, _, event_state): # Same as in remove_handler, the fileno is irrelevant. First remove the # connection entirely from the reactor, then add it back depending on # the event state. 
self.reactor.removeReader(self.connection) self.reactor.removeWriter(self.connection) if event_state & self.connection.READ: self.reactor.addReader(self.connection) if event_state & self.connection.WRITE: self.reactor.addWriter(self.connection) class TwistedConnection(base_connection.BaseConnection): """A standard Pika connection adapter. You instantiate the class passing the connection parameters and the connected callback and when it gets called you can start using it. The problem is that connection establishing is done using the blocking socket module. For instance, if the host you are connecting to is behind a misconfigured firewall that just drops packets, the whole process will freeze until the connection timeout passes. To work around that problem, use TwistedProtocolConnection, but read its docstring first. Objects of this class get put in the Twisted reactor which will notify them when the socket connection becomes readable or writable, so apart from implementing the BaseConnection interface, they also provide Twisted's IReadWriteDescriptor interface. """ def __init__(self, parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, stop_ioloop_on_close=False): super(TwistedConnection, self).__init__( parameters=parameters, on_open_callback=on_open_callback, on_open_error_callback=on_open_error_callback, on_close_callback=on_close_callback, ioloop=IOLoopReactorAdapter(self, reactor), stop_ioloop_on_close=stop_ioloop_on_close) def _adapter_connect(self): """Connect to the RabbitMQ broker""" # Connect (blockingly!)
to the server error = super(TwistedConnection, self)._adapter_connect() if not error: # Set the I/O events we're waiting for (see IOLoopReactorAdapter # docstrings for why it's OK to pass None as the file descriptor) self.ioloop.update_handler(None, self.event_state) return error def _adapter_disconnect(self): """Called when the adapter should disconnect""" self.ioloop.remove_handler(None) self._cleanup_socket() def _on_connected(self): """Call superclass and then update the event state to flush the outgoing frame out. Commit 50d842526d9f12d32ad9f3c4910ef60b8c301f59 removed a self._flush_outbound call that was in _send_frame which previously made this step unnecessary. """ super(TwistedConnection, self)._on_connected() self._manage_event_state() def channel(self, channel_number=None): """Return a Deferred that fires with an instance of a wrapper around the Pika Channel class. """ d = defer.Deferred() base_connection.BaseConnection.channel(self, d.callback, channel_number) return d.addCallback(TwistedChannel) # IReadWriteDescriptor methods def fileno(self): return self.socket.fileno() def logPrefix(self): return "twisted-pika" def connectionLost(self, reason): # If the connection was not closed cleanly, log the error if not reason.check(error.ConnectionDone): log.err(reason) self._on_terminate(connection.InternalCloseReasons.SOCKET_ERROR, str(reason)) def doRead(self): self._handle_read() def doWrite(self): self._handle_write() self._manage_event_state() class TwistedProtocolConnection(base_connection.BaseConnection): """A hybrid between a Pika Connection and a Twisted Protocol. Allows using Twisted's non-blocking connectTCP/connectSSL methods for connecting to the server. It has one caveat: TwistedProtocolConnection objects have a ready instance variable that's a Deferred which fires when the connection is ready to be used (the initial AMQP handshaking has been done). You *have* to wait for this Deferred to fire before requesting a channel. 
Since it's Twisted handling connection establishing it does not accept connect callbacks, you have to implement that within Twisted. Also remember that the host, port and ssl values of the connection parameters are ignored because, yet again, it's Twisted who manages the connection. """ def __init__(self, parameters=None, on_close_callback=None): self.ready = defer.Deferred() super(TwistedProtocolConnection, self).__init__( parameters=parameters, on_open_callback=self.connectionReady, on_open_error_callback=self.connectionFailed, on_close_callback=on_close_callback, ioloop=IOLoopReactorAdapter(self, reactor), stop_ioloop_on_close=False) def connect(self): # The connection is opened asynchronously by Twisted, so skip the whole # connect() part, except for setting the connection state self._set_connection_state(self.CONNECTION_INIT) def _adapter_connect(self): # Should never be called, as we override connect() and leave the # building of a TCP connection to Twisted, but implement anyway to keep # the interface return False def _adapter_disconnect(self): # Disconnect from the server self.transport.loseConnection() def _flush_outbound(self): """Override BaseConnection._flush_outbound to send all buffered data the Twisted way, by writing to the transport. No need for buffering, Twisted handles that for us. """ while self.outbound_buffer: self.transport.write(self.outbound_buffer.popleft()) def channel(self, channel_number=None): """Create a new channel with the next available channel number or pass in a channel number to use. Must be non-zero if you would like to specify but it is recommended that you let Pika manage the channel numbers. Return a Deferred that fires with an instance of a wrapper around the Pika Channel class. :param int channel_number: The channel number to use, defaults to the next available.
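The method resolves its Deferred by handing ``d.callback`` to the underlying callback-style API. The same trick can be sketched with a stdlib Future and a stand-in connection (illustrative names, not Pika's API):

```python
# Sketch only: fut.set_result plays the role of d.callback.
from concurrent.futures import Future


class FakeConnection(object):
    def channel(self, on_open, channel_number=None):
        # A real adapter opens the channel first; the stand-in calls
        # straight back with a dummy channel object.
        on_open({'number': channel_number or 1})


def channel_future(conn, channel_number=None):
    fut = Future()
    conn.channel(fut.set_result, channel_number)
    return fut


opened = channel_future(FakeConnection()).result(timeout=1)
```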
""" d = defer.Deferred() base_connection.BaseConnection.channel(self, d.callback, channel_number) return d.addCallback(TwistedChannel) # IProtocol methods def dataReceived(self, data): # Pass the bytes to Pika for parsing self._on_data_available(data) def connectionLost(self, reason): # Let the caller know there's been an error d, self.ready = self.ready, None if d: d.errback(reason) def makeConnection(self, transport): self.transport = transport self.connectionMade() def connectionMade(self): # Tell everyone we're connected self._on_connected() # Our own methods def connectionReady(self, res): d, self.ready = self.ready, None if d: d.callback(res) def connectionFailed(self, connection_unused, error_message=None): d, self.ready = self.ready, None if d: attempts = self.params.connection_attempts exc = exceptions.AMQPConnectionError(attempts) d.errback(exc) pika-0.11.0/pika/amqp_object.py000066400000000000000000000031731315131611700163020ustar00rootroot00000000000000"""Base classes that are extended by low level AMQP frames and higher level AMQP classes and methods. """ class AMQPObject(object): """Base object that is extended by AMQP low level frames and AMQP classes and methods. """ NAME = 'AMQPObject' INDEX = None def __repr__(self): items = list() for key, value in self.__dict__.items(): if getattr(self.__class__, key, None) != value: items.append('%s=%s' % (key, value)) if not items: return "<%s>" % self.NAME return "<%s(%s)>" % (self.NAME, sorted(items)) class Class(AMQPObject): """Is extended by AMQP classes""" NAME = 'Unextended Class' class Method(AMQPObject): """Is extended by AMQP methods""" NAME = 'Unextended Method' synchronous = False def _set_content(self, properties, body): """If the method is a content frame, set the properties and body to be carried as attributes of the class. 
:param pika.frame.Properties properties: AMQP Basic Properties :param body: The message body :type body: str or unicode """ self._properties = properties self._body = body def get_properties(self): """Return the properties if they are set. :rtype: pika.frame.Properties """ return self._properties def get_body(self): """Return the message body if it is set. :rtype: str|unicode """ return self._body class Properties(AMQPObject): """Class to encompass message properties (AMQP Basic.Properties)""" NAME = 'Unextended Properties' pika-0.11.0/pika/callback.py000066400000000000000000000350611315131611700155530ustar00rootroot00000000000000"""Callback management class, common area for keeping track of all callbacks in the Pika stack. """ import functools import logging from pika import frame from pika import amqp_object from pika.compat import xrange, canonical_str LOGGER = logging.getLogger(__name__) def name_or_value(value): """Will take Frame objects, classes, etc and attempt to return a valid string identifier for them. :param value: The value to sanitize :type value: pika.amqp_object.AMQPObject|pika.frame.Frame|int|unicode|str :rtype: str """ # Is it subclass of AMQPObject try: if issubclass(value, amqp_object.AMQPObject): return value.NAME except TypeError: pass # Is it a Pika frame object? 
if isinstance(value, frame.Method): return value.method.NAME # Is it a Pika frame object (go after Method since Method extends this) if isinstance(value, amqp_object.AMQPObject): return value.NAME # Cast the value to a str (python 2 and python 3); encoding as UTF-8 on Python 2 return canonical_str(value) def sanitize_prefix(function): """Automatically call name_or_value on the prefix passed in.""" @functools.wraps(function) def wrapper(*args, **kwargs): args = list(args) offset = 1 if 'prefix' in kwargs: kwargs['prefix'] = name_or_value(kwargs['prefix']) elif len(args) - 1 >= offset: args[offset] = name_or_value(args[offset]) offset += 1 if 'key' in kwargs: kwargs['key'] = name_or_value(kwargs['key']) elif len(args) - 1 >= offset: args[offset] = name_or_value(args[offset]) return function(*tuple(args), **kwargs) return wrapper def check_for_prefix_and_key(function): """Automatically return false if the key or prefix is not in the callbacks for the instance. """ @functools.wraps(function) def wrapper(*args, **kwargs): offset = 1 # Sanitize the prefix if 'prefix' in kwargs: prefix = name_or_value(kwargs['prefix']) else: prefix = name_or_value(args[offset]) offset += 1 # Make sure to sanitize the key as well if 'key' in kwargs: key = name_or_value(kwargs['key']) else: key = name_or_value(args[offset]) # Make sure prefix and key are in the stack if prefix not in args[0]._stack or key not in args[0]._stack[prefix]: return False # Execute the method return function(*args, **kwargs) return wrapper class CallbackManager(object): """CallbackManager is a global callback system designed to be a single place where Pika can manage callbacks and process them. It should be referenced by the CallbackManager.instance() method instead of constructing new instances of it. 
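A toy sketch of the prefix/key callback stack this class manages: callbacks are filed under (prefix, key) and one-shot entries are discarded after they fire. Illustrative only; the real implementation also tracks caller restrictions and argument matching.

```python
# Sketch only: minimal stand-in for the CallbackManager stack.
class TinyCallbackStack(object):
    def __init__(self):
        self._stack = {}

    def add(self, prefix, key, callback, one_shot=True):
        entries = self._stack.setdefault(prefix, {}).setdefault(key, [])
        entries.append((callback, one_shot))

    def process(self, prefix, key, *args):
        entries = self._stack.get(prefix, {}).get(key, [])
        if prefix in self._stack:
            # keep only the entries that are not one-shot
            self._stack[prefix][key] = [e for e in entries if not e[1]]
        for callback, _one_shot in entries:
            callback(*args)
        return bool(entries)


fired = []
stack = TinyCallbackStack()
stack.add(1, 'Basic.ConsumeOk', fired.append, one_shot=True)
first = stack.process(1, 'Basic.ConsumeOk', 'frame-1')
second = stack.process(1, 'Basic.ConsumeOk', 'frame-2')
```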
""" CALLS = 'calls' ARGUMENTS = 'arguments' DUPLICATE_WARNING = 'Duplicate callback found for "%s:%s"' CALLBACK = 'callback' ONE_SHOT = 'one_shot' ONLY_CALLER = 'only' def __init__(self): """Create an instance of the CallbackManager""" self._stack = dict() @sanitize_prefix def add(self, prefix, key, callback, one_shot=True, only_caller=None, arguments=None): """Add a callback to the stack for the specified key. If the call is specified as one_shot, it will be removed after being fired The prefix is usually the channel number but the class is generic and prefix and key may be any value. If you pass in only_caller CallbackManager will restrict processing of the callback to only the calling function/object that you specify. :param prefix: Categorize the callback :type prefix: str or int :param key: The key for the callback :type key: object or str or dict :param method callback: The callback to call :param bool one_shot: Remove this callback after it is called :param object only_caller: Only allow one_caller value to call the event that fires the callback. 
:param dict arguments: Arguments to validate when processing :rtype: tuple(prefix, key) """ # Prep the stack if prefix not in self._stack: self._stack[prefix] = dict() if key not in self._stack[prefix]: self._stack[prefix][key] = list() # Check for a duplicate for callback_dict in self._stack[prefix][key]: if (callback_dict[self.CALLBACK] == callback and callback_dict[self.ARGUMENTS] == arguments and callback_dict[self.ONLY_CALLER] == only_caller): if callback_dict[self.ONE_SHOT] is True: callback_dict[self.CALLS] += 1 LOGGER.debug('Incremented callback reference counter: %r', callback_dict) else: LOGGER.warning(self.DUPLICATE_WARNING, prefix, key) return prefix, key # Create the callback dictionary callback_dict = self._callback_dict(callback, one_shot, only_caller, arguments) self._stack[prefix][key].append(callback_dict) LOGGER.debug('Added: %r', callback_dict) return prefix, key def clear(self): """Clear all the callbacks if there are any defined.""" self._stack = dict() LOGGER.debug('Callbacks cleared') @sanitize_prefix def cleanup(self, prefix): """Remove all callbacks from the stack by a prefix. Returns True if keys were there to be removed :param str or int prefix: The prefix for keeping track of callbacks with :rtype: bool """ LOGGER.debug('Clearing out %r from the stack', prefix) if prefix not in self._stack or not self._stack[prefix]: return False del self._stack[prefix] return True @sanitize_prefix def pending(self, prefix, key): """Return count of callbacks for a given prefix or key or None :param prefix: Categorize the callback :type prefix: str or int :param key: The key for the callback :type key: object or str or dict :rtype: None or int """ if not prefix in self._stack or not key in self._stack[prefix]: return None return len(self._stack[prefix][key]) @sanitize_prefix @check_for_prefix_and_key def process(self, prefix, key, caller, *args, **keywords): """Run through and process all the callbacks for the specified keys. 
Caller should be specified at all times so that callbacks which require a specific function to call CallbackManager.process will not be processed. :param prefix: Categorize the callback :type prefix: str or int :param key: The key for the callback :type key: object or str or dict :param object caller: Who is firing the event :param list args: Any optional arguments :param dict keywords: Optional keyword arguments :rtype: bool """ LOGGER.debug('Processing %s:%s', prefix, key) if prefix not in self._stack or key not in self._stack[prefix]: return False callbacks = list() # Check each callback, append it to the list if it should be called for callback_dict in list(self._stack[prefix][key]): if self._should_process_callback(callback_dict, caller, list(args)): callbacks.append(callback_dict[self.CALLBACK]) if callback_dict[self.ONE_SHOT]: self._use_one_shot_callback(prefix, key, callback_dict) # Call each callback for callback in callbacks: LOGGER.debug('Calling %s for "%s:%s"', callback, prefix, key) try: callback(*args, **keywords) except: LOGGER.exception('Calling %s for "%s:%s" failed', callback, prefix, key) raise return True @sanitize_prefix @check_for_prefix_and_key def remove(self, prefix, key, callback_value=None, arguments=None): """Remove a callback from the stack by prefix, key and optionally the callback itself. If you only pass in prefix and key, all callbacks for that prefix and key will be removed. 
:param str or int prefix: The prefix for keeping track of callbacks with :param str key: The callback key :param method callback_value: The method defined to call on callback :param dict arguments: Optional arguments to check :rtype: bool """ if callback_value: offsets_to_remove = list() for offset in xrange(len(self._stack[prefix][key]), 0, -1): callback_dict = self._stack[prefix][key][offset - 1] if (callback_dict[self.CALLBACK] == callback_value and self._arguments_match(callback_dict, [arguments])): offsets_to_remove.append(offset - 1) for offset in offsets_to_remove: try: LOGGER.debug('Removing callback #%i: %r', offset, self._stack[prefix][key][offset]) del self._stack[prefix][key][offset] except KeyError: pass self._cleanup_callback_dict(prefix, key) return True @sanitize_prefix @check_for_prefix_and_key def remove_all(self, prefix, key): """Remove all callbacks for the specified prefix and key. :param str prefix: The prefix for keeping track of callbacks with :param str key: The callback key """ del self._stack[prefix][key] self._cleanup_callback_dict(prefix, key) def _arguments_match(self, callback_dict, args): """Validate if the arguments passed in match the expected arguments in the callback_dict. We expect this to be a frame passed in to *args for process or passed in as a list from remove. :param dict callback_dict: The callback dictionary to evaluate against :param list args: The arguments passed in as a list """ if callback_dict[self.ARGUMENTS] is None: return True if not args: return False if isinstance(args[0], dict): return self._dict_arguments_match(args[0], callback_dict[self.ARGUMENTS]) return self._obj_arguments_match(args[0].method if hasattr(args[0], 'method') else args[0], callback_dict[self.ARGUMENTS]) def _callback_dict(self, callback, one_shot, only_caller, arguments): """Return the callback dictionary. 
:param method callback: The callback to call :param bool one_shot: Remove this callback after it is called :param object only_caller: Only allow one_caller value to call the event that fires the callback. :rtype: dict """ value = { self.CALLBACK: callback, self.ONE_SHOT: one_shot, self.ONLY_CALLER: only_caller, self.ARGUMENTS: arguments } if one_shot: value[self.CALLS] = 1 return value def _cleanup_callback_dict(self, prefix, key=None): """Remove empty dict nodes in the callback stack. :param str or int prefix: The prefix for keeping track of callbacks with :param str key: The callback key """ if key and key in self._stack[prefix] and not self._stack[prefix][key]: del self._stack[prefix][key] if prefix in self._stack and not self._stack[prefix]: del self._stack[prefix] @staticmethod def _dict_arguments_match(value, expectation): """Checks a dict to see if it has attributes that meet the expectation. :param dict value: The dict to evaluate :param dict expectation: The values to check against :rtype: bool """ LOGGER.debug('Comparing %r to %r', value, expectation) for key in expectation: if value.get(key) != expectation[key]: LOGGER.debug('Values in dict do not match for %s', key) return False return True @staticmethod def _obj_arguments_match(value, expectation): """Checks an object to see if it has attributes that meet the expectation. :param object value: The object to evaluate :param dict expectation: The values to check against :rtype: bool """ for key in expectation: if not hasattr(value, key): LOGGER.debug('%r does not have required attribute: %s', type(value), key) return False if getattr(value, key) != expectation[key]: LOGGER.debug('Values in %s do not match for %s', type(value), key) return False return True def _should_process_callback(self, callback_dict, caller, args): """Returns True if the callback should be processed.
:param dict callback_dict: The callback configuration :param object caller: Who is firing the event :param list args: Any optional arguments :rtype: bool """ if not self._arguments_match(callback_dict, args): LOGGER.debug('Arguments do not match for %r, %r', callback_dict, args) return False return (callback_dict[self.ONLY_CALLER] is None or (callback_dict[self.ONLY_CALLER] and callback_dict[self.ONLY_CALLER] == caller)) def _use_one_shot_callback(self, prefix, key, callback_dict): """Process the one-shot callback, decrementing the use counter and removing it from the stack if it's now been fully used. :param str or int prefix: The prefix for keeping track of callbacks with :param str key: The callback key :param dict callback_dict: The callback dict to process """ LOGGER.debug('Processing use of oneshot callback') callback_dict[self.CALLS] -= 1 LOGGER.debug('%i registered uses left', callback_dict[self.CALLS]) if callback_dict[self.CALLS] <= 0: self.remove(prefix, key, callback_dict[self.CALLBACK], callback_dict[self.ARGUMENTS]) pika-0.11.0/pika/channel.py000066400000000000000000001664661315131611700154450ustar00rootroot00000000000000"""The Channel class provides a wrapper for interacting with RabbitMQ implementing the methods and behaviors for an AMQP Channel. """ import collections import logging import uuid import pika.frame as frame import pika.exceptions as exceptions import pika.spec as spec from pika.utils import is_callable from pika.compat import unicode_type, dictkeys, is_integer LOGGER = logging.getLogger(__name__) MAX_CHANNELS = 65535 # per AMQP 0.9.1 spec. class Channel(object): """A Channel is the primary communication method for interacting with RabbitMQ. It is recommended that you do not directly invoke the creation of a channel object in your application code but rather construct the a channel by calling the active connection's channel() method. 
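The class docstring above recommends letting the connection construct channels. A toy factory sketch of the reason: the connection assigns the next free channel number and keeps the bookkeeping in one place. Illustrative only, not Pika's implementation.

```python
# Sketch only: channel numbers are handed out by the connection.
class TinyConnection(object):
    def __init__(self):
        self._channels = {}

    def channel(self):
        number = len(self._channels) + 1  # next free channel number
        self._channels[number] = 'channel-%d' % number
        return number


conn = TinyConnection()
first = conn.channel()
second = conn.channel()
```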
""" # Disable pyling messages concerning "method could be a function" # pylint: disable=R0201 CLOSED = 0 OPENING = 1 OPEN = 2 CLOSING = 3 # client-initiated close in progress _STATE_NAMES = { CLOSED: 'CLOSED', OPENING: 'OPENING', OPEN: 'OPEN', CLOSING: 'CLOSING' } _ON_CHANNEL_CLEANUP_CB_KEY = '_on_channel_cleanup' def __init__(self, connection, channel_number, on_open_callback): """Create a new instance of the Channel :param pika.connection.Connection connection: The connection :param int channel_number: The channel number for this instance :param callable on_open_callback: The callback to call on channel open """ if not isinstance(channel_number, int): raise exceptions.InvalidChannelNumber self.channel_number = channel_number self.callbacks = connection.callbacks self.connection = connection # Initially, flow is assumed to be active self.flow_active = True self._content_assembler = ContentFrameAssembler() self._blocked = collections.deque(list()) self._blocking = None self._has_on_flow_callback = False self._cancelled = set() self._consumers = dict() self._consumers_with_noack = set() self._on_flowok_callback = None self._on_getok_callback = None self._on_openok_callback = on_open_callback self._state = self.CLOSED # We save the closing reason code and text to be passed to # on-channel-close callback at closing of the channel. Channel.close # stores the given reply_code/reply_text if the channel was in OPEN or # OPENING states. An incoming Channel.Close AMQP method from broker will # override this value. And a sudden loss of connection has the highest # prececence to override it. 
self._closing_code_and_text = (0, '') # opaque cookie value set by wrapper layer (e.g., BlockingConnection) # via _set_cookie self._cookie = None def __int__(self): """Return the channel object as its channel number :rtype: int """ return self.channel_number def __repr__(self): return '<%s number=%s %s conn=%r>' % (self.__class__.__name__, self.channel_number, self._STATE_NAMES[self._state], self.connection) def add_callback(self, callback, replies, one_shot=True): """Pass in a callback handler and a list of replies from the RabbitMQ broker which you'd like the callback notified of. Callbacks should allow for the frame parameter to be passed in. :param callable callback: The callback to call :param list replies: The replies to get a callback for :param bool one_shot: Only handle the first type callback """ for reply in replies: self.callbacks.add(self.channel_number, reply, callback, one_shot) def add_on_cancel_callback(self, callback): """Pass a callback function that will be called when the basic_cancel is sent by the server. The callback function should receive a frame parameter. :param callable callback: The callback to call on Basic.Cancel from broker """ self.callbacks.add(self.channel_number, spec.Basic.Cancel, callback, False) def add_on_close_callback(self, callback): """Pass a callback function that will be called when the channel is closed. The callback function will receive the channel, the reply_code (int) and the reply_text (str) describing why the channel was closed. If the channel is closed by broker via Channel.Close, the callback will receive the reply_code/reply_text provided by the broker. If channel closing is initiated by user (either directly or indirectly by closing a connection containing the channel) and closing concludes gracefully without Channel.Close from the broker and without loss of connection, the callback will receive 0 as reply_code and empty string as reply_text.
        If channel was closed due to loss of connection, the callback will
        receive reply_code and reply_text representing the loss of connection.

        :param callable callback: The callback, having the signature:
            callback(Channel, int reply_code, str reply_text)

        """
        self.callbacks.add(self.channel_number, '_on_channel_close', callback,
                           False, self)

    def add_on_flow_callback(self, callback):
        """Pass a callback function that will be called when Channel.Flow is
        called by the remote server. Note that newer versions of RabbitMQ
        will not issue this but instead use TCP backpressure

        :param callable callback: The callback function

        """
        self._has_on_flow_callback = True
        self.callbacks.add(self.channel_number, spec.Channel.Flow, callback,
                           False)

    def add_on_return_callback(self, callback):
        """Pass a callback function that will be called when basic_publish has
        sent a message that has been rejected and returned by the server.

        :param callable callback: The function to call, having the signature
            callback(channel, method, properties, body)
            where
            channel: pika.Channel
            method: pika.spec.Basic.Return
            properties: pika.spec.BasicProperties
            body: str, unicode, or bytes (python 3.x)

        """
        self.callbacks.add(self.channel_number, '_on_return', callback, False)

    def basic_ack(self, delivery_tag=0, multiple=False):
        """Acknowledge one or more messages. When sent by the client, this
        method acknowledges one or more messages delivered via the Deliver or
        Get-Ok methods. When sent by server, this method acknowledges one or
        more messages published with the Publish method on a channel in
        confirm mode. The acknowledgement can be for a single message or a
        set of messages up to and including a specific message.

        :param integer delivery_tag: int/long The server-assigned delivery tag
        :param bool multiple: If set to True, the delivery tag is treated as
                              "up to and including", so that multiple messages
                              can be acknowledged with a single method. If set
                              to False, the delivery tag refers to a single
                              message.
                              If the multiple field is 1, and the delivery tag
                              is zero, this indicates acknowledgement of all
                              outstanding messages.

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()

        return self._send_method(spec.Basic.Ack(delivery_tag, multiple))

    def basic_cancel(self, callback=None, consumer_tag='', nowait=False):
        """This method cancels a consumer. This does not affect already
        delivered messages, but it does mean the server will not send any more
        messages for that consumer. The client may receive an arbitrary number
        of messages in between sending the cancel method and receiving the
        cancel-ok reply. It may also be sent from the server to the client in
        the event of the consumer being unexpectedly cancelled (i.e. cancelled
        for any reason other than the server receiving the corresponding
        basic.cancel from the client). This allows clients to be notified of
        the loss of consumers due to events such as queue deletion.

        :param callable callback: Callback to call for a Basic.CancelOk
            response; MUST be None when nowait=True. MUST be callable when
            nowait=False.
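        The callback/nowait contract described above can be sketched as a
        standalone check; `validate_nowait_callback` is a hypothetical helper
        name used only for illustration, not part of pika's API:

        ```python
        # Hypothetical sketch of the rule: a completion callback only makes
        # sense when we wait for Basic.CancelOk (nowait=False), and must be
        # omitted when nowait=True.
        def validate_nowait_callback(callback, nowait):
            if nowait:
                if callback is not None:
                    raise ValueError(
                        'Completion callback must be None when nowait=True')
            elif callback is None:
                raise ValueError(
                    'Must have completion callback with nowait=False')
        ```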
        :param str consumer_tag: Identifier for the consumer
        :param bool nowait: Do not expect a Basic.CancelOk response
        :raises ValueError:

        """
        self._validate_channel_and_callback(callback)

        if nowait:
            if callback is not None:
                raise ValueError(
                    'Completion callback must be None when nowait=True')
        else:
            if callback is None:
                raise ValueError(
                    'Must have completion callback with nowait=False')

        if consumer_tag in self._cancelled:
            # We check for cancelled first, because basic_cancel removes
            # consumers closed with nowait from self._consumers
            LOGGER.warning('basic_cancel - consumer is already cancelling: %s',
                           consumer_tag)
            return

        if consumer_tag not in self._consumers:
            # Could be cancelled by user or broker earlier
            LOGGER.warning('basic_cancel - consumer not found: %s',
                           consumer_tag)
            return

        LOGGER.debug('Cancelling consumer: %s (nowait=%s)',
                     consumer_tag, nowait)

        if nowait:
            # This is our last opportunity while the channel is open to remove
            # this consumer callback and help gc; unfortunately, this consumer's
            # self._cancelled and self._consumers_with_noack (if any) entries
            # will persist until the channel is closed.
            del self._consumers[consumer_tag]

        if callback is not None:
            if nowait:
                raise ValueError('Cannot pass a callback if nowait is True')
            self.callbacks.add(self.channel_number, spec.Basic.CancelOk,
                               callback)

        self._cancelled.add(consumer_tag)

        self._rpc(spec.Basic.Cancel(consumer_tag=consumer_tag, nowait=nowait),
                  self._on_cancelok if not nowait else None,
                  [(spec.Basic.CancelOk, {'consumer_tag': consumer_tag})]
                  if nowait is False else [])

    def basic_consume(self, consumer_callback,
                      queue='',
                      no_ack=False,
                      exclusive=False,
                      consumer_tag=None,
                      arguments=None):
        """Sends the AMQP 0-9-1 command Basic.Consume to the broker and binds
        messages for the consumer_tag to the consumer callback. If you do not
        pass in a consumer_tag, one will be automatically generated for you.
        Returns the consumer tag.
        For more information on basic_consume, see:
        Tutorial 2 at http://www.rabbitmq.com/getstarted.html
        http://www.rabbitmq.com/confirms.html
        http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.consume

        :param callable consumer_callback: The function to call when
            consuming with the signature
            consumer_callback(channel, method, properties, body), where
            channel: pika.Channel
            method: pika.spec.Basic.Deliver
            properties: pika.spec.BasicProperties
            body: str, unicode, or bytes (python 3.x)
        :param queue: The queue to consume from
        :type queue: str or unicode
        :param bool no_ack: if set to True, automatic acknowledgement mode will
            be used (see http://www.rabbitmq.com/confirms.html)
        :param bool exclusive: Don't allow other consumers on the queue
        :param consumer_tag: Specify your own consumer tag
        :type consumer_tag: str or unicode
        :param dict arguments: Custom key/value pair arguments for the consumer
        :rtype: str

        """
        self._validate_channel_and_callback(consumer_callback)

        # If a consumer tag was not passed, create one
        if not consumer_tag:
            consumer_tag = self._generate_consumer_tag()

        if consumer_tag in self._consumers or consumer_tag in self._cancelled:
            raise exceptions.DuplicateConsumerTag(consumer_tag)

        if no_ack:
            self._consumers_with_noack.add(consumer_tag)

        self._consumers[consumer_tag] = consumer_callback

        self._rpc(spec.Basic.Consume(queue=queue,
                                     consumer_tag=consumer_tag,
                                     no_ack=no_ack,
                                     exclusive=exclusive,
                                     arguments=arguments or dict()),
                  self._on_eventok,
                  [(spec.Basic.ConsumeOk,
                    {'consumer_tag': consumer_tag})])

        return consumer_tag

    def _generate_consumer_tag(self):
        """Generate a consumer tag

        NOTE: this protected method may be called by derived classes

        :returns: consumer tag
        :rtype: str

        """
        return 'ctag%i.%s' % (self.channel_number,
                              uuid.uuid4().hex)

    def basic_get(self, callback=None, queue='', no_ack=False):
        """Get a single message from the AMQP broker.
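        The auto-generated consumer-tag format produced by
        `_generate_consumer_tag` above ('ctag<channel>.<uuid4 hex>') can be
        sketched standalone, with `generate_consumer_tag` as a hypothetical
        free-function version for illustration:

        ```python
        import uuid

        # Sketch of the tag format: 'ctag' + channel number + '.' + a
        # 32-character uuid4 hex string, e.g. 'ctag1.4f3c2a...'.
        def generate_consumer_tag(channel_number):
            return 'ctag%i.%s' % (channel_number, uuid.uuid4().hex)
        ```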
        If you want to
        be notified of Basic.GetEmpty, use the Channel.add_callback method
        adding your Basic.GetEmpty callback which should expect only one
        parameter, frame. Due to implementation details, this cannot be called
        a second time until the callback is executed. For more information on
        basic_get and its parameters, see:

        http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.get

        :param callable callback: The callback to call with a message that has
            the signature callback(channel, method, properties, body), where:
            channel: pika.Channel
            method: pika.spec.Basic.GetOk
            properties: pika.spec.BasicProperties
            body: str, unicode, or bytes (python 3.x)
        :param queue: The queue to get a message from
        :type queue: str or unicode
        :param bool no_ack: Tell the broker to not expect a reply

        """
        self._validate_channel_and_callback(callback)
        # TODO Is basic_get meaningful when callback is None?

        if self._on_getok_callback is not None:
            raise exceptions.DuplicateGetOkCallback()
        self._on_getok_callback = callback

        # TODO Strangely, not using _rpc for the synchronous Basic.Get. Would
        # need to extend _rpc to handle Basic.GetOk method, header, and body
        # frames (or similar)
        self._send_method(spec.Basic.Get(queue=queue, no_ack=no_ack))

    def basic_nack(self, delivery_tag=None, multiple=False, requeue=True):
        """This method allows a client to reject one or more incoming messages.
        It can be used to interrupt and cancel large incoming messages, or
        return untreatable messages to their original queue.

        :param integer delivery_tag: int/long The server-assigned delivery tag
        :param bool multiple: If set to True, the delivery tag is treated as
                              "up to and including", so that multiple messages
                              can be acknowledged with a single method. If set
                              to False, the delivery tag refers to a single
                              message. If the multiple field is 1, and the
                              delivery tag is zero, this indicates
                              acknowledgement of all outstanding messages.
        :param bool requeue: If requeue is true, the server will attempt to
                             requeue the message.
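        The "multiple" flag semantics described above (for both Basic.Ack and
        Basic.Nack) can be sketched from the broker's point of view;
        `tags_settled` is a hypothetical name, not pika or broker code:

        ```python
        # Sketch, under the assumption the broker tracks outstanding delivery
        # tags per channel: multiple=True settles everything up to and
        # including delivery_tag, and delivery_tag=0 with multiple=True
        # settles all outstanding messages.
        def tags_settled(outstanding, delivery_tag, multiple):
            if multiple:
                if delivery_tag == 0:
                    return sorted(outstanding)
                return sorted(t for t in outstanding if t <= delivery_tag)
            return [delivery_tag] if delivery_tag in outstanding else []
        ```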
                             If requeue is false or the requeue attempt fails
                             the messages are discarded or dead-lettered.

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()

        return self._send_method(spec.Basic.Nack(delivery_tag, multiple,
                                                 requeue))

    def basic_publish(self, exchange, routing_key, body,
                      properties=None,
                      mandatory=False,
                      immediate=False):
        """Publish to the channel with the given exchange, routing key and body.
        For more information on basic_publish and what the parameters do, see:

        http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.publish

        :param exchange: The exchange to publish to
        :type exchange: str or unicode
        :param routing_key: The routing key to bind on
        :type routing_key: str or unicode
        :param body: The message body
        :type body: str or unicode
        :param pika.spec.BasicProperties properties: Basic.properties
        :param bool mandatory: The mandatory flag
        :param bool immediate: The immediate flag

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()
        if immediate:
            LOGGER.warning('The immediate flag is deprecated in RabbitMQ')
        if isinstance(body, unicode_type):
            body = body.encode('utf-8')
        properties = properties or spec.BasicProperties()
        self._send_method(spec.Basic.Publish(exchange=exchange,
                                             routing_key=routing_key,
                                             mandatory=mandatory,
                                             immediate=immediate),
                          (properties, body))

    def basic_qos(self,
                  callback=None,
                  prefetch_size=0,
                  prefetch_count=0,
                  all_channels=False):
        """Specify quality of service. This method requests a specific quality
        of service. The QoS can be specified for the current channel or for all
        channels on the connection. The client can request that messages be sent
        in advance so that when the client finishes processing a message, the
        following message is already held locally, rather than needing to be
        sent down the channel. Prefetching gives a performance improvement.

        :param callable callback: The callback to call for Basic.QosOk response
        :param int prefetch_size: This field specifies the prefetch window
                                  size.
                                  The server will send a message in
                                  advance if it is equal to or smaller in size
                                  than the available prefetch size (and also
                                  falls into other prefetch limits). May be set
                                  to zero, meaning "no specific limit",
                                  although other prefetch limits may still
                                  apply. The prefetch-size is ignored if the
                                  no-ack option is set.
        :param int prefetch_count: Specifies a prefetch window in terms of whole
                                   messages. This field may be used in
                                   combination with the prefetch-size field; a
                                   message will only be sent in advance if both
                                   prefetch windows (and those at the channel
                                   and connection level) allow it. The
                                   prefetch-count is ignored if the no-ack
                                   option is set.
        :param bool all_channels: Should the QoS apply to all channels

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Basic.Qos(prefetch_size, prefetch_count,
                                        all_channels),
                         callback,
                         [spec.Basic.QosOk])

    def basic_reject(self, delivery_tag, requeue=True):
        """Reject an incoming message. This method allows a client to reject a
        message. It can be used to interrupt and cancel large incoming messages,
        or return untreatable messages to their original queue.

        :param integer delivery_tag: int/long The server-assigned delivery tag
        :param bool requeue: If requeue is true, the server will attempt to
                             requeue the message. If requeue is false or the
                             requeue attempt fails the messages are discarded
                             or dead-lettered.
        :raises: TypeError

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()
        if not is_integer(delivery_tag):
            raise TypeError('delivery_tag must be an integer')
        return self._send_method(spec.Basic.Reject(delivery_tag, requeue))

    def basic_recover(self, callback=None, requeue=False):
        """This method asks the server to redeliver all unacknowledged messages
        on a specified channel. Zero or more messages may be redelivered. This
        method replaces the asynchronous Recover.
        :param callable callback: Callback to call when receiving
            Basic.RecoverOk
        :param bool requeue: If False, the message will be redelivered to the
                             original recipient. If True, the server will
                             attempt to requeue the message, potentially then
                             delivering it to an alternative subscriber.

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Basic.Recover(requeue), callback,
                         [spec.Basic.RecoverOk])

    def close(self, reply_code=0, reply_text="Normal shutdown"):
        """Invoke a graceful shutdown of the channel with the AMQP Broker.

        If channel is OPENING, transition to CLOSING and suppress the incoming
        Channel.OpenOk, if any.

        :param int reply_code: The reason code to send to broker
        :param str reply_text: The reason text to send to broker

        :raises ChannelClosed: if channel is already closed
        :raises ChannelAlreadyClosing: if channel is already closing

        """
        if self.is_closed:
            # Whoever is calling `close` might expect the on-channel-close-cb
            # to be called, which won't happen when it's already closed
            raise exceptions.ChannelClosed('Already closed: %s' % self)

        if self.is_closing:
            # Whoever is calling `close` might expect their reply_code and
            # reply_text to be sent to broker, which won't happen if we're
            # already closing.
            raise exceptions.ChannelAlreadyClosing('Already closing: %s' % self)

        # If channel is OPENING, we will transition it to CLOSING state,
        # causing the _on_openok method to suppress the OPEN state transition
        # and the on-channel-open-callback

        LOGGER.info('Closing channel (%s): %r on %s',
                    reply_code, reply_text, self)

        for consumer_tag in dictkeys(self._consumers):
            if consumer_tag not in self._cancelled:
                self.basic_cancel(consumer_tag=consumer_tag, nowait=True)

        # Change state after cancelling consumers to avoid ChannelClosed
        # exception from basic_cancel
        self._set_state(self.CLOSING)

        self._rpc(spec.Channel.Close(reply_code, reply_text, 0, 0),
                  self._on_closeok, [spec.Channel.CloseOk])

    def confirm_delivery(self, callback=None, nowait=False):
        """Turn on Confirm mode in the channel. Pass in a callback to be
        notified by the Broker when a message has been confirmed as received or
        rejected (Basic.Ack, Basic.Nack) from the broker to the publisher.

        For more information see:
            http://www.rabbitmq.com/extensions.html#confirms

        :param callable callback: The callback for delivery confirmations that
            has the following signature: callback(pika.frame.Method), where
            method_frame contains either method `spec.Basic.Ack` or
            `spec.Basic.Nack`.
        :param bool nowait: Do not send a reply frame (Confirm.SelectOk)

        """
        self._validate_channel_and_callback(callback)
        # TODO confirm_delivery should require a callback; it's meaningless
        # without a user callback to receive Basic.Ack/Basic.Nack notifications

        if not (self.connection.publisher_confirms and
                self.connection.basic_nack):
            raise exceptions.MethodNotImplemented('Not Supported on Server')

        # Add the ack and nack callbacks
        if callback is not None:
            self.callbacks.add(self.channel_number, spec.Basic.Ack, callback,
                               False)
            self.callbacks.add(self.channel_number, spec.Basic.Nack, callback,
                               False)

        # Send the RPC command
        self._rpc(spec.Confirm.Select(nowait),
                  self._on_selectok if not nowait else None,
                  [spec.Confirm.SelectOk] if nowait is False else [])

    @property
    def consumer_tags(self):
        """Property method that returns a list of currently active consumers

        :rtype: list

        """
        return dictkeys(self._consumers)

    def exchange_bind(self,
                      callback=None,
                      destination=None,
                      source=None,
                      routing_key='',
                      nowait=False,
                      arguments=None):
        """Bind an exchange to another exchange.
        :param callable callback: The callback to call on Exchange.BindOk; MUST
            be None when nowait=True
        :param destination: The destination exchange to bind
        :type destination: str or unicode
        :param source: The source exchange to bind to
        :type source: str or unicode
        :param routing_key: The routing key to bind on
        :type routing_key: str or unicode
        :param bool nowait: Do not wait for an Exchange.BindOk
        :param dict arguments: Custom key/value pair arguments for the binding

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Exchange.Bind(0, destination, source,
                                            routing_key, nowait,
                                            arguments or dict()),
                         callback,
                         [spec.Exchange.BindOk] if nowait is False else [])

    def exchange_declare(self,
                         callback=None,
                         exchange=None,
                         exchange_type='direct',
                         passive=False,
                         durable=False,
                         auto_delete=False,
                         internal=False,
                         nowait=False,
                         arguments=None):
        """This method creates an exchange if it does not already exist, and if
        the exchange exists, verifies that it is of the correct and expected
        class.

        If passive set, the server will reply with Declare-Ok if the exchange
        already exists with the same name, and raise an error if not and if the
        exchange does not already exist, the server MUST raise a channel
        exception with reply code 404 (not found).

        :param callable callback: Call this method on Exchange.DeclareOk; MUST
            be None when nowait=True
        :param exchange: The exchange name consists of a non-empty sequence of
            these characters: letters, digits, hyphen, underscore, period, or
            colon.
        :type exchange: str or unicode
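        The exchange-name rule quoted above (a non-empty sequence of letters,
        digits, hyphen, underscore, period, or colon) can be sketched as a
        regex check. Note this is illustration only: pika itself does not
        validate the name, and `is_valid_exchange_name` is a hypothetical
        helper:

        ```python
        import re

        # Sketch of the AMQP exchange-name character rule from the docstring.
        _EXCHANGE_NAME_RE = re.compile(r'^[a-zA-Z0-9\-_.:]+$')

        def is_valid_exchange_name(name):
            return bool(_EXCHANGE_NAME_RE.match(name))
        ```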
        :param str exchange_type: The exchange type to use
        :param bool passive: Perform a declare or just check to see if it exists
        :param bool durable: Survive a reboot of RabbitMQ
        :param bool auto_delete: Remove when no more queues are bound to it
        :param bool internal: Can only be published to by other exchanges
        :param bool nowait: Do not expect an Exchange.DeclareOk response
        :param dict arguments: Custom key/value pair arguments for the exchange

        """
        self._validate_channel_and_callback(callback)

        return self._rpc(spec.Exchange.Declare(0, exchange, exchange_type,
                                               passive, durable, auto_delete,
                                               internal, nowait,
                                               arguments or dict()),
                         callback,
                         [spec.Exchange.DeclareOk] if nowait is False else [])

    def exchange_delete(self,
                        callback=None,
                        exchange=None,
                        if_unused=False,
                        nowait=False):
        """Delete the exchange.

        :param callable callback: The function to call on Exchange.DeleteOk;
            MUST be None when nowait=True.
        :param exchange: The exchange name
        :type exchange: str or unicode
        :param bool if_unused: only delete if the exchange is unused
        :param bool nowait: Do not wait for an Exchange.DeleteOk

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Exchange.Delete(0, exchange, if_unused, nowait),
                         callback,
                         [spec.Exchange.DeleteOk] if nowait is False else [])

    def exchange_unbind(self,
                        callback=None,
                        destination=None,
                        source=None,
                        routing_key='',
                        nowait=False,
                        arguments=None):
        """Unbind an exchange from another exchange.

        :param callable callback: The callback to call on Exchange.UnbindOk;
            MUST be None when nowait=True.
        :param destination: The destination exchange to unbind
        :type destination: str or unicode
        :param source: The source exchange to unbind from
        :type source: str or unicode
        :param routing_key: The routing key to unbind
        :type routing_key: str or unicode
        :param bool nowait: Do not wait for an Exchange.UnbindOk
        :param dict arguments: Custom key/value pair arguments for the binding

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Exchange.Unbind(0, destination, source,
                                              routing_key, nowait, arguments),
                         callback,
                         [spec.Exchange.UnbindOk] if nowait is False else [])

    def flow(self, callback, active):
        """Turn Channel flow control off and on. Pass a callback to be notified
        of the response from the server. active is a bool. Callback should
        expect a bool in response indicating channel flow state. For more
        information, please reference:

        http://www.rabbitmq.com/amqp-0-9-1-reference.html#channel.flow

        :param callable callback: The callback to call upon completion
        :param bool active: Turn flow on or off

        """
        self._validate_channel_and_callback(callback)
        self._on_flowok_callback = callback
        self._rpc(spec.Channel.Flow(active), self._on_flowok,
                  [spec.Channel.FlowOk])

    @property
    def is_closed(self):
        """Returns True if the channel is closed.

        :rtype: bool

        """
        return self._state == self.CLOSED

    @property
    def is_closing(self):
        """Returns True if client-initiated closing of the channel is in
        progress.

        :rtype: bool

        """
        return self._state == self.CLOSING

    @property
    def is_open(self):
        """Returns True if the channel is open.

        :rtype: bool

        """
        return self._state == self.OPEN

    def open(self):
        """Open the channel"""
        self._set_state(self.OPENING)
        self._add_callbacks()
        self._rpc(spec.Channel.Open(), self._on_openok, [spec.Channel.OpenOk])

    def queue_bind(self, callback, queue, exchange,
                   routing_key=None,
                   nowait=False,
                   arguments=None):
        """Bind the queue to the specified exchange

        :param callable callback: The callback to call on Queue.BindOk;
            MUST be None when nowait=True.
        :param queue: The queue to bind to the exchange
        :type queue: str or unicode
        :param exchange: The source exchange to bind to
        :type exchange: str or unicode
        :param routing_key: The routing key to bind on
        :type routing_key: str or unicode
        :param bool nowait: Do not wait for a Queue.BindOk
        :param dict arguments: Custom key/value pair arguments for the binding

        """
        self._validate_channel_and_callback(callback)
        replies = [spec.Queue.BindOk] if nowait is False else []
        if routing_key is None:
            routing_key = queue
        return self._rpc(spec.Queue.Bind(0, queue, exchange, routing_key,
                                         nowait, arguments or dict()),
                         callback,
                         replies)

    def queue_declare(self, callback,
                      queue='',
                      passive=False,
                      durable=False,
                      exclusive=False,
                      auto_delete=False,
                      nowait=False,
                      arguments=None):
        """Declare queue, create if needed. This method creates or checks a
        queue. When creating a new queue the client can specify various
        properties that control the durability of the queue and its contents,
        and the level of sharing for the queue.

        Leave the queue name empty for an auto-named queue in RabbitMQ

        :param callable callback: callback(pika.frame.Method) for method
            Queue.DeclareOk; MUST be None when nowait=True.
        :param queue: The queue name
        :type queue: str or unicode
        :param bool passive: Only check to see if the queue exists
        :param bool durable: Survive reboots of the broker
        :param bool exclusive: Only allow access by the current connection
        :param bool auto_delete: Delete after consumer cancels or disconnects
        :param bool nowait: Do not wait for a Queue.DeclareOk
        :param dict arguments: Custom key/value arguments for the queue

        """
        if queue:
            condition = (spec.Queue.DeclareOk, {'queue': queue})
        else:
            condition = spec.Queue.DeclareOk  # pylint: disable=R0204
        replies = [condition] if nowait is False else []
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Queue.Declare(0, queue, passive, durable,
                                            exclusive, auto_delete, nowait,
                                            arguments or dict()),
                         callback,
                         replies)

    def queue_delete(self,
                     callback=None,
                     queue='',
                     if_unused=False,
                     if_empty=False,
                     nowait=False):
        """Delete a queue from the broker.

        :param callable callback: The callback to call on Queue.DeleteOk;
            MUST be None when nowait=True.
        :param queue: The queue to delete
        :type queue: str or unicode
        :param bool if_unused: only delete if it's unused
        :param bool if_empty: only delete if the queue is empty
        :param bool nowait: Do not wait for a Queue.DeleteOk

        """
        replies = [spec.Queue.DeleteOk] if nowait is False else []
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Queue.Delete(0, queue, if_unused, if_empty,
                                           nowait),
                         callback,
                         replies)

    def queue_purge(self, callback=None, queue='', nowait=False):
        """Purge all of the messages from the specified queue

        :param callable callback: The callback to call on Queue.PurgeOk;
            MUST be None when nowait=True.
        :param queue: The queue to purge
        :type queue: str or unicode
        :param bool nowait: Do not expect a Queue.PurgeOk response

        """
        replies = [spec.Queue.PurgeOk] if nowait is False else []
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Queue.Purge(0, queue, nowait), callback, replies)

    def queue_unbind(self,
                     callback=None,
                     queue='',
                     exchange=None,
                     routing_key=None,
                     arguments=None):
        """Unbind a queue from an exchange.

        :param callable callback: The callback to call on Queue.UnbindOk
        :param queue: The queue to unbind from the exchange
        :type queue: str or unicode
        :param exchange: The source exchange to bind from
        :type exchange: str or unicode
        :param routing_key: The routing key to unbind
        :type routing_key: str or unicode
        :param dict arguments: Custom key/value pair arguments for the binding

        """
        self._validate_channel_and_callback(callback)
        if routing_key is None:
            routing_key = queue
        return self._rpc(spec.Queue.Unbind(0, queue, exchange, routing_key,
                                           arguments or dict()),
                         callback,
                         [spec.Queue.UnbindOk])

    def tx_commit(self, callback=None):
        """Commit a transaction

        :param callable callback: The callback for delivery confirmations

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Tx.Commit(), callback, [spec.Tx.CommitOk])

    def tx_rollback(self, callback=None):
        """Rollback a transaction.

        :param callable callback: The callback for delivery confirmations

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Tx.Rollback(), callback, [spec.Tx.RollbackOk])

    def tx_select(self, callback=None):
        """Select standard transaction mode. This method sets the channel to
        use standard transactions. The client must use this method at least
        once on a channel before using the Commit or Rollback methods.
        :param callable callback: The callback for delivery confirmations

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Tx.Select(), callback, [spec.Tx.SelectOk])

    # Internal methods

    def _add_callbacks(self):
        """Callbacks that add the required behavior for a channel when
        connecting and connected to a server.

        """
        # Add a callback for Basic.GetEmpty
        self.callbacks.add(self.channel_number, spec.Basic.GetEmpty,
                           self._on_getempty, False)

        # Add a callback for Basic.Cancel
        self.callbacks.add(self.channel_number, spec.Basic.Cancel,
                           self._on_cancel, False)

        # Deprecated in newer versions of RabbitMQ but still register for it
        self.callbacks.add(self.channel_number, spec.Channel.Flow,
                           self._on_flow, False)

        # Add a callback for when the server closes our channel
        self.callbacks.add(self.channel_number, spec.Channel.Close,
                           self._on_close, True)

    def _add_on_cleanup_callback(self, callback):
        """For internal use only (e.g., Connection needs to remove closed
        channels from its channel container). Pass a callback function that
        will be called when the channel is being cleaned up after all
        channel-close callbacks.

        :param callable callback: The callback to call, having the
            signature: callback(channel)

        """
        self.callbacks.add(self.channel_number,
                           self._ON_CHANNEL_CLEANUP_CB_KEY, callback,
                           one_shot=True, only_caller=self)

    def _cleanup(self):
        """Remove all consumers and any callbacks for the channel."""
        self.callbacks.process(self.channel_number,
                               self._ON_CHANNEL_CLEANUP_CB_KEY, self,
                               self)
        self._consumers = dict()
        self.callbacks.cleanup(str(self.channel_number))
        self._cookie = None

    def _cleanup_consumer_ref(self, consumer_tag):
        """Remove any references to the consumer tag in internal structures
        for consumer state.
        :param str consumer_tag: The consumer tag to cleanup

        """
        self._consumers_with_noack.discard(consumer_tag)
        self._consumers.pop(consumer_tag, None)
        self._cancelled.discard(consumer_tag)

    def _get_cookie(self):
        """Used by the wrapper implementation (e.g., `BlockingChannel`) to
        retrieve the cookie that it set via `_set_cookie`

        :returns: opaque cookie value that was set via `_set_cookie`

        """
        return self._cookie

    def _handle_content_frame(self, frame_value):
        """This is invoked by the connection when frames that are not registered
        with the CallbackManager have been found. This should only be the case
        when the frames are related to content delivery.

        The _content_assembler will be invoked which will return the fully
        formed message in three parts when all of the body frames have been
        received.

        :param pika.amqp_object.Frame frame_value: The frame to deliver

        """
        try:
            response = self._content_assembler.process(frame_value)
        except exceptions.UnexpectedFrameError:
            self._on_unexpected_frame(frame_value)
            return

        if response:
            if isinstance(response[0].method, spec.Basic.Deliver):
                self._on_deliver(*response)
            elif isinstance(response[0].method, spec.Basic.GetOk):
                self._on_getok(*response)
            elif isinstance(response[0].method, spec.Basic.Return):
                self._on_return(*response)

    def _on_cancel(self, method_frame):
        """When the broker cancels a consumer, delete it from our internal
        dictionary.
        :param pika.frame.Method method_frame: The method frame received

        """
        if method_frame.method.consumer_tag in self._cancelled:
            # User-initiated cancel is waiting for Cancel-ok
            return

        self._cleanup_consumer_ref(method_frame.method.consumer_tag)

    def _on_cancelok(self, method_frame):
        """Called in response to a frame from the Broker when the
        client sends Basic.Cancel

        :param pika.frame.Method method_frame: The method frame received

        """
        self._cleanup_consumer_ref(method_frame.method.consumer_tag)

    def _on_close(self, method_frame):
        """Handle the case where our channel has been closed for us

        :param pika.frame.Method method_frame: Method frame with Channel.Close
            method

        """
        LOGGER.warning('Received remote Channel.Close (%s): %r on %s',
                       method_frame.method.reply_code,
                       method_frame.method.reply_text,
                       self)

        # AMQP 0.9.1 requires CloseOk response to Channel.Close; Note, we should
        # not be called when connection is closed
        self._send_method(spec.Channel.CloseOk())

        if self.is_closing:
            # Since we already sent Channel.Close, we need to wait for CloseOk
            # before cleaning up to avoid a race condition whereby our channel
            # number might get reused before our CloseOk arrives

            # Save the details to provide to user callback when CloseOk arrives
            self._closing_code_and_text = (method_frame.method.reply_code,
                                           method_frame.method.reply_text)
        else:
            self._set_state(self.CLOSED)
            try:
                self.callbacks.process(self.channel_number, '_on_channel_close',
                                       self, self,
                                       method_frame.method.reply_code,
                                       method_frame.method.reply_text)
            finally:
                self._cleanup()

    def _on_close_meta(self, reply_code, reply_text):
        """Handle meta-close request from Connection's cleanup logic after
        sudden connection loss. We use this opportunity to transition to
        CLOSED state, clean up the channel, and dispatch the on-channel-closed
        callbacks.
        :param int reply_code: The reply code to pass to on-close callback
        :param str reply_text: The reply text to pass to on-close callback

        """
        LOGGER.debug('Handling meta-close on %s', self)

        if not self.is_closed:
            self._closing_code_and_text = reply_code, reply_text
            self._set_state(self.CLOSED)
            try:
                self.callbacks.process(self.channel_number, '_on_channel_close',
                                       self, self,
                                       reply_code, reply_text)
            finally:
                self._cleanup()

    def _on_closeok(self, method_frame):
        """Invoked when RabbitMQ replies to a Channel.Close method

        :param pika.frame.Method method_frame: Method frame with Channel.CloseOk
            method

        """
        LOGGER.info('Received %s on %s', method_frame.method, self)

        self._set_state(self.CLOSED)

        try:
            self.callbacks.process(self.channel_number, '_on_channel_close',
                                   self, self,
                                   self._closing_code_and_text[0],
                                   self._closing_code_and_text[1])
        finally:
            self._cleanup()

    def _on_deliver(self, method_frame, header_frame, body):
        """Cope with reentrancy. If a particular consumer is still active when
        another delivery appears for it, queue the deliveries up until it
        finally exits.

        :param pika.frame.Method method_frame: The method frame received
        :param pika.frame.Header header_frame: The header frame received
        :param body: The body received
        :type body: str or unicode

        """
        consumer_tag = method_frame.method.consumer_tag

        if consumer_tag in self._cancelled:
            if self.is_open and consumer_tag not in self._consumers_with_noack:
                self.basic_reject(method_frame.method.delivery_tag)
            return

        if consumer_tag not in self._consumers:
            LOGGER.error('Unexpected delivery: %r', method_frame)
            return

        self._consumers[consumer_tag](self,
                                      method_frame.method,
                                      header_frame.properties,
                                      body)

    def _on_eventok(self, method_frame):
        """Generic events that returned ok that may have internal callbacks.
        We keep a list of what we've yet to implement so that we don't silently
        drain events that we don't support.
:param pika.frame.Method method_frame: The method frame received """ LOGGER.debug('Discarding frame %r', method_frame) def _on_flow(self, _method_frame_unused): """Called if the server sends a Channel.Flow frame. :param pika.frame.Method method_frame_unused: The Channel.Flow frame """ if self._has_on_flow_callback is False: LOGGER.warning('Channel.Flow received from server') def _on_flowok(self, method_frame): """Called in response to us asking the server to toggle on Channel.Flow :param pika.frame.Method method_frame: The method frame received """ self.flow_active = method_frame.method.active if self._on_flowok_callback: self._on_flowok_callback(method_frame.method.active) self._on_flowok_callback = None else: LOGGER.warning('Channel.FlowOk received with no active callbacks') def _on_getempty(self, method_frame): """When we receive an empty reply do nothing but log it :param pika.frame.Method method_frame: The method frame received """ LOGGER.debug('Received Basic.GetEmpty: %r', method_frame) if self._on_getok_callback is not None: self._on_getok_callback = None def _on_getok(self, method_frame, header_frame, body): """Called in reply to a Basic.Get when there is a message. :param pika.frame.Method method_frame: The method frame received :param pika.frame.Header header_frame: The header frame received :param body: The body received :type body: str or unicode """ if self._on_getok_callback is not None: callback = self._on_getok_callback self._on_getok_callback = None callback(self, method_frame.method, header_frame.properties, body) else: LOGGER.error('Basic.GetOk received with no active callback') def _on_openok(self, method_frame): """Called by our callback handler when we receive a Channel.OpenOk and subsequently calls our _on_openok_callback which was passed into the Channel constructor. The reason we do this is because we want to make sure that the on_open_callback parameter passed into the Channel constructor is not the first callback we make. 
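`_on_getok` above uses a one-shot callback pattern: the stored callback is cleared *before* being invoked, so a reentrant `Basic.Get` issued from inside the callback can safely install a new one. A minimal sketch of that pattern with illustrative names:

```python
class OneShotGetter(object):
    """Sketch of the clear-before-invoke callback pattern in _on_getok."""

    def __init__(self):
        self._on_getok_callback = None

    def request(self, callback):
        # Arm the callback; it fires at most once, on the next response
        self._on_getok_callback = callback

    def on_response(self, message):
        if self._on_getok_callback is not None:
            # Clear first, so the callback may call request() reentrantly
            callback = self._on_getok_callback
            self._on_getok_callback = None
            callback(message)
        # else: response with no active callback; real code logs an error
```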
        Suppress the state transition and callback if channel is already in
        CLOSING state.

        :param pika.frame.Method method_frame: Channel.OpenOk frame

        """
        # Suppress OpenOk if the user or Connection.Close started closing it
        # before open completed.
        if self.is_closing:
            LOGGER.debug('Suppressing while in closing state: %s', method_frame)
        else:
            self._set_state(self.OPEN)

            if self._on_openok_callback is not None:
                self._on_openok_callback(self)

    def _on_return(self, method_frame, header_frame, body):
        """Called if the server sends a Basic.Return frame.

        :param pika.frame.Method method_frame: The Basic.Return frame
        :param pika.frame.Header header_frame: The content header frame
        :param body: The message body
        :type body: str or unicode

        """
        if not self.callbacks.process(self.channel_number, '_on_return', self,
                                      self,
                                      method_frame.method,
                                      header_frame.properties,
                                      body):
            LOGGER.warning('Basic.Return received from server (%r, %r)',
                           method_frame.method, header_frame.properties)

    def _on_selectok(self, method_frame):
        """Called when the broker sends a Confirm.SelectOk frame

        :param pika.frame.Method method_frame: The method frame received

        """
        LOGGER.debug("Confirm.SelectOk Received: %r", method_frame)

    def _on_synchronous_complete(self, _method_frame_unused):
        """This is called when a synchronous command is completed. It will undo
        the blocking state and send all the frames that stacked up while we
        were in the blocking state.

        :param pika.frame.Method method_frame_unused: The method frame received

        """
        LOGGER.debug('%i blocked frames', len(self._blocked))
        self._blocking = None
        while self._blocked and self._blocking is None:
            self._rpc(*self._blocked.popleft())

    def _rpc(self, method, callback=None, acceptable_replies=None):
        """Make a synchronous channel RPC call for a synchronous method frame.
        If the channel is already in the blocking state, then enqueue the
        request, but don't send it at this time; it will be eventually sent by
        `_on_synchronous_complete` after the prior blocking request receives a
        response. If the channel is not in the blocking state and
        `acceptable_replies` is not empty, transition the channel to the
        blocking state and register for `_on_synchronous_complete` before
        sending the request.

        NOTE: A callback must be accompanied by non-empty acceptable_replies.

        :param pika.amqp_object.Method method: The AMQP method to invoke
        :param callable callback: The callback for the RPC response
        :param acceptable_replies: A (possibly empty) sequence of replies this
            RPC call expects or None
        :type acceptable_replies: list or None

        """
        assert method.synchronous, (
            'Only synchronous-capable methods may be used with _rpc: %r' %
            (method,))

        # Validate we got None or a list of acceptable_replies
        if not isinstance(acceptable_replies, (type(None), list)):
            raise TypeError('acceptable_replies should be list or None')

        if callback is not None:
            # Validate the callback is callable
            if not is_callable(callback):
                raise TypeError('callback should be None or a callable')

            # Make sure that callback is accompanied by acceptable replies
            if not acceptable_replies:
                raise ValueError(
                    'Unexpected callback for asynchronous (nowait) operation.')

        # Make sure the channel is not closed yet
        if self.is_closed:
            raise exceptions.ChannelClosed

        # If the channel is blocking, add subsequent commands to our stack
        if self._blocking:
            LOGGER.debug('Already in blocking state, so enqueueing method %s; '
                         'acceptable_replies=%r', method, acceptable_replies)
            return self._blocked.append([method, callback, acceptable_replies])

        # If acceptable replies are set, add callbacks
        if acceptable_replies:
            # Block until a response frame is received for synchronous frames
            self._blocking = method.NAME
            LOGGER.debug(
                'Entering blocking state on frame %s; acceptable_replies=%r',
                method, acceptable_replies)

            for reply in acceptable_replies:
                if isinstance(reply, tuple):
                    reply, arguments = reply
                else:
                    arguments = None
                LOGGER.debug('Adding on_synchronous_complete callback')
                self.callbacks.add(self.channel_number, reply,
                                   self._on_synchronous_complete,
                                   arguments=arguments)
                if callback is not None:
                    LOGGER.debug('Adding passed-in callback')
                    self.callbacks.add(self.channel_number, reply, callback,
                                       arguments=arguments)

        self._send_method(method)

    def _send_method(self, method, content=None):
        """Shortcut wrapper to send a method through our connection, passing in
        the channel number

        :param pika.amqp_object.Method method: The method to send
        :param tuple content: If set, is a content frame, is tuple of
            properties and body.

        """
        # pylint: disable=W0212
        self.connection._send_method(self.channel_number, method, content)

    def _set_cookie(self, cookie):
        """Used by wrapper layer (e.g., `BlockingConnection`) to link the
        channel implementation back to the proxy. See `_get_cookie`.

        :param cookie: an opaque value; typically a proxy channel
            implementation instance (e.g., `BlockingChannel` instance)

        """
        self._cookie = cookie

    def _set_state(self, connection_state):
        """Set the channel connection state to the specified state value.

        :param int connection_state: The connection_state value

        """
        self._state = connection_state

    def _on_unexpected_frame(self, frame_value):
        """Invoked when a frame is received that is not setup to be processed.
        :param pika.frame.Frame frame_value: The frame received

        """
        LOGGER.error('Unexpected frame: %r', frame_value)

    def _validate_channel_and_callback(self, callback):
        """Verify that channel is open and callback is callable if not None

        :raises ChannelClosed: if channel is closed
        :raises ValueError: if callback is not None and is not callable

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()
        if callback is not None and not is_callable(callback):
            raise ValueError('callback must be a function or method')


class ContentFrameAssembler(object):
    """Handle content related frames, building a message and return the message
    back in three parts upon receipt.

    """

    def __init__(self):
        """Create a new instance of the content frame assembler.

        """
        self._method_frame = None
        self._header_frame = None
        self._seen_so_far = 0
        self._body_fragments = list()

    def process(self, frame_value):
        """Invoked by the Channel object when passed frames that are not
        setup in the rpc process and that don't have explicit reply types
        defined. This includes Basic.Publish, Basic.GetOk and Basic.Return

        :param Method|Header|Body frame_value: The frame to process

        """
        if (isinstance(frame_value, frame.Method) and
                spec.has_content(frame_value.method.INDEX)):
            self._method_frame = frame_value
        elif isinstance(frame_value, frame.Header):
            self._header_frame = frame_value
            if frame_value.body_size == 0:
                return self._finish()
        elif isinstance(frame_value, frame.Body):
            return self._handle_body_frame(frame_value)
        else:
            raise exceptions.UnexpectedFrameError(frame_value)

    def _finish(self):
        """Invoked when all of the message has been received

        :rtype: tuple(pika.frame.Method, pika.frame.Header, str)

        """
        content = (self._method_frame, self._header_frame,
                   b''.join(self._body_fragments))
        self._reset()
        return content

    def _handle_body_frame(self, body_frame):
        """Receive body frames and append them to the stack. When the body size
        matches, call the finish method.
:param Body body_frame: The body frame :raises: pika.exceptions.BodyTooLongError :rtype: tuple(pika.frame.Method, pika.frame.Header, str)|None """ self._seen_so_far += len(body_frame.fragment) self._body_fragments.append(body_frame.fragment) if self._seen_so_far == self._header_frame.body_size: return self._finish() elif self._seen_so_far > self._header_frame.body_size: raise exceptions.BodyTooLongError(self._seen_so_far, self._header_frame.body_size) return None def _reset(self): """Reset the values for processing frames""" self._method_frame = None self._header_frame = None self._seen_so_far = 0 self._body_fragments = list() pika-0.11.0/pika/compat.py000066400000000000000000000071601315131611700153010ustar00rootroot00000000000000import os import sys as _sys PY2 = _sys.version_info < (3,) PY3 = not PY2 if not PY2: # these were moved around for Python 3 from urllib.parse import (quote as url_quote, unquote as url_unquote, urlencode) # Python 3 does not have basestring anymore; we include # *only* the str here as this is used for textual data. basestring = (str,) # for assertions that the data is either encoded or non-encoded text str_or_bytes = (str, bytes) # xrange is gone, replace it with range xrange = range # the unicode type is str unicode_type = str def dictkeys(dct): """ Returns a list of keys of dictionary dict.keys returns a view that works like .keys in Python 2 *except* any modifications in the dictionary will be visible (and will cause errors if the view is being iterated over while it is modified). """ return list(dct.keys()) def dictvalues(dct): """ Returns a list of values of a dictionary dict.values returns a view that works like .values in Python 2 *except* any modifications in the dictionary will be visible (and will cause errors if the view is being iterated over while it is modified). 
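The snapshot helpers above exist because a Python 3 dict view raises `RuntimeError` if the dict changes size while the view is being iterated, whereas a snapshot list is safe. A quick demonstration of the difference:

```python
def dictkeys(dct):
    """Snapshot of a dict's keys, safe to iterate while mutating the dict."""
    return list(dct.keys())

# Iterating a snapshot while deleting entries works fine
d = {'a': 1, 'b': 2, 'c': 3}
for key in dictkeys(d):
    del d[key]
assert d == {}

# Iterating the live view while deleting raises RuntimeError on Python 3
d = {'a': 1, 'b': 2}
mutated_view_failed = False
try:
    for key in d.keys():
        del d[key]
except RuntimeError:
    mutated_view_failed = True
assert mutated_view_failed
```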
""" return list(dct.values()) def dict_iteritems(dct): """ Returns an iterator of items (key/value pairs) of a dictionary dict.items returns a view that works like .items in Python 2 *except* any modifications in the dictionary will be visible (and will cause errors if the view is being iterated over while it is modified). """ return dct.items() def dict_itervalues(dct): """ :param dict dct: :returns: an iterator of the values of a dictionary """ return dct.values() def byte(*args): """ This is the same as Python 2 `chr(n)` for bytes in Python 3 Returns a single byte `bytes` for the given int argument (we optimize it a bit here by passing the positional argument tuple directly to the bytes constructor. """ return bytes(args) class long(int): """ A marker class that signifies that the integer value should be serialized as `l` instead of `I` """ def __repr__(self): return str(self) + 'L' def canonical_str(value): """ Return the canonical str value for the string. In both Python 3 and Python 2 this is str. """ return str(value) def is_integer(value): return isinstance(value, int) else: from urllib import quote as url_quote, unquote as url_unquote, urlencode basestring = basestring str_or_bytes = basestring xrange = xrange unicode_type = unicode dictkeys = dict.keys dictvalues = dict.values dict_iteritems = dict.iteritems dict_itervalues = dict.itervalues byte = chr long = long def canonical_str(value): """ Returns the canonical string value of the given string. In Python 2 this is the value unchanged if it is an str, otherwise it is the unicode value encoded as UTF-8. 
""" try: return str(value) except UnicodeEncodeError: return str(value.encode('utf-8')) def is_integer(value): return isinstance(value, (int, long)) def as_bytes(value): if not isinstance(value, bytes): return value.encode('UTF-8') return value HAVE_SIGNAL = os.name == 'posix' EINTR_IS_EXPOSED = _sys.version_info[:2] <= (3,4) pika-0.11.0/pika/connection.py000066400000000000000000002436051315131611700161630ustar00rootroot00000000000000"""Core connection objects""" import ast import sys import collections import copy import logging import math import numbers import platform import warnings if sys.version_info > (3,): import urllib.parse as urlparse # pylint: disable=E0611,F0401 else: import urlparse from pika import __version__ from pika import callback import pika.channel from pika import credentials as pika_credentials from pika import exceptions from pika import frame from pika import heartbeat from pika import utils from pika import spec from pika.compat import (xrange, basestring, # pylint: disable=W0622 url_unquote, dictkeys, dict_itervalues, dict_iteritems) BACKPRESSURE_WARNING = ("Pika: Write buffer exceeded warning threshold at " "%i bytes and an estimated %i frames behind") PRODUCT = "Pika Python Client Library" LOGGER = logging.getLogger(__name__) class InternalCloseReasons(object): """Internal reason codes passed to the user's on_close_callback when the connection is terminated abruptly, without reply code/text from the broker. AMQP 0.9.1 specification cites IETF RFC 821 for reply codes. To avoid conflict, the `InternalCloseReasons` namespace uses negative integers. These are invalid for sending to the broker. 
""" SOCKET_ERROR = -1 BLOCKED_CONNECTION_TIMEOUT = -2 class Parameters(object): # pylint: disable=R0902 """Base connection parameters class definition :param bool backpressure_detection: `DEFAULT_BACKPRESSURE_DETECTION` :param float|None blocked_connection_timeout: `DEFAULT_BLOCKED_CONNECTION_TIMEOUT` :param int channel_max: `DEFAULT_CHANNEL_MAX` :param int connection_attempts: `DEFAULT_CONNECTION_ATTEMPTS` :param credentials: `DEFAULT_CREDENTIALS` :param int frame_max: `DEFAULT_FRAME_MAX` :param int heartbeat: `DEFAULT_HEARTBEAT_TIMEOUT` :param str host: `DEFAULT_HOST` :param str locale: `DEFAULT_LOCALE` :param int port: `DEFAULT_PORT` :param float retry_delay: `DEFAULT_RETRY_DELAY` :param float socket_timeout: `DEFAULT_SOCKET_TIMEOUT` :param bool ssl: `DEFAULT_SSL` :param dict ssl_options: `DEFAULT_SSL_OPTIONS` :param str virtual_host: `DEFAULT_VIRTUAL_HOST` """ # Declare slots to protect against accidental assignment of an invalid # attribute __slots__ = ( '_backpressure_detection', '_blocked_connection_timeout', '_channel_max', '_client_properties', '_connection_attempts', '_credentials', '_frame_max', '_heartbeat', '_host', '_locale', '_port', '_retry_delay', '_socket_timeout', '_ssl', '_ssl_options', '_virtual_host' ) DEFAULT_USERNAME = 'guest' DEFAULT_PASSWORD = 'guest' DEFAULT_BACKPRESSURE_DETECTION = False DEFAULT_BLOCKED_CONNECTION_TIMEOUT = None DEFAULT_CHANNEL_MAX = pika.channel.MAX_CHANNELS DEFAULT_CLIENT_PROPERTIES = None DEFAULT_CREDENTIALS = pika_credentials.PlainCredentials(DEFAULT_USERNAME, DEFAULT_PASSWORD) DEFAULT_CONNECTION_ATTEMPTS = 1 DEFAULT_FRAME_MAX = spec.FRAME_MAX_SIZE DEFAULT_HEARTBEAT_TIMEOUT = None # None accepts server's proposal DEFAULT_HOST = 'localhost' DEFAULT_LOCALE = 'en_US' DEFAULT_PORT = 5672 DEFAULT_RETRY_DELAY = 2.0 DEFAULT_SOCKET_TIMEOUT = 0.25 DEFAULT_SSL = False DEFAULT_SSL_OPTIONS = None DEFAULT_SSL_PORT = 5671 DEFAULT_VIRTUAL_HOST = '/' DEFAULT_HEARTBEAT_INTERVAL = DEFAULT_HEARTBEAT_TIMEOUT # DEPRECATED def 
__init__(self): self._backpressure_detection = None self.backpressure_detection = self.DEFAULT_BACKPRESSURE_DETECTION # If not None, blocked_connection_timeout is the timeout, in seconds, # for the connection to remain blocked; if the timeout expires, the # connection will be torn down, triggering the connection's # on_close_callback self._blocked_connection_timeout = None self.blocked_connection_timeout = ( self.DEFAULT_BLOCKED_CONNECTION_TIMEOUT) self._channel_max = None self.channel_max = self.DEFAULT_CHANNEL_MAX self._client_properties = None self.client_properties = self.DEFAULT_CLIENT_PROPERTIES self._connection_attempts = None self.connection_attempts = self.DEFAULT_CONNECTION_ATTEMPTS self._credentials = None self.credentials = self.DEFAULT_CREDENTIALS self._frame_max = None self.frame_max = self.DEFAULT_FRAME_MAX self._heartbeat = None self.heartbeat = self.DEFAULT_HEARTBEAT_TIMEOUT self._host = None self.host = self.DEFAULT_HOST self._locale = None self.locale = self.DEFAULT_LOCALE self._port = None self.port = self.DEFAULT_PORT self._retry_delay = None self.retry_delay = self.DEFAULT_RETRY_DELAY self._socket_timeout = None self.socket_timeout = self.DEFAULT_SOCKET_TIMEOUT self._ssl = None self.ssl = self.DEFAULT_SSL self._ssl_options = None self.ssl_options = self.DEFAULT_SSL_OPTIONS self._virtual_host = None self.virtual_host = self.DEFAULT_VIRTUAL_HOST def __repr__(self): """Represent the info about the instance. :rtype: str """ return ('<%s host=%s port=%s virtual_host=%s ssl=%s>' % (self.__class__.__name__, self.host, self.port, self.virtual_host, self.ssl)) @property def backpressure_detection(self): """ :returns: boolean indicating whether backpressure detection is enabled. Defaults to `DEFAULT_BACKPRESSURE_DETECTION`. 
        """
        return self._backpressure_detection

    @backpressure_detection.setter
    def backpressure_detection(self, value):
        """
        :param bool value: boolean indicating whether to enable backpressure
            detection

        """
        if not isinstance(value, bool):
            raise TypeError('backpressure_detection must be a bool, '
                            'but got %r' % (value,))
        self._backpressure_detection = value

    @property
    def blocked_connection_timeout(self):
        """
        :returns: None or float blocked connection timeout. Defaults to
            `DEFAULT_BLOCKED_CONNECTION_TIMEOUT`.

        """
        return self._blocked_connection_timeout

    @blocked_connection_timeout.setter
    def blocked_connection_timeout(self, value):
        """
        :param value: If not None, blocked_connection_timeout is the timeout,
            in seconds, for the connection to remain blocked; if the timeout
            expires, the connection will be torn down, triggering the
            connection's on_close_callback

        """
        if value is not None:
            if not isinstance(value, numbers.Real):
                raise TypeError('blocked_connection_timeout must be a Real '
                                'number, but got %r' % (value,))
            if value < 0:
                raise ValueError('blocked_connection_timeout must be >= 0, '
                                 'but got %r' % (value,))
        self._blocked_connection_timeout = value

    @property
    def channel_max(self):
        """
        :returns: max preferred number of channels. Defaults to
            `DEFAULT_CHANNEL_MAX`.
        :rtype: int

        """
        return self._channel_max

    @channel_max.setter
    def channel_max(self, value):
        """
        :param int value: max preferred number of channels, between 1 and
            `channel.MAX_CHANNELS`, inclusive

        """
        if not isinstance(value, numbers.Integral):
            raise TypeError('channel_max must be an int, but got %r' %
                            (value,))
        if value < 1 or value > pika.channel.MAX_CHANNELS:
            raise ValueError('channel_max must be <= %i and > 0, but got %r' %
                             (pika.channel.MAX_CHANNELS, value))
        self._channel_max = value

    @property
    def client_properties(self):
        """
        :returns: None or dict of client properties used to override the
            fields in the default client properties reported to RabbitMQ via
            `Connection.StartOk` method. Defaults to
            `DEFAULT_CLIENT_PROPERTIES`.

        """
        return self._client_properties

    @client_properties.setter
    def client_properties(self, value):
        """
        :param value: None or dict of client properties used to override the
            fields in the default client properties reported to RabbitMQ via
            `Connection.StartOk` method.

        """
        if not isinstance(value, (dict, type(None),)):
            raise TypeError('client_properties must be dict or None, '
                            'but got %r' % (value,))
        # Copy the mutable object to avoid accidental side-effects
        self._client_properties = copy.deepcopy(value)

    @property
    def connection_attempts(self):
        """
        :returns: number of socket connection attempts. Defaults to
            `DEFAULT_CONNECTION_ATTEMPTS`.

        """
        return self._connection_attempts

    @connection_attempts.setter
    def connection_attempts(self, value):
        """
        :param int value: number of socket connection attempts of at least 1

        """
        if not isinstance(value, numbers.Integral):
            raise TypeError('connection_attempts must be an int')
        if value < 1:
            raise ValueError('connection_attempts must be > 0, but got %r' %
                             (value,))
        self._connection_attempts = value

    @property
    def credentials(self):
        """
        :rtype: one of the classes from `pika.credentials.VALID_TYPES`.
            Defaults to `DEFAULT_CREDENTIALS`.

        """
        return self._credentials

    @credentials.setter
    def credentials(self, value):
        """
        :param value: authentication credential object of one of the classes
            from `pika.credentials.VALID_TYPES`

        """
        if not isinstance(value, tuple(pika_credentials.VALID_TYPES)):
            raise TypeError('Credentials must be an object of type: %r, but '
                            'got %r' % (pika_credentials.VALID_TYPES, value))
        # Copy the mutable object to avoid accidental side-effects
        self._credentials = copy.deepcopy(value)

    @property
    def frame_max(self):
        """
        :returns: desired maximum AMQP frame size to use. Defaults to
            `DEFAULT_FRAME_MAX`.
        """
        return self._frame_max

    @frame_max.setter
    def frame_max(self, value):
        """
        :param int value: desired maximum AMQP frame size to use between
            `spec.FRAME_MIN_SIZE` and `spec.FRAME_MAX_SIZE`, inclusive

        """
        if not isinstance(value, numbers.Integral):
            raise TypeError('frame_max must be an int, but got %r' % (value,))
        if value < spec.FRAME_MIN_SIZE:
            raise ValueError('Min AMQP 0.9.1 Frame Size is %i, but got %r' %
                             (spec.FRAME_MIN_SIZE, value,))
        elif value > spec.FRAME_MAX_SIZE:
            raise ValueError('Max AMQP 0.9.1 Frame Size is %i, but got %r' %
                             (spec.FRAME_MAX_SIZE, value,))
        self._frame_max = value

    @property
    def heartbeat(self):
        """
        :returns: desired connection heartbeat timeout for negotiation or None
            to accept broker's value. 0 turns heartbeat off. Defaults to
            `DEFAULT_HEARTBEAT_TIMEOUT`.
        :rtype: integer, float, or None

        """
        return self._heartbeat

    @heartbeat.setter
    def heartbeat(self, value):
        """
        :param value: desired connection heartbeat timeout for negotiation or
            None to accept broker's value. 0 turns heartbeat off.

        """
        if value is not None:
            if not isinstance(value, numbers.Integral):
                raise TypeError('heartbeat must be an int, but got %r' %
                                (value,))
            if value < 0:
                raise ValueError('heartbeat must be >= 0, but got %r' %
                                 (value,))
        self._heartbeat = value

    @property
    def host(self):
        """
        :returns: hostname or ip address of broker. Defaults to
            `DEFAULT_HOST`.
        :rtype: str

        """
        return self._host

    @host.setter
    def host(self, value):
        """
        :param str value: hostname or ip address of broker

        """
        if not isinstance(value, basestring):
            raise TypeError('host must be a str or unicode str, but got %r' %
                            (value,))
        self._host = value

    @property
    def locale(self):
        """
        :returns: locale value to pass to broker; e.g., 'en_US'. Defaults to
            `DEFAULT_LOCALE`.
        :rtype: str

        """
        return self._locale

    @locale.setter
    def locale(self, value):
        """
        :param str value: locale value to pass to broker; e.g., "en_US"

        """
        if not isinstance(value, basestring):
            raise TypeError('locale must be a str, but got %r' % (value,))
        self._locale = value

    @property
    def port(self):
        """
        :returns: port number of broker's listening socket. Defaults to
            `DEFAULT_PORT`.
        :rtype: int

        """
        return self._port

    @port.setter
    def port(self, value):
        """
        :param int value: port number of broker's listening socket

        """
        if not isinstance(value, numbers.Integral):
            raise TypeError('port must be an int, but got %r' % (value,))
        self._port = value

    @property
    def retry_delay(self):
        """
        :returns: interval between socket connection attempts; see also
            `connection_attempts`. Defaults to `DEFAULT_RETRY_DELAY`.
        :rtype: float

        """
        return self._retry_delay

    @retry_delay.setter
    def retry_delay(self, value):
        """
        :param float value: interval between socket connection attempts; see
            also `connection_attempts`.

        """
        if not isinstance(value, numbers.Real):
            raise TypeError('retry_delay must be a float or int, but got %r' %
                            (value,))
        self._retry_delay = value

    @property
    def socket_timeout(self):
        """
        :returns: socket timeout value. Defaults to `DEFAULT_SOCKET_TIMEOUT`.
        :rtype: float

        """
        return self._socket_timeout

    @socket_timeout.setter
    def socket_timeout(self, value):
        """
        :param float value: socket timeout value; NOTE: this is mostly unused
            now, owing to switchover to non-blocking socket setting after
            initial socket connection establishment.

        """
        if value is not None:
            if not isinstance(value, numbers.Real):
                raise TypeError('socket_timeout must be a float or int, '
                                'but got %r' % (value,))
            if not value > 0:
                raise ValueError('socket_timeout must be > 0, but got %r' %
                                 (value,))
        self._socket_timeout = value

    @property
    def ssl(self):
        """
        :returns: boolean indicating whether to connect via SSL. Defaults to
            `DEFAULT_SSL`.
""" return self._ssl @ssl.setter def ssl(self, value): """ :param bool value: boolean indicating whether to connect via SSL """ if not isinstance(value, bool): raise TypeError('ssl must be a bool, but got %r' % (value,)) self._ssl = value @property def ssl_options(self): """ :returns: None or a dict of options to pass to `ssl.wrap_socket`. Defaults to `DEFAULT_SSL_OPTIONS`. """ return self._ssl_options @ssl_options.setter def ssl_options(self, value): """ :param value: None or a dict of options to pass to `ssl.wrap_socket`. """ if not isinstance(value, (dict, type(None))): raise TypeError('ssl_options must be a dict or None, but got %r' % (value,)) # Copy the mutable object to avoid accidental side-effects self._ssl_options = copy.deepcopy(value) @property def virtual_host(self): """ :returns: rabbitmq virtual host name. Defaults to `DEFAULT_VIRTUAL_HOST`. """ return self._virtual_host @virtual_host.setter def virtual_host(self, value): """ :param str value: rabbitmq virtual host name """ if not isinstance(value, basestring): raise TypeError('virtual_host must be a str, but got %r' % (value,)) self._virtual_host = value class ConnectionParameters(Parameters): """Connection parameters object that is passed into the connection adapter upon construction. """ # Protect against accidental assignment of an invalid attribute __slots__ = () class _DEFAULT(object): """Designates default parameter value; internal use""" pass def __init__(self, # pylint: disable=R0913,R0914,R0912 host=_DEFAULT, port=_DEFAULT, virtual_host=_DEFAULT, credentials=_DEFAULT, channel_max=_DEFAULT, frame_max=_DEFAULT, heartbeat=_DEFAULT, ssl=_DEFAULT, ssl_options=_DEFAULT, connection_attempts=_DEFAULT, retry_delay=_DEFAULT, socket_timeout=_DEFAULT, locale=_DEFAULT, backpressure_detection=_DEFAULT, blocked_connection_timeout=_DEFAULT, client_properties=_DEFAULT, **kwargs): """Create a new ConnectionParameters instance. See `Parameters` for default values. 
:param str host: Hostname or IP Address to connect to :param int port: TCP port to connect to :param str virtual_host: RabbitMQ virtual host to use :param pika.credentials.Credentials credentials: auth credentials :param int channel_max: Maximum number of channels to allow :param int frame_max: The maximum byte size for an AMQP frame :param int heartbeat: Heartbeat timeout. Max between this value and server's proposal will be used as the heartbeat timeout. Use 0 to deactivate heartbeats and None to accept server's proposal. :param bool ssl: Enable SSL :param dict ssl_options: None or a dict of arguments to be passed to ssl.wrap_socket :param int connection_attempts: Maximum number of retry attempts :param int|float retry_delay: Time to wait in seconds, before the next :param int|float socket_timeout: Use for high latency networks :param str locale: Set the locale value :param bool backpressure_detection: DEPRECATED in favor of `Connection.Blocked` and `Connection.Unblocked`. See `Connection.add_on_connection_blocked_callback`. :param blocked_connection_timeout: If not None, the value is a non-negative timeout, in seconds, for the connection to remain blocked (triggered by Connection.Blocked from broker); if the timeout expires before connection becomes unblocked, the connection will be torn down, triggering the adapter-specific mechanism for informing client app about the closed connection ( e.g., on_close_callback or ConnectionClosed exception) with `reason_code` of `InternalCloseReasons.BLOCKED_CONNECTION_TIMEOUT`. :type blocked_connection_timeout: None, int, float :param client_properties: None or dict of client properties used to override the fields in the default client properties reported to RabbitMQ via `Connection.StartOk` method. 
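The `_DEFAULT` marker class lets the constructor distinguish "argument omitted" from "explicitly passed `None`", which matters because `None` is itself meaningful here (e.g., `heartbeat=None` means "accept the broker's proposal"). The sentinel pattern in isolation, with illustrative names:

```python
class _DEFAULT(object):
    """Sentinel: distinguishes an omitted argument from an explicit None."""


class Params(object):
    heartbeat = 10  # stand-in for the class-level default

    def __init__(self, heartbeat=_DEFAULT):
        if heartbeat is not _DEFAULT:
            # The caller passed something explicitly -- even None counts
            self.heartbeat = heartbeat
        # else: keep the class-level default untouched


assert Params().heartbeat == 10              # omitted -> default
assert Params(heartbeat=None).heartbeat is None  # explicit None preserved
assert Params(heartbeat=0).heartbeat == 0    # falsy values also preserved
```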
:param heartbeat_interval: DEPRECATED; use `heartbeat` instead, and don't pass both """ super(ConnectionParameters, self).__init__() if backpressure_detection is not self._DEFAULT: self.backpressure_detection = backpressure_detection if blocked_connection_timeout is not self._DEFAULT: self.blocked_connection_timeout = blocked_connection_timeout if channel_max is not self._DEFAULT: self.channel_max = channel_max if client_properties is not self._DEFAULT: self.client_properties = client_properties if connection_attempts is not self._DEFAULT: self.connection_attempts = connection_attempts if credentials is not self._DEFAULT: self.credentials = credentials if frame_max is not self._DEFAULT: self.frame_max = frame_max if heartbeat is not self._DEFAULT: self.heartbeat = heartbeat try: heartbeat_interval = kwargs.pop('heartbeat_interval') except KeyError: # Good, this one is deprecated pass else: warnings.warn('heartbeat_interval is deprecated, use heartbeat', DeprecationWarning, stacklevel=2) if heartbeat is not self._DEFAULT: raise TypeError('heartbeat and deprecated heartbeat_interval ' 'are mutually-exclusive') self.heartbeat = heartbeat_interval if host is not self._DEFAULT: self.host = host if locale is not self._DEFAULT: self.locale = locale if retry_delay is not self._DEFAULT: self.retry_delay = retry_delay if socket_timeout is not self._DEFAULT: self.socket_timeout = socket_timeout if ssl is not self._DEFAULT: self.ssl = ssl if ssl_options is not self._DEFAULT: self.ssl_options = ssl_options # Set port after SSL status is known if port is not self._DEFAULT: self.port = port elif ssl is not self._DEFAULT: self.port = self.DEFAULT_SSL_PORT if self.ssl else self.DEFAULT_PORT if virtual_host is not self._DEFAULT: self.virtual_host = virtual_host if kwargs: raise TypeError('Unexpected kwargs: %r' % (kwargs,)) class URLParameters(Parameters): """Connect to RabbitMQ via an AMQP URL in the format:: amqp://username:password@host:port/[?query-string] Ensure that the 
virtual host is URI encoded when specified. For example if you are using the default "/" virtual host, the value should be `%2f`. See `Parameters` for default values. Valid query string values are: - backpressure_detection: DEPRECATED in favor of `Connection.Blocked` and `Connection.Unblocked`. See `Connection.add_on_connection_blocked_callback`. - channel_max: Override the default maximum channel count value - client_properties: dict of client properties used to override the fields in the default client properties reported to RabbitMQ via `Connection.StartOk` method - connection_attempts: Specify how many times pika should try and reconnect before it gives up - frame_max: Override the default maximum frame size for communication - heartbeat: Specify the number of seconds between heartbeat frames to ensure that the link between RabbitMQ and your application is up - locale: Override the default `en_US` locale value - ssl: Toggle SSL, possible values are `t`, `f` - ssl_options: Arguments passed to :meth:`ssl.wrap_socket` - retry_delay: The number of seconds to sleep before attempting to connect on connection failure. - socket_timeout: Override low level socket timeout value - blocked_connection_timeout: Set the timeout, in seconds, that the connection may remain blocked (triggered by Connection.Blocked from broker); if the timeout expires before connection becomes unblocked, the connection will be torn down, triggering the connection's on_close_callback :param str url: The AMQP URL to connect to """ # Protect against accidental assignment of an invalid attribute __slots__ = ('_all_url_query_values',) # The name of the private function for parsing and setting a given URL query # arg is constructed by catenating the query arg's name to this prefix _SETTER_PREFIX = '_set_url_' def __init__(self, url): """Create a new URLParameters instance. 
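The scheme-rewrite trick used below (amqp→http so `urlparse` splits the netloc correctly on older Pythons) plus percent-decoding of the virtual host can be shown standalone. This sketch uses the Python 3 stdlib only and a made-up example URL:

```python
from urllib.parse import urlparse, unquote, parse_qs

url = 'amqp://guest:secret@broker.example:5672/%2f?heartbeat=30'
if url[0:4].lower() == 'amqp':
    url = 'http' + url[4:]   # 'amqps://...' would likewise become 'https://'

parts = urlparse(url)
host = parts.hostname                             # 'broker.example'
port = parts.port                                 # 5672
virtual_host = unquote(parts.path.split('/')[1])  # '%2f' decodes to '/'
query = parse_qs(parts.query)                     # {'heartbeat': ['30']}
```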
:param str url: The URL value """ super(URLParameters, self).__init__() self._all_url_query_values = None # Handle the Protocol scheme # # Fix up scheme amqp(s) to http(s) so urlparse won't barf on python # prior to 2.7. On Python 2.6.9, # `urlparse('amqp://127.0.0.1/%2f?socket_timeout=1')` produces an # incorrect path='/%2f?socket_timeout=1' if url[0:4].lower() == 'amqp': url = 'http' + url[4:] # TODO Is support for the alternative http(s) schemes intentional? parts = urlparse.urlparse(url) if parts.scheme == 'https': self.ssl = True elif parts.scheme == 'http': self.ssl = False elif parts.scheme: raise ValueError('Unexpected URL scheme %r; supported scheme ' 'values: amqp, amqps' % (parts.scheme,)) if parts.hostname is not None: self.host = parts.hostname # Take care of port after SSL status is known if parts.port is not None: self.port = parts.port else: self.port = self.DEFAULT_SSL_PORT if self.ssl else self.DEFAULT_PORT if parts.username is not None: self.credentials = pika_credentials.PlainCredentials(url_unquote(parts.username), url_unquote(parts.password)) # Get the Virtual Host if len(parts.path) > 1: self.virtual_host = url_unquote(parts.path.split('/')[1]) # Handle query string values, validating and assigning them self._all_url_query_values = urlparse.parse_qs(parts.query) for name, value in dict_iteritems(self._all_url_query_values): try: set_value = getattr(self, self._SETTER_PREFIX + name) except AttributeError: raise ValueError('Unknown URL parameter: %r' % (name,)) try: (value,) = value except ValueError: raise ValueError('Expected exactly one value for URL parameter ' '%s, but got %i values: %s' % ( name, len(value), value)) set_value(value) def _set_url_backpressure_detection(self, value): """Deserialize and apply the corresponding query string arg""" try: backpressure_detection = {'t': True, 'f': False}[value] except KeyError: raise ValueError('Invalid backpressure_detection value: %r' % (value,)) self.backpressure_detection = 
backpressure_detection def _set_url_blocked_connection_timeout(self, value): """Deserialize and apply the corresponding query string arg""" try: blocked_connection_timeout = float(value) except ValueError as exc: raise ValueError('Invalid blocked_connection_timeout value %r: %r' % (value, exc,)) self.blocked_connection_timeout = blocked_connection_timeout def _set_url_channel_max(self, value): """Deserialize and apply the corresponding query string arg""" try: channel_max = int(value) except ValueError as exc: raise ValueError('Invalid channel_max value %r: %r' % (value, exc,)) self.channel_max = channel_max def _set_url_client_properties(self, value): """Deserialize and apply the corresponding query string arg""" self.client_properties = ast.literal_eval(value) def _set_url_connection_attempts(self, value): """Deserialize and apply the corresponding query string arg""" try: connection_attempts = int(value) except ValueError as exc: raise ValueError('Invalid connection_attempts value %r: %r' % (value, exc,)) self.connection_attempts = connection_attempts def _set_url_frame_max(self, value): """Deserialize and apply the corresponding query string arg""" try: frame_max = int(value) except ValueError as exc: raise ValueError('Invalid frame_max value %r: %r' % (value, exc,)) self.frame_max = frame_max def _set_url_heartbeat(self, value): """Deserialize and apply the corresponding query string arg""" if 'heartbeat_interval' in self._all_url_query_values: raise ValueError('Deprecated URL parameter heartbeat_interval must ' 'not be specified together with heartbeat') try: heartbeat_timeout = int(value) except ValueError as exc: raise ValueError('Invalid heartbeat value %r: %r' % (value, exc,)) self.heartbeat = heartbeat_timeout def _set_url_heartbeat_interval(self, value): """Deserialize and apply the corresponding query string arg""" warnings.warn('heartbeat_interval is deprecated, use heartbeat', DeprecationWarning, stacklevel=2) if 'heartbeat' in 
self._all_url_query_values: raise ValueError('Deprecated URL parameter heartbeat_interval must ' 'not be specified together with heartbeat') try: heartbeat_timeout = int(value) except ValueError as exc: raise ValueError('Invalid heartbeat_interval value %r: %r' % (value, exc,)) self.heartbeat = heartbeat_timeout def _set_url_locale(self, value): """Deserialize and apply the corresponding query string arg""" self.locale = value def _set_url_retry_delay(self, value): """Deserialize and apply the corresponding query string arg""" try: retry_delay = float(value) except ValueError as exc: raise ValueError('Invalid retry_delay value %r: %r' % (value, exc,)) self.retry_delay = retry_delay def _set_url_socket_timeout(self, value): """Deserialize and apply the corresponding query string arg""" try: socket_timeout = float(value) except ValueError as exc: raise ValueError('Invalid socket_timeout value %r: %r' % (value, exc,)) self.socket_timeout = socket_timeout def _set_url_ssl_options(self, value): """Deserialize and apply the corresponding query string arg""" self.ssl_options = ast.literal_eval(value) class Connection(object): """This is the core class that implements communication with RabbitMQ. This class should not be invoked directly but rather through the use of an adapter such as SelectConnection or BlockingConnection. 
    :param pika.connection.Parameters parameters: Connection parameters
    :param method on_open_callback: Called when the connection is opened
    :param method on_open_error_callback: Called if the connection can't be
        opened
    :param method on_close_callback: Called when the connection is closed

    """

    # Disable pylint messages concerning "method could be a function"
    # pylint: disable=R0201

    ON_CONNECTION_BACKPRESSURE = '_on_connection_backpressure'
    ON_CONNECTION_BLOCKED = '_on_connection_blocked'
    ON_CONNECTION_CLOSED = '_on_connection_closed'
    ON_CONNECTION_ERROR = '_on_connection_error'
    ON_CONNECTION_OPEN = '_on_connection_open'
    ON_CONNECTION_UNBLOCKED = '_on_connection_unblocked'

    CONNECTION_CLOSED = 0
    CONNECTION_INIT = 1
    CONNECTION_PROTOCOL = 2
    CONNECTION_START = 3
    CONNECTION_TUNE = 4
    CONNECTION_OPEN = 5
    CONNECTION_CLOSING = 6  # client-initiated close in progress

    _STATE_NAMES = {
        CONNECTION_CLOSED: 'CLOSED',
        CONNECTION_INIT: 'INIT',
        CONNECTION_PROTOCOL: 'PROTOCOL',
        CONNECTION_START: 'START',
        CONNECTION_TUNE: 'TUNE',
        CONNECTION_OPEN: 'OPEN',
        CONNECTION_CLOSING: 'CLOSING'
    }

    def __init__(self,
                 parameters=None,
                 on_open_callback=None,
                 on_open_error_callback=None,
                 on_close_callback=None):
        """Connection initialization expects an object that has implemented
        the Parameters class and a callback function to notify when we have
        successfully connected to the AMQP Broker.

        Available Parameters classes are the ConnectionParameters class and
        URLParameters class.
        :param pika.connection.Parameters parameters: Connection parameters
        :param method on_open_callback: Called when the connection is opened
        :param method on_open_error_callback: Called if the connection can't
            be established: on_open_error_callback(connection, str|exception)
        :param method on_close_callback: Called when the connection is
            closed: `on_close_callback(connection, reason_code, reason_text)`,
            where `reason_code` is either an IETF RFC 821 reply code for
            AMQP-level closures or a value from
            `pika.connection.InternalCloseReasons` for internal causes, such
            as socket errors.

        """
        self.connection_state = self.CONNECTION_CLOSED

        # Holds timer when the initial connect or reconnect is scheduled
        self._connection_attempt_timer = None

        # Used to hold timer if configured for Connection.Blocked timeout
        self._blocked_conn_timer = None

        self.heartbeat = None

        # Set our configuration options
        self.params = (copy.deepcopy(parameters) if parameters is not None
                       else ConnectionParameters())

        # Define our callback dictionary
        self.callbacks = callback.CallbackManager()

        # Attributes that will be properly initialized by
        # _init_connection_state and/or during connection handshake.
        self.server_capabilities = None
        self.server_properties = None
        self._body_max_length = None
        self.known_hosts = None
        self.closing = None
        self._frame_buffer = None
        self._channels = None
        self._backpressure_multiplier = None
        self.remaining_connection_attempts = None

        self._init_connection_state()

        # Add the on connection error callback
        self.callbacks.add(0, self.ON_CONNECTION_ERROR,
                           on_open_error_callback or
                           self._on_connection_error, False)

        # On connection-open callback
        if on_open_callback:
            self.add_on_open_callback(on_open_callback)

        # On connection-close callback
        if on_close_callback:
            self.add_on_close_callback(on_close_callback)

        self.connect()

    def add_backpressure_callback(self, callback_method):
        """Call method "callback" when pika believes backpressure is being
        applied.
:param method callback_method: The method to call """ self.callbacks.add(0, self.ON_CONNECTION_BACKPRESSURE, callback_method, False) def add_on_close_callback(self, callback_method): """Add a callback notification when the connection has closed. The callback will be passed the connection, the reply_code (int) and the reply_text (str), if sent by the remote server. :param method callback_method: Callback to call on close """ self.callbacks.add(0, self.ON_CONNECTION_CLOSED, callback_method, False) def add_on_connection_blocked_callback(self, callback_method): """Add a callback to be notified when RabbitMQ has sent a ``Connection.Blocked`` frame indicating that RabbitMQ is low on resources. Publishers can use this to voluntarily suspend publishing, instead of relying on back pressure throttling. The callback will be passed the ``Connection.Blocked`` method frame. See also `ConnectionParameters.blocked_connection_timeout`. :param method callback_method: Callback to call on `Connection.Blocked`, having the signature `callback_method(pika.frame.Method)`, where the method frame's `method` member is of type `pika.spec.Connection.Blocked` """ self.callbacks.add(0, spec.Connection.Blocked, callback_method, False) def add_on_connection_unblocked_callback(self, callback_method): """Add a callback to be notified when RabbitMQ has sent a ``Connection.Unblocked`` frame letting publishers know it's ok to start publishing again. The callback will be passed the ``Connection.Unblocked`` method frame. :param method callback_method: Callback to call on `Connection.Unblocked`, having the signature `callback_method(pika.frame.Method)`, where the method frame's `method` member is of type `pika.spec.Connection.Unblocked` """ self.callbacks.add(0, spec.Connection.Unblocked, callback_method, False) def add_on_open_callback(self, callback_method): """Add a callback notification when the connection has opened. 
:param method callback_method: Callback to call when open """ self.callbacks.add(0, self.ON_CONNECTION_OPEN, callback_method, False) def add_on_open_error_callback(self, callback_method, remove_default=True): """Add a callback notification when the connection can not be opened. The callback method should accept the connection object that could not connect, and an optional error message. :param method callback_method: Callback to call when can't connect :param bool remove_default: Remove default exception raising callback """ if remove_default: self.callbacks.remove(0, self.ON_CONNECTION_ERROR, self._on_connection_error) self.callbacks.add(0, self.ON_CONNECTION_ERROR, callback_method, False) def add_timeout(self, deadline, callback_method): """Adapters should override to call the callback after the specified number of seconds have elapsed, using a timer, or a thread, or similar. :param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method """ raise NotImplementedError def channel(self, on_open_callback, channel_number=None): """Create a new channel with the next available channel number or pass in a channel number to use. Must be non-zero if you would like to specify but it is recommended that you let Pika manage the channel numbers. :param method on_open_callback: The callback when the channel is opened :param int channel_number: The channel number to use, defaults to the next available. 
:rtype: pika.channel.Channel """ if not self.is_open: # TODO if state is OPENING, then ConnectionClosed might be wrong raise exceptions.ConnectionClosed( 'Channel allocation requires an open connection: %s' % self) if not channel_number: channel_number = self._next_channel_number() self._channels[channel_number] = self._create_channel(channel_number, on_open_callback) self._add_channel_callbacks(channel_number) self._channels[channel_number].open() return self._channels[channel_number] def close(self, reply_code=200, reply_text='Normal shutdown'): """Disconnect from RabbitMQ. If there are any open channels, it will attempt to close them prior to fully disconnecting. Channels which have active consumers will attempt to send a Basic.Cancel to RabbitMQ to cleanly stop the delivery of messages prior to closing the channel. :param int reply_code: The code number for the close :param str reply_text: The text reason for the close """ if self.is_closing or self.is_closed: LOGGER.warning('Suppressing close request on %s', self) return # Initiate graceful closing of channels that are OPEN or OPENING self._close_channels(reply_code, reply_text) # Set our connection state self._set_connection_state(self.CONNECTION_CLOSING) LOGGER.info("Closing connection (%s): %s", reply_code, reply_text) self.closing = reply_code, reply_text # If there are channels that haven't finished closing yet, then # _on_close_ready will finally be called from _on_channel_cleanup once # all channels have been closed if not self._channels: # We can initiate graceful closing of the connection right away, # since no more channels remain self._on_close_ready() else: LOGGER.info('Connection.close is waiting for ' '%d channels to close: %s', len(self._channels), self) def connect(self): """Invoke if trying to reconnect to a RabbitMQ server. Constructing the Connection object should connect on its own. 
""" assert self._connection_attempt_timer is None, ( 'connect timer was already scheduled') assert self.is_closed, ( 'connect expected CLOSED state, but got: {}'.format( self._STATE_NAMES[self.connection_state])) self._set_connection_state(self.CONNECTION_INIT) # Schedule a timer callback to start the actual connection logic from # event loop's context, thus avoiding error callbacks in the context of # the caller, which could be the constructor. self._connection_attempt_timer = self.add_timeout( 0, self._on_connect_timer) def remove_timeout(self, timeout_id): """Adapters should override: Remove a timeout :param str timeout_id: The timeout id to remove """ raise NotImplementedError def set_backpressure_multiplier(self, value=10): """Alter the backpressure multiplier value. We set this to 10 by default. This value is used to raise warnings and trigger the backpressure callback. :param int value: The multiplier value to set """ self._backpressure_multiplier = value # # Connections state properties # @property def is_closed(self): """ Returns a boolean reporting the current connection state. """ return self.connection_state == self.CONNECTION_CLOSED @property def is_closing(self): """ Returns True if connection is in the process of closing due to client-initiated `close` request, but closing is not yet complete. """ return self.connection_state == self.CONNECTION_CLOSING @property def is_open(self): """ Returns a boolean reporting the current connection state. """ return self.connection_state == self.CONNECTION_OPEN # # Properties that reflect server capabilities for the current connection # @property def basic_nack(self): """Specifies if the server supports basic.nack on the active connection. :rtype: bool """ return self.server_capabilities.get('basic.nack', False) @property def consumer_cancel_notify(self): """Specifies if the server supports consumer cancel notification on the active connection. 
:rtype: bool """ return self.server_capabilities.get('consumer_cancel_notify', False) @property def exchange_exchange_bindings(self): """Specifies if the active connection supports exchange to exchange bindings. :rtype: bool """ return self.server_capabilities.get('exchange_exchange_bindings', False) @property def publisher_confirms(self): """Specifies if the active connection can use publisher confirmations. :rtype: bool """ return self.server_capabilities.get('publisher_confirms', False) # # Internal methods for managing the communication process # def _adapter_connect(self): """Subclasses should override to set up the outbound socket connection. :raises: NotImplementedError """ raise NotImplementedError def _adapter_disconnect(self): """Subclasses should override this to cause the underlying transport (socket) to close. :raises: NotImplementedError """ raise NotImplementedError def _add_channel_callbacks(self, channel_number): """Add the appropriate callbacks for the specified channel number. :param int channel_number: The channel number for the callbacks """ # pylint: disable=W0212 # This permits us to garbage-collect our reference to the channel # regardless of whether it was closed by client or broker, and do so # after all channel-close callbacks. self._channels[channel_number]._add_on_cleanup_callback( self._on_channel_cleanup) def _add_connection_start_callback(self): """Add a callback for when a Connection.Start frame is received from the broker. """ self.callbacks.add(0, spec.Connection.Start, self._on_connection_start) def _add_connection_tune_callback(self): """Add a callback for when a Connection.Tune frame is received.""" self.callbacks.add(0, spec.Connection.Tune, self._on_connection_tune) def _append_frame_buffer(self, value): """Append the bytes to the frame buffer. 
:param str value: The bytes to append to the frame buffer """ self._frame_buffer += value @property def _buffer_size(self): """Return the suggested buffer size from the connection state/tune or the default if that is None. :rtype: int """ return self.params.frame_max or spec.FRAME_MAX_SIZE def _check_for_protocol_mismatch(self, value): """Invoked when starting a connection to make sure it's a supported protocol. :param pika.frame.Method value: The frame to check :raises: ProtocolVersionMismatch """ if (value.method.version_major, value.method.version_minor) != spec.PROTOCOL_VERSION[0:2]: # TODO This should call _on_terminate for proper callbacks and # cleanup raise exceptions.ProtocolVersionMismatch(frame.ProtocolHeader(), value) @property def _client_properties(self): """Return the client properties dictionary. :rtype: dict """ properties = { 'product': PRODUCT, 'platform': 'Python %s' % platform.python_version(), 'capabilities': { 'authentication_failure_close': True, 'basic.nack': True, 'connection.blocked': True, 'consumer_cancel_notify': True, 'publisher_confirms': True }, 'information': 'See http://pika.rtfd.org', 'version': __version__ } if self.params.client_properties: properties.update(self.params.client_properties) return properties def _close_channels(self, reply_code, reply_text): """Initiate graceful closing of channels that are in OPEN or OPENING states, passing reply_code and reply_text. :param int reply_code: The code for why the channels are being closed :param str reply_text: The text reason for why the channels are closing """ assert self.is_open, str(self) for channel_number in dictkeys(self._channels): chan = self._channels[channel_number] if not (chan.is_closing or chan.is_closed): chan.close(reply_code, reply_text) def _combine(self, val1, val2): """Pass in two values, if a is 0, return b otherwise if b is 0, return a. If neither case matches return the smallest value. 
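The rule can be sketched and verified as a standalone function (a sketch, not the bound method itself):

```python
# Standalone sketch of the min-nonzero rule used by Connection._combine:
# 0 means "no limit", so a zero on either side defers to the other value;
# otherwise the smaller (more restrictive) value wins.
def combine(val1, val2):
    return min(val1, val2) or (val1 or val2)
```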
        :param int val1: The first value
        :param int val2: The second value
        :rtype: int

        """
        return min(val1, val2) or (val1 or val2)

    def _connect(self):
        """Attempt to connect to RabbitMQ

        :rtype: bool

        """
        warnings.warn('This method is deprecated, use Connection.connect',
                      DeprecationWarning)

    def _create_channel(self, channel_number, on_open_callback):
        """Create a new channel using the specified channel number and
        calling back the method specified by on_open_callback

        :param int channel_number: The channel number to use
        :param method on_open_callback: The callback when the channel is
            opened

        """
        LOGGER.debug('Creating channel %s', channel_number)
        return pika.channel.Channel(self, channel_number, on_open_callback)

    def _create_heartbeat_checker(self):
        """Create a heartbeat checker instance if there is a heartbeat
        interval set.

        :rtype: pika.heartbeat.Heartbeat

        """
        if self.params.heartbeat is not None and self.params.heartbeat > 0:
            LOGGER.debug('Creating a HeartbeatChecker: %r',
                         self.params.heartbeat)
            return heartbeat.HeartbeatChecker(self, self.params.heartbeat)

    def _remove_heartbeat(self):
        """Stop the heartbeat checker if it exists

        """
        if self.heartbeat:
            self.heartbeat.stop()
            self.heartbeat = None

    def _deliver_frame_to_channel(self, value):
        """Deliver the frame to the channel specified in the frame.

        :param pika.frame.Method value: The frame to deliver

        """
        if value.channel_number not in self._channels:
            # This should never happen and would constitute breach of the
            # protocol
            LOGGER.critical(
                'Received %s frame for unregistered channel %i on %s',
                value.NAME, value.channel_number, self)
            return

        # pylint: disable=W0212
        self._channels[value.channel_number]._handle_content_frame(value)

    def _detect_backpressure(self):
        """Attempt to calculate if TCP backpressure is being applied due to
        our outbound buffer being larger than the average frame size over
        a window of frames.
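A numeric sketch of this heuristic with illustrative values (standalone numbers, not the live connection counters):

```python
# Illustrative values only: backpressure is flagged when the outbound
# buffer holds more than backpressure_multiplier times the average frame
# size observed so far.
bytes_sent, frames_sent = 100000, 1000
backpressure_multiplier = 10
avg_frame_size = bytes_sent / frames_sent  # 100 bytes per frame
outbound_buffer = [b'x' * 150] * 10  # 1500 buffered bytes
buffer_size = sum(len(f) for f in outbound_buffer)
backpressure_applied = buffer_size > avg_frame_size * backpressure_multiplier
```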
""" avg_frame_size = self.bytes_sent / self.frames_sent buffer_size = sum([len(f) for f in self.outbound_buffer]) if buffer_size > (avg_frame_size * self._backpressure_multiplier): LOGGER.warning(BACKPRESSURE_WARNING, buffer_size, int(buffer_size / avg_frame_size)) self.callbacks.process(0, self.ON_CONNECTION_BACKPRESSURE, self) def _ensure_closed(self): """If the connection is not closed, close it.""" if self.is_open: self.close() def _flush_outbound(self): """Adapters should override to flush the contents of outbound_buffer out along the socket. :raises: NotImplementedError """ raise NotImplementedError def _get_body_frame_max_length(self): """Calculate the maximum amount of bytes that can be in a body frame. :rtype: int """ return ( self.params.frame_max - spec.FRAME_HEADER_SIZE - spec.FRAME_END_SIZE ) def _get_credentials(self, method_frame): """Get credentials for authentication. :param pika.frame.MethodFrame method_frame: The Connection.Start frame :rtype: tuple(str, str) """ (auth_type, response) = self.params.credentials.response_for(method_frame.method) if not auth_type: # TODO this should call _on_terminate for proper callbacks and # cleanup instead raise exceptions.AuthenticationError(self.params.credentials.TYPE) self.params.credentials.erase_credentials() return auth_type, response def _has_pending_callbacks(self, value): """Return true if there are any callbacks pending for the specified frame. :param pika.frame.Method value: The frame to check :rtype: bool """ return self.callbacks.pending(value.channel_number, value.method) def _init_connection_state(self): """Initialize or reset all of the internal state variables for a given connection. On disconnect or reconnect all of the state needs to be wiped. 
""" # Connection state self._set_connection_state(self.CONNECTION_CLOSED) # Negotiated server properties self.server_properties = None # Outbound buffer for buffering writes until we're able to send them self.outbound_buffer = collections.deque([]) # Inbound buffer for decoding frames self._frame_buffer = bytes() # Dict of open channels self._channels = dict() # Remaining connection attempts self.remaining_connection_attempts = self.params.connection_attempts # Data used for Heartbeat checking and back-pressure detection self.bytes_sent = 0 self.bytes_received = 0 self.frames_sent = 0 self.frames_received = 0 self.heartbeat = None # Default back-pressure multiplier value self._backpressure_multiplier = 10 # When closing, hold reason why self.closing = 0, 'Not specified' # Our starting point once connected, first frame received self._add_connection_start_callback() # Add a callback handler for the Broker telling us to disconnect. # NOTE: As of RabbitMQ 3.6.0, RabbitMQ broker may send Connection.Close # to signal error during connection setup (and wait a longish time # before closing the TCP/IP stream). Earlier RabbitMQ versions # simply closed the TCP/IP stream. self.callbacks.add(0, spec.Connection.Close, self._on_connection_close) if self._connection_attempt_timer is not None: # Connection attempt timer was active when teardown was initiated self.remove_timeout(self._connection_attempt_timer) self._connection_attempt_timer = None if self.params.blocked_connection_timeout is not None: if self._blocked_conn_timer is not None: # Blocked connection timer was active when teardown was # initiated self.remove_timeout(self._blocked_conn_timer) self._blocked_conn_timer = None self.add_on_connection_blocked_callback( self._on_connection_blocked) self.add_on_connection_unblocked_callback( self._on_connection_unblocked) def _is_method_frame(self, value): """Returns true if the frame is a method frame. 
:param pika.frame.Frame value: The frame to evaluate :rtype: bool """ return isinstance(value, frame.Method) def _is_protocol_header_frame(self, value): """Returns True if it's a protocol header frame. :rtype: bool """ return isinstance(value, frame.ProtocolHeader) def _next_channel_number(self): """Return the next available channel number or raise an exception. :rtype: int """ limit = self.params.channel_max or pika.channel.MAX_CHANNELS if len(self._channels) >= limit: raise exceptions.NoFreeChannels() for num in xrange(1, len(self._channels) + 1): if num not in self._channels: return num return len(self._channels) + 1 def _on_channel_cleanup(self, channel): """Remove the channel from the dict of channels when Channel.CloseOk is sent. If connection is closing and no more channels remain, proceed to `_on_close_ready`. :param pika.channel.Channel channel: channel instance """ try: del self._channels[channel.channel_number] LOGGER.debug('Removed channel %s', channel.channel_number) except KeyError: LOGGER.error('Channel %r not in channels', channel.channel_number) if self.is_closing: if not self._channels: # Initiate graceful closing of the connection self._on_close_ready() else: # Once Connection enters CLOSING state, all remaining channels # should also be in CLOSING state. Deviation from this would # prevent Connection from completing its closing procedure. channels_not_in_closing_state = [ chan for chan in dict_itervalues(self._channels) if not chan.is_closing] if channels_not_in_closing_state: LOGGER.critical( 'Connection in CLOSING state has non-CLOSING ' 'channels: %r', channels_not_in_closing_state) def _on_close_ready(self): """Called when the Connection is in a state that it can close after a close has been requested. This happens, for example, when all of the channels are closed that were open when the close request was made. 
""" if self.is_closed: LOGGER.warning('_on_close_ready invoked when already closed') return self._send_connection_close(self.closing[0], self.closing[1]) def _on_connected(self): """Invoked when the socket is connected and it's time to start speaking AMQP with the broker. """ self._set_connection_state(self.CONNECTION_PROTOCOL) # Start the communication with the RabbitMQ Broker self._send_frame(frame.ProtocolHeader()) def _on_blocked_connection_timeout(self): """ Called when the "connection blocked timeout" expires. When this happens, we tear down the connection """ self._blocked_conn_timer = None self._on_terminate(InternalCloseReasons.BLOCKED_CONNECTION_TIMEOUT, 'Blocked connection timeout expired') def _on_connection_blocked(self, method_frame): """Handle Connection.Blocked notification from RabbitMQ broker :param pika.frame.Method method_frame: method frame having `method` member of type `pika.spec.Connection.Blocked` """ LOGGER.warning('Received %s from broker', method_frame) if self._blocked_conn_timer is not None: # RabbitMQ is not supposed to repeat Connection.Blocked, but it # doesn't hurt to be careful LOGGER.warning('_blocked_conn_timer %s already set when ' '_on_connection_blocked is called', self._blocked_conn_timer) else: self._blocked_conn_timer = self.add_timeout( self.params.blocked_connection_timeout, self._on_blocked_connection_timeout) def _on_connection_unblocked(self, method_frame): """Handle Connection.Unblocked notification from RabbitMQ broker :param pika.frame.Method method_frame: method frame having `method` member of type `pika.spec.Connection.Blocked` """ LOGGER.info('Received %s from broker', method_frame) if self._blocked_conn_timer is None: # RabbitMQ is supposed to pair Connection.Blocked/Unblocked, but it # doesn't hurt to be careful LOGGER.warning('_blocked_conn_timer was not active when ' '_on_connection_unblocked called') else: self.remove_timeout(self._blocked_conn_timer) self._blocked_conn_timer = None def 
_on_connection_close(self, method_frame): """Called when the connection is closed remotely via Connection.Close frame from broker. :param pika.frame.Method method_frame: The Connection.Close frame """ LOGGER.debug('_on_connection_close: frame=%s', method_frame) self.closing = (method_frame.method.reply_code, method_frame.method.reply_text) self._on_terminate(self.closing[0], self.closing[1]) def _on_connection_close_ok(self, method_frame): """Called when Connection.CloseOk is received from remote. :param pika.frame.Method method_frame: The Connection.CloseOk frame """ LOGGER.debug('_on_connection_close_ok: frame=%s', method_frame) self._on_terminate(self.closing[0], self.closing[1]) def _on_connection_error(self, _connection_unused, error_message=None): """Default behavior when the connecting connection can not connect. :raises: exceptions.AMQPConnectionError """ raise exceptions.AMQPConnectionError(error_message or self.params.connection_attempts) def _on_connection_open(self, method_frame): """ This is called once we have tuned the connection with the server and called the Connection.Open on the server and it has replied with Connection.Ok. """ # TODO _on_connection_open - what if user started closing it already? # It shouldn't transition to OPEN if in closing state. Just log and skip # the rest. self.known_hosts = method_frame.method.known_hosts # We're now connected at the AMQP level self._set_connection_state(self.CONNECTION_OPEN) # Call our initial callback that we're open self.callbacks.process(0, self.ON_CONNECTION_OPEN, self, self) def _on_connection_start(self, method_frame): """This is called as a callback once we have received a Connection.Start from the server. 
:param pika.frame.Method method_frame: The frame received :raises: UnexpectedFrameError """ self._set_connection_state(self.CONNECTION_START) if self._is_protocol_header_frame(method_frame): raise exceptions.UnexpectedFrameError self._check_for_protocol_mismatch(method_frame) self._set_server_information(method_frame) self._add_connection_tune_callback() self._send_connection_start_ok(*self._get_credentials(method_frame)) def _on_connect_timer(self): """Callback for self._connection_attempt_timer: initiate connection attempt in the context of the event loop """ self._connection_attempt_timer = None error = self._adapter_connect() if not error: return self._on_connected() self.remaining_connection_attempts -= 1 LOGGER.warning('Could not connect, %i attempts left', self.remaining_connection_attempts) if self.remaining_connection_attempts > 0: LOGGER.info('Retrying in %i seconds', self.params.retry_delay) self._connection_attempt_timer = self.add_timeout( self.params.retry_delay, self._on_connect_timer) else: # TODO connect must not call failure callback from constructor. The # current behavior is error-prone, because the user code may get a # callback upon socket connection failure before user's other state # may be sufficiently initialized. Constructors must either succeed # or raise an exception. To be forward-compatible with failure # reporting from fully non-blocking connection establishment, # connect() should set INIT state and schedule a 0-second timer to # continue the rest of the logic in a private method. The private # method should use itself instead of connect() as the callback for # scheduling retries. 
            # TODO This should use _on_terminate for consistent
            # behavior/cleanup
            self.callbacks.process(0, self.ON_CONNECTION_ERROR, self, self,
                                   error)
            self.remaining_connection_attempts = (
                self.params.connection_attempts)
            self._set_connection_state(self.CONNECTION_CLOSED)

    @staticmethod
    def _tune_heartbeat_timeout(client_value, server_value):
        """Determine heartbeat timeout per AMQP 0-9-1 rules

        Per https://www.rabbitmq.com/resources/specs/amqp0-9-1.pdf,

        > Both peers negotiate the limits to the lowest agreed value as
        > follows:
        > - The server MUST tell the client what limits it proposes.
        > - The client responds and **MAY reduce those limits** for its
        >   connection

        When negotiating heartbeat timeout, the reasoning needs to be
        reversed. The way I think it makes sense to interpret this rule for
        heartbeats is that the consumable resource is the frequency of
        heartbeats, which is the inverse of the timeout. The more frequent
        heartbeats consume more resources than less frequent heartbeats. So,
        when both heartbeat timeouts are non-zero, we should pick the max
        heartbeat timeout rather than the min.

        The heartbeat timeout value 0 (zero) has a special meaning - it's
        supposed to disable the timeout. This makes zero a setting for the
        least frequent heartbeats (i.e., never); therefore, if any (or both)
        of the two is zero, then the above rules would suggest that
        negotiation should yield 0 value for heartbeat, effectively turning
        it off.

        :param client_value: None to accept server_value; otherwise, an
            integral number in seconds; 0 (zero) to disable heartbeat.
        :param server_value: integral value of the heartbeat timeout proposed
            by broker; 0 (zero) to disable heartbeat.
        :returns: the value of the heartbeat timeout to use and return to
            broker

        """
        if client_value is None:
            # Accept server's limit
            timeout = server_value
        elif client_value == 0 or server_value == 0:
            # 0 has a special meaning "disable heartbeats", which makes it the
            # least frequent heartbeat value there is
            timeout = 0
        else:
            # Pick the one with the bigger heartbeat timeout (i.e., the less
            # frequent one)
            timeout = max(client_value, server_value)

        return timeout

    def _on_connection_tune(self, method_frame):
        """Once the Broker sends back a Connection.Tune, we will set our
        tuning variables that have been returned to us and kick off the
        Heartbeat monitor if required, send our TuneOk and then the
        Connection.Open rpc call on channel 0.

        :param pika.frame.Method method_frame: The frame received

        """
        self._set_connection_state(self.CONNECTION_TUNE)

        # Get our max channels, frames and heartbeat interval
        self.params.channel_max = self._combine(self.params.channel_max,
                                                method_frame.method.channel_max)
        self.params.frame_max = self._combine(self.params.frame_max,
                                              method_frame.method.frame_max)

        # Negotiate heartbeat timeout
        self.params.heartbeat = self._tune_heartbeat_timeout(
            client_value=self.params.heartbeat,
            server_value=method_frame.method.heartbeat)

        # Calculate the maximum pieces for body frames
        self._body_max_length = self._get_body_frame_max_length()

        # Create a new heartbeat checker if needed
        self.heartbeat = self._create_heartbeat_checker()

        # Send the TuneOk response with what we've agreed upon
        self._send_connection_tune_ok()

        # Send the Connection.Open RPC call for the vhost
        self._send_connection_open()

    def _on_data_available(self, data_in):
        """This is called by our Adapter, passing in the data from the socket.
        As long as we have buffer try and map out frame data.
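The heartbeat negotiation rules documented above can be checked with a standalone sketch. Note that `negotiate` below is a hypothetical helper that mirrors the logic of `_tune_heartbeat_timeout` for illustration only; it is not part of pika's API.

```python
# Standalone sketch of the heartbeat negotiation rule described above.
# `negotiate` is a hypothetical stand-in for _tune_heartbeat_timeout.
def negotiate(client_value, server_value):
    if client_value is None:
        return server_value                 # accept the broker's proposal
    if client_value == 0 or server_value == 0:
        return 0                            # either side disabling wins
    return max(client_value, server_value)  # less frequent heartbeat wins

assert negotiate(None, 60) == 60    # accept server default
assert negotiate(0, 60) == 0        # client disables heartbeats
assert negotiate(580, 60) == 580    # max, not min, is chosen
```

This is the reversed reading of the AMQP "lowest agreed value" rule: because a *larger* timeout means *fewer* heartbeats, picking the max is what actually reduces the negotiated resource consumption.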
        :param str data_in: The data that is available to read

        """
        self._append_frame_buffer(data_in)
        while self._frame_buffer:
            consumed_count, frame_value = self._read_frame()
            if not frame_value:
                return
            self._trim_frame_buffer(consumed_count)
            self._process_frame(frame_value)

    def _on_terminate(self, reason_code, reason_text):
        """Terminate the connection and notify registered ON_CONNECTION_ERROR
        and/or ON_CONNECTION_CLOSED callbacks

        :param integer reason_code: either an AMQP 0-9-1 reply code for
            AMQP-level closures or a value from `InternalCloseReasons` for
            internal causes, such as socket errors
        :param str reason_text: human-readable text message describing the
            error

        """
        LOGGER.info(
            'Disconnected from RabbitMQ at %s:%i (%s): %s',
            self.params.host, self.params.port, reason_code, reason_text)

        if not isinstance(reason_code, numbers.Integral):
            raise TypeError('reason_code must be an integer, but got %r'
                            % (reason_code,))

        # Stop the heartbeat checker if it exists
        self._remove_heartbeat()

        # Remove connection management callbacks
        # TODO This call was moved here verbatim from legacy code and the
        # following doesn't seem to be right: `Connection.Open` here is
        # unexpected, we don't appear to ever register it, and the broker
        # shouldn't be sending `Connection.Open` to us, anyway.
        self._remove_callbacks(0, [spec.Connection.Close,
                                   spec.Connection.Start,
                                   spec.Connection.Open])

        if self.params.blocked_connection_timeout is not None:
            self._remove_callbacks(0, [spec.Connection.Blocked,
                                       spec.Connection.Unblocked])

        # Close the socket
        self._adapter_disconnect()

        # Determine whether this was an error during connection setup
        connection_error = None

        if self.connection_state == self.CONNECTION_PROTOCOL:
            LOGGER.error('Incompatible Protocol Versions')
            connection_error = exceptions.IncompatibleProtocolError(
                reason_code, reason_text)
        elif self.connection_state == self.CONNECTION_START:
            LOGGER.error('Connection closed while authenticating indicating a '
                         'probable authentication error')
            connection_error = exceptions.ProbableAuthenticationError(
                reason_code, reason_text)
        elif self.connection_state == self.CONNECTION_TUNE:
            LOGGER.error('Connection closed while tuning the connection '
                         'indicating a probable permission error when '
                         'accessing a virtual host')
            connection_error = exceptions.ProbableAccessDeniedError(
                reason_code, reason_text)
        elif self.connection_state not in [self.CONNECTION_OPEN,
                                           self.CONNECTION_CLOSED,
                                           self.CONNECTION_CLOSING]:
            LOGGER.warning('Unexpected connection state on disconnect: %i',
                           self.connection_state)

        # Transition to closed state
        self._set_connection_state(self.CONNECTION_CLOSED)

        # Inform our channel proxies
        for channel in dictkeys(self._channels):
            if channel not in self._channels:
                continue
            # pylint: disable=W0212
            self._channels[channel]._on_close_meta(reason_code, reason_text)

        # Inform interested parties
        if connection_error is not None:
            LOGGER.error('Connection setup failed due to %r', connection_error)
            self.callbacks.process(0, self.ON_CONNECTION_ERROR, self, self,
                                   connection_error)

        self.callbacks.process(0, self.ON_CONNECTION_CLOSED, self, self,
                               reason_code, reason_text)

        # Reset connection properties
        self._init_connection_state()

    def _process_callbacks(self, frame_value):
        """Process the callbacks for the frame if the frame is a
        method frame and if it has any callbacks pending.

        :param pika.frame.Method frame_value: The frame to process
        :rtype: bool

        """
        if (self._is_method_frame(frame_value) and
                self._has_pending_callbacks(frame_value)):
            self.callbacks.process(frame_value.channel_number,  # Prefix
                                   frame_value.method,  # Key
                                   self,  # Caller
                                   frame_value)  # Args
            return True
        return False

    def _process_frame(self, frame_value):
        """Process an inbound frame from the socket.

        :param frame_value: The frame to process
        :type frame_value: pika.frame.Frame | pika.frame.Method

        """
        # Will receive a frame type of -1 if protocol version mismatch
        if frame_value.frame_type < 0:
            return

        # Keep track of how many frames have been read
        self.frames_received += 1

        # Process any callbacks, if True, exit method
        if self._process_callbacks(frame_value):
            return

        # If a heartbeat is received, update the checker
        if isinstance(frame_value, frame.Heartbeat):
            if self.heartbeat:
                self.heartbeat.received()
            else:
                LOGGER.warning('Received heartbeat frame without a heartbeat '
                               'checker')

        # If the frame has a channel number beyond the base channel, deliver it
        elif frame_value.channel_number > 0:
            self._deliver_frame_to_channel(frame_value)

    def _read_frame(self):
        """Try and read from the frame buffer and decode a frame.

        :rtype tuple: (int, pika.frame.Frame)

        """
        return frame.decode_frame(self._frame_buffer)

    def _remove_callback(self, channel_number, method_class):
        """Remove the specified method_frame callback if it is set for the
        specified channel number.

        :param int channel_number: The channel number to remove the callback on
        :param pika.amqp_object.Method method_class: The method class for the
            callback

        """
        self.callbacks.remove(str(channel_number), method_class)

    def _remove_callbacks(self, channel_number, method_classes):
        """Remove the callbacks for the specified channel number and list of
        method frames.
        :param int channel_number: The channel number to remove the callback on
        :param sequence method_classes: The method classes (derived from
            `pika.amqp_object.Method`) for the callbacks

        """
        for method_frame in method_classes:
            self._remove_callback(channel_number, method_frame)

    def _rpc(self, channel_number, method, callback_method=None,
             acceptable_replies=None):
        """Make an RPC call for the given callback, channel number and method.
        acceptable_replies lists out what responses we'll process from the
        server with the specified callback.

        :param int channel_number: The channel number for the RPC call
        :param pika.amqp_object.Method method: The method frame to call
        :param method callback_method: The callback for the RPC response
        :param list acceptable_replies: The replies this RPC call expects

        """
        # Validate that acceptable_replies is a list or None
        if acceptable_replies and not isinstance(acceptable_replies, list):
            raise TypeError('acceptable_replies should be list or None')

        # Validate the callback is callable
        if callback_method:
            if not utils.is_callable(callback_method):
                raise TypeError('callback should be None, function or method.')
            for reply in acceptable_replies:
                self.callbacks.add(channel_number, reply, callback_method)

        # Send the rpc call to RabbitMQ
        self._send_method(channel_number, method)

    def _send_connection_close(self, reply_code, reply_text):
        """Send a Connection.Close method frame.
        :param int reply_code: The reason for the close
        :param str reply_text: The text reason for the close

        """
        self._rpc(0, spec.Connection.Close(reply_code, reply_text, 0, 0),
                  self._on_connection_close_ok, [spec.Connection.CloseOk])

    def _send_connection_open(self):
        """Send a Connection.Open frame"""
        self._rpc(0, spec.Connection.Open(self.params.virtual_host,
                                          insist=True),
                  self._on_connection_open, [spec.Connection.OpenOk])

    def _send_connection_start_ok(self, authentication_type, response):
        """Send a Connection.StartOk frame

        :param str authentication_type: The auth type value
        :param str response: The encoded value to send

        """
        self._send_method(0,
                          spec.Connection.StartOk(self._client_properties,
                                                  authentication_type,
                                                  response,
                                                  self.params.locale))

    def _send_connection_tune_ok(self):
        """Send a Connection.TuneOk frame"""
        self._send_method(0, spec.Connection.TuneOk(self.params.channel_max,
                                                    self.params.frame_max,
                                                    self.params.heartbeat))

    def _send_frame(self, frame_value):
        """This appends the fully generated frame to send to the broker to the
        output buffer which will be then sent via the connection adapter.

        :param frame_value: The frame to write
        :type frame_value: pika.frame.Frame|pika.frame.ProtocolHeader
        :raises: exceptions.ConnectionClosed

        """
        if self.is_closed:
            LOGGER.error('Attempted to send frame when closed')
            raise exceptions.ConnectionClosed

        marshaled_frame = frame_value.marshal()
        self.bytes_sent += len(marshaled_frame)
        self.frames_sent += 1
        self.outbound_buffer.append(marshaled_frame)
        self._flush_outbound()
        if self.params.backpressure_detection:
            self._detect_backpressure()

    def _send_method(self, channel_number, method, content=None):
        """Constructs a RPC method frame and then sends it to the broker.

        :param int channel_number: The channel number for the frame
        :param pika.amqp_object.Method method: The method to send
        :param tuple content: If set, is a content frame, is tuple of
            properties and body.
        """
        if content:
            self._send_message(channel_number, method, content)
        else:
            self._send_frame(frame.Method(channel_number, method))

    def _send_message(self, channel_number, method, content=None):
        """Send the message directly, bypassing the single _send_frame
        invocation by directly appending to the output buffer and flushing
        within a lock.

        :param int channel_number: The channel number for the frame
        :param pika.amqp_object.Method method: The method frame to send
        :param tuple content: If set, is a content frame, is tuple of
            properties and body.

        """
        length = len(content[1])
        write_buffer = [frame.Method(channel_number, method).marshal(),
                        frame.Header(channel_number, length,
                                     content[0]).marshal()]

        if content[1]:
            chunks = int(math.ceil(float(length) / self._body_max_length))
            for chunk in xrange(0, chunks):
                start = chunk * self._body_max_length
                end = start + self._body_max_length
                if end > length:
                    end = length
                write_buffer.append(frame.Body(channel_number,
                                               content[1][start:end])
                                    .marshal())

        self.outbound_buffer += write_buffer
        self.frames_sent += len(write_buffer)
        self.bytes_sent += sum(len(frame) for frame in write_buffer)
        self._flush_outbound()

        if self.params.backpressure_detection:
            self._detect_backpressure()

    def _set_connection_state(self, connection_state):
        """Set the connection state.

        :param int connection_state: The connection state to set

        """
        self.connection_state = connection_state

    def _set_server_information(self, method_frame):
        """Set the server properties and capabilities

        :param spec.connection.Start method_frame: The Connection.Start frame

        """
        self.server_properties = method_frame.method.server_properties
        self.server_capabilities = self.server_properties.get('capabilities',
                                                              dict())
        if hasattr(self.server_properties, 'capabilities'):
            del self.server_properties['capabilities']

    def _trim_frame_buffer(self, byte_count):
        """Trim the leading N bytes off the frame buffer and increment the
        counter that keeps track of how many bytes have been read/used from
        the socket.
        :param int byte_count: The number of bytes consumed

        """
        self._frame_buffer = self._frame_buffer[byte_count:]
        self.bytes_received += byte_count


# =============================================================================
# pika-0.11.0/pika/credentials.py
# =============================================================================
"""The credentials classes are used to encapsulate all authentication
information for the :class:`~pika.connection.ConnectionParameters` class.

The :class:`~pika.credentials.PlainCredentials` class returns the properly
formatted username and password to the :class:`~pika.connection.Connection`.

To authenticate with Pika, create a
:class:`~pika.credentials.PlainCredentials` object passing in the username and
password and pass it as the credentials argument value to the
:class:`~pika.connection.ConnectionParameters` object.

If you are using :class:`~pika.connection.URLParameters` you do not need a
credentials object, one will automatically be created for you.

If you are looking to implement SSL certificate style authentication, you
would extend the :class:`~pika.credentials.ExternalCredentials` class
implementing the required behavior.

"""
import logging

from .compat import as_bytes

LOGGER = logging.getLogger(__name__)


class PlainCredentials(object):
    """A credentials object for the default authentication methodology with
    RabbitMQ.

    If you do not pass in credentials to the ConnectionParameters object, it
    will create credentials for 'guest' with the password of 'guest'.

    If you pass True to erase_on_connect the credentials will not be stored
    in memory after the Connection attempt has been made.

    :param str username: The username to authenticate with
    :param str password: The password to authenticate with
    :param bool erase_on_connect: erase credentials on connect.
    """
    TYPE = 'PLAIN'

    def __init__(self, username, password, erase_on_connect=False):
        """Create a new instance of PlainCredentials

        :param str username: The username to authenticate with
        :param str password: The password to authenticate with
        :param bool erase_on_connect: erase credentials on connect.

        """
        self.username = username
        self.password = password
        self.erase_on_connect = erase_on_connect

    def __eq__(self, other):
        return (isinstance(other, PlainCredentials) and
                other.username == self.username and
                other.password == self.password and
                other.erase_on_connect == self.erase_on_connect)

    def __ne__(self, other):
        return not self == other

    def response_for(self, start):
        """Validate that this type of authentication is supported

        :param spec.Connection.Start start: Connection.Start method
        :rtype: tuple(str|None, str|None)

        """
        if as_bytes(PlainCredentials.TYPE) not in \
                as_bytes(start.mechanisms).split():
            return None, None

        return (PlainCredentials.TYPE,
                b'\0' + as_bytes(self.username) +
                b'\0' + as_bytes(self.password))

    def erase_credentials(self):
        """Called by Connection when it no longer needs the credentials"""
        if self.erase_on_connect:
            LOGGER.info("Erasing stored credential values")
            self.username = None
            self.password = None


class ExternalCredentials(object):
    """The ExternalCredentials class allows the connection to use EXTERNAL
    authentication, generally with a client SSL certificate.
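The `response_for` implementation above builds a standard SASL PLAIN response. As a standalone illustration of that wire format (a hypothetical `plain_response` helper, not pika's API), the response is simply NUL + username + NUL + password as bytes:

```python
# Sketch of the SASL PLAIN response format produced by
# PlainCredentials.response_for above: b'\0' + user + b'\0' + password.
def plain_response(username, password):
    return (b'\0' + username.encode('utf-8') +
            b'\0' + password.encode('utf-8'))

assert plain_response('guest', 'guest') == b'\x00guest\x00guest'
```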
    """
    TYPE = 'EXTERNAL'

    def __init__(self):
        """Create a new instance of ExternalCredentials"""
        self.erase_on_connect = False

    def __eq__(self, other):
        return (isinstance(other, ExternalCredentials) and
                other.erase_on_connect == self.erase_on_connect)

    def __ne__(self, other):
        return not self == other

    def response_for(self, start):
        """Validate that this type of authentication is supported

        :param spec.Connection.Start start: Connection.Start method
        :rtype: tuple(str or None, str or None)

        """
        if as_bytes(ExternalCredentials.TYPE) not in \
                as_bytes(start.mechanisms).split():
            return None, None
        return ExternalCredentials.TYPE, b''

    def erase_credentials(self):
        """Called by Connection when it no longer needs the credentials"""
        LOGGER.debug('Not supported by this Credentials type')


# Append custom credential types to this list for validation support
VALID_TYPES = [PlainCredentials, ExternalCredentials]


# =============================================================================
# pika-0.11.0/pika/data.py
# =============================================================================
"""AMQP Table Encoding/Decoding"""
import struct
import decimal
import calendar
from datetime import datetime

from pika import exceptions
from pika.compat import unicode_type, PY2, long, as_bytes


def encode_short_string(pieces, value):
    """Encode a string value as short string and append it to pieces list
    returning the size of the encoded value.

    :param list pieces: Already encoded values
    :param value: String value to encode
    :type value: str or unicode
    :rtype: int

    """
    encoded_value = as_bytes(value)
    length = len(encoded_value)

    # 4.2.5.3
    # Short strings, stored as an 8-bit unsigned integer length followed by
    # zero or more octets of data. Short strings can carry up to 255 octets of
    # UTF-8 data, but may not contain binary zero octets.
    # ...
    # 4.2.5.5
    # The server SHOULD validate field names and upon receiving an invalid
    # field name, it SHOULD signal a connection exception with reply code 503
    # (syntax error).
    # -> validate length (avoid truncated utf-8 / corrupted data), but skip
    # null byte check.
    if length > 255:
        raise exceptions.ShortStringTooLong(encoded_value)

    pieces.append(struct.pack('B', length))
    pieces.append(encoded_value)
    return 1 + length


if PY2:
    def decode_short_string(encoded, offset):
        """Decode a short string value from ``encoded`` data at ``offset``.
        """
        length = struct.unpack_from('B', encoded, offset)[0]
        offset += 1
        # Purely for compatibility with original python2 code. No idea what
        # and why this does.
        value = encoded[offset:offset + length]
        try:
            value = bytes(value)
        except UnicodeEncodeError:
            pass
        offset += length
        return value, offset
else:
    def decode_short_string(encoded, offset):
        """Decode a short string value from ``encoded`` data at ``offset``.
        """
        length = struct.unpack_from('B', encoded, offset)[0]
        offset += 1
        value = encoded[offset:offset + length].decode('utf8')
        offset += length
        return value, offset


def encode_table(pieces, table):
    """Encode a dict as an AMQP table appending the encoded table to the
    pieces list passed in.

    :param list pieces: Already encoded frame pieces
    :param dict table: The dict to encode
    :rtype: int

    """
    table = table or {}
    length_index = len(pieces)
    pieces.append(None)  # placeholder
    tablesize = 0
    for (key, value) in table.items():
        tablesize += encode_short_string(pieces, key)
        tablesize += encode_value(pieces, value)

    pieces[length_index] = struct.pack('>I', tablesize)
    return tablesize + 4


def encode_value(pieces, value):
    """Encode the value passed in and append it to the pieces list returning
    the size of the encoded value.
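The short-string wire format handled above is a one-byte unsigned length prefix followed by up to 255 bytes of UTF-8 data. A standalone round trip (hypothetical `pack_short_string`/`unpack_short_string` helpers mirroring the logic, not pika's functions):

```python
import struct

# Sketch of the AMQP short-string wire format: one unsigned length octet,
# then up to 255 bytes of UTF-8 data.
def pack_short_string(value):
    data = value.encode('utf-8')
    if len(data) > 255:
        raise ValueError('short string limited to 255 bytes')
    return struct.pack('B', len(data)) + data

def unpack_short_string(buf, offset=0):
    length = struct.unpack_from('B', buf, offset)[0]
    start = offset + 1
    return buf[start:start + length].decode('utf-8'), start + length

encoded = pack_short_string('amq.topic')
assert encoded[0] == 9                            # length prefix octet
assert unpack_short_string(encoded) == ('amq.topic', 10)
```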
    :param list pieces: Already encoded values
    :param any value: The value to encode
    :rtype: int

    """
    if PY2:
        if isinstance(value, basestring):
            if isinstance(value, unicode_type):
                value = value.encode('utf-8')
            pieces.append(struct.pack('>cI', b'S', len(value)))
            pieces.append(value)
            return 5 + len(value)
    else:
        # support only str on Python 3
        if isinstance(value, str):
            value = value.encode('utf-8')
            pieces.append(struct.pack('>cI', b'S', len(value)))
            pieces.append(value)
            return 5 + len(value)

    if isinstance(value, bool):
        pieces.append(struct.pack('>cB', b't', int(value)))
        return 2
    if isinstance(value, long):
        pieces.append(struct.pack('>cq', b'l', value))
        return 9
    elif isinstance(value, int):
        pieces.append(struct.pack('>ci', b'I', value))
        return 5
    elif isinstance(value, decimal.Decimal):
        value = value.normalize()
        if value.as_tuple().exponent < 0:
            decimals = -value.as_tuple().exponent
            raw = int(value * (decimal.Decimal(10) ** decimals))
            pieces.append(struct.pack('>cBi', b'D', decimals, raw))
        else:
            # per spec, the "decimals" octet is unsigned (!)
            pieces.append(struct.pack('>cBi', b'D', 0, int(value)))
        return 6
    elif isinstance(value, datetime):
        pieces.append(struct.pack('>cQ', b'T',
                                  calendar.timegm(value.utctimetuple())))
        return 9
    elif isinstance(value, dict):
        pieces.append(struct.pack('>c', b'F'))
        return 1 + encode_table(pieces, value)
    elif isinstance(value, list):
        p = []
        for v in value:
            encode_value(p, v)
        piece = b''.join(p)
        pieces.append(struct.pack('>cI', b'A', len(piece)))
        pieces.append(piece)
        return 5 + len(piece)
    elif value is None:
        pieces.append(struct.pack('>c', b'V'))
        return 1
    else:
        raise exceptions.UnsupportedAMQPFieldException(pieces, value)


def decode_table(encoded, offset):
    """Decode the AMQP table passed in from the encoded value returning the
    decoded result and the number of bytes read plus the offset.
    :param str encoded: The binary encoded data to decode
    :param int offset: The starting byte offset
    :rtype: tuple

    """
    result = {}
    tablesize = struct.unpack_from('>I', encoded, offset)[0]
    offset += 4
    limit = offset + tablesize
    while offset < limit:
        key, offset = decode_short_string(encoded, offset)
        value, offset = decode_value(encoded, offset)
        result[key] = value
    return result, offset


def decode_value(encoded, offset):
    """Decode the value passed in returning the decoded value and the number
    of bytes read in addition to the starting offset.

    :param str encoded: The binary encoded data to decode
    :param int offset: The starting byte offset
    :rtype: tuple
    :raises: pika.exceptions.InvalidFieldTypeException

    """
    # slice to get bytes in Python 3 and str in Python 2
    kind = encoded[offset:offset + 1]
    offset += 1

    # Bool
    if kind == b't':
        value = struct.unpack_from('>B', encoded, offset)[0]
        value = bool(value)
        offset += 1

    # Short-Short Int
    elif kind == b'b':
        value = struct.unpack_from('>b', encoded, offset)[0]
        offset += 1

    # Short-Short Unsigned Int
    elif kind == b'B':
        value = struct.unpack_from('>B', encoded, offset)[0]
        offset += 1

    # Short Int
    elif kind == b'U':
        value = struct.unpack_from('>h', encoded, offset)[0]
        offset += 2

    # Short Unsigned Int
    elif kind == b'u':
        value = struct.unpack_from('>H', encoded, offset)[0]
        offset += 2

    # Long Int
    elif kind == b'I':
        value = struct.unpack_from('>i', encoded, offset)[0]
        offset += 4

    # Long Unsigned Int
    elif kind == b'i':
        value = struct.unpack_from('>I', encoded, offset)[0]
        offset += 4

    # Long-Long Int
    elif kind == b'L':
        value = long(struct.unpack_from('>q', encoded, offset)[0])
        offset += 8

    # Long-Long Unsigned Int
    elif kind == b'l':
        value = long(struct.unpack_from('>Q', encoded, offset)[0])
        offset += 8

    # Float
    elif kind == b'f':
        value = struct.unpack_from('>f', encoded, offset)[0]
        offset += 4

    # Double
    elif kind == b'd':
        value = struct.unpack_from('>d', encoded, offset)[0]
        offset += 8

    # Decimal
    elif kind == b'D':
        decimals = struct.unpack_from('B', encoded, offset)[0]
        offset += 1
        raw = struct.unpack_from('>i', encoded, offset)[0]
        offset += 4
        value = decimal.Decimal(raw) * (decimal.Decimal(10) ** -decimals)

    # Short String
    elif kind == b's':
        value, offset = decode_short_string(encoded, offset)

    # Long String
    elif kind == b'S':
        length = struct.unpack_from('>I', encoded, offset)[0]
        offset += 4
        value = encoded[offset:offset + length].decode('utf8')
        offset += length

    # Field Array
    elif kind == b'A':
        length = struct.unpack_from('>I', encoded, offset)[0]
        offset += 4
        offset_end = offset + length
        value = []
        while offset < offset_end:
            v, offset = decode_value(encoded, offset)
            value.append(v)

    # Timestamp
    elif kind == b'T':
        value = datetime.utcfromtimestamp(struct.unpack_from('>Q', encoded,
                                                             offset)[0])
        offset += 8

    # Field Table
    elif kind == b'F':
        (value, offset) = decode_table(encoded, offset)

    # Null / Void
    elif kind == b'V':
        value = None
    else:
        raise exceptions.InvalidFieldTypeException(kind)

    return value, offset


# =============================================================================
# pika-0.11.0/pika/exceptions.py
# =============================================================================
"""Pika specific exceptions"""


class AMQPError(Exception):

    def __repr__(self):
        return 'An unspecified AMQP error has occurred'


class AMQPConnectionError(AMQPError):

    def __repr__(self):
        if len(self.args) == 1:
            if self.args[0] == 1:
                return ('No connection could be opened after 1 '
                        'connection attempt')
            elif isinstance(self.args[0], int):
                return ('No connection could be opened after %s '
                        'connection attempts' % self.args[0])
            else:
                return 'No connection could be opened: %s' % self.args[0]
        elif len(self.args) == 2:
            return '%s: %s' % (self.args[0], self.args[1])


class IncompatibleProtocolError(AMQPConnectionError):

    def __repr__(self):
        return ('The protocol returned by the server is not supported: %s' %
                (self.args,))


class AuthenticationError(AMQPConnectionError):

    def __repr__(self):
        return ('Server and client could not negotiate use of the %s '
                'authentication mechanism' % self.args[0])
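The decimal table-field layout used in data.py above (one unsigned "decimals" octet followed by a signed 32-bit raw value, decoded as `raw * 10**-decimals`) can be exercised with a standalone round trip. The `pack_decimal`/`unpack_decimal` helpers below are hypothetical illustrations using only `struct` and `decimal`, not pika's functions:

```python
import struct
import decimal

# AMQP decimal field: one unsigned 'decimals' octet, then a signed 32-bit
# raw value; the decoded number is raw * 10**-decimals.
def pack_decimal(value):
    exponent = -value.as_tuple().exponent
    decimals = max(exponent, 0)
    raw = int(value.scaleb(decimals))
    return struct.pack('>Bi', decimals, raw)

def unpack_decimal(buf):
    decimals, raw = struct.unpack_from('>Bi', buf)
    return decimal.Decimal(raw).scaleb(-decimals)

encoded = pack_decimal(decimal.Decimal('3.14'))
assert encoded == b'\x02\x00\x00\x01:'   # decimals=2, raw=314 (0x13A)
assert unpack_decimal(encoded) == decimal.Decimal('3.14')
```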
class ProbableAuthenticationError(AMQPConnectionError):

    def __repr__(self):
        return ('Client was disconnected at a connection stage indicating a '
                'probable authentication error: %s' % (self.args,))


class ProbableAccessDeniedError(AMQPConnectionError):

    def __repr__(self):
        return ('Client was disconnected at a connection stage indicating a '
                'probable denial of access to the specified virtual host: %s' %
                (self.args,))


class NoFreeChannels(AMQPConnectionError):

    def __repr__(self):
        return 'The connection has run out of free channels'


class ConnectionClosed(AMQPConnectionError):

    def __repr__(self):
        if len(self.args) == 2:
            return 'The AMQP connection was closed (%s) %s' % (self.args[0],
                                                               self.args[1])
        else:
            return 'The AMQP connection was closed: %s' % (self.args,)


class AMQPChannelError(AMQPError):

    def __repr__(self):
        return 'An unspecified AMQP channel error has occurred'


class ChannelClosed(AMQPChannelError):

    def __repr__(self):
        if len(self.args) == 2:
            return 'The channel was closed (%s) %s' % (self.args[0],
                                                       self.args[1])
        else:
            return 'The channel was closed: %s' % (self.args,)


class ChannelAlreadyClosing(AMQPChannelError):
    """Raised when `Channel.close` is called while channel is already
    closing"""
    pass


class DuplicateConsumerTag(AMQPChannelError):

    def __repr__(self):
        return ('The consumer tag specified already exists for this '
                'channel: %s' % self.args[0])


class ConsumerCancelled(AMQPChannelError):

    def __repr__(self):
        return 'Server cancelled consumer'


class UnroutableError(AMQPChannelError):
    """Exception containing one or more unroutable messages returned by
    broker via Basic.Return. Used by BlockingChannel.
    In publisher-acknowledgements mode, this is raised upon receipt of
    Basic.Ack from broker; in the event of Basic.Nack from broker, `NackError`
    is raised instead.

    """

    def __init__(self, messages):
        """
        :param messages: sequence of returned unroutable messages
        :type messages: sequence of `blocking_connection.ReturnedMessage`
            objects

        """
        super(UnroutableError, self).__init__(
            "%s unroutable message(s) returned" % (len(messages)))

        self.messages = messages

    def __repr__(self):
        return '%s: %i unroutable messages returned by broker' % (
            self.__class__.__name__, len(self.messages))


class NackError(AMQPChannelError):
    """This exception is raised when a message published in
    publisher-acknowledgements mode is Nack'ed by the broker.

    Used by BlockingChannel.

    """

    def __init__(self, messages):
        """
        :param messages: sequence of returned unroutable messages
        :type messages: sequence of `blocking_connection.ReturnedMessage`
            objects

        """
        super(NackError, self).__init__(
            "%s message(s) NACKed" % (len(messages)))

        self.messages = messages

    def __repr__(self):
        return '%s: %i unroutable messages returned by broker' % (
            self.__class__.__name__, len(self.messages))


class InvalidChannelNumber(AMQPError):

    def __repr__(self):
        return ('An invalid channel number has been specified: %s' %
                self.args[0])


class ProtocolSyntaxError(AMQPError):

    def __repr__(self):
        return 'An unspecified protocol syntax error occurred'


class UnexpectedFrameError(ProtocolSyntaxError):

    def __repr__(self):
        return 'Received a frame out of sequence: %r' % self.args[0]


class ProtocolVersionMismatch(ProtocolSyntaxError):

    def __repr__(self):
        return 'Protocol versions did not match: %r vs %r' % (self.args[0],
                                                              self.args[1])


class BodyTooLongError(ProtocolSyntaxError):

    def __repr__(self):
        return ('Received too many bytes for a message delivery: '
                'Received %i, expected %i' % (self.args[0], self.args[1]))


class InvalidFrameError(ProtocolSyntaxError):

    def __repr__(self):
        return 'Invalid frame received: %r' % self.args[0]


class InvalidFieldTypeException(ProtocolSyntaxError):

    def __repr__(self):
        return 'Unsupported field kind %s' % self.args[0]


class UnsupportedAMQPFieldException(ProtocolSyntaxError):

    def __repr__(self):
        return 'Unsupported field kind %s' % type(self.args[1])


class UnspportedAMQPFieldException(UnsupportedAMQPFieldException):
    """Deprecated version of UnsupportedAMQPFieldException"""


class MethodNotImplemented(AMQPError):
    pass


class ChannelError(Exception):

    def __repr__(self):
        return 'An unspecified error occurred with the Channel'


class InvalidMinimumFrameSize(ProtocolSyntaxError):
    """DEPRECATED; pika.connection.Parameters.frame_max property setter now
    raises the standard `ValueError` exception when the value is out of
    bounds.
    """

    def __repr__(self):
        return 'AMQP Minimum Frame Size is 4096 Bytes'


class InvalidMaximumFrameSize(ProtocolSyntaxError):
    """DEPRECATED; pika.connection.Parameters.frame_max property setter now
    raises the standard `ValueError` exception when the value is out of
    bounds.
    """

    def __repr__(self):
        return 'AMQP Maximum Frame Size is 131072 Bytes'


class RecursionError(Exception):
    """The requested operation would result in unsupported recursion or
    reentrancy.

    Used by BlockingConnection/BlockingChannel

    """


class ShortStringTooLong(AMQPError):

    def __repr__(self):
        return ('AMQP Short String can contain up to 255 bytes: '
                '%.300s' % self.args[0])


class DuplicateGetOkCallback(ChannelError):

    def __repr__(self):
        return ('basic_get can only be called again after the callback for '
                'the previous basic_get is executed')


# =============================================================================
# pika-0.11.0/pika/frame.py
# =============================================================================
"""Frame objects that do the frame demarshaling and marshaling."""
import logging
import struct

from pika import amqp_object
from pika import exceptions
from pika import spec
from pika.compat import byte

LOGGER = logging.getLogger(__name__)


class Frame(amqp_object.AMQPObject):
    """Base Frame object mapping.
    Defines a behavior for all child classes for assignment of core attributes
    and implementation of the core _marshal method which child classes use to
    create the binary AMQP frame.

    """
    NAME = 'Frame'

    def __init__(self, frame_type, channel_number):
        """Create a new instance of a frame

        :param int frame_type: The frame type
        :param int channel_number: The channel number for the frame

        """
        self.frame_type = frame_type
        self.channel_number = channel_number

    def _marshal(self, pieces):
        """Create the full AMQP wire protocol frame data representation

        :rtype: bytes

        """
        payload = b''.join(pieces)
        return struct.pack('>BHI', self.frame_type, self.channel_number,
                           len(payload)) + payload + byte(spec.FRAME_END)

    def marshal(self):
        """To be extended by child classes

        :raises NotImplementedError

        """
        raise NotImplementedError


class Method(Frame):
    """Base Method frame object mapping. AMQP method frames are mapped on top
    of this class for creating or accessing their data and attributes.
    """
    NAME = 'METHOD'

    def __init__(self, channel_number, method):
        """Create a new instance of a frame

        :param int channel_number: The channel number for the frame
        :param pika.Spec.Class.Method method: The AMQP Class.Method

        """
        Frame.__init__(self, spec.FRAME_METHOD, channel_number)
        self.method = method

    def marshal(self):
        """Return the AMQP binary encoded value of the frame

        :rtype: str

        """
        pieces = self.method.encode()
        pieces.insert(0, struct.pack('>I', self.method.INDEX))
        return self._marshal(pieces)


class Header(Frame):
    """Header frame object mapping. AMQP content header frames are mapped on
    top of this class for creating or accessing their data and attributes.
    """
    NAME = 'Header'

    def __init__(self, channel_number, body_size, props):
        """Create a new instance of an AMQP ContentHeader object

        :param int channel_number: The channel number for the frame
        :param int body_size: The number of bytes for the body
        :param pika.spec.BasicProperties props: Basic.Properties object

        """
        Frame.__init__(self, spec.FRAME_HEADER, channel_number)
        self.body_size = body_size
        self.properties = props

    def marshal(self):
        """Return the AMQP binary encoded value of the frame

        :rtype: str

        """
        pieces = self.properties.encode()
        pieces.insert(0, struct.pack('>HxxQ', self.properties.INDEX,
                                     self.body_size))
        return self._marshal(pieces)


class Body(Frame):
    """Body frame object mapping class. AMQP content body frames are mapped
    on to this base class for getting/setting of attributes/data.
    """
    NAME = 'Body'

    def __init__(self, channel_number, fragment):
        """
        Parameters:

        - channel_number: int
        - fragment: unicode or str

        """
        Frame.__init__(self, spec.FRAME_BODY, channel_number)
        self.fragment = fragment

    def marshal(self):
        """Return the AMQP binary encoded value of the frame

        :rtype: str

        """
        return self._marshal([self.fragment])


class Heartbeat(Frame):
    """Heartbeat frame object mapping class. AMQP Heartbeat frames are mapped
    on to this class for a common access structure to the attributes/data
    values.
    """
    NAME = 'Heartbeat'

    def __init__(self):
        """Create a new instance of the Heartbeat frame"""
        Frame.__init__(self, spec.FRAME_HEARTBEAT, 0)

    def marshal(self):
        """Return the AMQP binary encoded value of the frame

        :rtype: str

        """
        return self._marshal(list())


class ProtocolHeader(amqp_object.AMQPObject):
    """AMQP Protocol header frame class which provides a pythonic interface
    for creating AMQP Protocol headers
    """
    NAME = 'ProtocolHeader'

    def __init__(self, major=None, minor=None, revision=None):
        """Construct a Protocol Header frame object for the specified AMQP
        version

        :param int major: Major version number
        :param int minor: Minor version number
        :param int revision: Revision

        """
        self.frame_type = -1
        self.major = major or spec.PROTOCOL_VERSION[0]
        self.minor = minor or spec.PROTOCOL_VERSION[1]
        self.revision = revision or spec.PROTOCOL_VERSION[2]

    def marshal(self):
        """Return the full AMQP wire protocol frame data representation of
        the ProtocolHeader frame

        :rtype: str

        """
        return b'AMQP' + struct.pack('BBBB', 0, self.major, self.minor,
                                     self.revision)


def decode_frame(data_in):
    """Receives raw socket data and attempts to turn it into a frame.
    Returns bytes used to make the frame and the frame

    :param str data_in: The raw data stream
    :rtype: tuple(bytes consumed, frame)
    :raises: pika.exceptions.InvalidFrameError

    """
    # Look to see if it's a protocol header frame
    try:
        if data_in[0:4] == b'AMQP':
            major, minor, revision = struct.unpack_from('BBB', data_in, 5)
            return 8, ProtocolHeader(major, minor, revision)
    except (IndexError, struct.error):
        return 0, None

    # Get the Frame Type, Channel Number and Frame Size
    try:
        (frame_type, channel_number,
         frame_size) = struct.unpack('>BHL', data_in[0:7])
    except struct.error:
        return 0, None

    # Get the frame data
    frame_end = spec.FRAME_HEADER_SIZE + frame_size + spec.FRAME_END_SIZE

    # We don't have all of the frame yet
    if frame_end > len(data_in):
        return 0, None

    # The Frame termination chr is wrong
    if data_in[frame_end - 1:frame_end] != byte(spec.FRAME_END):
        raise exceptions.InvalidFrameError("Invalid FRAME_END marker")

    # Get the raw frame data
    frame_data = data_in[spec.FRAME_HEADER_SIZE:frame_end - 1]

    if frame_type == spec.FRAME_METHOD:

        # Get the Method ID from the frame data
        method_id = struct.unpack_from('>I', frame_data)[0]

        # Get a Method object for this method_id
        method = spec.methods[method_id]()

        # Decode the content
        method.decode(frame_data, 4)

        # Return the amount of data consumed and the Method object
        return frame_end, Method(channel_number, method)

    elif frame_type == spec.FRAME_HEADER:

        # Return the header class and body size
        class_id, weight, body_size = struct.unpack_from('>HHQ', frame_data)

        # Get the Properties type
        properties = spec.props[class_id]()

        # Decode the properties
        out = properties.decode(frame_data[12:])

        # Return a Header frame
        return frame_end, Header(channel_number, body_size, properties)

    elif frame_type == spec.FRAME_BODY:

        # Return the amount of data consumed and the Body frame w/ data
        return frame_end, Body(channel_number, frame_data)

    elif frame_type == spec.FRAME_HEARTBEAT:

        # Return the amount of data and a Heartbeat frame
        return frame_end, Heartbeat()

    raise exceptions.InvalidFrameError("Unknown frame type: %i" % frame_type)


pika-0.11.0/pika/heartbeat.py

"""Handle AMQP Heartbeats"""
import logging

from pika import frame

LOGGER = logging.getLogger(__name__)


class HeartbeatChecker(object):
    """Checks to make sure that our heartbeat is received at the expected
    intervals.

    """
    MAX_IDLE_COUNT = 2
    _CONNECTION_FORCED = 320
    _STALE_CONNECTION = "Too Many Missed Heartbeats, No reply in %i seconds"

    def __init__(self, connection, interval, idle_count=MAX_IDLE_COUNT):
        """Create a heartbeat on connection sending a heartbeat frame every
        interval seconds.

        :param pika.connection.Connection connection: Connection object
        :param int interval: Heartbeat check interval
        :param int idle_count: Number of heartbeat intervals missed until the
                               connection is considered idle and disconnects

        """
        self._connection = connection
        self._interval = interval
        self._max_idle_count = idle_count

        # Initialize counters
        self._bytes_received = 0
        self._bytes_sent = 0
        self._heartbeat_frames_received = 0
        self._heartbeat_frames_sent = 0
        self._idle_byte_intervals = 0

        # The handle for the last timer
        self._timer = None

        # Setup the timer to fire in _interval seconds
        self._setup_timer()

    @property
    def active(self):
        """Return True if the connection's heartbeat attribute is set to this
        instance.

        :rtype: bool

        """
        return self._connection.heartbeat is self

    @property
    def bytes_received_on_connection(self):
        """Return the number of bytes received by the connection bytes object.

        :rtype: int

        """
        return self._connection.bytes_received

    @property
    def connection_is_idle(self):
        """Returns true if the byte count hasn't changed in enough intervals
        to trip the max idle threshold.
        """
        return self._idle_byte_intervals >= self._max_idle_count

    def received(self):
        """Called when a heartbeat is received"""
        LOGGER.debug('Received heartbeat frame')
        self._heartbeat_frames_received += 1

    def send_and_check(self):
        """Invoked by a timer to send a heartbeat when we need to, check to
        see if we've missed any heartbeats and disconnect our connection if
        it's been idle too long.

        """
        LOGGER.debug('Received %i heartbeat frames, sent %i',
                     self._heartbeat_frames_received,
                     self._heartbeat_frames_sent)

        if self.connection_is_idle:
            return self._close_connection()

        # Connection has not received any data, increment the counter
        if not self._has_received_data:
            self._idle_byte_intervals += 1
        else:
            self._idle_byte_intervals = 0

        # Update the counters of bytes sent/received and the frames received
        self._update_counters()

        # Send a heartbeat frame
        self._send_heartbeat_frame()

        # Update the timer to fire again
        self._start_timer()

    def stop(self):
        """Stop the heartbeat checker"""
        if self._timer:
            LOGGER.debug('Removing timeout for next heartbeat interval')
            self._connection.remove_timeout(self._timer)
            self._timer = None

    def _close_connection(self):
        """Close the connection with the AMQP Connection-Forced value."""
        LOGGER.info('Connection is idle, %i stale byte intervals',
                    self._idle_byte_intervals)
        duration = self._max_idle_count * self._interval
        text = HeartbeatChecker._STALE_CONNECTION % duration

        # NOTE: this won't achieve the perceived effect of sending
        # Connection.Close to broker, because the frame will only get buffered
        # in memory before the next statement terminates the connection.
        self._connection.close(HeartbeatChecker._CONNECTION_FORCED, text)
        self._connection._on_terminate(HeartbeatChecker._CONNECTION_FORCED,
                                       text)

    @property
    def _has_received_data(self):
        """Returns True if the connection has received data on the
        connection.
        :rtype: bool

        """
        return not self._bytes_received == self.bytes_received_on_connection

    @staticmethod
    def _new_heartbeat_frame():
        """Return a new heartbeat frame.

        :rtype: pika.frame.Heartbeat

        """
        return frame.Heartbeat()

    def _send_heartbeat_frame(self):
        """Send a heartbeat frame on the connection.

        """
        LOGGER.debug('Sending heartbeat frame')
        self._connection._send_frame(self._new_heartbeat_frame())
        self._heartbeat_frames_sent += 1

    def _setup_timer(self):
        """Use the connection objects delayed_call function which is
        implemented by the Adapter for calling the check_heartbeats function
        every interval seconds.

        """
        self._timer = self._connection.add_timeout(self._interval,
                                                   self.send_and_check)

    def _start_timer(self):
        """If the connection still has this object set for heartbeats, add a
        new timer.

        """
        if self.active:
            self._setup_timer()

    def _update_counters(self):
        """Update the internal counters for bytes sent and received and the
        number of frames received

        """
        self._bytes_sent = self._connection.bytes_sent
        self._bytes_received = self._connection.bytes_received


pika-0.11.0/pika/spec.py

"""
AMQP Specification
==================

This module implements the constants and classes that comprise AMQP protocol
level constructs. It should rarely be directly referenced outside of Pika's
own internal use.

.. note:: Auto-generated code by codegen.py, do not edit directly. Pull
    requests to this file without accompanying ``utils/codegen.py`` changes
    will be rejected.
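The ``INDEX`` constants defined below pack the AMQP class id into the high
16 bits of a 32-bit value and the method id into the low 16 bits, matching
the inline comments (e.g. ``0x000A000A  # 10, 10; 655370`` for
``Connection.Start``). A minimal sketch of recovering both ids — the
variable names here are illustrative, not part of this module:

```python
# Split a packed method INDEX back into its (class_id, method_id) pair.
index = 0x000A000A  # Connection.Start, per the inline comment "10, 10"
class_id = index >> 16      # high 16 bits -> AMQP class id
method_id = index & 0xFFFF  # low 16 bits  -> AMQP method id
print(class_id, method_id)  # 10 10
```

This is the same pairing ``decode_frame`` relies on when it looks up
``spec.methods[method_id]`` for an incoming method frame.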
"""

import struct

from pika import amqp_object
from pika import data
from pika.compat import str_or_bytes, unicode_type

# Python 3 support for str object
str = bytes

PROTOCOL_VERSION = (0, 9, 1)
PORT = 5672

ACCESS_REFUSED = 403
CHANNEL_ERROR = 504
COMMAND_INVALID = 503
CONNECTION_FORCED = 320
CONTENT_TOO_LARGE = 311
FRAME_BODY = 3
FRAME_END = 206
FRAME_END_SIZE = 1
FRAME_ERROR = 501
FRAME_HEADER = 2
FRAME_HEADER_SIZE = 7
FRAME_HEARTBEAT = 8
FRAME_MAX_SIZE = 131072
FRAME_METHOD = 1
FRAME_MIN_SIZE = 4096
INTERNAL_ERROR = 541
INVALID_PATH = 402
NOT_ALLOWED = 530
NOT_FOUND = 404
NOT_IMPLEMENTED = 540
NO_CONSUMERS = 313
NO_ROUTE = 312
PERSISTENT_DELIVERY_MODE = 2
PRECONDITION_FAILED = 406
REPLY_SUCCESS = 200
RESOURCE_ERROR = 506
RESOURCE_LOCKED = 405
SYNTAX_ERROR = 502
TRANSIENT_DELIVERY_MODE = 1
UNEXPECTED_FRAME = 505


class Connection(amqp_object.Class):

    INDEX = 0x000A  # 10
    NAME = 'Connection'

    class Start(amqp_object.Method):

        INDEX = 0x000A000A  # 10, 10; 655370
        NAME = 'Connection.Start'

        def __init__(self, version_major=0, version_minor=9,
                     server_properties=None, mechanisms='PLAIN',
                     locales='en_US'):
            self.version_major = version_major
            self.version_minor = version_minor
            self.server_properties = server_properties
            self.mechanisms = mechanisms
            self.locales = locales

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.version_major = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.version_minor = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            (self.server_properties,
             offset) = data.decode_table(encoded, offset)
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.mechanisms = encoded[offset:offset + length]
            try:
                self.mechanisms = str(self.mechanisms)
            except UnicodeEncodeError:
                pass
            offset += length
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.locales = encoded[offset:offset + length]
            try:
                self.locales = str(self.locales)
            except UnicodeEncodeError:
                pass
            offset += length
            return
self def encode(self): pieces = list() pieces.append(struct.pack('B', self.version_major)) pieces.append(struct.pack('B', self.version_minor)) data.encode_table(pieces, self.server_properties) assert isinstance(self.mechanisms, str_or_bytes),\ 'A non-string value was supplied for self.mechanisms' value = self.mechanisms.encode('utf-8') if isinstance(self.mechanisms, unicode_type) else self.mechanisms pieces.append(struct.pack('>I', len(value))) pieces.append(value) assert isinstance(self.locales, str_or_bytes),\ 'A non-string value was supplied for self.locales' value = self.locales.encode('utf-8') if isinstance(self.locales, unicode_type) else self.locales pieces.append(struct.pack('>I', len(value))) pieces.append(value) return pieces class StartOk(amqp_object.Method): INDEX = 0x000A000B # 10, 11; 655371 NAME = 'Connection.StartOk' def __init__(self, client_properties=None, mechanism='PLAIN', response=None, locale='en_US'): self.client_properties = client_properties self.mechanism = mechanism self.response = response self.locale = locale @property def synchronous(self): return False def decode(self, encoded, offset=0): (self.client_properties, offset) = data.decode_table(encoded, offset) self.mechanism, offset = data.decode_short_string(encoded, offset) length = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.response = encoded[offset:offset + length] try: self.response = str(self.response) except UnicodeEncodeError: pass offset += length self.locale, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() data.encode_table(pieces, self.client_properties) assert isinstance(self.mechanism, str_or_bytes),\ 'A non-string value was supplied for self.mechanism' data.encode_short_string(pieces, self.mechanism) assert isinstance(self.response, str_or_bytes),\ 'A non-string value was supplied for self.response' value = self.response.encode('utf-8') if isinstance(self.response, unicode_type) else self.response 
pieces.append(struct.pack('>I', len(value))) pieces.append(value) assert isinstance(self.locale, str_or_bytes),\ 'A non-string value was supplied for self.locale' data.encode_short_string(pieces, self.locale) return pieces class Secure(amqp_object.Method): INDEX = 0x000A0014 # 10, 20; 655380 NAME = 'Connection.Secure' def __init__(self, challenge=None): self.challenge = challenge @property def synchronous(self): return True def decode(self, encoded, offset=0): length = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.challenge = encoded[offset:offset + length] try: self.challenge = str(self.challenge) except UnicodeEncodeError: pass offset += length return self def encode(self): pieces = list() assert isinstance(self.challenge, str_or_bytes),\ 'A non-string value was supplied for self.challenge' value = self.challenge.encode('utf-8') if isinstance(self.challenge, unicode_type) else self.challenge pieces.append(struct.pack('>I', len(value))) pieces.append(value) return pieces class SecureOk(amqp_object.Method): INDEX = 0x000A0015 # 10, 21; 655381 NAME = 'Connection.SecureOk' def __init__(self, response=None): self.response = response @property def synchronous(self): return False def decode(self, encoded, offset=0): length = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.response = encoded[offset:offset + length] try: self.response = str(self.response) except UnicodeEncodeError: pass offset += length return self def encode(self): pieces = list() assert isinstance(self.response, str_or_bytes),\ 'A non-string value was supplied for self.response' value = self.response.encode('utf-8') if isinstance(self.response, unicode_type) else self.response pieces.append(struct.pack('>I', len(value))) pieces.append(value) return pieces class Tune(amqp_object.Method): INDEX = 0x000A001E # 10, 30; 655390 NAME = 'Connection.Tune' def __init__(self, channel_max=0, frame_max=0, heartbeat=0): self.channel_max = channel_max self.frame_max = frame_max 
self.heartbeat = heartbeat @property def synchronous(self): return True def decode(self, encoded, offset=0): self.channel_max = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.frame_max = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.heartbeat = struct.unpack_from('>H', encoded, offset)[0] offset += 2 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.channel_max)) pieces.append(struct.pack('>I', self.frame_max)) pieces.append(struct.pack('>H', self.heartbeat)) return pieces class TuneOk(amqp_object.Method): INDEX = 0x000A001F # 10, 31; 655391 NAME = 'Connection.TuneOk' def __init__(self, channel_max=0, frame_max=0, heartbeat=0): self.channel_max = channel_max self.frame_max = frame_max self.heartbeat = heartbeat @property def synchronous(self): return False def decode(self, encoded, offset=0): self.channel_max = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.frame_max = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.heartbeat = struct.unpack_from('>H', encoded, offset)[0] offset += 2 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.channel_max)) pieces.append(struct.pack('>I', self.frame_max)) pieces.append(struct.pack('>H', self.heartbeat)) return pieces class Open(amqp_object.Method): INDEX = 0x000A0028 # 10, 40; 655400 NAME = 'Connection.Open' def __init__(self, virtual_host='/', capabilities='', insist=False): self.virtual_host = virtual_host self.capabilities = capabilities self.insist = insist @property def synchronous(self): return True def decode(self, encoded, offset=0): self.virtual_host, offset = data.decode_short_string(encoded, offset) self.capabilities, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.insist = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() assert isinstance(self.virtual_host, str_or_bytes),\ 'A non-string 
value was supplied for self.virtual_host' data.encode_short_string(pieces, self.virtual_host) assert isinstance(self.capabilities, str_or_bytes),\ 'A non-string value was supplied for self.capabilities' data.encode_short_string(pieces, self.capabilities) bit_buffer = 0 if self.insist: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class OpenOk(amqp_object.Method): INDEX = 0x000A0029 # 10, 41; 655401 NAME = 'Connection.OpenOk' def __init__(self, known_hosts=''): self.known_hosts = known_hosts @property def synchronous(self): return False def decode(self, encoded, offset=0): self.known_hosts, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() assert isinstance(self.known_hosts, str_or_bytes),\ 'A non-string value was supplied for self.known_hosts' data.encode_short_string(pieces, self.known_hosts) return pieces class Close(amqp_object.Method): INDEX = 0x000A0032 # 10, 50; 655410 NAME = 'Connection.Close' def __init__(self, reply_code=None, reply_text='', class_id=None, method_id=None): self.reply_code = reply_code self.reply_text = reply_text self.class_id = class_id self.method_id = method_id @property def synchronous(self): return True def decode(self, encoded, offset=0): self.reply_code = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.reply_text, offset = data.decode_short_string(encoded, offset) self.class_id = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.method_id = struct.unpack_from('>H', encoded, offset)[0] offset += 2 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.reply_code)) assert isinstance(self.reply_text, str_or_bytes),\ 'A non-string value was supplied for self.reply_text' data.encode_short_string(pieces, self.reply_text) pieces.append(struct.pack('>H', self.class_id)) pieces.append(struct.pack('>H', self.method_id)) return pieces class CloseOk(amqp_object.Method): INDEX = 0x000A0033 # 10, 51; 
655411 NAME = 'Connection.CloseOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Blocked(amqp_object.Method): INDEX = 0x000A003C # 10, 60; 655420 NAME = 'Connection.Blocked' def __init__(self, reason=''): self.reason = reason @property def synchronous(self): return False def decode(self, encoded, offset=0): self.reason, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() assert isinstance(self.reason, str_or_bytes),\ 'A non-string value was supplied for self.reason' data.encode_short_string(pieces, self.reason) return pieces class Unblocked(amqp_object.Method): INDEX = 0x000A003D # 10, 61; 655421 NAME = 'Connection.Unblocked' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Channel(amqp_object.Class): INDEX = 0x0014 # 20 NAME = 'Channel' class Open(amqp_object.Method): INDEX = 0x0014000A # 20, 10; 1310730 NAME = 'Channel.Open' def __init__(self, out_of_band=''): self.out_of_band = out_of_band @property def synchronous(self): return True def decode(self, encoded, offset=0): self.out_of_band, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() assert isinstance(self.out_of_band, str_or_bytes),\ 'A non-string value was supplied for self.out_of_band' data.encode_short_string(pieces, self.out_of_band) return pieces class OpenOk(amqp_object.Method): INDEX = 0x0014000B # 20, 11; 1310731 NAME = 'Channel.OpenOk' def __init__(self, channel_id=''): self.channel_id = channel_id @property def synchronous(self): return False def decode(self, encoded, offset=0): length = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.channel_id = encoded[offset:offset + length] try: self.channel_id = str(self.channel_id) except 
UnicodeEncodeError: pass offset += length return self def encode(self): pieces = list() assert isinstance(self.channel_id, str_or_bytes),\ 'A non-string value was supplied for self.channel_id' value = self.channel_id.encode('utf-8') if isinstance(self.channel_id, unicode_type) else self.channel_id pieces.append(struct.pack('>I', len(value))) pieces.append(value) return pieces class Flow(amqp_object.Method): INDEX = 0x00140014 # 20, 20; 1310740 NAME = 'Channel.Flow' def __init__(self, active=None): self.active = active @property def synchronous(self): return True def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.active = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.active: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class FlowOk(amqp_object.Method): INDEX = 0x00140015 # 20, 21; 1310741 NAME = 'Channel.FlowOk' def __init__(self, active=None): self.active = active @property def synchronous(self): return False def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.active = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.active: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class Close(amqp_object.Method): INDEX = 0x00140028 # 20, 40; 1310760 NAME = 'Channel.Close' def __init__(self, reply_code=None, reply_text='', class_id=None, method_id=None): self.reply_code = reply_code self.reply_text = reply_text self.class_id = class_id self.method_id = method_id @property def synchronous(self): return True def decode(self, encoded, offset=0): self.reply_code = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.reply_text, offset = data.decode_short_string(encoded, offset) self.class_id = struct.unpack_from('>H', encoded, offset)[0] offset += 2 
self.method_id = struct.unpack_from('>H', encoded, offset)[0] offset += 2 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.reply_code)) assert isinstance(self.reply_text, str_or_bytes),\ 'A non-string value was supplied for self.reply_text' data.encode_short_string(pieces, self.reply_text) pieces.append(struct.pack('>H', self.class_id)) pieces.append(struct.pack('>H', self.method_id)) return pieces class CloseOk(amqp_object.Method): INDEX = 0x00140029 # 20, 41; 1310761 NAME = 'Channel.CloseOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Access(amqp_object.Class): INDEX = 0x001E # 30 NAME = 'Access' class Request(amqp_object.Method): INDEX = 0x001E000A # 30, 10; 1966090 NAME = 'Access.Request' def __init__(self, realm='/data', exclusive=False, passive=True, active=True, write=True, read=True): self.realm = realm self.exclusive = exclusive self.passive = passive self.active = active self.write = write self.read = read @property def synchronous(self): return True def decode(self, encoded, offset=0): self.realm, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.exclusive = (bit_buffer & (1 << 0)) != 0 self.passive = (bit_buffer & (1 << 1)) != 0 self.active = (bit_buffer & (1 << 2)) != 0 self.write = (bit_buffer & (1 << 3)) != 0 self.read = (bit_buffer & (1 << 4)) != 0 return self def encode(self): pieces = list() assert isinstance(self.realm, str_or_bytes),\ 'A non-string value was supplied for self.realm' data.encode_short_string(pieces, self.realm) bit_buffer = 0 if self.exclusive: bit_buffer = bit_buffer | (1 << 0) if self.passive: bit_buffer = bit_buffer | (1 << 1) if self.active: bit_buffer = bit_buffer | (1 << 2) if self.write: bit_buffer = bit_buffer | (1 << 3) if self.read: bit_buffer = bit_buffer | (1 << 4) 
pieces.append(struct.pack('B', bit_buffer)) return pieces class RequestOk(amqp_object.Method): INDEX = 0x001E000B # 30, 11; 1966091 NAME = 'Access.RequestOk' def __init__(self, ticket=1): self.ticket = ticket @property def synchronous(self): return False def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) return pieces class Exchange(amqp_object.Class): INDEX = 0x0028 # 40 NAME = 'Exchange' class Declare(amqp_object.Method): INDEX = 0x0028000A # 40, 10; 2621450 NAME = 'Exchange.Declare' def __init__(self, ticket=0, exchange=None, type='direct', passive=False, durable=False, auto_delete=False, internal=False, nowait=False, arguments={}): self.ticket = ticket self.exchange = exchange self.type = type self.passive = passive self.durable = durable self.auto_delete = auto_delete self.internal = internal self.nowait = nowait self.arguments = arguments @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.exchange, offset = data.decode_short_string(encoded, offset) self.type, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.passive = (bit_buffer & (1 << 0)) != 0 self.durable = (bit_buffer & (1 << 1)) != 0 self.auto_delete = (bit_buffer & (1 << 2)) != 0 self.internal = (bit_buffer & (1 << 3)) != 0 self.nowait = (bit_buffer & (1 << 4)) != 0 (self.arguments, offset) = data.decode_table(encoded, offset) return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) assert isinstance(self.type, str_or_bytes),\ 'A non-string value was supplied for self.type' 
data.encode_short_string(pieces, self.type) bit_buffer = 0 if self.passive: bit_buffer = bit_buffer | (1 << 0) if self.durable: bit_buffer = bit_buffer | (1 << 1) if self.auto_delete: bit_buffer = bit_buffer | (1 << 2) if self.internal: bit_buffer = bit_buffer | (1 << 3) if self.nowait: bit_buffer = bit_buffer | (1 << 4) pieces.append(struct.pack('B', bit_buffer)) data.encode_table(pieces, self.arguments) return pieces class DeclareOk(amqp_object.Method): INDEX = 0x0028000B # 40, 11; 2621451 NAME = 'Exchange.DeclareOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Delete(amqp_object.Method): INDEX = 0x00280014 # 40, 20; 2621460 NAME = 'Exchange.Delete' def __init__(self, ticket=0, exchange=None, if_unused=False, nowait=False): self.ticket = ticket self.exchange = exchange self.if_unused = if_unused self.nowait = nowait @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.exchange, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.if_unused = (bit_buffer & (1 << 0)) != 0 self.nowait = (bit_buffer & (1 << 1)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) bit_buffer = 0 if self.if_unused: bit_buffer = bit_buffer | (1 << 0) if self.nowait: bit_buffer = bit_buffer | (1 << 1) pieces.append(struct.pack('B', bit_buffer)) return pieces class DeleteOk(amqp_object.Method): INDEX = 0x00280015 # 40, 21; 2621461 NAME = 'Exchange.DeleteOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces 
= list() return pieces class Bind(amqp_object.Method): INDEX = 0x0028001E # 40, 30; 2621470 NAME = 'Exchange.Bind' def __init__(self, ticket=0, destination=None, source=None, routing_key='', nowait=False, arguments={}): self.ticket = ticket self.destination = destination self.source = source self.routing_key = routing_key self.nowait = nowait self.arguments = arguments @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.destination, offset = data.decode_short_string(encoded, offset) self.source, offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.nowait = (bit_buffer & (1 << 0)) != 0 (self.arguments, offset) = data.decode_table(encoded, offset) return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.destination, str_or_bytes),\ 'A non-string value was supplied for self.destination' data.encode_short_string(pieces, self.destination) assert isinstance(self.source, str_or_bytes),\ 'A non-string value was supplied for self.source' data.encode_short_string(pieces, self.source) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied for self.routing_key' data.encode_short_string(pieces, self.routing_key) bit_buffer = 0 if self.nowait: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) data.encode_table(pieces, self.arguments) return pieces class BindOk(amqp_object.Method): INDEX = 0x0028001F # 40, 31; 2621471 NAME = 'Exchange.BindOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Unbind(amqp_object.Method): INDEX = 0x00280028 # 40, 40; 2621480 NAME = 'Exchange.Unbind' def __init__(self, 
ticket=0, destination=None, source=None, routing_key='', nowait=False, arguments={}): self.ticket = ticket self.destination = destination self.source = source self.routing_key = routing_key self.nowait = nowait self.arguments = arguments @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.destination, offset = data.decode_short_string(encoded, offset) self.source, offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.nowait = (bit_buffer & (1 << 0)) != 0 (self.arguments, offset) = data.decode_table(encoded, offset) return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.destination, str_or_bytes),\ 'A non-string value was supplied for self.destination' data.encode_short_string(pieces, self.destination) assert isinstance(self.source, str_or_bytes),\ 'A non-string value was supplied for self.source' data.encode_short_string(pieces, self.source) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied for self.routing_key' data.encode_short_string(pieces, self.routing_key) bit_buffer = 0 if self.nowait: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) data.encode_table(pieces, self.arguments) return pieces class UnbindOk(amqp_object.Method): INDEX = 0x00280033 # 40, 51; 2621491 NAME = 'Exchange.UnbindOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Queue(amqp_object.Class): INDEX = 0x0032 # 50 NAME = 'Queue' class Declare(amqp_object.Method): INDEX = 0x0032000A # 50, 10; 3276810 NAME = 'Queue.Declare' def __init__(self, ticket=0, queue='', passive=False, durable=False, exclusive=False, 
auto_delete=False, nowait=False, arguments={}): self.ticket = ticket self.queue = queue self.passive = passive self.durable = durable self.exclusive = exclusive self.auto_delete = auto_delete self.nowait = nowait self.arguments = arguments @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.queue, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.passive = (bit_buffer & (1 << 0)) != 0 self.durable = (bit_buffer & (1 << 1)) != 0 self.exclusive = (bit_buffer & (1 << 2)) != 0 self.auto_delete = (bit_buffer & (1 << 3)) != 0 self.nowait = (bit_buffer & (1 << 4)) != 0 (self.arguments, offset) = data.decode_table(encoded, offset) return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.queue, str_or_bytes),\ 'A non-string value was supplied for self.queue' data.encode_short_string(pieces, self.queue) bit_buffer = 0 if self.passive: bit_buffer = bit_buffer | (1 << 0) if self.durable: bit_buffer = bit_buffer | (1 << 1) if self.exclusive: bit_buffer = bit_buffer | (1 << 2) if self.auto_delete: bit_buffer = bit_buffer | (1 << 3) if self.nowait: bit_buffer = bit_buffer | (1 << 4) pieces.append(struct.pack('B', bit_buffer)) data.encode_table(pieces, self.arguments) return pieces class DeclareOk(amqp_object.Method): INDEX = 0x0032000B # 50, 11; 3276811 NAME = 'Queue.DeclareOk' def __init__(self, queue=None, message_count=None, consumer_count=None): self.queue = queue self.message_count = message_count self.consumer_count = consumer_count @property def synchronous(self): return False def decode(self, encoded, offset=0): self.queue, offset = data.decode_short_string(encoded, offset) self.message_count = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.consumer_count = struct.unpack_from('>I', encoded, offset)[0] offset += 4 return 
self def encode(self): pieces = list() assert isinstance(self.queue, str_or_bytes),\ 'A non-string value was supplied for self.queue' data.encode_short_string(pieces, self.queue) pieces.append(struct.pack('>I', self.message_count)) pieces.append(struct.pack('>I', self.consumer_count)) return pieces class Bind(amqp_object.Method): INDEX = 0x00320014 # 50, 20; 3276820 NAME = 'Queue.Bind' def __init__(self, ticket=0, queue='', exchange=None, routing_key='', nowait=False, arguments={}): self.ticket = ticket self.queue = queue self.exchange = exchange self.routing_key = routing_key self.nowait = nowait self.arguments = arguments @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.queue, offset = data.decode_short_string(encoded, offset) self.exchange, offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.nowait = (bit_buffer & (1 << 0)) != 0 (self.arguments, offset) = data.decode_table(encoded, offset) return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.queue, str_or_bytes),\ 'A non-string value was supplied for self.queue' data.encode_short_string(pieces, self.queue) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied for self.routing_key' data.encode_short_string(pieces, self.routing_key) bit_buffer = 0 if self.nowait: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) data.encode_table(pieces, self.arguments) return pieces class BindOk(amqp_object.Method): INDEX = 0x00320015 # 50, 21; 3276821 NAME = 'Queue.BindOk' def __init__(self): pass @property def 
synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Purge(amqp_object.Method): INDEX = 0x0032001E # 50, 30; 3276830 NAME = 'Queue.Purge' def __init__(self, ticket=0, queue='', nowait=False): self.ticket = ticket self.queue = queue self.nowait = nowait @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.queue, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.nowait = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.queue, str_or_bytes),\ 'A non-string value was supplied for self.queue' data.encode_short_string(pieces, self.queue) bit_buffer = 0 if self.nowait: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class PurgeOk(amqp_object.Method): INDEX = 0x0032001F # 50, 31; 3276831 NAME = 'Queue.PurgeOk' def __init__(self, message_count=None): self.message_count = message_count @property def synchronous(self): return False def decode(self, encoded, offset=0): self.message_count = struct.unpack_from('>I', encoded, offset)[0] offset += 4 return self def encode(self): pieces = list() pieces.append(struct.pack('>I', self.message_count)) return pieces class Delete(amqp_object.Method): INDEX = 0x00320028 # 50, 40; 3276840 NAME = 'Queue.Delete' def __init__(self, ticket=0, queue='', if_unused=False, if_empty=False, nowait=False): self.ticket = ticket self.queue = queue self.if_unused = if_unused self.if_empty = if_empty self.nowait = nowait @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.queue, offset = data.decode_short_string(encoded, offset) bit_buffer = 
struct.unpack_from('B', encoded, offset)[0] offset += 1 self.if_unused = (bit_buffer & (1 << 0)) != 0 self.if_empty = (bit_buffer & (1 << 1)) != 0 self.nowait = (bit_buffer & (1 << 2)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.queue, str_or_bytes),\ 'A non-string value was supplied for self.queue' data.encode_short_string(pieces, self.queue) bit_buffer = 0 if self.if_unused: bit_buffer = bit_buffer | (1 << 0) if self.if_empty: bit_buffer = bit_buffer | (1 << 1) if self.nowait: bit_buffer = bit_buffer | (1 << 2) pieces.append(struct.pack('B', bit_buffer)) return pieces class DeleteOk(amqp_object.Method): INDEX = 0x00320029 # 50, 41; 3276841 NAME = 'Queue.DeleteOk' def __init__(self, message_count=None): self.message_count = message_count @property def synchronous(self): return False def decode(self, encoded, offset=0): self.message_count = struct.unpack_from('>I', encoded, offset)[0] offset += 4 return self def encode(self): pieces = list() pieces.append(struct.pack('>I', self.message_count)) return pieces class Unbind(amqp_object.Method): INDEX = 0x00320032 # 50, 50; 3276850 NAME = 'Queue.Unbind' def __init__(self, ticket=0, queue='', exchange=None, routing_key='', arguments={}): self.ticket = ticket self.queue = queue self.exchange = exchange self.routing_key = routing_key self.arguments = arguments @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.queue, offset = data.decode_short_string(encoded, offset) self.exchange, offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) (self.arguments, offset) = data.decode_table(encoded, offset) return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.queue, str_or_bytes),\ 'A non-string value was supplied for self.queue' 
data.encode_short_string(pieces, self.queue) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied for self.routing_key' data.encode_short_string(pieces, self.routing_key) data.encode_table(pieces, self.arguments) return pieces class UnbindOk(amqp_object.Method): INDEX = 0x00320033 # 50, 51; 3276851 NAME = 'Queue.UnbindOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Basic(amqp_object.Class): INDEX = 0x003C # 60 NAME = 'Basic' class Qos(amqp_object.Method): INDEX = 0x003C000A # 60, 10; 3932170 NAME = 'Basic.Qos' def __init__(self, prefetch_size=0, prefetch_count=0, global_=False): self.prefetch_size = prefetch_size self.prefetch_count = prefetch_count self.global_ = global_ @property def synchronous(self): return True def decode(self, encoded, offset=0): self.prefetch_size = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.prefetch_count = struct.unpack_from('>H', encoded, offset)[0] offset += 2 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.global_ = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>I', self.prefetch_size)) pieces.append(struct.pack('>H', self.prefetch_count)) bit_buffer = 0 if self.global_: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class QosOk(amqp_object.Method): INDEX = 0x003C000B # 60, 11; 3932171 NAME = 'Basic.QosOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Consume(amqp_object.Method): INDEX = 0x003C0014 # 60, 20; 3932180 NAME = 'Basic.Consume' def __init__(self, 
ticket=0, queue='', consumer_tag='', no_local=False, no_ack=False, exclusive=False, nowait=False, arguments={}): self.ticket = ticket self.queue = queue self.consumer_tag = consumer_tag self.no_local = no_local self.no_ack = no_ack self.exclusive = exclusive self.nowait = nowait self.arguments = arguments @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.queue, offset = data.decode_short_string(encoded, offset) self.consumer_tag, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.no_local = (bit_buffer & (1 << 0)) != 0 self.no_ack = (bit_buffer & (1 << 1)) != 0 self.exclusive = (bit_buffer & (1 << 2)) != 0 self.nowait = (bit_buffer & (1 << 3)) != 0 (self.arguments, offset) = data.decode_table(encoded, offset) return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.queue, str_or_bytes),\ 'A non-string value was supplied for self.queue' data.encode_short_string(pieces, self.queue) assert isinstance(self.consumer_tag, str_or_bytes),\ 'A non-string value was supplied for self.consumer_tag' data.encode_short_string(pieces, self.consumer_tag) bit_buffer = 0 if self.no_local: bit_buffer = bit_buffer | (1 << 0) if self.no_ack: bit_buffer = bit_buffer | (1 << 1) if self.exclusive: bit_buffer = bit_buffer | (1 << 2) if self.nowait: bit_buffer = bit_buffer | (1 << 3) pieces.append(struct.pack('B', bit_buffer)) data.encode_table(pieces, self.arguments) return pieces class ConsumeOk(amqp_object.Method): INDEX = 0x003C0015 # 60, 21; 3932181 NAME = 'Basic.ConsumeOk' def __init__(self, consumer_tag=None): self.consumer_tag = consumer_tag @property def synchronous(self): return False def decode(self, encoded, offset=0): self.consumer_tag, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() assert 
isinstance(self.consumer_tag, str_or_bytes),\ 'A non-string value was supplied for self.consumer_tag' data.encode_short_string(pieces, self.consumer_tag) return pieces class Cancel(amqp_object.Method): INDEX = 0x003C001E # 60, 30; 3932190 NAME = 'Basic.Cancel' def __init__(self, consumer_tag=None, nowait=False): self.consumer_tag = consumer_tag self.nowait = nowait @property def synchronous(self): return True def decode(self, encoded, offset=0): self.consumer_tag, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.nowait = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() assert isinstance(self.consumer_tag, str_or_bytes),\ 'A non-string value was supplied for self.consumer_tag' data.encode_short_string(pieces, self.consumer_tag) bit_buffer = 0 if self.nowait: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class CancelOk(amqp_object.Method): INDEX = 0x003C001F # 60, 31; 3932191 NAME = 'Basic.CancelOk' def __init__(self, consumer_tag=None): self.consumer_tag = consumer_tag @property def synchronous(self): return False def decode(self, encoded, offset=0): self.consumer_tag, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() assert isinstance(self.consumer_tag, str_or_bytes),\ 'A non-string value was supplied for self.consumer_tag' data.encode_short_string(pieces, self.consumer_tag) return pieces class Publish(amqp_object.Method): INDEX = 0x003C0028 # 60, 40; 3932200 NAME = 'Basic.Publish' def __init__(self, ticket=0, exchange='', routing_key='', mandatory=False, immediate=False): self.ticket = ticket self.exchange = exchange self.routing_key = routing_key self.mandatory = mandatory self.immediate = immediate @property def synchronous(self): return False def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.exchange, 
offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.mandatory = (bit_buffer & (1 << 0)) != 0 self.immediate = (bit_buffer & (1 << 1)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied for self.routing_key' data.encode_short_string(pieces, self.routing_key) bit_buffer = 0 if self.mandatory: bit_buffer = bit_buffer | (1 << 0) if self.immediate: bit_buffer = bit_buffer | (1 << 1) pieces.append(struct.pack('B', bit_buffer)) return pieces class Return(amqp_object.Method): INDEX = 0x003C0032 # 60, 50; 3932210 NAME = 'Basic.Return' def __init__(self, reply_code=None, reply_text='', exchange=None, routing_key=None): self.reply_code = reply_code self.reply_text = reply_text self.exchange = exchange self.routing_key = routing_key @property def synchronous(self): return False def decode(self, encoded, offset=0): self.reply_code = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.reply_text, offset = data.decode_short_string(encoded, offset) self.exchange, offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.reply_code)) assert isinstance(self.reply_text, str_or_bytes),\ 'A non-string value was supplied for self.reply_text' data.encode_short_string(pieces, self.reply_text) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied 
for self.routing_key' data.encode_short_string(pieces, self.routing_key) return pieces class Deliver(amqp_object.Method): INDEX = 0x003C003C # 60, 60; 3932220 NAME = 'Basic.Deliver' def __init__(self, consumer_tag=None, delivery_tag=None, redelivered=False, exchange=None, routing_key=None): self.consumer_tag = consumer_tag self.delivery_tag = delivery_tag self.redelivered = redelivered self.exchange = exchange self.routing_key = routing_key @property def synchronous(self): return False def decode(self, encoded, offset=0): self.consumer_tag, offset = data.decode_short_string(encoded, offset) self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.redelivered = (bit_buffer & (1 << 0)) != 0 self.exchange, offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() assert isinstance(self.consumer_tag, str_or_bytes),\ 'A non-string value was supplied for self.consumer_tag' data.encode_short_string(pieces, self.consumer_tag) pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.redelivered: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied for self.routing_key' data.encode_short_string(pieces, self.routing_key) return pieces class Get(amqp_object.Method): INDEX = 0x003C0046 # 60, 70; 3932230 NAME = 'Basic.Get' def __init__(self, ticket=0, queue='', no_ack=False): self.ticket = ticket self.queue = queue self.no_ack = no_ack @property def synchronous(self): return True def decode(self, encoded, offset=0): self.ticket = struct.unpack_from('>H', encoded, offset)[0] offset += 2 self.queue, offset = 
data.decode_short_string(encoded, offset) bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.no_ack = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>H', self.ticket)) assert isinstance(self.queue, str_or_bytes),\ 'A non-string value was supplied for self.queue' data.encode_short_string(pieces, self.queue) bit_buffer = 0 if self.no_ack: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class GetOk(amqp_object.Method): INDEX = 0x003C0047 # 60, 71; 3932231 NAME = 'Basic.GetOk' def __init__(self, delivery_tag=None, redelivered=False, exchange=None, routing_key=None, message_count=None): self.delivery_tag = delivery_tag self.redelivered = redelivered self.exchange = exchange self.routing_key = routing_key self.message_count = message_count @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.redelivered = (bit_buffer & (1 << 0)) != 0 self.exchange, offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) self.message_count = struct.unpack_from('>I', encoded, offset)[0] offset += 4 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.redelivered: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied for self.routing_key' data.encode_short_string(pieces, self.routing_key) pieces.append(struct.pack('>I', self.message_count)) return pieces class GetEmpty(amqp_object.Method): INDEX = 0x003C0048 
# 60, 72; 3932232 NAME = 'Basic.GetEmpty' def __init__(self, cluster_id=''): self.cluster_id = cluster_id @property def synchronous(self): return False def decode(self, encoded, offset=0): self.cluster_id, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() assert isinstance(self.cluster_id, str_or_bytes),\ 'A non-string value was supplied for self.cluster_id' data.encode_short_string(pieces, self.cluster_id) return pieces class Ack(amqp_object.Method): INDEX = 0x003C0050 # 60, 80; 3932240 NAME = 'Basic.Ack' def __init__(self, delivery_tag=0, multiple=False): self.delivery_tag = delivery_tag self.multiple = multiple @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.multiple = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.multiple: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class Reject(amqp_object.Method): INDEX = 0x003C005A # 60, 90; 3932250 NAME = 'Basic.Reject' def __init__(self, delivery_tag=None, requeue=True): self.delivery_tag = delivery_tag self.requeue = requeue @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.requeue = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.requeue: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class RecoverAsync(amqp_object.Method): INDEX = 0x003C0064 # 60, 100; 3932260 NAME = 'Basic.RecoverAsync' def __init__(self, 
requeue=False): self.requeue = requeue @property def synchronous(self): return False def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.requeue = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.requeue: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class Recover(amqp_object.Method): INDEX = 0x003C006E # 60, 110; 3932270 NAME = 'Basic.Recover' def __init__(self, requeue=False): self.requeue = requeue @property def synchronous(self): return True def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.requeue = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.requeue: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class RecoverOk(amqp_object.Method): INDEX = 0x003C006F # 60, 111; 3932271 NAME = 'Basic.RecoverOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Nack(amqp_object.Method): INDEX = 0x003C0078 # 60, 120; 3932280 NAME = 'Basic.Nack' def __init__(self, delivery_tag=0, multiple=False, requeue=True): self.delivery_tag = delivery_tag self.multiple = multiple self.requeue = requeue @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.multiple = (bit_buffer & (1 << 0)) != 0 self.requeue = (bit_buffer & (1 << 1)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.multiple: bit_buffer = bit_buffer | (1 << 0) if self.requeue: bit_buffer = bit_buffer | (1 << 1) pieces.append(struct.pack('B', 
bit_buffer)) return pieces class Tx(amqp_object.Class): INDEX = 0x005A # 90 NAME = 'Tx' class Select(amqp_object.Method): INDEX = 0x005A000A # 90, 10; 5898250 NAME = 'Tx.Select' def __init__(self): pass @property def synchronous(self): return True def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class SelectOk(amqp_object.Method): INDEX = 0x005A000B # 90, 11; 5898251 NAME = 'Tx.SelectOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Commit(amqp_object.Method): INDEX = 0x005A0014 # 90, 20; 5898260 NAME = 'Tx.Commit' def __init__(self): pass @property def synchronous(self): return True def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class CommitOk(amqp_object.Method): INDEX = 0x005A0015 # 90, 21; 5898261 NAME = 'Tx.CommitOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Rollback(amqp_object.Method): INDEX = 0x005A001E # 90, 30; 5898270 NAME = 'Tx.Rollback' def __init__(self): pass @property def synchronous(self): return True def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class RollbackOk(amqp_object.Method): INDEX = 0x005A001F # 90, 31; 5898271 NAME = 'Tx.RollbackOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Confirm(amqp_object.Class): INDEX = 0x0055 # 85 NAME = 'Confirm' class Select(amqp_object.Method): INDEX = 0x0055000A # 85, 10; 5570570 NAME = 'Confirm.Select' def __init__(self, nowait=False): self.nowait = nowait @property def synchronous(self): return True def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', 
encoded, offset)[0] offset += 1 self.nowait = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.nowait: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class SelectOk(amqp_object.Method): INDEX = 0x0055000B # 85, 11; 5570571 NAME = 'Confirm.SelectOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class BasicProperties(amqp_object.Properties): CLASS = Basic INDEX = 0x003C # 60 NAME = 'BasicProperties' FLAG_CONTENT_TYPE = (1 << 15) FLAG_CONTENT_ENCODING = (1 << 14) FLAG_HEADERS = (1 << 13) FLAG_DELIVERY_MODE = (1 << 12) FLAG_PRIORITY = (1 << 11) FLAG_CORRELATION_ID = (1 << 10) FLAG_REPLY_TO = (1 << 9) FLAG_EXPIRATION = (1 << 8) FLAG_MESSAGE_ID = (1 << 7) FLAG_TIMESTAMP = (1 << 6) FLAG_TYPE = (1 << 5) FLAG_USER_ID = (1 << 4) FLAG_APP_ID = (1 << 3) FLAG_CLUSTER_ID = (1 << 2) def __init__(self, content_type=None, content_encoding=None, headers=None, delivery_mode=None, priority=None, correlation_id=None, reply_to=None, expiration=None, message_id=None, timestamp=None, type=None, user_id=None, app_id=None, cluster_id=None): self.content_type = content_type self.content_encoding = content_encoding self.headers = headers self.delivery_mode = delivery_mode self.priority = priority self.correlation_id = correlation_id self.reply_to = reply_to self.expiration = expiration self.message_id = message_id self.timestamp = timestamp self.type = type self.user_id = user_id self.app_id = app_id self.cluster_id = cluster_id def decode(self, encoded, offset=0): flags = 0 flagword_index = 0 while True: partial_flags = struct.unpack_from('>H', encoded, offset)[0] offset += 2 flags = flags | (partial_flags << (flagword_index * 16)) if not (partial_flags & 1): break flagword_index += 1 if flags & BasicProperties.FLAG_CONTENT_TYPE: self.content_type, offset = 
data.decode_short_string(encoded, offset) else: self.content_type = None if flags & BasicProperties.FLAG_CONTENT_ENCODING: self.content_encoding, offset = data.decode_short_string(encoded, offset) else: self.content_encoding = None if flags & BasicProperties.FLAG_HEADERS: (self.headers, offset) = data.decode_table(encoded, offset) else: self.headers = None if flags & BasicProperties.FLAG_DELIVERY_MODE: self.delivery_mode = struct.unpack_from('B', encoded, offset)[0] offset += 1 else: self.delivery_mode = None if flags & BasicProperties.FLAG_PRIORITY: self.priority = struct.unpack_from('B', encoded, offset)[0] offset += 1 else: self.priority = None if flags & BasicProperties.FLAG_CORRELATION_ID: self.correlation_id, offset = data.decode_short_string(encoded, offset) else: self.correlation_id = None if flags & BasicProperties.FLAG_REPLY_TO: self.reply_to, offset = data.decode_short_string(encoded, offset) else: self.reply_to = None if flags & BasicProperties.FLAG_EXPIRATION: self.expiration, offset = data.decode_short_string(encoded, offset) else: self.expiration = None if flags & BasicProperties.FLAG_MESSAGE_ID: self.message_id, offset = data.decode_short_string(encoded, offset) else: self.message_id = None if flags & BasicProperties.FLAG_TIMESTAMP: self.timestamp = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 else: self.timestamp = None if flags & BasicProperties.FLAG_TYPE: self.type, offset = data.decode_short_string(encoded, offset) else: self.type = None if flags & BasicProperties.FLAG_USER_ID: self.user_id, offset = data.decode_short_string(encoded, offset) else: self.user_id = None if flags & BasicProperties.FLAG_APP_ID: self.app_id, offset = data.decode_short_string(encoded, offset) else: self.app_id = None if flags & BasicProperties.FLAG_CLUSTER_ID: self.cluster_id, offset = data.decode_short_string(encoded, offset) else: self.cluster_id = None return self def encode(self): pieces = list() flags = 0 if self.content_type is not None: flags = flags 
| BasicProperties.FLAG_CONTENT_TYPE assert isinstance(self.content_type, str_or_bytes),\ 'A non-string value was supplied for self.content_type' data.encode_short_string(pieces, self.content_type) if self.content_encoding is not None: flags = flags | BasicProperties.FLAG_CONTENT_ENCODING assert isinstance(self.content_encoding, str_or_bytes),\ 'A non-string value was supplied for self.content_encoding' data.encode_short_string(pieces, self.content_encoding) if self.headers is not None: flags = flags | BasicProperties.FLAG_HEADERS data.encode_table(pieces, self.headers) if self.delivery_mode is not None: flags = flags | BasicProperties.FLAG_DELIVERY_MODE pieces.append(struct.pack('B', self.delivery_mode)) if self.priority is not None: flags = flags | BasicProperties.FLAG_PRIORITY pieces.append(struct.pack('B', self.priority)) if self.correlation_id is not None: flags = flags | BasicProperties.FLAG_CORRELATION_ID assert isinstance(self.correlation_id, str_or_bytes),\ 'A non-string value was supplied for self.correlation_id' data.encode_short_string(pieces, self.correlation_id) if self.reply_to is not None: flags = flags | BasicProperties.FLAG_REPLY_TO assert isinstance(self.reply_to, str_or_bytes),\ 'A non-string value was supplied for self.reply_to' data.encode_short_string(pieces, self.reply_to) if self.expiration is not None: flags = flags | BasicProperties.FLAG_EXPIRATION assert isinstance(self.expiration, str_or_bytes),\ 'A non-string value was supplied for self.expiration' data.encode_short_string(pieces, self.expiration) if self.message_id is not None: flags = flags | BasicProperties.FLAG_MESSAGE_ID assert isinstance(self.message_id, str_or_bytes),\ 'A non-string value was supplied for self.message_id' data.encode_short_string(pieces, self.message_id) if self.timestamp is not None: flags = flags | BasicProperties.FLAG_TIMESTAMP pieces.append(struct.pack('>Q', self.timestamp)) if self.type is not None: flags = flags | BasicProperties.FLAG_TYPE assert 
isinstance(self.type, str_or_bytes),\ 'A non-string value was supplied for self.type' data.encode_short_string(pieces, self.type) if self.user_id is not None: flags = flags | BasicProperties.FLAG_USER_ID assert isinstance(self.user_id, str_or_bytes),\ 'A non-string value was supplied for self.user_id' data.encode_short_string(pieces, self.user_id) if self.app_id is not None: flags = flags | BasicProperties.FLAG_APP_ID assert isinstance(self.app_id, str_or_bytes),\ 'A non-string value was supplied for self.app_id' data.encode_short_string(pieces, self.app_id) if self.cluster_id is not None: flags = flags | BasicProperties.FLAG_CLUSTER_ID assert isinstance(self.cluster_id, str_or_bytes),\ 'A non-string value was supplied for self.cluster_id' data.encode_short_string(pieces, self.cluster_id) flag_pieces = list() while True: remainder = flags >> 16 partial_flags = flags & 0xFFFE if remainder != 0: partial_flags |= 1 flag_pieces.append(struct.pack('>H', partial_flags)) flags = remainder if not flags: break return flag_pieces + pieces methods = { 0x000A000A: Connection.Start, 0x000A000B: Connection.StartOk, 0x000A0014: Connection.Secure, 0x000A0015: Connection.SecureOk, 0x000A001E: Connection.Tune, 0x000A001F: Connection.TuneOk, 0x000A0028: Connection.Open, 0x000A0029: Connection.OpenOk, 0x000A0032: Connection.Close, 0x000A0033: Connection.CloseOk, 0x000A003C: Connection.Blocked, 0x000A003D: Connection.Unblocked, 0x0014000A: Channel.Open, 0x0014000B: Channel.OpenOk, 0x00140014: Channel.Flow, 0x00140015: Channel.FlowOk, 0x00140028: Channel.Close, 0x00140029: Channel.CloseOk, 0x001E000A: Access.Request, 0x001E000B: Access.RequestOk, 0x0028000A: Exchange.Declare, 0x0028000B: Exchange.DeclareOk, 0x00280014: Exchange.Delete, 0x00280015: Exchange.DeleteOk, 0x0028001E: Exchange.Bind, 0x0028001F: Exchange.BindOk, 0x00280028: Exchange.Unbind, 0x00280033: Exchange.UnbindOk, 0x0032000A: Queue.Declare, 0x0032000B: Queue.DeclareOk, 0x00320014: Queue.Bind, 0x00320015: Queue.BindOk, 
0x0032001E: Queue.Purge, 0x0032001F: Queue.PurgeOk, 0x00320028: Queue.Delete, 0x00320029: Queue.DeleteOk, 0x00320032: Queue.Unbind, 0x00320033: Queue.UnbindOk, 0x003C000A: Basic.Qos, 0x003C000B: Basic.QosOk, 0x003C0014: Basic.Consume, 0x003C0015: Basic.ConsumeOk, 0x003C001E: Basic.Cancel, 0x003C001F: Basic.CancelOk, 0x003C0028: Basic.Publish, 0x003C0032: Basic.Return, 0x003C003C: Basic.Deliver, 0x003C0046: Basic.Get, 0x003C0047: Basic.GetOk, 0x003C0048: Basic.GetEmpty, 0x003C0050: Basic.Ack, 0x003C005A: Basic.Reject, 0x003C0064: Basic.RecoverAsync, 0x003C006E: Basic.Recover, 0x003C006F: Basic.RecoverOk, 0x003C0078: Basic.Nack, 0x005A000A: Tx.Select, 0x005A000B: Tx.SelectOk, 0x005A0014: Tx.Commit, 0x005A0015: Tx.CommitOk, 0x005A001E: Tx.Rollback, 0x005A001F: Tx.RollbackOk, 0x0055000A: Confirm.Select, 0x0055000B: Confirm.SelectOk } props = { 0x003C: BasicProperties } def has_content(methodNumber): return methodNumber in ( Basic.Publish.INDEX, Basic.Return.INDEX, Basic.Deliver.INDEX, Basic.GetOk.INDEX, ) pika-0.11.0/pika/utils.py000066400000000000000000000005131315131611700151510ustar00rootroot00000000000000""" Non-module specific functions shared by modules in the pika package """ import collections def is_callable(handle): """Returns a bool value if the handle passed in is a callable method/function :param any handle: The object to check :rtype: bool """ return isinstance(handle, collections.Callable) pika-0.11.0/pylintrc000066400000000000000000000276061315131611700143160ustar00rootroot00000000000000[MASTER] # Specify a configuration file. #rcfile= # Python code to execute, usually for sys.path manipulation such as # pygtk.require(). #init-hook= # Profiled execution. profile=no # Add files or directories to the blacklist. They should be base names, not # paths. ignore=CVS # Pickle collected data for later comparisons. persistent=no # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. 
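The `BasicProperties.encode` serialization above ends by packing the accumulated property-flag bits into big-endian 16-bit words, reserving bit 0 of each word as a continuation marker so that more than 15 flags could be expressed (per AMQP 0-9-1 content-header framing). A minimal standalone sketch of that chunking loop — `encode_flag_words` is a hypothetical helper name for illustration, not part of pika:

```python
import struct

def encode_flag_words(flags):
    """Split a property-flag bitfield into big-endian 16-bit words.

    Bit 0 of each emitted word is a continuation bit: it is set when
    another flag word follows, mirroring the while-loop in
    BasicProperties.encode above.
    """
    pieces = []
    while True:
        remainder = flags >> 16
        partial_flags = flags & 0xFFFE  # low bit reserved for continuation
        if remainder != 0:
            partial_flags |= 1
        pieces.append(struct.pack('>H', partial_flags))
        flags = remainder
        if not flags:
            break
    return pieces

# A single flag (e.g. FLAG_CONTENT_TYPE = 1 << 15) fits in one word:
# encode_flag_words(1 << 15) -> [b'\x80\x00']
```

With fifteen or fewer flags defined, as is the case for `Basic` properties, the loop runs exactly once and emits a single word with the continuation bit clear.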
load-plugins= # Deprecated. It was used to include message's id in output. Use --msg-template # instead. #include-ids=no # Deprecated. It was used to include symbolic ids of messages in output. Use # --msg-template instead. #symbols=no # Use multiple processes to speed up Pylint. #jobs=1 # Allow loading of arbitrary C extensions. Extensions are imported into the # active Python interpreter and may run arbitrary code. #unsafe-load-any-extension=no # A comma-separated list of package or module names from where C extensions may # be loaded. Extensions are loading into the active Python interpreter and may # run arbitrary code #extension-pkg-whitelist= # Allow optimization of some AST trees. This will activate a peephole AST # optimizer, which will apply various small optimizations. For instance, it can # be used to obtain the result of joining multiple strings with the addition # operator. Joining a lot of strings can lead to a maximum recursion error in # Pylint and this flag can prevent that. It has one side effect, the resulting # AST will be different than the one from reality. #optimize-ast=no [MESSAGES CONTROL] # Only show warnings with the listed confidence levels. Leave empty to show # all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED confidence= # Enable the message, report, category or checker with the given id(s). You can # either give multiple identifier separated by comma (,) or put this option # multiple time. See also the "--disable" option for examples. #enable= # Disable the message, report, category or checker with the given id(s). You # can either give multiple identifiers separated by comma (,) or put this # option multiple times (only on the command line, not in the configuration # file where it should appear only once).You can also use "--disable=all" to # disable everything first and then reenable specific checks. For example, if # you want to run only the similarities checker, you can use "--disable=all # --enable=similarities". 
If you want to run only the classes checker, but have # no Warning level messages displayed, use"--disable=all --enable=classes # --disable=W" disable= [REPORTS] # Set the output format. Available formats are text, parseable, colorized, msvs # (visual studio) and html. You can also give a reporter class, eg # mypackage.mymodule.MyReporterClass. output-format=text # Put messages in a separate file for each module / package specified on the # command line instead of printing them on stdout. Reports (if any) will be # written in a file name "pylint_global.[txt|html]". files-output=no # Tells whether to display a full report or only the messages reports=no # Python expression which should return a note less than 10 (10 is the highest # note). You have access to the variables errors warning, statement which # respectively contain the number of errors / warnings messages and the total # number of statements analyzed. This is used by the global evaluation report # (RP0004). evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) # Add a comment according to your evaluation note. This is used by the global # evaluation report (RP0004). comment=no # Template used to display messages. This is a python new-style format string # used to format the message information. See doc for all details msg-template={msg_id}, {line:3d}:{column:2d} - {msg} ({symbol}) #msg-template= [BASIC] # Required attributes for module, separated by a comma required-attributes= # List of builtins function names that should not be used, separated by a comma bad-functions=map,filter,input # Good variable names which should always be accepted, separated by a comma good-names=i,j,k,ex,fd,Run,_ # Bad variable names which should always be refused, separated by a comma bad-names=foo,bar,baz,toto,tutu,tata # Colon-delimited sets of names that determine each other's naming style when # the name regexes allow several styles. 
name-group= # Include a hint for the correct naming format with invalid-name include-naming-hint=no # Regular expression matching correct function names function-rgx=[a-z_][a-z0-9_]{2,40}$ # Naming hint for function names function-name-hint=[a-z_][a-z0-9_]{2,40}$ # Regular expression matching correct variable names variable-rgx=[a-z_][a-z0-9_]{2,30}$ # Naming hint for variable names variable-name-hint=[a-z_][a-z0-9_]{2,30}$ # Regular expression matching correct constant names const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$ # Naming hint for constant names const-name-hint=(([A-Z_][A-Z0-9_]*)|(__.*__))$ # Regular expression matching correct attribute names attr-rgx=[a-z_][a-z0-9_]{2,40}$ # Naming hint for attribute names attr-name-hint=[a-z_][a-z0-9_]{2,40}$ # Regular expression matching correct argument names argument-rgx=[a-z_][a-z0-9_]{2,30}$ # Naming hint for argument names argument-name-hint=[a-z_][a-z0-9_]{2,30}$ # Regular expression matching correct class attribute names class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,40}|(__.*__))$ # Naming hint for class attribute names class-attribute-name-hint=([A-Za-z_][A-Za-z0-9_]{2,40}|(__.*__))$ # Regular expression matching correct inline iteration names inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$ # Naming hint for inline iteration names inlinevar-name-hint=[A-Za-z_][A-Za-z0-9_]*$ # Regular expression matching correct class names class-rgx=[A-Z_][a-zA-Z0-9]+$ # Naming hint for class names class-name-hint=[A-Z_][a-zA-Z0-9]+$ # Regular expression matching correct module names module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ # Naming hint for module names module-name-hint=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ # Regular expression matching correct method names method-rgx=[a-z_][a-z0-9_]{2,40}$ # Naming hint for method names method-name-hint=[a-z_][a-z0-9_]{2,40}$ # Regular expression which should only match function or class names that do # not require a docstring. 
no-docstring-rgx=__.*__ # Minimum line length for functions/classes that require docstrings, shorter # ones are exempt. docstring-min-length=-1 [FORMAT] # Maximum number of characters on a single line. max-line-length=100 # Regexp for a line that is allowed to be longer than the limit. ignore-long-lines=^\s*(# )?<?https?://\S+>?$ # Allow the body of an if to be on the same line as the test if there is no # else. single-line-if-stmt=no # List of optional constructs for which whitespace checking is disabled no-space-check=trailing-comma,dict-separator # Maximum number of lines in a module max-module-lines=1000 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). indent-string=' ' # Number of spaces of indent required inside a hanging or continued line. indent-after-paren=4 # Expected format of line ending, e.g. empty (any line ending), LF or CRLF. expected-line-ending-format= [LOGGING] # Logging modules to check that the string format arguments are in logging # function parameter format logging-modules=logging [MISCELLANEOUS] # List of note tags to take in consideration, separated by a comma. notes=FIXME,XXX,TODO [SIMILARITIES] # Minimum lines number of a similarity. min-similarity-lines=4 # Ignore comments when computing similarities. ignore-comments=yes # Ignore docstrings when computing similarities. ignore-docstrings=yes # Ignore imports when computing similarities. ignore-imports=no [SPELLING] # Spelling dictionary name. Available dictionaries: none. To make it work, # install the python-enchant package. spelling-dict= # List of comma separated words that should not be checked. spelling-ignore-words= # A path to a file that contains private dictionary; one word per line. spelling-private-dict-file= # Tells whether to store unknown words to indicated private dictionary in # --spelling-private-dict-file option instead of raising a message.
spelling-store-unknown-words=no [TYPECHECK] # Tells whether missing members accessed in mixin class should be ignored. A # mixin class is detected if its name ends with "mixin" (case insensitive). ignore-mixin-members=yes # List of module names for which member attributes should not be checked # (useful for modules/projects where namespaces are manipulated during runtime # and thus existing member attributes cannot be deduced by static analysis) ignored-modules= # List of class names for which member attributes should not be checked # (useful for classes with attributes dynamically set). ignored-classes=SQLObject # When zope mode is activated, add a predefined set of Zope acquired attributes # to generated-members. zope=no # List of members which are set dynamically and missed by pylint inference # system, and so shouldn't trigger E0201 when accessed. Python regular # expressions are accepted. generated-members=REQUEST,acl_users,aq_parent [VARIABLES] # Tells whether we should check for unused import in __init__ files. init-import=no # A regular expression matching the name of dummy variables (i.e. expectedly # not used). dummy-variables-rgx=_|_$|dummy # List of additional names supposed to be defined in builtins. Remember that # you should avoid defining new builtins when possible. additional-builtins= # List of strings which can identify a callback function by name. A callback # name must start or end with one of those strings. callbacks=cb_,_cb [CLASSES] # List of interface methods to ignore, separated by a comma. This is used for # instance to not check methods defined in Zope's Interface base class. ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by # List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp # List of valid names for the first argument in a class method. valid-classmethod-first-arg=cls # List of valid names for the first argument in a metaclass class method. valid-metaclass-classmethod-first-arg=mcs # List of member names, which should be excluded from the protected access # warning. exclude-protected=_asdict,_fields,_replace,_source,_make [DESIGN] # Maximum number of arguments for function / method max-args=10 # Argument names that match this expression will be ignored. Default to name # with leading underscore ignored-argument-names=_.* # Maximum number of locals for function / method body max-locals=15 # Maximum number of return / yield for function / method body max-returns=6 # Maximum number of branch for function / method body max-branches=20 # Maximum number of statements in function / method body max-statements=50 # Maximum number of parents for a class (see R0901). max-parents=7 # Maximum number of attributes for a class (see R0902). max-attributes=20 # Minimum number of public methods for a class (see R0903). min-public-methods=0 # Maximum number of public methods for a class (see R0904). max-public-methods=40 [IMPORTS] # Deprecated modules which should not be used, separated by a comma deprecated-modules=regsub,TERMIOS,Bastion,rexec # Create a graph of every (i.e. internal and external) dependencies in the # given file (report RP0402 must not be disabled) import-graph= # Create a graph of external dependencies in the given file (report RP0402 must # not be disabled) ext-import-graph= # Create a graph of internal dependencies in the given file (report RP0402 must # not be disabled) int-import-graph= [EXCEPTIONS] # Exceptions that will emit a warning when being caught. 
Defaults to # "Exception" overgeneral-exceptions=Exception pika-0.11.0/setup.cfg000066400000000000000000000002311315131611700143310ustar00rootroot00000000000000[bdist_wheel] universal = 1 [nosetests] with-coverage=1 cover-package=pika cover-branches=1 cover-erase=1 tests=tests/unit,tests/acceptance verbosity=3 pika-0.11.0/setup.py000066400000000000000000000043071315131611700142320ustar00rootroot00000000000000from setuptools import setup import os # Conditionally include additional modules for docs on_rtd = os.environ.get('READTHEDOCS', None) == 'True' requirements = list() if on_rtd: requirements.append('tornado') requirements.append('twisted') #requirements.append('pyev') long_description = ('Pika is a pure-Python implementation of the AMQP 0-9-1 ' 'protocol that tries to stay fairly independent of the ' 'underlying network support library. Pika was developed ' 'primarily for use with RabbitMQ, but should also work ' 'with other AMQP 0-9-1 brokers.') setup(name='pika', version='0.11.0', description='Pika Python AMQP Client Library', long_description=open('README.rst').read(), maintainer='Gavin M. 
Roy', maintainer_email='gavinmroy@gmail.com', url='https://pika.readthedocs.io', packages=['pika', 'pika.adapters'], license='BSD', install_requires=requirements, package_data={'': ['LICENSE', 'README.rst']}, extras_require={'tornado': ['tornado'], 'twisted': ['twisted'], 'libev': ['pyev']}, classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Natural Language :: English', 'Operating System :: OS Independent', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: Jython', 'Programming Language :: Python :: Implementation :: PyPy', 'Topic :: Communications', 'Topic :: Internet', 'Topic :: Software Development :: Libraries', 'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: System :: Networking'], zip_safe=True) pika-0.11.0/test-requirements.txt000066400000000000000000000000621315131611700167530ustar00rootroot00000000000000coverage codecov mock nose tornado twisted<15.4.0 pika-0.11.0/tests/000077500000000000000000000000001315131611700136565ustar00rootroot00000000000000pika-0.11.0/tests/acceptance/000077500000000000000000000000001315131611700157445ustar00rootroot00000000000000pika-0.11.0/tests/acceptance/async_adapter_tests.py000066400000000000000000000422271315131611700223640ustar00rootroot00000000000000# Suppress pylint messages concerning missing class and method docstrings # pylint: disable=C0111 # Suppress pylint warning about attribute defined outside __init__ # pylint: disable=W0201 # Suppress pylint warning about access to protected member # pylint: disable=W0212 # Suppress pylint warning about unused argument # pylint: disable=W0613 import time import 
uuid from pika import spec from pika.compat import as_bytes import pika.connection import pika.frame import pika.spec from async_test_base import (AsyncTestCase, BoundQueueTestCase, AsyncAdapters) class TestA_Connect(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Connect, open channel and disconnect" def begin(self, channel): self.stop() class TestConfirmSelect(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Receive confirmation of Confirm.Select" def begin(self, channel): channel._on_selectok = self.on_complete channel.confirm_delivery() def on_complete(self, frame): self.assertIsInstance(frame.method, spec.Confirm.SelectOk) self.stop() class TestBlockingNonBlockingBlockingRPCWontStall(AsyncTestCase, AsyncAdapters): DESCRIPTION = ("Verify that a sequence of blocking, non-blocking, blocking " "RPC requests won't stall") def begin(self, channel): # Queue declaration params table: queue name, nowait value self._expected_queue_params = ( ("blocking-non-blocking-stall-check-" + uuid.uuid1().hex, False), ("blocking-non-blocking-stall-check-" + uuid.uuid1().hex, True), ("blocking-non-blocking-stall-check-" + uuid.uuid1().hex, False) ) self._declared_queue_names = [] for queue, nowait in self._expected_queue_params: channel.queue_declare(callback=self._queue_declare_ok_cb if not nowait else None, queue=queue, auto_delete=True, nowait=nowait, arguments={'x-expires': self.TIMEOUT * 1000}) def _queue_declare_ok_cb(self, declare_ok_frame): self._declared_queue_names.append(declare_ok_frame.method.queue) if len(self._declared_queue_names) == 2: # Initiate check for creation of queue declared with nowait=True self.channel.queue_declare(callback=self._queue_declare_ok_cb, queue=self._expected_queue_params[1][0], passive=True, nowait=False) elif len(self._declared_queue_names) == 3: self.assertSequenceEqual( sorted(self._declared_queue_names), sorted(item[0] for item in self._expected_queue_params)) self.stop() class TestConsumeCancel(AsyncTestCase, 
AsyncAdapters): DESCRIPTION = "Consume and cancel" def begin(self, channel): self.queue_name = self.__class__.__name__ + ':' + uuid.uuid1().hex channel.queue_declare(self.on_queue_declared, queue=self.queue_name) def on_queue_declared(self, frame): for i in range(0, 100): msg_body = '{0}:{1}:{2}'.format(self.__class__.__name__, i, time.time()) self.channel.basic_publish('', self.queue_name, msg_body) self.ctag = self.channel.basic_consume(self.on_message, queue=self.queue_name, no_ack=True) def on_message(self, _channel, _frame, _header, body): self.channel.basic_cancel(self.on_cancel, self.ctag) def on_cancel(self, _frame): self.channel.queue_delete(self.on_deleted, self.queue_name) def on_deleted(self, _frame): self.stop() class TestExchangeDeclareAndDelete(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Create and delete an exchange" X_TYPE = 'direct' def begin(self, channel): self.name = self.__class__.__name__ + ':' + uuid.uuid1().hex channel.exchange_declare(self.on_exchange_declared, self.name, exchange_type=self.X_TYPE, passive=False, durable=False, auto_delete=True) def on_exchange_declared(self, frame): self.assertIsInstance(frame.method, spec.Exchange.DeclareOk) self.channel.exchange_delete(self.on_exchange_delete, self.name) def on_exchange_delete(self, frame): self.assertIsInstance(frame.method, spec.Exchange.DeleteOk) self.stop() class TestExchangeRedeclareWithDifferentValues(AsyncTestCase, AsyncAdapters): DESCRIPTION = "should close chan: re-declared exchange w/ diff params" X_TYPE1 = 'direct' X_TYPE2 = 'topic' def begin(self, channel): self.name = self.__class__.__name__ + ':' + uuid.uuid1().hex self.channel.add_on_close_callback(self.on_channel_closed) channel.exchange_declare(self.on_exchange_declared, self.name, exchange_type=self.X_TYPE1, passive=False, durable=False, auto_delete=True) def on_cleanup_channel(self, channel): channel.exchange_delete(None, self.name, nowait=True) self.stop() def on_channel_closed(self, channel, reply_code, 
reply_text): self.connection.channel(self.on_cleanup_channel) def on_exchange_declared(self, frame): self.channel.exchange_declare(self.on_bad_result, self.name, exchange_type=self.X_TYPE2, passive=False, durable=False, auto_delete=True) def on_bad_result(self, frame): self.channel.exchange_delete(None, self.name, nowait=True) raise AssertionError("Should not have received a Queue.DeclareOk") class TestQueueDeclareAndDelete(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Create and delete a queue" def begin(self, channel): channel.queue_declare(self.on_queue_declared, passive=False, durable=False, exclusive=True, auto_delete=False, nowait=False, arguments={'x-expires': self.TIMEOUT * 1000}) def on_queue_declared(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeclareOk) self.channel.queue_delete(self.on_queue_delete, frame.method.queue) def on_queue_delete(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeleteOk) self.stop() class TestQueueNameDeclareAndDelete(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Create and delete a named queue" def begin(self, channel): self._q_name = self.__class__.__name__ + ':' + uuid.uuid1().hex channel.queue_declare(self.on_queue_declared, self._q_name, passive=False, durable=False, exclusive=True, auto_delete=True, nowait=False, arguments={'x-expires': self.TIMEOUT * 1000}) def on_queue_declared(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeclareOk) # Frame's method's queue is encoded (impl detail) self.assertEqual(frame.method.queue, self._q_name) self.channel.queue_delete(self.on_queue_delete, frame.method.queue) def on_queue_delete(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeleteOk) self.stop() class TestQueueRedeclareWithDifferentValues(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Should close chan: re-declared queue w/ diff params" def begin(self, channel): self._q_name = self.__class__.__name__ + ':' + uuid.uuid1().hex 
self.channel.add_on_close_callback(self.on_channel_closed) channel.queue_declare(self.on_queue_declared, self._q_name, passive=False, durable=False, exclusive=True, auto_delete=True, nowait=False, arguments={'x-expires': self.TIMEOUT * 1000}) def on_channel_closed(self, channel, reply_code, reply_text): self.stop() def on_queue_declared(self, frame): self.channel.queue_declare(self.on_bad_result, self._q_name, passive=False, durable=True, exclusive=False, auto_delete=True, nowait=False, arguments={'x-expires': self.TIMEOUT * 1000}) def on_bad_result(self, frame): self.channel.queue_delete(None, self._q_name, nowait=True) raise AssertionError("Should not have received a Queue.DeclareOk") class TestTX1_Select(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Receive confirmation of Tx.Select" def begin(self, channel): channel.tx_select(self.on_complete) def on_complete(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) self.stop() class TestTX2_Commit(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Start a transaction, and commit it" def begin(self, channel): channel.tx_select(self.on_selectok) def on_selectok(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) self.channel.tx_commit(self.on_commitok) def on_commitok(self, frame): self.assertIsInstance(frame.method, spec.Tx.CommitOk) self.stop() class TestTX2_CommitFailure(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Close the channel: commit without a TX" def begin(self, channel): self.channel.add_on_close_callback(self.on_channel_closed) self.channel.tx_commit(self.on_commitok) def on_channel_closed(self, channel, reply_code, reply_text): self.stop() def on_selectok(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) @staticmethod def on_commitok(frame): raise AssertionError("Should not have received a Tx.CommitOk") class TestTX3_Rollback(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 
DESCRIPTION = "Start a transaction, then rollback" def begin(self, channel): channel.tx_select(self.on_selectok) def on_selectok(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) self.channel.tx_rollback(self.on_rollbackok) def on_rollbackok(self, frame): self.assertIsInstance(frame.method, spec.Tx.RollbackOk) self.stop() class TestTX3_RollbackFailure(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Close the channel: rollback without a TX" def begin(self, channel): self.channel.add_on_close_callback(self.on_channel_closed) self.channel.tx_rollback(self.on_commitok) def on_channel_closed(self, channel, reply_code, reply_text): self.stop() @staticmethod def on_commitok(frame): raise AssertionError("Should not have received a Tx.RollbackOk") class TestZ_PublishAndConsume(BoundQueueTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Publish a message and consume it" def on_ready(self, frame): self.ctag = self.channel.basic_consume(self.on_message, self.queue) self.msg_body = "%s: %i" % (self.__class__.__name__, time.time()) self.channel.basic_publish(self.exchange, self.routing_key, self.msg_body) def on_cancelled(self, frame): self.assertIsInstance(frame.method, spec.Basic.CancelOk) self.stop() def on_message(self, channel, method, header, body): self.assertIsInstance(method, spec.Basic.Deliver) self.assertEqual(body, as_bytes(self.msg_body)) self.channel.basic_ack(method.delivery_tag) self.channel.basic_cancel(self.on_cancelled, self.ctag) class TestZ_PublishAndConsumeBig(BoundQueueTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Publish a big message and consume it" @staticmethod def _get_msg_body(): return '\n'.join(["%s" % i for i in range(0, 2097152)]) def on_ready(self, frame): self.ctag = self.channel.basic_consume(self.on_message, self.queue) self.msg_body = self._get_msg_body() self.channel.basic_publish(self.exchange, self.routing_key, self.msg_body) def on_cancelled(self, frame): 
self.assertIsInstance(frame.method, spec.Basic.CancelOk) self.stop() def on_message(self, channel, method, header, body): self.assertIsInstance(method, spec.Basic.Deliver) self.assertEqual(body, as_bytes(self.msg_body)) self.channel.basic_ack(method.delivery_tag) self.channel.basic_cancel(self.on_cancelled, self.ctag) class TestZ_PublishAndGet(BoundQueueTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Publish a message and get it" def on_ready(self, frame): self.msg_body = "%s: %i" % (self.__class__.__name__, time.time()) self.channel.basic_publish(self.exchange, self.routing_key, self.msg_body) self.channel.basic_get(self.on_get, self.queue) def on_get(self, channel, method, header, body): self.assertIsInstance(method, spec.Basic.GetOk) self.assertEqual(body, as_bytes(self.msg_body)) self.channel.basic_ack(method.delivery_tag) self.stop() class TestZ_AccessDenied(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Verify that access denied invokes on open error callback" def start(self, *args, **kwargs): self.parameters.virtual_host = str(uuid.uuid4()) self.error_captured = False super(TestZ_AccessDenied, self).start(*args, **kwargs) self.assertTrue(self.error_captured) def on_open_error(self, connection, error): self.error_captured = True self.stop() def on_open(self, connection): super(TestZ_AccessDenied, self).on_open(connection) self.stop() class TestBlockedConnectionTimesOut(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Verify that blocked connection terminates on timeout" def start(self, *args, **kwargs): self.parameters.blocked_connection_timeout = 0.001 self.on_closed_pair = None super(TestBlockedConnectionTimesOut, self).start(*args, **kwargs) self.assertEqual( self.on_closed_pair, (pika.connection.InternalCloseReasons.BLOCKED_CONNECTION_TIMEOUT, 'Blocked connection timeout expired')) def begin(self, channel): # Simulate Connection.Blocked 
channel.connection._on_connection_blocked(pika.frame.Method( 0, pika.spec.Connection.Blocked('Testing blocked connection timeout'))) def on_closed(self, connection, reply_code, reply_text): """called when the connection has finished closing""" self.on_closed_pair = (reply_code, reply_text) super(TestBlockedConnectionTimesOut, self).on_closed(connection, reply_code, reply_text) class TestBlockedConnectionUnblocks(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Verify that blocked-unblocked connection closes normally" def start(self, *args, **kwargs): self.parameters.blocked_connection_timeout = 0.001 self.on_closed_pair = None super(TestBlockedConnectionUnblocks, self).start(*args, **kwargs) self.assertEqual( self.on_closed_pair, (200, 'Normal shutdown')) def begin(self, channel): # Simulate Connection.Blocked channel.connection._on_connection_blocked(pika.frame.Method( 0, pika.spec.Connection.Blocked( 'Testing blocked connection unblocks'))) # Simulate Connection.Unblocked channel.connection._on_connection_unblocked(pika.frame.Method( 0, pika.spec.Connection.Unblocked())) # Schedule shutdown after blocked connection timeout would expire channel.connection.add_timeout(0.005, self.on_cleanup_timer) def on_cleanup_timer(self): self.stop() def on_closed(self, connection, reply_code, reply_text): """called when the connection has finished closing""" self.on_closed_pair = (reply_code, reply_text) super(TestBlockedConnectionUnblocks, self).on_closed(connection, reply_code, reply_text) pika-0.11.0/tests/acceptance/async_test_base.py000066400000000000000000000160331315131611700214670ustar00rootroot00000000000000# Suppress pylint warnings concerning attribute defined outside __init__ # pylint: disable=W0201 # Suppress pylint messages concerning missing docstrings # pylint: disable=C0111 from datetime import datetime import select import sys import logging try: import unittest2 as unittest except ImportError: import unittest import platform _TARGET = 
platform.python_implementation() import uuid try: from unittest import mock except ImportError: import mock import pika from pika import adapters from pika.adapters import select_connection class AsyncTestCase(unittest.TestCase): DESCRIPTION = "" ADAPTER = None TIMEOUT = 15 def setUp(self): self.logger = logging.getLogger(self.__class__.__name__) self.parameters = pika.URLParameters( 'amqp://guest:guest@localhost:5672/%2F') self._timed_out = False super(AsyncTestCase, self).setUp() def tearDown(self): self._stop() def shortDescription(self): method_desc = super(AsyncTestCase, self).shortDescription() if self.DESCRIPTION: return "%s (%s)" % (self.DESCRIPTION, method_desc) else: return method_desc def begin(self, channel): # pylint: disable=R0201,W0613 """Extend to start the actual tests on the channel""" self.fail("AsyncTestCase.begin_test not extended") def start(self, adapter=None): self.logger.info('start at %s', datetime.utcnow()) self.adapter = adapter or self.ADAPTER self.connection = self.adapter(self.parameters, self.on_open, self.on_open_error, self.on_closed) self.timeout = self.connection.add_timeout(self.TIMEOUT, self.on_timeout) self.connection.ioloop.start() self.assertFalse(self._timed_out) def stop(self): """close the connection and stop the ioloop""" self.logger.info("Stopping test") if self.timeout is not None: self.connection.remove_timeout(self.timeout) self.timeout = None self.connection.close() def _stop(self): if hasattr(self, 'timeout') and self.timeout is not None: self.logger.info("Removing timeout") self.connection.remove_timeout(self.timeout) self.timeout = None if hasattr(self, 'connection') and self.connection: self.logger.info("Stopping ioloop") self.connection.ioloop.stop() self.connection = None def on_closed(self, connection, reply_code, reply_text): """called when the connection has finished closing""" self.logger.info('on_closed: %r %r %r', connection, reply_code, reply_text) self._stop() def on_open(self, connection): 
        self.logger.debug('on_open: %r', connection)
        self.channel = connection.channel(self.begin)

    def on_open_error(self, connection, error):
        self.logger.error('on_open_error: %r %r', connection, error)
        connection.ioloop.stop()
        raise AssertionError('Error connecting to RabbitMQ')

    def on_timeout(self):
        """called when stuck waiting for connection to close"""
        self.logger.error('%s timed out; on_timeout called at %s', self,
                          datetime.utcnow())
        self.timeout = None  # the dispatcher should have removed it
        self._timed_out = True
        # initiate cleanup
        self.stop()


class BoundQueueTestCase(AsyncTestCase):

    def start(self, adapter=None):
        # PY3 compat encoding
        self.exchange = 'e-' + self.__class__.__name__ + ':' + uuid.uuid1().hex
        self.queue = 'q-' + self.__class__.__name__ + ':' + uuid.uuid1().hex
        self.routing_key = self.__class__.__name__
        super(BoundQueueTestCase, self).start(adapter)

    def begin(self, channel):
        self.channel.exchange_declare(self.on_exchange_declared,
                                      self.exchange,
                                      exchange_type='direct',
                                      passive=False,
                                      durable=False,
                                      auto_delete=True)

    def on_exchange_declared(self, frame):  # pylint: disable=W0613
        self.channel.queue_declare(self.on_queue_declared,
                                   self.queue,
                                   passive=False,
                                   durable=False,
                                   exclusive=True,
                                   auto_delete=True,
                                   nowait=False,
                                   arguments={'x-expires': self.TIMEOUT * 1000})

    def on_queue_declared(self, frame):  # pylint: disable=W0613
        self.channel.queue_bind(self.on_ready, self.queue, self.exchange,
                                self.routing_key)

    def on_ready(self, frame):
        raise NotImplementedError


#
# In order to write test cases that will be tested using all the async
# adapters, write a class that inherits both from one of the TestCase classes
# above and from the AsyncAdapters class below. This allows you to avoid
# duplicating the test methods for each adapter in each test class.
#

class AsyncAdapters(object):

    def start(self, adapter_class):
        raise NotImplementedError

    def select_default_test(self):
        """SelectConnection:DefaultPoller"""
        with mock.patch.multiple(select_connection, SELECT_TYPE=None):
            self.start(adapters.SelectConnection)

    def select_select_test(self):
        """SelectConnection:select"""
        with mock.patch.multiple(select_connection, SELECT_TYPE='select'):
            self.start(adapters.SelectConnection)

    @unittest.skipIf(
        not hasattr(select, 'poll') or
        not hasattr(select.poll(), 'modify'),
        "poll not supported")  # pylint: disable=E1101
    def select_poll_test(self):
        """SelectConnection:poll"""
        with mock.patch.multiple(select_connection, SELECT_TYPE='poll'):
            self.start(adapters.SelectConnection)

    @unittest.skipIf(not hasattr(select, 'epoll'), "epoll not supported")
    def select_epoll_test(self):
        """SelectConnection:epoll"""
        with mock.patch.multiple(select_connection, SELECT_TYPE='epoll'):
            self.start(adapters.SelectConnection)

    @unittest.skipIf(not hasattr(select, 'kqueue'), "kqueue not supported")
    def select_kqueue_test(self):
        """SelectConnection:kqueue"""
        with mock.patch.multiple(select_connection, SELECT_TYPE='kqueue'):
            self.start(adapters.SelectConnection)

    def tornado_test(self):
        """TornadoConnection"""
        self.start(adapters.TornadoConnection)

    @unittest.skipIf(sys.version_info < (3, 4),
                     "Asyncio available for Python 3.4+")
    def asyncio_test(self):
        """AsyncioConnection"""
        self.start(adapters.AsyncioConnection)

    @unittest.skipIf(_TARGET == 'PyPy', 'PyPy is not supported')
    @unittest.skipIf(adapters.LibevConnection is None, 'pyev is not installed')
    def libev_test(self):
        """LibevConnection"""
        self.start(adapters.LibevConnection)

pika-0.11.0/tests/acceptance/blocking_adapter_test.py

"""blocking adapter test"""
from datetime import datetime
import logging
import socket
import time

try:
    import unittest2 as unittest
except ImportError:
    import unittest

import uuid

from forward_server import
ForwardServer from test_utils import retry_assertion import pika from pika.adapters import blocking_connection from pika.compat import as_bytes import pika.connection import pika.exceptions # Disable warning about access to protected member # pylint: disable=W0212 # Disable warning Attribute defined outside __init__ # pylint: disable=W0201 # Disable warning Missing docstring # pylint: disable=C0111 # Disable warning Too many public methods # pylint: disable=R0904 # Disable warning Invalid variable name # pylint: disable=C0103 LOGGER = logging.getLogger(__name__) PARAMS_URL_TEMPLATE = ( 'amqp://guest:guest@127.0.0.1:%(port)s/%%2f?socket_timeout=1') DEFAULT_URL = PARAMS_URL_TEMPLATE % {'port': 5672} DEFAULT_PARAMS = pika.URLParameters(DEFAULT_URL) DEFAULT_TIMEOUT = 15 def setUpModule(): logging.basicConfig(level=logging.DEBUG) class BlockingTestCaseBase(unittest.TestCase): TIMEOUT = DEFAULT_TIMEOUT def _connect(self, url=DEFAULT_URL, connection_class=pika.BlockingConnection, impl_class=None): parameters = pika.URLParameters(url) connection = connection_class(parameters, _impl_class=impl_class) self.addCleanup(lambda: connection.close() if connection.is_open else None) connection._impl.add_timeout( self.TIMEOUT, # pylint: disable=E1101 self._on_test_timeout) return connection def _on_test_timeout(self): """Called when test times out""" LOGGER.info('%s TIMED OUT (%s)', datetime.utcnow(), self) self.fail('Test timed out') @retry_assertion(TIMEOUT/2) def _assert_exact_message_count_with_retries(self, channel, queue, expected_count): frame = channel.queue_declare(queue, passive=True) self.assertEqual(frame.method.message_count, expected_count) class TestCreateAndCloseConnection(BlockingTestCaseBase): def test(self): """BlockingConnection: Create and close connection""" connection = self._connect() self.assertIsInstance(connection, pika.BlockingConnection) self.assertTrue(connection.is_open) self.assertFalse(connection.is_closed) self.assertFalse(connection.is_closing) 
connection.close() self.assertTrue(connection.is_closed) self.assertFalse(connection.is_open) self.assertFalse(connection.is_closing) class TestMultiCloseConnection(BlockingTestCaseBase): def test(self): """BlockingConnection: Close connection twice""" connection = self._connect() self.assertIsInstance(connection, pika.BlockingConnection) self.assertTrue(connection.is_open) self.assertFalse(connection.is_closed) self.assertFalse(connection.is_closing) connection.close() self.assertTrue(connection.is_closed) self.assertFalse(connection.is_open) self.assertFalse(connection.is_closing) # Second close call shouldn't crash connection.close() class TestConnectionContextManagerClosesConnection(BlockingTestCaseBase): def test(self): """BlockingConnection: connection context manager closes connection""" with self._connect() as connection: self.assertIsInstance(connection, pika.BlockingConnection) self.assertTrue(connection.is_open) self.assertTrue(connection.is_closed) class TestConnectionContextManagerClosesConnectionAndPassesOriginalException(BlockingTestCaseBase): def test(self): """BlockingConnection: connection context manager closes connection and passes original exception""" # pylint: disable=C0301 class MyException(Exception): pass with self.assertRaises(MyException): with self._connect() as connection: self.assertTrue(connection.is_open) raise MyException() self.assertTrue(connection.is_closed) class TestConnectionContextManagerClosesConnectionAndPassesSystemException(BlockingTestCaseBase): def test(self): """BlockingConnection: connection context manager closes connection and passes system exception""" # pylint: disable=C0301 with self.assertRaises(SystemExit): with self._connect() as connection: self.assertTrue(connection.is_open) raise SystemExit() self.assertTrue(connection.is_closed) class TestInvalidExchangeTypeRaisesConnectionClosed(BlockingTestCaseBase): def test(self): """BlockingConnection: ConnectionClosed raised when creating exchange with invalid 
type""" # pylint: disable=C0301 # This test exploits behavior specific to RabbitMQ whereby the broker # closes the connection if an attempt is made to declare an exchange # with an invalid exchange type connection = self._connect() ch = connection.channel() exg_name = ("TestInvalidExchangeTypeRaisesConnectionClosed_" + uuid.uuid1().hex) with self.assertRaises(pika.exceptions.ConnectionClosed) as ex_cm: # Attempt to create an exchange with invalid exchange type ch.exchange_declare(exg_name, exchange_type='ZZwwInvalid') self.assertEqual(ex_cm.exception.args[0], 503) class TestCreateAndCloseConnectionWithChannelAndConsumer(BlockingTestCaseBase): def test(self): """BlockingConnection: Create and close connection with channel and consumer""" # pylint: disable=C0301 connection = self._connect() ch = connection.channel() q_name = ( 'TestCreateAndCloseConnectionWithChannelAndConsumer_q' + uuid.uuid1().hex) body1 = 'a' * 1024 # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(self._connect().channel().queue_delete, q_name) # Publish the message to the queue by way of default exchange ch.publish(exchange='', routing_key=q_name, body=body1) # Create a non-ackable consumer ch.basic_consume(lambda *x: None, q_name, no_ack=True, exclusive=False, arguments=None) connection.close() self.assertTrue(connection.is_closed) self.assertFalse(connection.is_open) self.assertFalse(connection.is_closing) self.assertFalse(connection._impl._channels) self.assertFalse(ch._consumer_infos) self.assertFalse(ch._impl._consumers) class TestSuddenBrokerDisconnectBeforeChannel(BlockingTestCaseBase): def test(self): """BlockingConnection resets properly on TCP/IP drop during channel() """ with ForwardServer( remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) as fwd: self.connection = self._connect( PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]}) # Once outside the context, the connection is broken # BlockingConnection should raise 
ConnectionClosed with self.assertRaises(pika.exceptions.ConnectionClosed): self.connection.channel() self.assertTrue(self.connection.is_closed) self.assertFalse(self.connection.is_open) self.assertIsNone(self.connection._impl.socket) class TestNoAccessToFileDescriptorAfterConnectionClosed(BlockingTestCaseBase): def test(self): """BlockingConnection no access file descriptor after ConnectionClosed """ with ForwardServer( remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) as fwd: self.connection = self._connect( PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]}) # Once outside the context, the connection is broken # BlockingConnection should raise ConnectionClosed with self.assertRaises(pika.exceptions.ConnectionClosed): self.connection.channel() self.assertTrue(self.connection.is_closed) self.assertFalse(self.connection.is_open) self.assertIsNone(self.connection._impl.socket) # Attempt to operate on the connection once again after ConnectionClosed self.assertIsNone(self.connection._impl.socket) with self.assertRaises(pika.exceptions.ConnectionClosed): self.connection.channel() class TestConnectWithDownedBroker(BlockingTestCaseBase): def test(self): """ BlockingConnection to downed broker results in AMQPConnectionError """ # Reserve a port for use in connect sock = socket.socket() self.addCleanup(sock.close) sock.bind(("127.0.0.1", 0)) port = sock.getsockname()[1] sock.close() with self.assertRaises(pika.exceptions.AMQPConnectionError): self.connection = self._connect( PARAMS_URL_TEMPLATE % {"port": port}) class TestDisconnectDuringConnectionStart(BlockingTestCaseBase): def test(self): """ BlockingConnection TCP/IP connection loss in CONNECTION_START """ fwd = ForwardServer( remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) fwd.start() self.addCleanup(lambda: fwd.stop() if fwd.running else None) class MySelectConnection(pika.SelectConnection): assert hasattr(pika.SelectConnection, 
'_on_connection_start') def _on_connection_start(self, *args, **kwargs): fwd.stop() return super(MySelectConnection, self)._on_connection_start( *args, **kwargs) with self.assertRaises(pika.exceptions.ProbableAuthenticationError): self._connect( PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]}, impl_class=MySelectConnection) class TestDisconnectDuringConnectionTune(BlockingTestCaseBase): def test(self): """ BlockingConnection TCP/IP connection loss in CONNECTION_TUNE """ fwd = ForwardServer( remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) fwd.start() self.addCleanup(lambda: fwd.stop() if fwd.running else None) class MySelectConnection(pika.SelectConnection): assert hasattr(pika.SelectConnection, '_on_connection_tune') def _on_connection_tune(self, *args, **kwargs): fwd.stop() return super(MySelectConnection, self)._on_connection_tune( *args, **kwargs) with self.assertRaises(pika.exceptions.ProbableAccessDeniedError): self._connect( PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]}, impl_class=MySelectConnection) class TestDisconnectDuringConnectionProtocol(BlockingTestCaseBase): def test(self): """ BlockingConnection TCP/IP connection loss in CONNECTION_PROTOCOL """ fwd = ForwardServer( remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) fwd.start() self.addCleanup(lambda: fwd.stop() if fwd.running else None) class MySelectConnection(pika.SelectConnection): assert hasattr(pika.SelectConnection, '_on_connected') def _on_connected(self, *args, **kwargs): fwd.stop() return super(MySelectConnection, self)._on_connected( *args, **kwargs) with self.assertRaises(pika.exceptions.IncompatibleProtocolError): self._connect(PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]}, impl_class=MySelectConnection) class TestProcessDataEvents(BlockingTestCaseBase): def test(self): """BlockingConnection.process_data_events""" connection = self._connect() # Try with time_limit=0 start_time = time.time() 
connection.process_data_events(time_limit=0) elapsed = time.time() - start_time self.assertLess(elapsed, 0.25) # Try with time_limit=0.005 start_time = time.time() connection.process_data_events(time_limit=0.005) elapsed = time.time() - start_time self.assertGreaterEqual(elapsed, 0.005) self.assertLess(elapsed, 0.25) class TestConnectionRegisterForBlockAndUnblock(BlockingTestCaseBase): def test(self): """BlockingConnection register for Connection.Blocked/Unblocked""" connection = self._connect() # NOTE: I haven't figured out yet how to coerce RabbitMQ to emit # Connection.Block and Connection.Unblock from the test, so we'll # just call the registration functions for now, to make sure that # registration doesn't crash connection.add_on_connection_blocked_callback(lambda frame: None) blocked_buffer = [] evt = blocking_connection._ConnectionBlockedEvt( lambda f: blocked_buffer.append("blocked"), pika.frame.Method(1, pika.spec.Connection.Blocked('reason'))) repr(evt) evt.dispatch() self.assertEqual(blocked_buffer, ["blocked"]) unblocked_buffer = [] connection.add_on_connection_unblocked_callback(lambda frame: None) evt = blocking_connection._ConnectionUnblockedEvt( lambda f: unblocked_buffer.append("unblocked"), pika.frame.Method(1, pika.spec.Connection.Unblocked())) repr(evt) evt.dispatch() self.assertEqual(unblocked_buffer, ["unblocked"]) class TestBlockedConnectionTimeout(BlockingTestCaseBase): def test(self): """BlockingConnection Connection.Blocked timeout """ url = DEFAULT_URL + '&blocked_connection_timeout=0.001' conn = self._connect(url=url) # NOTE: I haven't figured out yet how to coerce RabbitMQ to emit # Connection.Block and Connection.Unblock from the test, so we'll # simulate it for now # Simulate Connection.Blocked conn._impl._on_connection_blocked(pika.frame.Method( 0, pika.spec.Connection.Blocked('TestBlockedConnectionTimeout'))) # Wait for connection teardown with self.assertRaises(pika.exceptions.ConnectionClosed) as excCtx: while True: 
conn.process_data_events(time_limit=1) self.assertEqual( excCtx.exception.args, (pika.connection.InternalCloseReasons.BLOCKED_CONNECTION_TIMEOUT, 'Blocked connection timeout expired')) class TestAddTimeoutRemoveTimeout(BlockingTestCaseBase): def test(self): """BlockingConnection.add_timeout and remove_timeout""" connection = self._connect() # Test timer completion start_time = time.time() rx_callback = [] timer_id = connection.add_timeout( 0.005, lambda: rx_callback.append(time.time())) while not rx_callback: connection.process_data_events(time_limit=None) self.assertEqual(len(rx_callback), 1) elapsed = time.time() - start_time self.assertLess(elapsed, 0.25) # Test removing triggered timeout connection.remove_timeout(timer_id) # Test aborted timer rx_callback = [] timer_id = connection.add_timeout( 0.001, lambda: rx_callback.append(time.time())) connection.remove_timeout(timer_id) connection.process_data_events(time_limit=0.1) self.assertFalse(rx_callback) # Make sure _TimerEvt repr doesn't crash evt = blocking_connection._TimerEvt(lambda: None) repr(evt) class TestRemoveTimeoutFromTimeoutCallback(BlockingTestCaseBase): def test(self): """BlockingConnection.remove_timeout from timeout callback""" connection = self._connect() # Test timer completion timer_id1 = connection.add_timeout(5, lambda: 0/0) rx_timer2 = [] def on_timer2(): connection.remove_timeout(timer_id1) connection.remove_timeout(timer_id2) rx_timer2.append(1) timer_id2 = connection.add_timeout(0, on_timer2) while not rx_timer2: connection.process_data_events(time_limit=None) self.assertNotIn(timer_id1, connection._impl.ioloop._poller._timeouts) self.assertFalse(connection._ready_events) class TestSleep(BlockingTestCaseBase): def test(self): """BlockingConnection.sleep""" connection = self._connect() # Try with duration=0 start_time = time.time() connection.sleep(duration=0) elapsed = time.time() - start_time self.assertLess(elapsed, 0.25) # Try with duration=0.005 start_time = time.time() 
connection.sleep(duration=0.005) elapsed = time.time() - start_time self.assertGreaterEqual(elapsed, 0.005) self.assertLess(elapsed, 0.25) class TestConnectionProperties(BlockingTestCaseBase): def test(self): """Test BlockingConnection properties""" connection = self._connect() self.assertTrue(connection.is_open) self.assertFalse(connection.is_closing) self.assertFalse(connection.is_closed) self.assertTrue(connection.basic_nack_supported) self.assertTrue(connection.consumer_cancel_notify_supported) self.assertTrue(connection.exchange_exchange_bindings_supported) self.assertTrue(connection.publisher_confirms_supported) connection.close() self.assertFalse(connection.is_open) self.assertFalse(connection.is_closing) self.assertTrue(connection.is_closed) class TestCreateAndCloseChannel(BlockingTestCaseBase): def test(self): """BlockingChannel: Create and close channel""" connection = self._connect() ch = connection.channel() self.assertIsInstance(ch, blocking_connection.BlockingChannel) self.assertTrue(ch.is_open) self.assertFalse(ch.is_closed) self.assertFalse(ch.is_closing) self.assertIs(ch.connection, connection) ch.close() self.assertTrue(ch.is_closed) self.assertFalse(ch.is_open) self.assertFalse(ch.is_closing) class TestExchangeDeclareAndDelete(BlockingTestCaseBase): def test(self): """BlockingChannel: Test exchange_declare and exchange_delete""" connection = self._connect() ch = connection.channel() name = "TestExchangeDeclareAndDelete_" + uuid.uuid1().hex # Declare a new exchange frame = ch.exchange_declare(name, exchange_type='direct') self.addCleanup(connection.channel().exchange_delete, name) self.assertIsInstance(frame.method, pika.spec.Exchange.DeclareOk) # Check if it exists by declaring it passively frame = ch.exchange_declare(name, passive=True) self.assertIsInstance(frame.method, pika.spec.Exchange.DeclareOk) # Delete the exchange frame = ch.exchange_delete(name) self.assertIsInstance(frame.method, pika.spec.Exchange.DeleteOk) # Verify that it's been 
deleted with self.assertRaises(pika.exceptions.ChannelClosed) as cm: ch.exchange_declare(name, passive=True) self.assertEqual(cm.exception.args[0], 404) class TestExchangeBindAndUnbind(BlockingTestCaseBase): def test(self): """BlockingChannel: Test exchange_bind and exchange_unbind""" connection = self._connect() ch = connection.channel() q_name = 'TestExchangeBindAndUnbind_q' + uuid.uuid1().hex src_exg_name = 'TestExchangeBindAndUnbind_src_exg_' + uuid.uuid1().hex dest_exg_name = 'TestExchangeBindAndUnbind_dest_exg_' + uuid.uuid1().hex routing_key = 'TestExchangeBindAndUnbind' # Place channel in publisher-acknowledgments mode so that we may test # whether the queue is reachable by publishing with mandatory=True res = ch.confirm_delivery() self.assertIsNone(res) # Declare both exchanges ch.exchange_declare(src_exg_name, exchange_type='direct') self.addCleanup(connection.channel().exchange_delete, src_exg_name) ch.exchange_declare(dest_exg_name, exchange_type='direct') self.addCleanup(connection.channel().exchange_delete, dest_exg_name) # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(self._connect().channel().queue_delete, q_name) # Bind the queue to the destination exchange ch.queue_bind(q_name, exchange=dest_exg_name, routing_key=routing_key) # Verify that the queue is unreachable without exchange-exchange binding with self.assertRaises(pika.exceptions.UnroutableError): ch.publish(src_exg_name, routing_key, body='', mandatory=True) # Bind the exchanges frame = ch.exchange_bind(destination=dest_exg_name, source=src_exg_name, routing_key=routing_key) self.assertIsInstance(frame.method, pika.spec.Exchange.BindOk) # Publish a message via the source exchange ch.publish(src_exg_name, routing_key, body='TestExchangeBindAndUnbind', mandatory=True) # Check that the queue now has one message self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=1) # Unbind the exchanges frame = 
ch.exchange_unbind(destination=dest_exg_name, source=src_exg_name, routing_key=routing_key) self.assertIsInstance(frame.method, pika.spec.Exchange.UnbindOk) # Verify that the queue is now unreachable via the source exchange with self.assertRaises(pika.exceptions.UnroutableError): ch.publish(src_exg_name, routing_key, body='', mandatory=True) class TestQueueDeclareAndDelete(BlockingTestCaseBase): def test(self): """BlockingChannel: Test queue_declare and queue_delete""" connection = self._connect() ch = connection.channel() q_name = 'TestQueueDeclareAndDelete_' + uuid.uuid1().hex # Declare a new queue frame = ch.queue_declare(q_name, auto_delete=True) self.addCleanup(self._connect().channel().queue_delete, q_name) self.assertIsInstance(frame.method, pika.spec.Queue.DeclareOk) # Check if it exists by declaring it passively frame = ch.queue_declare(q_name, passive=True) self.assertIsInstance(frame.method, pika.spec.Queue.DeclareOk) # Delete the queue frame = ch.queue_delete(q_name) self.assertIsInstance(frame.method, pika.spec.Queue.DeleteOk) # Verify that it's been deleted with self.assertRaises(pika.exceptions.ChannelClosed) as cm: ch.queue_declare(q_name, passive=True) self.assertEqual(cm.exception.args[0], 404) class TestPassiveQueueDeclareOfUnknownQueueRaisesChannelClosed( BlockingTestCaseBase): def test(self): """BlockingChannel: ChannelClosed raised when passive-declaring unknown queue""" # pylint: disable=C0301 connection = self._connect() ch = connection.channel() q_name = ("TestPassiveQueueDeclareOfUnknownQueueRaisesChannelClosed_q_" + uuid.uuid1().hex) with self.assertRaises(pika.exceptions.ChannelClosed) as ex_cm: ch.queue_declare(q_name, passive=True) self.assertEqual(ex_cm.exception.args[0], 404) class TestQueueBindAndUnbindAndPurge(BlockingTestCaseBase): def test(self): """BlockingChannel: Test queue_bind and queue_unbind""" connection = self._connect() ch = connection.channel() q_name = 'TestQueueBindAndUnbindAndPurge_q' + uuid.uuid1().hex exg_name = 
'TestQueueBindAndUnbindAndPurge_exg_' + uuid.uuid1().hex routing_key = 'TestQueueBindAndUnbindAndPurge' # Place channel in publisher-acknowledgments mode so that we may test # whether the queue is reachable by publishing with mandatory=True res = ch.confirm_delivery() self.assertIsNone(res) # Declare a new exchange ch.exchange_declare(exg_name, exchange_type='direct') self.addCleanup(connection.channel().exchange_delete, exg_name) # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(self._connect().channel().queue_delete, q_name) # Bind the queue to the exchange using routing key frame = ch.queue_bind(q_name, exchange=exg_name, routing_key=routing_key) self.assertIsInstance(frame.method, pika.spec.Queue.BindOk) # Check that the queue is empty frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 0) # Deposit a message in the queue ch.publish(exg_name, routing_key, body='TestQueueBindAndUnbindAndPurge', mandatory=True) # Check that the queue now has one message frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 1) # Unbind the queue frame = ch.queue_unbind(queue=q_name, exchange=exg_name, routing_key=routing_key) self.assertIsInstance(frame.method, pika.spec.Queue.UnbindOk) # Verify that the queue is now unreachable via that binding with self.assertRaises(pika.exceptions.UnroutableError): ch.publish(exg_name, routing_key, body='TestQueueBindAndUnbindAndPurge-2', mandatory=True) # Purge the queue and verify that 1 message was purged frame = ch.queue_purge(q_name) self.assertIsInstance(frame.method, pika.spec.Queue.PurgeOk) self.assertEqual(frame.method.message_count, 1) # Verify that the queue is now empty frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 0) class TestBasicGet(BlockingTestCaseBase): def tearDown(self): LOGGER.info('%s TEARING DOWN (%s)', datetime.utcnow(), self) def test(self): 
"""BlockingChannel.basic_get""" LOGGER.info('%s STARTED (%s)', datetime.utcnow(), self) connection = self._connect() LOGGER.info('%s CONNECTED (%s)', datetime.utcnow(), self) ch = connection.channel() LOGGER.info('%s CREATED CHANNEL (%s)', datetime.utcnow(), self) q_name = 'TestBasicGet_q' + uuid.uuid1().hex # Place channel in publisher-acknowledgments mode so that the message # may be delivered synchronously to the queue by publishing it with # mandatory=True ch.confirm_delivery() LOGGER.info('%s ENABLED PUB-ACKS (%s)', datetime.utcnow(), self) # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(self._connect().channel().queue_delete, q_name) LOGGER.info('%s DECLARED QUEUE (%s)', datetime.utcnow(), self) # Verify result of getting a message from an empty queue msg = ch.basic_get(q_name, no_ack=False) self.assertTupleEqual(msg, (None, None, None)) LOGGER.info('%s GOT FROM EMPTY QUEUE (%s)', datetime.utcnow(), self) body = 'TestBasicGet' # Deposit a message in the queue via default exchange ch.publish(exchange='', routing_key=q_name, body=body, mandatory=True) LOGGER.info('%s PUBLISHED (%s)', datetime.utcnow(), self) # Get the message (method, properties, body) = ch.basic_get(q_name, no_ack=False) LOGGER.info('%s GOT FROM NON-EMPTY QUEUE (%s)', datetime.utcnow(), self) self.assertIsInstance(method, pika.spec.Basic.GetOk) self.assertEqual(method.delivery_tag, 1) self.assertFalse(method.redelivered) self.assertEqual(method.exchange, '') self.assertEqual(method.routing_key, q_name) self.assertEqual(method.message_count, 0) self.assertIsInstance(properties, pika.BasicProperties) self.assertIsNone(properties.headers) self.assertEqual(body, as_bytes(body)) # Ack it ch.basic_ack(delivery_tag=method.delivery_tag) LOGGER.info('%s ACKED (%s)', datetime.utcnow(), self) # Verify that the queue is now empty self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=0) class TestBasicReject(BlockingTestCaseBase): def 
test(self): """BlockingChannel.basic_reject""" connection = self._connect() ch = connection.channel() q_name = 'TestBasicReject_q' + uuid.uuid1().hex # Place channel in publisher-acknowledgments mode so that the message # may be delivered synchronously to the queue by publishing it with # mandatory=True ch.confirm_delivery() # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(self._connect().channel().queue_delete, q_name) # Deposit two messages in the queue via default exchange ch.publish(exchange='', routing_key=q_name, body='TestBasicReject1', mandatory=True) ch.publish(exchange='', routing_key=q_name, body='TestBasicReject2', mandatory=True) # Get the messages (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False) self.assertEqual(rx_body, as_bytes('TestBasicReject1')) (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False) self.assertEqual(rx_body, as_bytes('TestBasicReject2')) # Nack the second message ch.basic_reject(rx_method.delivery_tag, requeue=True) # Verify that exactly one message is present in the queue, namely the # second one self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=1) (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False) self.assertEqual(rx_body, as_bytes('TestBasicReject2')) class TestBasicRejectNoRequeue(BlockingTestCaseBase): def test(self): """BlockingChannel.basic_reject with requeue=False""" connection = self._connect() ch = connection.channel() q_name = 'TestBasicRejectNoRequeue_q' + uuid.uuid1().hex # Place channel in publisher-acknowledgments mode so that the message # may be delivered synchronously to the queue by publishing it with # mandatory=True ch.confirm_delivery() # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(self._connect().channel().queue_delete, q_name) # Deposit two messages in the queue via default exchange ch.publish(exchange='', routing_key=q_name, body='TestBasicRejectNoRequeue1', mandatory=True) 
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicRejectNoRequeue2', mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicRejectNoRequeue1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicRejectNoRequeue2'))

        # Nack the second message
        ch.basic_reject(rx_method.delivery_tag, requeue=False)

        # Verify that no messages are present in the queue
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)


class TestBasicNack(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_nack single message"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicNack_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNack1', mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNack2', mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNack1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNack2'))

        # Nack the second message
        ch.basic_nack(rx_method.delivery_tag, multiple=False, requeue=True)

        # Verify that exactly one message is present in the queue, namely the
        # second one
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=1)
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNack2'))


class TestBasicNackNoRequeue(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_nack with requeue=False"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicNackNoRequeue_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNackNoRequeue1', mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNackNoRequeue2', mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackNoRequeue1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackNoRequeue2'))

        # Nack the second message
        ch.basic_nack(rx_method.delivery_tag, requeue=False)

        # Verify that no messages are present in the queue
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)


class TestBasicNackMultiple(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_nack multiple messages"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicNackMultiple_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNackMultiple1', mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNackMultiple2', mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple2'))

        # Nack both messages via the "multiple" option
        ch.basic_nack(rx_method.delivery_tag, multiple=True, requeue=True)

        # Verify that both messages are present in the queue
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=2)
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple1'))
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple2'))


class TestBasicRecoverWithRequeue(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_recover with requeue=True.

        NOTE: the requeue=False option is not supported by RabbitMQ broker as
        of this writing (using RabbitMQ 3.5.1)
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestBasicRecoverWithRequeue_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicRecoverWithRequeue1', mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicRecoverWithRequeue2', mandatory=True)

        rx_messages = []
        num_messages = 0
        for msg in ch.consume(q_name, no_ack=False):
            num_messages += 1

            if num_messages == 2:
                ch.basic_recover(requeue=True)

            if num_messages > 2:
                rx_messages.append(msg)

            if num_messages == 4:
                break
        else:
            self.fail('consumer aborted prematurely')
        # Get the messages
        (_, _, rx_body) = rx_messages[0]
        self.assertEqual(rx_body, as_bytes('TestBasicRecoverWithRequeue1'))

        (_, _, rx_body) = rx_messages[1]
        self.assertEqual(rx_body, as_bytes('TestBasicRecoverWithRequeue2'))


class TestTxCommit(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.tx_commit"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestTxCommit_q' + uuid.uuid1().hex

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Select standard transaction mode
        frame = ch.tx_select()
        self.assertIsInstance(frame.method, pika.spec.Tx.SelectOk)

        # Deposit a message in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestTxCommit1', mandatory=True)

        # Verify that queue is still empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Commit the transaction
        ch.tx_commit()

        # Verify that the queue has the expected message
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 1)

        (_, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestTxCommit1'))


class TestTxRollback(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.tx_rollback"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestTxRollback_q' + uuid.uuid1().hex

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Select standard transaction mode
        frame = ch.tx_select()
        self.assertIsInstance(frame.method, pika.spec.Tx.SelectOk)

        # Deposit a message in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestTxRollback1', mandatory=True)

        # Verify that queue is still empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Roll back the transaction
        ch.tx_rollback()

        # Verify that the queue continues to be empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicConsumeFromUnknownQueueRaisesChannelClosed(BlockingTestCaseBase):

    def test(self):
        """ChannelClosed raised when consuming from unknown queue"""
        connection = self._connect()
        ch = connection.channel()

        q_name = ("TestBasicConsumeFromUnknownQueueRaisesChannelClosed_q_" +
                  uuid.uuid1().hex)

        with self.assertRaises(pika.exceptions.ChannelClosed) as ex_cm:
            ch.basic_consume(lambda *args: None, q_name)

        self.assertEqual(ex_cm.exception.args[0], 404)


class TestPublishAndBasicPublishWithPubacksUnroutable(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel.publish and basic_publish unroutable message with pubacks"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        exg_name = ('TestPublishAndBasicPublishUnroutable_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestPublishAndBasicPublishUnroutable'

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        res = ch.confirm_delivery()
        self.assertIsNone(res)

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Verify unroutable message handling using basic_publish
        res = ch.basic_publish(exg_name, routing_key=routing_key, body='',
                               mandatory=True)
        self.assertEqual(res, False)

        # Verify unroutable message handling using publish
        msg2_headers = dict(
            test_name='TestPublishAndBasicPublishWithPubacksUnroutable')
        msg2_properties = pika.spec.BasicProperties(headers=msg2_headers)
        with self.assertRaises(pika.exceptions.UnroutableError) as cm:
            ch.publish(exg_name, routing_key=routing_key, body='',
                       properties=msg2_properties, mandatory=True)
        (msg,) = cm.exception.messages
        self.assertIsInstance(msg, blocking_connection.ReturnedMessage)
        self.assertIsInstance(msg.method, pika.spec.Basic.Return)
        self.assertEqual(msg.method.reply_code, 312)
        self.assertEqual(msg.method.exchange, exg_name)
        self.assertEqual(msg.method.routing_key, routing_key)
        self.assertIsInstance(msg.properties, pika.BasicProperties)
        self.assertEqual(msg.properties.headers, msg2_headers)
        self.assertEqual(msg.body, as_bytes(''))


class TestConfirmDeliveryAfterUnroutableMessage(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel.confirm_delivery following unroutable message"""
        connection = self._connect()

        ch = connection.channel()

        exg_name = ('TestConfirmDeliveryAfterUnroutableMessage_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestConfirmDeliveryAfterUnroutableMessage'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Register on-return callback
        returned_messages = []
        ch.add_on_return_callback(lambda *args: returned_messages.append(args))

        # Emit unroutable message without pubacks
        res = ch.basic_publish(exg_name, routing_key=routing_key, body='',
                               mandatory=True)
        self.assertEqual(res, True)

        # Select delivery confirmations
        ch.confirm_delivery()

        # Verify that unroutable message is in pending events
        self.assertEqual(len(ch._pending_events), 1)
        self.assertIsInstance(ch._pending_events[0],
                              blocking_connection._ReturnedMessageEvt)
        # Verify that repr of _ReturnedMessageEvt instance does not crash
        repr(ch._pending_events[0])

        # Dispatch events
        connection.process_data_events()

        self.assertEqual(len(ch._pending_events), 0)

        # Verify that unroutable message was dispatched
        ((channel, method, properties, body,),) = returned_messages
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes(''))


class TestUnroutableMessagesReturnedInNonPubackMode(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel: unroutable messages are returned in non-puback mode"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        exg_name = (
            'TestUnroutableMessageReturnedInNonPubackMode_exg_' +
            uuid.uuid1().hex)
        routing_key = 'TestUnroutableMessageReturnedInNonPubackMode'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Register on-return callback
        returned_messages = []
        ch.add_on_return_callback(
            lambda *args: returned_messages.append(args))

        # Emit unroutable messages without pubacks
        ch.publish(exg_name, routing_key=routing_key, body='msg1',
                   mandatory=True)
        ch.publish(exg_name, routing_key=routing_key, body='msg2',
                   mandatory=True)

        # Process I/O until Basic.Return are dispatched
        while len(returned_messages) < 2:
            connection.process_data_events()

        self.assertEqual(len(returned_messages), 2)

        self.assertEqual(len(ch._pending_events), 0)

        # Verify returned messages
        (channel, method, properties, body,) = returned_messages[0]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg1'))

        (channel, method, properties, body,) = returned_messages[1]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg2'))


class TestUnroutableMessageReturnedInPubackMode(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel: unroutable messages are returned in puback mode"""
        connection = self._connect()

        ch = connection.channel()

        exg_name = (
            'TestUnroutableMessageReturnedInPubackMode_exg_' +
            uuid.uuid1().hex)
        routing_key = 'TestUnroutableMessageReturnedInPubackMode'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Select delivery confirmations
        ch.confirm_delivery()

        # Register on-return callback
        returned_messages = []
        ch.add_on_return_callback(
            lambda *args: returned_messages.append(args))

        # Emit unroutable messages with pubacks
        res = ch.basic_publish(exg_name, routing_key=routing_key, body='msg1',
                               mandatory=True)
        self.assertEqual(res, False)

        res = ch.basic_publish(exg_name, routing_key=routing_key, body='msg2',
                               mandatory=True)
        self.assertEqual(res, False)

        # Verify that unroutable messages are already in pending events
        self.assertEqual(len(ch._pending_events), 2)
        self.assertIsInstance(ch._pending_events[0],
                              blocking_connection._ReturnedMessageEvt)
        self.assertIsInstance(ch._pending_events[1],
                              blocking_connection._ReturnedMessageEvt)
        # Verify that repr of _ReturnedMessageEvt instance does not crash
        repr(ch._pending_events[0])
        repr(ch._pending_events[1])

        # Dispatch events
        connection.process_data_events()

        self.assertEqual(len(ch._pending_events), 0)

        # Verify returned messages
        (channel, method, properties, body,) = returned_messages[0]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg1'))

        (channel, method, properties, body,) = returned_messages[1]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg2'))


class TestBasicPublishDeliveredWhenPendingUnroutable(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel.basic_publish msg delivered despite pending unroutable message"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestBasicPublishDeliveredWhenPendingUnroutable_q' +
                  uuid.uuid1().hex)
        exg_name = ('TestBasicPublishDeliveredWhenPendingUnroutable_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestBasicPublishDeliveredWhenPendingUnroutable'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Bind the queue to the exchange using routing key
        ch.queue_bind(q_name, exchange=exg_name, routing_key=routing_key)

        # Attempt to send an unroutable message in the queue via basic_publish
        res = ch.basic_publish(exg_name, routing_key='',
                               body='unroutable-message',
                               mandatory=True)
        self.assertEqual(res, True)

        # Flush channel to force Basic.Return
        connection.channel().close()

        # Deposit a routable message in the queue
        res = ch.basic_publish(exg_name, routing_key=routing_key,
                               body='routable-message',
                               mandatory=True)
        self.assertEqual(res, True)

        # Wait for the queue to get the routable message
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=1)

        msg = ch.basic_get(q_name)

        # Check the first message
        self.assertIsInstance(msg, tuple)
        rx_method, rx_properties, rx_body = msg
        self.assertIsInstance(rx_method, pika.spec.Basic.GetOk)
        self.assertEqual(rx_method.delivery_tag, 1)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_body, as_bytes('routable-message'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        # Ack the message
        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Verify that the queue is now empty
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)


class TestPublishAndConsumeWithPubacksAndQosOfOne(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914,R0915
        """BlockingChannel.basic_publish, publish, basic_consume, QoS, \
        Basic.Cancel from broker
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestPublishAndConsumeAndQos_q' + uuid.uuid1().hex
        exg_name = 'TestPublishAndConsumeAndQos_exg_' + uuid.uuid1().hex
        routing_key = 'TestPublishAndConsumeAndQos'

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        res = ch.confirm_delivery()
        self.assertIsNone(res)

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Bind the queue to the exchange using routing key
        ch.queue_bind(q_name, exchange=exg_name, routing_key=routing_key)

        # Deposit a message in the queue via basic_publish
        msg1_headers = dict(
            test_name='TestPublishAndConsumeWithPubacksAndQosOfOne')
        msg1_properties = pika.spec.BasicProperties(headers=msg1_headers)
        res = ch.basic_publish(exg_name, routing_key=routing_key,
                               body='via-basic_publish',
                               properties=msg1_properties,
                               mandatory=True)
        self.assertEqual(res, True)

        # Deposit another message in the queue via publish
        ch.publish(exg_name, routing_key, body='via-publish',
                   mandatory=True)

        # Check that the queue now has two messages
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 2)

        # Configure QoS for one message
        ch.basic_qos(prefetch_size=0, prefetch_count=1, all_channels=False)

        # Create a consumer
        rx_messages = []
        consumer_tag = ch.basic_consume(
            lambda *args: rx_messages.append(args),
            q_name,
            no_ack=False,
            exclusive=False,
            arguments=None)

        # Wait for first message to arrive
        while not rx_messages:
            connection.process_data_events(time_limit=None)

        self.assertEqual(len(rx_messages), 1)

        # Check the first message
        msg = rx_messages[0]
        self.assertIsInstance(msg, tuple)
        rx_ch, rx_method, rx_properties, rx_body = msg
        self.assertIs(rx_ch, ch)
        self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
        self.assertEqual(rx_method.consumer_tag, consumer_tag)
        self.assertEqual(rx_method.delivery_tag, 1)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_properties.headers, msg1_headers)
        self.assertEqual(rx_body, as_bytes('via-basic_publish'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        # Ack the message so that the next one can arrive (we configured QoS
        # with prefetch_count=1)
        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Get the second message
        while len(rx_messages) < 2:
            connection.process_data_events(time_limit=None)

        self.assertEqual(len(rx_messages), 2)

        msg = rx_messages[1]
        self.assertIsInstance(msg, tuple)
        rx_ch, rx_method, rx_properties, rx_body = msg
        self.assertIs(rx_ch, ch)
        self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
        self.assertEqual(rx_method.consumer_tag, consumer_tag)
        self.assertEqual(rx_method.delivery_tag, 2)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_body, as_bytes('via-publish'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Verify that the queue is now empty
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)

        # Attempt to consume again with a short timeout
        connection.process_data_events(time_limit=0.005)
        self.assertEqual(len(rx_messages), 2)

        # Delete the queue and wait for consumer cancellation
        rx_cancellations = []
        ch.add_on_cancel_callback(rx_cancellations.append)
        ch.queue_delete(q_name)
        ch.start_consuming()

        self.assertEqual(len(rx_cancellations), 1)
        frame, = rx_cancellations
        self.assertEqual(frame.method.consumer_tag, consumer_tag)


class TestTwoBasicConsumersOnSameChannel(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel: two basic_consume consumers on same channel
        """
        connection = self._connect()

        ch = connection.channel()

        exg_name = 'TestPublishAndConsumeAndQos_exg_' + uuid.uuid1().hex
        q1_name = 'TestTwoBasicConsumersOnSameChannel_q1' + uuid.uuid1().hex
        q2_name = 'TestTwoBasicConsumersOnSameChannel_q2' + uuid.uuid1().hex
        q1_routing_key = 'TestTwoBasicConsumersOnSameChannel1'
        q2_routing_key = 'TestTwoBasicConsumersOnSameChannel2'

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        ch.confirm_delivery()

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Declare the two new queues and bind them to the exchange
        ch.queue_declare(q1_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q1_name)
        ch.queue_bind(q1_name, exchange=exg_name,
                      routing_key=q1_routing_key)

        ch.queue_declare(q2_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q2_name)
        ch.queue_bind(q2_name, exchange=exg_name,
                      routing_key=q2_routing_key)

        # Deposit messages in the queues
        q1_tx_message_bodies = ['q1_message+%s' % (i,)
                                for i in pika.compat.xrange(100)]
        for message_body in q1_tx_message_bodies:
            ch.publish(exg_name, q1_routing_key, body=message_body,
                       mandatory=True)
        q2_tx_message_bodies = ['q2_message+%s' % (i,)
                                for i in pika.compat.xrange(150)]
        for message_body in q2_tx_message_bodies:
            ch.publish(exg_name, q2_routing_key, body=message_body,
                       mandatory=True)

        # Create the consumers
        q1_rx_messages = []
        q1_consumer_tag = ch.basic_consume(
            lambda *args: q1_rx_messages.append(args),
            q1_name,
            no_ack=False,
            exclusive=False,
            arguments=None)

        q2_rx_messages = []
        q2_consumer_tag = ch.basic_consume(
            lambda *args: q2_rx_messages.append(args),
            q2_name,
            no_ack=False,
            exclusive=False,
            arguments=None)

        # Wait for all messages to be delivered
        while (len(q1_rx_messages) < len(q1_tx_message_bodies) or
               len(q2_rx_messages) < len(q2_tx_message_bodies)):
            connection.process_data_events(time_limit=None)

        self.assertEqual(len(q2_rx_messages), len(q2_tx_message_bodies))

        # Verify the messages
        def validate_messages(rx_messages,
                              routing_key,
                              consumer_tag,
                              tx_message_bodies):
            self.assertEqual(len(rx_messages), len(tx_message_bodies))

            for msg, expected_body in zip(rx_messages, tx_message_bodies):
                self.assertIsInstance(msg, tuple)
                rx_ch, rx_method, rx_properties, rx_body = msg
                self.assertIs(rx_ch, ch)
                self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
                self.assertEqual(rx_method.consumer_tag, consumer_tag)
                self.assertFalse(rx_method.redelivered)
                self.assertEqual(rx_method.exchange, exg_name)
                self.assertEqual(rx_method.routing_key, routing_key)

                self.assertIsInstance(rx_properties, pika.BasicProperties)
                self.assertEqual(rx_body, as_bytes(expected_body))

        # Validate q1 consumed messages
        validate_messages(rx_messages=q1_rx_messages,
                          routing_key=q1_routing_key,
                          consumer_tag=q1_consumer_tag,
                          tx_message_bodies=q1_tx_message_bodies)

        # Validate q2 consumed messages
        validate_messages(rx_messages=q2_rx_messages,
                          routing_key=q2_routing_key,
                          consumer_tag=q2_consumer_tag,
                          tx_message_bodies=q2_tx_message_bodies)

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)


class TestBasicCancelPurgesPendingConsumerCancellationEvt(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_cancel purges pending _ConsumerCancellationEvt"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestBasicCancelPurgesPendingConsumerCancellationEvt_q' +
                  uuid.uuid1().hex)

        ch.queue_declare(q_name)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        ch.publish('', routing_key=q_name, body='via-publish',
                   mandatory=True)

        # Create a consumer
        rx_messages = []
        consumer_tag = ch.basic_consume(
            lambda *args: rx_messages.append(args),
            q_name,
            no_ack=False,
            exclusive=False,
            arguments=None)

        # Wait for the published message to arrive, but don't consume it
        while not ch._pending_events:
            # Issue synchronous command that forces processing of incoming I/O
            connection.channel().close()

        self.assertEqual(len(ch._pending_events), 1)
        self.assertIsInstance(ch._pending_events[0],
                              blocking_connection._ConsumerDeliveryEvt)

        # Delete the queue and wait for broker-initiated consumer cancellation
        ch.queue_delete(q_name)
        while len(ch._pending_events) < 2:
            # Issue synchronous command that forces processing of incoming I/O
            connection.channel().close()

        self.assertEqual(len(ch._pending_events), 2)
        self.assertIsInstance(ch._pending_events[1],
                              blocking_connection._ConsumerCancellationEvt)

        # Issue consumer cancellation and verify that the pending
        # _ConsumerCancellationEvt instance was removed
        messages = ch.basic_cancel(consumer_tag)
        self.assertEqual(messages, [])

        self.assertEqual(len(ch._pending_events), 0)


class TestBasicPublishWithoutPubacks(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914,R0915
        """BlockingChannel.basic_publish without pubacks"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicPublishWithoutPubacks_q' + uuid.uuid1().hex
        exg_name = 'TestBasicPublishWithoutPubacks_exg_' + uuid.uuid1().hex
        routing_key = 'TestBasicPublishWithoutPubacks'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Bind the queue to the exchange using routing key
        ch.queue_bind(q_name, exchange=exg_name, routing_key=routing_key)

        # Deposit a message in the queue via basic_publish and mandatory=True
        msg1_headers = dict(
            test_name='TestBasicPublishWithoutPubacks')
        msg1_properties = pika.spec.BasicProperties(headers=msg1_headers)
        res = ch.basic_publish(exg_name, routing_key=routing_key,
                               body='via-basic_publish_mandatory=True',
                               properties=msg1_properties,
                               mandatory=True)
        self.assertEqual(res, True)

        # Deposit a message in the queue via basic_publish and mandatory=False
        res = ch.basic_publish(exg_name, routing_key=routing_key,
                               body='via-basic_publish_mandatory=False',
                               mandatory=False)
        self.assertEqual(res, True)

        # Wait for the messages to arrive in queue
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=2)

        # Create a consumer
        rx_messages = []
        consumer_tag = ch.basic_consume(
            lambda *args: rx_messages.append(args),
            q_name,
            no_ack=False,
            exclusive=False,
            arguments=None)

        # Wait for first message to arrive
        while not rx_messages:
            connection.process_data_events(time_limit=None)

        self.assertGreaterEqual(len(rx_messages), 1)

        # Check the first message
        msg = rx_messages[0]
        self.assertIsInstance(msg, tuple)
        rx_ch, rx_method, rx_properties, rx_body = msg
        self.assertIs(rx_ch, ch)
        self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
        self.assertEqual(rx_method.consumer_tag, consumer_tag)
        self.assertEqual(rx_method.delivery_tag, 1)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_properties.headers, msg1_headers)
        self.assertEqual(rx_body, as_bytes('via-basic_publish_mandatory=True'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        # Ack the message so that the next one can arrive
        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Get the second message
        while len(rx_messages) < 2:
            connection.process_data_events(time_limit=None)

        self.assertEqual(len(rx_messages), 2)

        msg = rx_messages[1]
        self.assertIsInstance(msg, tuple)
        rx_ch, rx_method, rx_properties, rx_body = msg
        self.assertIs(rx_ch, ch)
        self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
        self.assertEqual(rx_method.consumer_tag, consumer_tag)
        self.assertEqual(rx_method.delivery_tag, 2)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_body,
                         as_bytes('via-basic_publish_mandatory=False'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Verify that the queue is now empty
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)

        # Attempt to consume again with a short timeout
        connection.process_data_events(time_limit=0.005)
        self.assertEqual(len(rx_messages), 2)


class TestPublishFromBasicConsumeCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_publish from basic_consume callback
        """
        connection = self._connect()

        ch = connection.channel()

        src_q_name = (
            'TestPublishFromBasicConsumeCallback_src_q' + uuid.uuid1().hex)
        dest_q_name = (
            'TestPublishFromBasicConsumeCallback_dest_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        ch.confirm_delivery()

        # Declare source and destination queues
        ch.queue_declare(src_q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, src_q_name)
        ch.queue_declare(dest_q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, dest_q_name)

        # Deposit a message in the source queue
        ch.publish('',
                   routing_key=src_q_name,
                   body='via-publish',
                   mandatory=True)

        # Create a consumer
        def on_consume(channel, method, props, body):
            channel.publish(
                '', routing_key=dest_q_name, body=body,
                properties=props, mandatory=True)
            channel.basic_ack(method.delivery_tag)

        ch.basic_consume(on_consume,
                         src_q_name,
                         no_ack=False,
                         exclusive=False,
                         arguments=None)

        # Consume from destination queue
        for _, _, rx_body in ch.consume(dest_q_name, no_ack=True):
            self.assertEqual(rx_body, as_bytes('via-publish'))
            break
        else:
            self.fail('failed to consume a message from destination q')


class TestStopConsumingFromBasicConsumeCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.stop_consuming from basic_consume callback
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestStopConsumingFromBasicConsumeCallback_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        ch.confirm_delivery()

        # Declare the queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(connection.channel().queue_delete, q_name)

        # Deposit two messages in the queue
        ch.publish('',
                   routing_key=q_name,
                   body='via-publish1',
                   mandatory=True)

        ch.publish('',
                   routing_key=q_name,
                   body='via-publish2',
                   mandatory=True)

        # Create a consumer
        def on_consume(channel, method, props, body):  # pylint: disable=W0613
            channel.stop_consuming()
            channel.basic_ack(method.delivery_tag)

        ch.basic_consume(on_consume,
                         q_name,
                         no_ack=False,
                         exclusive=False,
                         arguments=None)

        ch.start_consuming()

        ch.close()

        ch = connection.channel()

        # Verify that only the second message is present in the queue
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish2'))

        msg = ch.basic_get(q_name)
        self.assertTupleEqual(msg, (None, None, None))


class TestCloseChannelFromBasicConsumeCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.close from basic_consume callback
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestCloseChannelFromBasicConsumeCallback_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        ch.confirm_delivery()

        # Declare the queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(connection.channel().queue_delete, q_name)

        # Deposit two messages in the queue
        ch.publish('',
                   routing_key=q_name,
                   body='via-publish1',
                   mandatory=True)

        ch.publish('',
                   routing_key=q_name,
                   body='via-publish2',
                   mandatory=True)

        # Create a consumer
        def on_consume(channel, method, props, body):  # pylint: disable=W0613
            channel.close()

        ch.basic_consume(on_consume,
                         q_name,
                         no_ack=False,
                         exclusive=False,
                         arguments=None)

        ch.start_consuming()

        self.assertTrue(ch.is_closed)

        # Verify that both messages are present in the queue
        ch = connection.channel()
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish1'))
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish2'))


class TestCloseConnectionFromBasicConsumeCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.close from basic_consume callback
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestCloseConnectionFromBasicConsumeCallback_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        ch.confirm_delivery()

        # Declare the queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue
        ch.publish('',
                   routing_key=q_name,
                   body='via-publish1',
                   mandatory=True)

        ch.publish('',
                   routing_key=q_name,
                   body='via-publish2',
                   mandatory=True)

        # Create a consumer
        def on_consume(channel, method, props, body):  # pylint: disable=W0613
            connection.close()

        ch.basic_consume(on_consume,
                         q_name,
                         no_ack=False,
                         exclusive=False,
                         arguments=None)

        ch.start_consuming()

        self.assertTrue(ch.is_closed)
        self.assertTrue(connection.is_closed)

        # Verify that both messages are present in the queue
        ch = self._connect().channel()
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish1'))
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish2'))


class TestNonPubAckPublishAndConsumeHugeMessage(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.publish/consume huge message"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestPublishAndConsumeHugeMessage_q' + uuid.uuid1().hex
        body = 'a' * 1000000

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Publish a message to the queue by way of default exchange
        ch.publish(exchange='', routing_key=q_name, body=body)
        LOGGER.info('Published message body size=%s', len(body))

        # Consume the message
        for rx_method, rx_props, rx_body in ch.consume(q_name,
                                                       no_ack=False,
                                                       exclusive=False,
                                                       arguments=None):
            self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
            self.assertEqual(rx_method.delivery_tag, 1)
            self.assertFalse(rx_method.redelivered)
            self.assertEqual(rx_method.exchange, '')
            self.assertEqual(rx_method.routing_key, q_name)

            self.assertIsInstance(rx_props, pika.BasicProperties)
            self.assertEqual(rx_body, as_bytes(body))

            # Ack the message
            ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

            break

        # There shouldn't be any more events now
        self.assertFalse(ch._queue_consumer_generator.pending_events)

        # Verify that the queue is now empty
        ch.close()
        ch = connection.channel()
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)


class TestNonPubackPublishAndConsumeManyMessages(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel non-pub-ack publish/consume many messages"""
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestNonPubackPublishAndConsumeManyMessages_q' +
                  uuid.uuid1().hex)
        body = 'b' * 1024

        num_messages_to_publish = 500

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        for _ in pika.compat.xrange(num_messages_to_publish):
            # Publish a message to the queue by way of default exchange
            ch.publish(exchange='', routing_key=q_name, body=body)

        # Consume the messages
        num_consumed = 0
        for rx_method, rx_props, rx_body in ch.consume(q_name,
                                                       no_ack=False,
                                                       exclusive=False,
                                                       arguments=None):
            num_consumed += 1
            self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
            self.assertEqual(rx_method.delivery_tag, num_consumed)
            self.assertFalse(rx_method.redelivered)
            self.assertEqual(rx_method.exchange, '')
            self.assertEqual(rx_method.routing_key, q_name)

            self.assertIsInstance(rx_props, pika.BasicProperties)
            self.assertEqual(rx_body, as_bytes(body))

            # Ack the message
            ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

            if num_consumed >= num_messages_to_publish:
                break

        # There shouldn't be any more events now
        self.assertFalse(ch._queue_consumer_generator.pending_events)

        ch.close()

        self.assertIsNone(ch._queue_consumer_generator)

        # Verify that the queue is now empty
        ch = connection.channel()
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)


class TestBasicCancelWithNonAckableConsumer(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel user cancels non-ackable consumer via basic_cancel"""
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestBasicCancelWithNonAckableConsumer_q' + uuid.uuid1().hex)

        body1 = 'a' * 1024
        body2 = 'b' * 2048

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Publish two messages to the queue by way of default exchange
        ch.publish(exchange='',
routing_key=q_name, body=body1) ch.publish(exchange='', routing_key=q_name, body=body2) # Wait for queue to contain both messages self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=2) # Create a non-ackable consumer consumer_tag = ch.basic_consume(lambda *x: None, q_name, no_ack=True, exclusive=False, arguments=None) # Wait for all messages to be sent by broker to client self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=0) # Cancel the consumer messages = ch.basic_cancel(consumer_tag) # Both messages should have been on their way when we cancelled self.assertEqual(len(messages), 2) _, _, rx_body1 = messages[0] self.assertEqual(rx_body1, as_bytes(body1)) _, _, rx_body2 = messages[1] self.assertEqual(rx_body2, as_bytes(body2)) ch.close() ch = connection.channel() # Verify that the queue is now empty frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 0) class TestBasicCancelWithAckableConsumer(BlockingTestCaseBase): def test(self): """BlockingChannel user cancels ackable consumer via basic_cancel""" connection = self._connect() ch = connection.channel() q_name = ( 'TestBasicCancelWithAckableConsumer_q' + uuid.uuid1().hex) body1 = 'a' * 1024 body2 = 'b' * 2048 # Declare a new queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(self._connect().channel().queue_delete, q_name) # Publish two messages to the queue by way of default exchange ch.publish(exchange='', routing_key=q_name, body=body1) ch.publish(exchange='', routing_key=q_name, body=body2) # Wait for queue to contain both messages self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=2) # Create an ackable consumer consumer_tag = ch.basic_consume(lambda *x: None, q_name, no_ack=False, exclusive=False, arguments=None) # Wait for all messages to be sent by broker to client self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, 
expected_count=0) # Cancel the consumer messages = ch.basic_cancel(consumer_tag) # Both messages should have been on their way when we cancelled self.assertEqual(len(messages), 0) ch.close() ch = connection.channel() # Verify that canceling the ackable consumer restored both messages self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=2) class TestUnackedMessageAutoRestoredToQueueOnChannelClose(BlockingTestCaseBase): def test(self): """BlockingChannel unacked message restored to q on channel close """ connection = self._connect() ch = connection.channel() q_name = ('TestUnackedMessageAutoRestoredToQueueOnChannelClose_q' + uuid.uuid1().hex) body1 = 'a' * 1024 body2 = 'b' * 2048 # Declare a new queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(self._connect().channel().queue_delete, q_name) # Publish two messages to the queue by way of default exchange ch.publish(exchange='', routing_key=q_name, body=body1) ch.publish(exchange='', routing_key=q_name, body=body2) # Consume the events, but don't ack rx_messages = [] ch.basic_consume(lambda *args: rx_messages.append(args), q_name, no_ack=False, exclusive=False, arguments=None) while len(rx_messages) != 2: connection.process_data_events(time_limit=None) self.assertEqual(rx_messages[0][1].delivery_tag, 1) self.assertEqual(rx_messages[1][1].delivery_tag, 2) # Verify no more ready messages in queue frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 0) # Closing channel should restore messages back to queue ch.close() # Verify that there are two messages in q now ch = connection.channel() self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=2) class TestNoAckMessageNotRestoredToQueueOnChannelClose(BlockingTestCaseBase): def test(self): """BlockingChannel unacked message restored to q on channel close """ connection = self._connect() ch = connection.channel() q_name = 
('TestNoAckMessageNotRestoredToQueueOnChannelClose_q' + uuid.uuid1().hex) body1 = 'a' * 1024 body2 = 'b' * 2048 # Declare a new queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(self._connect().channel().queue_delete, q_name) # Publish two messages to the queue by way of default exchange ch.publish(exchange='', routing_key=q_name, body=body1) ch.publish(exchange='', routing_key=q_name, body=body2) # Consume, but don't ack num_messages = 0 for rx_method, _, _ in ch.consume(q_name, no_ack=True, exclusive=False): num_messages += 1 self.assertEqual(rx_method.delivery_tag, num_messages) if num_messages == 2: break else: self.fail('expected 2 messages, but consumed %i' % (num_messages,)) # Verify no more ready messages in queue frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 0) # Closing channel should not restore no-ack messages back to queue ch.close() # Verify that there are no messages in q now ch = connection.channel() frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 0) class TestChannelFlow(BlockingTestCaseBase): def test(self): """BlockingChannel Channel.Flow activate and deactivate """ connection = self._connect() ch = connection.channel() q_name = ('TestChannelFlow_q' + uuid.uuid1().hex) # Declare a new queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(self._connect().channel().queue_delete, q_name) # Verify zero active consumers on the queue frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.consumer_count, 0) # Create consumer ch.basic_consume(lambda *args: None, q_name) # Verify one active consumer on the queue now frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.consumer_count, 1) # Activate flow from default state (active by default) active = ch.flow(True) self.assertEqual(active, True) # Verify still one active consumer on the queue now frame = ch.queue_declare(q_name, passive=True) 
self.assertEqual(frame.method.consumer_count, 1) # active=False is not supported by RabbitMQ per # https://www.rabbitmq.com/specification.html: # "active=false is not supported by the server. Limiting prefetch with # basic.qos provides much better control" ## # Deactivate flow ## active = ch.flow(False) ## self.assertEqual(active, False) ## ## # Verify zero active consumers on the queue now ## frame = ch.queue_declare(q_name, passive=True) ## self.assertEqual(frame.method.consumer_count, 0) ## ## # Re-activate flow ## active = ch.flow(True) ## self.assertEqual(active, True) ## ## # Verify one active consumers on the queue once again ## frame = ch.queue_declare(q_name, passive=True) ## self.assertEqual(frame.method.consumer_count, 1) if __name__ == '__main__': unittest.main() pika-0.11.0/tests/acceptance/enforce_one_basicget_test.py000066400000000000000000000020751315131611700235040ustar00rootroot00000000000000try: import unittest2 as unittest except ImportError: import unittest from mock import MagicMock from pika.frame import Method, Header from pika.exceptions import DuplicateGetOkCallback from pika.channel import Channel from pika.connection import Connection class OnlyOneBasicGetTestCase(unittest.TestCase): def setUp(self): self.channel = Channel(MagicMock(Connection)(), 0, None) self.channel._state = Channel.OPEN self.callback = MagicMock() def test_two_basic_get_with_callback(self): self.channel.basic_get(self.callback) self.channel._on_getok(MagicMock(Method)(), MagicMock(Header)(), '') self.channel.basic_get(self.callback) self.channel._on_getok(MagicMock(Method)(), MagicMock(Header)(), '') self.assertEqual(self.callback.call_count, 2) def test_two_basic_get_without_callback(self): self.channel.basic_get(self.callback) with self.assertRaises(DuplicateGetOkCallback): self.channel.basic_get(self.callback) if __name__ == '__main__': unittest.main() 
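The acceptance tests above repeatedly poll the broker until an expected message count is observed (via `_assert_exact_message_count_with_retries`, built on the `retry_assertion` decorator from `test_utils.py`). A minimal, broker-free sketch of that poll-until-true pattern; the `retry_until` helper and `FakeQueue` stub are illustrative only, not part of pika's API:

```python
import time


def retry_until(predicate, timeout_sec=2.0, interval_sec=0.05):
    """Call predicate() until it returns True or until timeout expires.

    Returns True on success, False if the timeout elapsed first.
    """
    deadline = time.time() + timeout_sec
    while True:
        if predicate():
            return True
        if time.time() >= deadline:
            return False
        time.sleep(interval_sec)


class FakeQueue(object):
    """Stand-in for a broker queue that drains between polls."""

    def __init__(self, count):
        self.count = count

    def message_count(self):
        # Simulate one message being consumed between successive polls
        if self.count > 0:
            self.count -= 1
        return self.count


q = FakeQueue(3)
assert retry_until(lambda: q.message_count() == 0)
```

The real tests layer the same idea through `retry_assertion`, re-running an asserting function until it passes or the overall timeout is exceeded.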
pika-0.11.0/tests/acceptance/forward_server.py

"""TCP/IP forwarding/echo service for testing."""

from __future__ import print_function

import array
from datetime import datetime
import errno
from functools import partial
import logging
import multiprocessing
import os
import signal
import socket
import struct
import sys
import threading
import traceback

from pika.compat import PY3

if PY3:
    def buffer(object, offset, size):  # pylint: disable=W0622
        """array etc. have the buffer protocol"""
        return object[offset:offset + size]

try:
    import SocketServer
except ImportError:
    import socketserver as SocketServer  # pylint: disable=F0401


def _trace(fmt, *args):
    """Format and output the text to stderr"""
    print((fmt % args) + "\n", end="", file=sys.stderr)


class ForwardServer(object):  # pylint: disable=R0902
    """Implement a TCP/IP forwarding/echo service for testing.

    Listens for an incoming TCP/IP connection, accepts it, then connects to
    the given remote address and forwards data back and forth between the
    two endpoints.

    This is similar to a subset of `netcat` functionality, but without
    dependency on any specific flavor of netcat.

    Connection forwarding example: forward a local connection to the default
    rabbitmq addr, connect to rabbit via the forwarder, then disconnect the
    forwarder, then attempt another pika operation to see what happens

        with ForwardServer(("localhost", 5672)) as fwd:
            params = pika.ConnectionParameters(
                host=fwd.server_address[0],
                port=fwd.server_address[1])
            conn = pika.BlockingConnection(params)

        # Once outside the context, the forwarder is disconnected

        # Let's see what happens in pika with a disconnected server
        channel = conn.channel()

    Echo server example

        def produce(sock):
            sock.sendall("12345")
            sock.shutdown(socket.SHUT_WR)

        with ForwardServer(None) as fwd:
            sock = socket.socket()
            sock.connect(fwd.server_address)

            worker = threading.Thread(target=produce, args=[sock])
            worker.start()

            data = sock.makefile().read()
            assert data == "12345", data

        worker.join()
    """
    # Amount of time, in seconds, we're willing to wait for the subprocess
    _SUBPROC_TIMEOUT = 10

    def __init__(self,  # pylint: disable=R0913
                 remote_addr,
                 remote_addr_family=socket.AF_INET,
                 remote_socket_type=socket.SOCK_STREAM,
                 server_addr=("127.0.0.1", 0),
                 server_addr_family=socket.AF_INET,
                 server_socket_type=socket.SOCK_STREAM,
                 local_linger_args=None):
        """
        :param tuple remote_addr: remote server's IP address, whose structure
            depends on remote_addr_family; pair (host-or-ip-addr, port-number).
            Pass None to have ForwardServer behave as an echo server.
        :param remote_addr_family: socket.AF_INET (the default),
            socket.AF_INET6 or socket.AF_UNIX.
        :param remote_socket_type: only socket.SOCK_STREAM is supported at
            this time
        :param server_addr: optional address for binding this server's
            listening socket; the format depends on server_addr_family;
            defaults to ("127.0.0.1", 0)
        :param server_addr_family: address family for this server's listening
            socket; socket.AF_INET (the default), socket.AF_INET6 or
            socket.AF_UNIX
        :param server_socket_type: only socket.SOCK_STREAM is supported at
            this time
        :param tuple local_linger_args: SO_LINGER sockopt override for the
            local connection sockets, to be configured after connection is
            accepted. None for default, which is to not change the SO_LINGER
            option. Otherwise, it's a two-tuple, where the first element is
            the `l_onoff` switch, and the second element is the `l_linger`
            value, in seconds
        """
        self._logger = logging.getLogger(__name__)

        self._remote_addr = remote_addr
        self._remote_addr_family = remote_addr_family
        assert remote_socket_type == socket.SOCK_STREAM, remote_socket_type
        self._remote_socket_type = remote_socket_type

        assert server_addr is not None
        self._server_addr = server_addr

        assert server_addr_family is not None
        self._server_addr_family = server_addr_family

        assert server_socket_type == socket.SOCK_STREAM, server_socket_type
        self._server_socket_type = server_socket_type

        self._local_linger_args = local_linger_args

        self._subproc = None

    @property
    def running(self):
        """Property: True if ForwardServer is active"""
        return self._subproc is not None

    @property
    def server_address_family(self):
        """Property: Get listening socket's address family

        NOTE: undefined before server starts and after it shuts down
        """
        assert self._server_addr_family is not None, "Not in context"
        return self._server_addr_family

    @property
    def server_address(self):
        """Property: Get listening socket's address; the returned value
        depends on the listening socket's address family

        NOTE: undefined before server starts and after it shuts down
        """
        assert self._server_addr is not None, "Not in context"
        return self._server_addr

    def __enter__(self):
        """Context manager entry. Starts the forwarding server

        :returns: self
        """
        return self.start()

    def __exit__(self, *args):
        """Context manager exit; stops the forwarding server"""
        self.stop()

    def start(self):
        """Start the server

        NOTE: The context manager is the recommended way to use
        ForwardServer. start()/stop() are alternatives to the context manager
        use case and are mutually exclusive with it.

        :returns: self
        """
        queue = multiprocessing.Queue()

        self._subproc = multiprocessing.Process(
            target=_run_server,
            kwargs=dict(
                local_addr=self._server_addr,
                local_addr_family=self._server_addr_family,
                local_socket_type=self._server_socket_type,
                local_linger_args=self._local_linger_args,
                remote_addr=self._remote_addr,
                remote_addr_family=self._remote_addr_family,
                remote_socket_type=self._remote_socket_type,
                queue=queue))
        self._subproc.daemon = True
        self._subproc.start()

        try:
            # Get server socket info from subprocess
            self._server_addr_family, self._server_addr = queue.get(
                block=True, timeout=self._SUBPROC_TIMEOUT)
            queue.close()
        except Exception:  # pylint: disable=W0703
            try:
                self._logger.exception(
                    "Failed while waiting for local socket info")
                # Preserve primary exception and traceback
                raise
            finally:
                # Clean up
                try:
                    self.stop()
                except Exception:  # pylint: disable=W0703
                    # Suppress secondary exception in favor of the primary
                    self._logger.exception(
                        "Emergency subprocess shutdown failed")

        return self

    def stop(self):
        """Stop the server

        NOTE: The context manager is the recommended way to use
        ForwardServer. start()/stop() are alternatives to the context manager
        use case and are mutually exclusive with it.
        """
        self._logger.info("ForwardServer STOPPING")

        try:
            self._subproc.terminate()
            self._subproc.join(timeout=self._SUBPROC_TIMEOUT)
            if self._subproc.is_alive():
                self._logger.error(
                    "ForwardServer failed to terminate, killing it")
                os.kill(self._subproc.pid, signal.SIGKILL)
                self._subproc.join(timeout=self._SUBPROC_TIMEOUT)
                assert not self._subproc.is_alive(), self._subproc

            # Log subprocess's exit code; NOTE: negative signal.SIGTERM
            # (usually -15) is normal on POSIX systems - it corresponds to
            # SIGTERM
            exit_code = self._subproc.exitcode
            self._logger.info("ForwardServer terminated with exitcode=%s",
                              exit_code)
        finally:
            self._subproc = None


def _run_server(local_addr, local_addr_family, local_socket_type,  # pylint: disable=R0913
                local_linger_args, remote_addr, remote_addr_family,
                remote_socket_type, queue):
    """Run the server; executed in the subprocess

    :param local_addr: listening address
    :param local_addr_family: listening address family; one of socket.AF_*
    :param local_socket_type: listening socket type; typically
        socket.SOCK_STREAM
    :param tuple local_linger_args: SO_LINGER sockopt override for the local
        connection sockets, to be configured after connection is accepted.
        Pass None to not change SO_LINGER. Otherwise, it's a two-tuple, where
        the first element is the `l_onoff` switch, and the second element is
        the `l_linger` value in seconds
    :param remote_addr: address of the target server. Pass None to have
        ForwardServer behave as an echo server
    :param remote_addr_family: address family for connecting to target
        server; one of socket.AF_*
    :param remote_socket_type: socket type for connecting to target server;
        typically socket.SOCK_STREAM
    :param multiprocessing.Queue queue: queue for depositing the forwarding
        server's actual listening socket address family and bound address.
        The parent process waits for this.
    """
    # NOTE: We define _ThreadedTCPServer class as a closure in order to
    # override some of its class members dynamically
    # NOTE: we add `object` to the base classes because `_ThreadedTCPServer`
    # isn't derived from `object`, which prevents `super` from working
    # properly
    class _ThreadedTCPServer(SocketServer.ThreadingMixIn,
                             SocketServer.TCPServer,
                             object):
        """Threaded streaming server for forwarding"""

        # Override TCPServer's class members
        address_family = local_addr_family
        socket_type = local_socket_type
        allow_reuse_address = True

        def __init__(self):
            handler_class_factory = partial(
                _TCPHandler,
                local_linger_args=local_linger_args,
                remote_addr=remote_addr,
                remote_addr_family=remote_addr_family,
                remote_socket_type=remote_socket_type)

            super(_ThreadedTCPServer, self).__init__(
                local_addr,
                handler_class_factory,
                bind_and_activate=True)

    server = _ThreadedTCPServer()

    # Send server socket info back to parent process
    queue.put([server.socket.family, server.server_address])
    queue.close()

    server.serve_forever()


# NOTE: we add `object` to the base classes because `StreamRequestHandler`
# isn't derived from `object`, which prevents `super` from working properly
class _TCPHandler(SocketServer.StreamRequestHandler, object):
    """TCP/IP session handler instantiated by TCPServer upon incoming
    connection. Implements forwarding/echo of the incoming connection.
    """

    _SOCK_RX_BUF_SIZE = 16 * 1024

    def __init__(self,  # pylint: disable=R0913
                 request,
                 client_address,
                 server,
                 local_linger_args,
                 remote_addr,
                 remote_addr_family,
                 remote_socket_type):
        """
        :param request: for super
        :param client_address: for super
        :param server: for super
        :param tuple local_linger_args: SO_LINGER sockopt override for the
            local connection sockets, to be configured after connection is
            accepted. Pass None to not change SO_LINGER. Otherwise, it's a
            two-tuple, where the first element is the `l_onoff` switch, and
            the second element is the `l_linger` value in seconds
        :param remote_addr: address of the target server.
            Pass None to have ForwardServer behave as an echo server.
        :param remote_addr_family: address family for connecting to target
            server; one of socket.AF_*
        :param remote_socket_type: socket type for connecting to target
            server; typically socket.SOCK_STREAM
        """
        self._local_linger_args = local_linger_args
        self._remote_addr = remote_addr
        self._remote_addr_family = remote_addr_family
        self._remote_socket_type = remote_socket_type

        super(_TCPHandler, self).__init__(request=request,
                                          client_address=client_address,
                                          server=server)

    def handle(self):  # pylint: disable=R0912
        """Connect to remote and forward data between local and remote"""
        local_sock = self.connection

        if self._local_linger_args is not None:
            # Set SO_LINGER socket options on local socket
            l_onoff, l_linger = self._local_linger_args
            local_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                                  struct.pack('ii', l_onoff, l_linger))

        if self._remote_addr is not None:
            # Forwarding set-up
            remote_dest_sock = remote_src_sock = socket.socket(
                family=self._remote_addr_family,
                type=self._remote_socket_type,
                proto=socket.IPPROTO_IP)
            remote_dest_sock.connect(self._remote_addr)
            _trace("%s _TCPHandler connected to remote %s",
                   datetime.utcnow(), remote_dest_sock.getpeername())
        else:
            # Echo set-up
            remote_dest_sock, remote_src_sock = socket_pair()

        try:
            local_forwarder = threading.Thread(
                target=self._forward,
                args=(local_sock, remote_dest_sock,))
            local_forwarder.setDaemon(True)
            local_forwarder.start()

            try:
                self._forward(remote_src_sock, local_sock)
            finally:
                # Wait for local forwarder thread to exit
                local_forwarder.join()
        finally:
            try:
                try:
                    _safe_shutdown_socket(remote_dest_sock,
                                          socket.SHUT_RDWR)
                finally:
                    if remote_src_sock is not remote_dest_sock:
                        _safe_shutdown_socket(remote_src_sock,
                                              socket.SHUT_RDWR)
            finally:
                remote_dest_sock.close()
                if remote_src_sock is not remote_dest_sock:
                    remote_src_sock.close()

    def _forward(self, src_sock, dest_sock):  # pylint: disable=R0912
        """Forward from src_sock to dest_sock"""
        src_peername = src_sock.getpeername()

        _trace("%s forwarding from %s to %s", datetime.utcnow(),
               src_peername, dest_sock.getpeername())
        try:
            # NOTE: python 2.6 doesn't support bytearray with recv_into, so
            # we use array.array instead; this is only okay as long as the
            # array instance isn't shared across threads. See
            # http://bugs.python.org/issue7827 and
            # groups.google.com/forum/#!topic/comp.lang.python/M6Pqr-KUjQw
            rx_buf = array.array("B", [0] * self._SOCK_RX_BUF_SIZE)

            while True:
                try:
                    nbytes = src_sock.recv_into(rx_buf)
                except socket.error as exc:
                    if exc.errno == errno.EINTR:
                        continue
                    elif exc.errno == errno.ECONNRESET:
                        # Source peer forcibly closed connection
                        _trace("%s errno.ECONNRESET from %s",
                               datetime.utcnow(), src_peername)
                        break
                    else:
                        _trace("%s Unexpected errno=%s from %s\n%s",
                               datetime.utcnow(), exc.errno, src_peername,
                               "".join(traceback.format_stack()))
                        raise

                if not nbytes:
                    # Source input EOF
                    _trace("%s EOF on %s", datetime.utcnow(), src_peername)
                    break

                try:
                    dest_sock.sendall(buffer(rx_buf, 0, nbytes))
                except socket.error as exc:
                    if exc.errno == errno.EPIPE:
                        # Destination peer closed its end of the connection
                        _trace("%s Destination peer %s closed its end of "
                               "the connection: errno.EPIPE",
                               datetime.utcnow(), dest_sock.getpeername())
                        break
                    elif exc.errno == errno.ECONNRESET:
                        # Destination peer forcibly closed connection
                        _trace("%s Destination peer %s forcibly closed "
                               "connection: errno.ECONNRESET",
                               datetime.utcnow(), dest_sock.getpeername())
                        break
                    else:
                        _trace(
                            "%s Unexpected errno=%s in sendall to %s\n%s",
                            datetime.utcnow(), exc.errno,
                            dest_sock.getpeername(),
                            "".join(traceback.format_stack()))
                        raise
        except:
            _trace("forward failed\n%s", "".join(traceback.format_exc()))
            raise
        finally:
            _trace("%s done forwarding from %s", datetime.utcnow(),
                   src_peername)
            try:
                # Let source peer know we're done receiving
                _safe_shutdown_socket(src_sock, socket.SHUT_RD)
            finally:
                # Let destination peer know we're done sending
                _safe_shutdown_socket(dest_sock, socket.SHUT_WR)


def echo(port=0):
    """This function implements a simple echo server for testing the
    Forwarder class.

    :param int port: port number on which to listen

    We run this function and it prints out the listening socket binding.
    Then, we run Forwarder and point it at this echo "server". Then, we run
    telnet and point it at forwarder and see if whatever we type gets echoed
    back to us.

    This function exits when the remote end connects, then closes connection
    """
    lsock = socket.socket()
    lsock.bind(("", port))
    lsock.listen(1)
    _trace("Listening on sockname=%s", lsock.getsockname())

    sock, remote_addr = lsock.accept()
    try:
        _trace("Connection from peer=%s", remote_addr)
        while True:
            try:
                data = sock.recv(4 * 1024)  # pylint: disable=E1101
            except socket.error as exc:
                if exc.errno == errno.EINTR:
                    continue
                else:
                    raise

            if not data:
                break

            sock.sendall(data)  # pylint: disable=E1101
    finally:
        try:
            _safe_shutdown_socket(sock, socket.SHUT_RDWR)
        finally:
            sock.close()


def _safe_shutdown_socket(sock, how=socket.SHUT_RDWR):
    """Shutdown a socket, suppressing ENOTCONN"""
    try:
        sock.shutdown(how)
    except socket.error as exc:
        if exc.errno != errno.ENOTCONN:
            raise


def socket_pair(family=None, sock_type=socket.SOCK_STREAM,
                proto=socket.IPPROTO_IP):
    """socket.socketpair abstraction with support for Windows

    :param family: address family; e.g., socket.AF_UNIX, socket.AF_INET,
        etc.; defaults to socket.AF_UNIX if available, with fallback to
        socket.AF_INET.
    :param sock_type: socket type; defaults to socket.SOCK_STREAM
    :param proto: protocol; defaults to socket.IPPROTO_IP
    """
    if family is None:
        if hasattr(socket, "AF_UNIX"):
            family = socket.AF_UNIX
        else:
            family = socket.AF_INET

    if hasattr(socket, "socketpair"):
        socket1, socket2 = socket.socketpair(family, sock_type, proto)
    else:
        # Probably running on Windows where socket.socketpair isn't supported
        # Work around lack of socket.socketpair()
        socket1 = socket2 = None

        listener = socket.socket(family, sock_type, proto)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("localhost", 0))
        listener.listen(1)
        listener_port = listener.getsockname()[1]

        socket1 = socket.socket(family, sock_type, proto)

        # Use thread to connect in background, while foreground issues the
        # blocking accept()
        conn_thread = threading.Thread(
            target=socket1.connect,
            args=(('localhost', listener_port),))
        conn_thread.setDaemon(1)
        conn_thread.start()

        try:
            socket2 = listener.accept()[0]
        finally:
            listener.close()

            # Join/reap background thread
            conn_thread.join(timeout=10)
            assert not conn_thread.isAlive()

    return (socket1, socket2)


pika-0.11.0/tests/acceptance/test_utils.py

"""Acceptance test utils"""

import functools
import logging
import time
import traceback


def retry_assertion(timeout_sec, retry_interval_sec=0.1):
    """Creates a decorator that retries the decorated function or method only
    upon `AssertionError` exception at the given retry interval not to exceed
    the overall given timeout.

    :param float timeout_sec: overall timeout in seconds
    :param float retry_interval_sec: amount of time to sleep between retries
        in seconds.

    :returns: decorator that implements the following behavior

    1. This decorator guarantees to call the decorated function or method at
       least once.
    2. It passes through all exceptions besides `AssertionError`, preserving
       the original exception and its traceback.
    3. If no exception, it returns the return value from the decorated
       function/method.
    4. It sleeps `time.sleep(retry_interval_sec)` between retries.
    5. It checks for expiry of the overall timeout before sleeping.
    6. If the overall timeout is exceeded, it re-raises the latest
       `AssertionError`, preserving its original traceback
    """
    def retry_assertion_decorator(func):
        """Decorator"""
        @functools.wraps(func)
        def retry_assertion_wrap(*args, **kwargs):
            """The wrapper"""
            num_attempts = 0
            start_time = time.time()
            while True:
                num_attempts += 1
                try:
                    result = func(*args, **kwargs)
                except AssertionError:
                    now = time.time()
                    # Compensate for time adjustment
                    if now < start_time:
                        start_time = now

                    if (now - start_time) > timeout_sec:
                        logging.exception(
                            'Exceeded retry timeout of %s sec in %s attempts '
                            'with func %r. Caller\'s stack:\n%s',
                            timeout_sec, num_attempts, func,
                            ''.join(traceback.format_stack()))
                        raise

                    logging.debug('Attempt %s failed; retrying %r in %s sec.',
                                  num_attempts, func, retry_interval_sec)

                    time.sleep(retry_interval_sec)
                else:
                    logging.debug('%r succeeded at attempt %s',
                                  func, num_attempts)
                    return result

        return retry_assertion_wrap

    return retry_assertion_decorator


pika-0.11.0/tests/unit/amqp_object_tests.py

"""
Tests for pika.amqp_object

"""
try:
    import mock
except ImportError:
    from unittest import mock
try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika import amqp_object


class AMQPObjectTests(unittest.TestCase):
    def test_base_name(self):
        self.assertEqual(amqp_object.AMQPObject().NAME, 'AMQPObject')

    def test_repr_no_items(self):
        obj = amqp_object.AMQPObject()
        self.assertEqual(repr(obj), '<AMQPObject>')

    def test_repr_items(self):
        obj = amqp_object.AMQPObject()
        setattr(obj, 'foo', 'bar')
        setattr(obj, 'baz', 'qux')
        self.assertEqual(repr(obj), "<AMQPObject(['baz=qux', 'foo=bar'])>")


class ClassTests(unittest.TestCase):
    def test_base_name(self):
        self.assertEqual(amqp_object.Class().NAME, 'Unextended Class')


class MethodTests(unittest.TestCase):
    def test_base_name(self):
        self.assertEqual(amqp_object.Method().NAME, 'Unextended Method')

    def test_set_content_body(self):
        properties = amqp_object.Properties()
        body = 'This is a test'
        obj = amqp_object.Method()
        obj._set_content(properties, body)
        self.assertEqual(obj._body, body)

    def test_set_content_properties(self):
        properties = amqp_object.Properties()
        body = 'This is a test'
        obj = amqp_object.Method()
        obj._set_content(properties, body)
        self.assertEqual(obj._properties, properties)

    def test_get_body(self):
        properties = amqp_object.Properties()
        body = 'This is a test'
        obj = amqp_object.Method()
        obj._set_content(properties, body)
        self.assertEqual(obj.get_body(), body)

    def test_get_properties(self):
        properties = amqp_object.Properties()
        body = 'This is a test'
        obj = amqp_object.Method()
        obj._set_content(properties, body)
        self.assertEqual(obj.get_properties(), properties)


class PropertiesTests(unittest.TestCase):
    def test_base_name(self):
        self.assertEqual(amqp_object.Properties().NAME,
                         'Unextended Properties')


pika-0.11.0/tests/unit/base_connection_tests.py

"""
Tests for pika.base_connection.BaseConnection

"""
try:
    import mock
except ImportError:
    from unittest import mock
try:
    import unittest2 as unittest
except ImportError:
    import unittest

import pika
from pika.adapters import base_connection


class BaseConnectionTests(unittest.TestCase):
    def setUp(self):
        with mock.patch('pika.connection.Connection.connect'):
            self.connection = base_connection.BaseConnection()
            self.connection._set_connection_state(
                base_connection.BaseConnection.CONNECTION_OPEN)

    def test_repr(self):
        text = repr(self.connection)
        self.assertTrue(text.startswith(


pika-0.11.0/utils/codegen.py

        print(prefix + "length = struct.unpack_from('>I', encoded, offset)[0]")
        print(prefix + "offset += 4")
        print(prefix + "%s = encoded[offset:offset + length]" % cLvalue)
        print(prefix + "try:")
        print(prefix + "    %s = str(%s)" % (cLvalue, cLvalue))
        print(prefix + "except UnicodeEncodeError:")
        print(prefix + "    pass")
        print(prefix + "offset += length")
    elif type == 'octet':
        print(prefix + "%s = struct.unpack_from('B', encoded, offset)[0]" %
              cLvalue)
        print(prefix + "offset += 1")
    elif type == 'short':
        print(prefix + "%s = struct.unpack_from('>H', encoded, offset)[0]" %
              cLvalue)
        print(prefix + "offset += 2")
    elif type == 'long':
        print(prefix + "%s = struct.unpack_from('>I', encoded, offset)[0]" %
              cLvalue)
        print(prefix + "offset += 4")
    elif type == 'longlong':
        print(prefix + "%s = struct.unpack_from('>Q', encoded, offset)[0]" %
              cLvalue)
        print(prefix + "offset += 8")
    elif type == 'timestamp':
        print(prefix + "%s = struct.unpack_from('>Q', encoded, offset)[0]" %
              cLvalue)
        print(prefix + "offset += 8")
    elif type == 'bit':
        raise Exception("Can't decode bit in genSingleDecode")
    elif type == 'table':
        print(prefix + "(%s, offset) = data.decode_table(encoded, offset)" %
              cLvalue)
    else:
        raise Exception("Illegal domain in genSingleDecode", type)


def genSingleEncode(prefix, cValue, unresolved_domain):
    type = spec.resolveDomain(unresolved_domain)
    if type == 'shortstr':
        print(prefix +
              "assert isinstance(%s, str_or_bytes),\\\n%s 'A non-string value was supplied for %s'"
              % (cValue, prefix, cValue))
        print(prefix + "data.encode_short_string(pieces, %s)" % cValue)
    elif type == 'longstr':
        print(prefix +
              "assert isinstance(%s, str_or_bytes),\\\n%s 'A non-string value was supplied for %s'"
              % (cValue, prefix, cValue))
        print(prefix +
              "value = %s.encode('utf-8') if isinstance(%s, unicode_type) else %s"
              % (cValue, cValue, cValue))
        print(prefix + "pieces.append(struct.pack('>I', len(value)))")
        print(prefix + "pieces.append(value)")
    elif type == 'octet':
        print(prefix + "pieces.append(struct.pack('B', %s))" % cValue)
    elif type == 'short':
        print(prefix + "pieces.append(struct.pack('>H', %s))" % cValue)
    elif type == 'long':
        print(prefix + "pieces.append(struct.pack('>I', %s))" % cValue)
    elif type == 'longlong':
        print(prefix + "pieces.append(struct.pack('>Q', %s))" % cValue)
    elif type == 'timestamp':
        print(prefix + "pieces.append(struct.pack('>Q', %s))" % cValue)
    elif type == 'bit':
        raise Exception("Can't encode bit in genSingleEncode")
    elif type == 'table':
        print(prefix + "data.encode_table(pieces, %s)" % cValue)
    else:
        raise Exception("Illegal domain in genSingleEncode", type)


def genDecodeMethodFields(m):
    print("    def decode(self, encoded, offset=0):")
    bitindex = None
    for f in m.arguments:
        if spec.resolveDomain(f.domain) == 'bit':
            if bitindex is None:
                bitindex = 0
            if bitindex >= 8:
                bitindex = 0
            if not bitindex:
                print("        bit_buffer = struct.unpack_from('B', encoded, offset)[0]")
                print("        offset += 1")
            print("        self.%s = (bit_buffer & (1 << %d)) != 0" %
                  (pyize(f.name), bitindex))
            bitindex += 1
        else:
            bitindex = None
            genSingleDecode("        ", "self.%s" % (pyize(f.name),),
                            f.domain)
    print("        return self")
    print('')


def genDecodeProperties(c):
    print("    def decode(self, encoded, offset=0):")
    print("        flags = 0")
    print("        flagword_index = 0")
    print("        while True:")
    print("            partial_flags = struct.unpack_from('>H', encoded, offset)[0]")
    print("            offset += 2")
    print("            flags = flags | (partial_flags << (flagword_index * 16))")
    print("            if not (partial_flags & 1):")
    print("                break")
    print("            flagword_index += 1")
    for f in c.fields:
        if spec.resolveDomain(f.domain) == 'bit':
            print("        self.%s = (flags & %s) != 0" %
                  (pyize(f.name), flagName(c, f)))
        else:
            print("        if flags & %s:" % (flagName(c, f),))
            genSingleDecode("            ", "self.%s" % (pyize(f.name),),
                            f.domain)
            print("        else:")
            print("            self.%s = None" % (pyize(f.name),))
    print("        return self")
    print('')


def genEncodeMethodFields(m):
    print("    def encode(self):")
    print("        pieces = list()")
    bitindex = None

    def finishBits():
        if bitindex is not None:
            print("        pieces.append(struct.pack('B', bit_buffer))")

    for f in m.arguments:
        if spec.resolveDomain(f.domain) == 'bit':
            if bitindex is None:
                bitindex = 0
                print("        bit_buffer = 0")
            if bitindex >= 8:
                finishBits()
                print("        bit_buffer = 0")
                bitindex = 0
            print("        if self.%s:" % pyize(f.name))
            print("            bit_buffer = bit_buffer | (1 << %d)" %
                  bitindex)
            bitindex += 1
        else:
            finishBits()
            bitindex = None
            genSingleEncode("        ", "self.%s" % (pyize(f.name),),
                            f.domain)
    finishBits()
    print("        return pieces")
    print('')


def genEncodeProperties(c):
    print("    def encode(self):")
    print("        pieces = list()")
    print("        flags = 0")
    for f in c.fields:
        if spec.resolveDomain(f.domain) == 'bit':
            print("        if self.%s: flags = flags | %s" %
                  (pyize(f.name), flagName(c, f)))
        else:
            print("        if self.%s is not None:" % (pyize(f.name),))
            print("            flags = flags | %s" % (flagName(c, f),))
            genSingleEncode("            ", "self.%s" % (pyize(f.name),),
                            f.domain)
    print("        flag_pieces = list()")
    print("        while True:")
    print("            remainder = flags >> 16")
    print("            partial_flags = flags & 0xFFFE")
    print("            if remainder != 0:")
    print("                partial_flags |= 1")
    print("            flag_pieces.append(struct.pack('>H', partial_flags))")
    print("            flags = remainder")
    print("            if not flags:")
    print("                break")
    print("        return flag_pieces + pieces")
    print('')


def fieldDeclList(fields):
    return ''.join([", %s=%s" % (pyize(f.name), fieldvalue(f.defaultvalue))
                    for f in fields])


def fieldInitList(prefix, fields):
    if fields:
        return ''.join(["%sself.%s = %s\n" %
                        (prefix, pyize(f.name), pyize(f.name))
                        for f in fields])
    else:
        return '%spass\n' % (prefix,)


print("""\"\"\"
AMQP Specification
==================

This module implements the constants and classes that comprise AMQP protocol
level constructs. It should rarely be directly referenced outside of Pika's
own internal use.

.. note:: Auto-generated code by codegen.py, do not edit directly. Pull
requests to this file without accompanying ``utils/codegen.py`` changes will
be rejected.
\"\"\" import struct from pika import amqp_object from pika import data from pika.compat import str_or_bytes, unicode_type # Python 3 support for str object str = bytes """) print("PROTOCOL_VERSION = (%d, %d, %d)" % (spec.major, spec.minor, spec.revision)) print("PORT = %d" % spec.port) print('') # Append some constants that arent in the spec json file spec.constants.append(('FRAME_MAX_SIZE', 131072, '')) spec.constants.append(('FRAME_HEADER_SIZE', 7, '')) spec.constants.append(('FRAME_END_SIZE', 1, '')) spec.constants.append(('TRANSIENT_DELIVERY_MODE', 1, '')) spec.constants.append(('PERSISTENT_DELIVERY_MODE', 2, '')) constants = {} for c, v, cls in spec.constants: constants[constantName(c)] = v for key in sorted(constants.keys()): print("%s = %s" % (key, constants[key])) print('') for c in spec.allClasses(): print('') print('class %s(amqp_object.Class):' % (camel(c.name),)) print('') print(" INDEX = 0x%.04X # %d" % (c.index, c.index)) print(" NAME = %s" % (fieldvalue(camel(c.name)),)) print('') for m in c.allMethods(): print(' class %s(amqp_object.Method):' % (camel(m.name),)) print('') methodid = m.klass.index << 16 | m.index print(" INDEX = 0x%.08X # %d, %d; %d" % \ (methodid, m.klass.index, m.index, methodid)) print(" NAME = %s" % (fieldvalue(m.structName(),))) print('') print(" def __init__(self%s):" % (fieldDeclList(m.arguments),)) print(fieldInitList(' ', m.arguments)) print(" @property") print(" def synchronous(self):") print(" return %s" % m.isSynchronous) print('') genDecodeMethodFields(m) genEncodeMethodFields(m) for c in spec.allClasses(): if c.fields: print('') print('class %s(amqp_object.Properties):' % (c.structName(),)) print('') print(" CLASS = %s" % (camel(c.name),)) print(" INDEX = 0x%.04X # %d" % (c.index, c.index)) print(" NAME = %s" % (fieldvalue(c.structName(),))) print('') index = 0 if c.fields: for f in c.fields: if index % 16 == 15: index += 1 shortnum = index / 16 partialindex = 15 - (index % 16) bitindex = shortnum * 16 + partialindex 
print(' %s = (1 << %d)' % (flagName(None, f), bitindex)) index += 1 print('') print(" def __init__(self%s):" % (fieldDeclList(c.fields),)) print(fieldInitList(' ', c.fields)) genDecodeProperties(c) genEncodeProperties(c) print("methods = {") print(',\n'.join([" 0x%08X: %s" % (m.klass.index << 16 | m.index, m.structName()) \ for m in spec.allMethods()])) print("}") print('') print("props = {") print(',\n'.join([" 0x%04X: %s" % (c.index, c.structName()) \ for c in spec.allClasses() \ if c.fields])) print("}") print('') print('') print("def has_content(methodNumber):") print(' return methodNumber in (') for m in spec.allMethods(): if m.hasContent: print(' %s.INDEX,' % m.structName()) print(' )') if __name__ == "__main__": with open(PIKA_SPEC, 'w') as handle: sys.stdout = handle generate(['%s/amqp-rabbitmq-0.9.1.json' % CODEGEN_PATH])
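The generator above emits `struct`-based pack/unpack statements into `pika/spec.py`, always pairing big-endian format codes with a running byte offset. As a rough standalone illustration only (the function names here are invented for this sketch and are not part of pika), the wire pattern it generates for a few AMQP field types round-trips like this:

```python
# Standalone sketch of the encode/decode pattern codegen.py emits:
# each field is packed into a list of byte "pieces" on encode, and
# read back with struct.unpack_from plus a running offset on decode.
import struct


def encode_fields(octet_val, short_val, long_val, longstr_val):
    """Pack an octet, short, long and long-string, as generated encode() does."""
    pieces = []
    pieces.append(struct.pack('B', octet_val))    # octet: 1 byte
    pieces.append(struct.pack('>H', short_val))   # short: 2 bytes, big-endian
    pieces.append(struct.pack('>I', long_val))    # long: 4 bytes, big-endian
    value = longstr_val.encode('utf-8')           # longstr: 4-byte length prefix
    pieces.append(struct.pack('>I', len(value)))  # followed by the raw bytes
    pieces.append(value)
    return b''.join(pieces)


def decode_fields(encoded, offset=0):
    """Unpack the same fields, advancing offset exactly as generated decode() does."""
    octet_val = struct.unpack_from('B', encoded, offset)[0]
    offset += 1
    short_val = struct.unpack_from('>H', encoded, offset)[0]
    offset += 2
    long_val = struct.unpack_from('>I', encoded, offset)[0]
    offset += 4
    length = struct.unpack_from('>I', encoded, offset)[0]
    offset += 4
    longstr_val = encoded[offset:offset + length].decode('utf-8')
    offset += length
    return (octet_val, short_val, long_val, longstr_val), offset
```

The generated `encode()` methods return the list of pieces (joined by the framing layer) rather than a single byte string, but the field-level packing is the same.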