pika-0.10.0/.checkignore::

    tests
    utils
    examples
    docs
    setup.py

pika-0.10.0/.codeclimate.yml::

    languages:
      Python: true
    exclude_paths:
    - docs/*
    - tests/*
    - utils/*
    - pika/examples/*
    - pika/spec.py

pika-0.10.0/.coveragerc::

    [run]
    branch = True

    [report]
    omit = pika/spec.py

pika-0.10.0/.gitignore::

    *.pyc
    *~
    .idea
    .coverage
    .tox
    .DS_Store
    pika.iml
    codegen
    pika.egg-info
    examples/pika
    examples/blocking/pika
    atlassian*xml
    build
    dist
    docs/_build
    *.conf.in

pika-0.10.0/.travis.yml::

    language: python
    python:
      - 2.6
      - 2.7
      - 3.3
      - 3.4
    before_install:
      - sudo add-apt-repository "deb http://us.archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse"
      - sudo add-apt-repository "deb http://us.archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse"
      - sudo apt-get update -qq
      - sudo apt-get install libev-dev/trusty
    install:
      - if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]]; then pip install unittest2 ordereddict; fi
      - if [[ $TRAVIS_PYTHON_VERSION != '2.6' ]]; then pip install pyev; fi
      - pip install -r test-requirements.txt
    services:
      - rabbitmq
    script: nosetests -c nose.cfg --with-coverage --cover-package=pika --cover-branches
    after_success:
      - codecov
    deploy:
      distributions: sdist bdist_wheel
      provider: pypi
      user: crad
      on:
        python: 2.7
        tags: true
        all_branches: true
      password:
        secure: "V/JTU/X9C6uUUVGEAWmWWbmKW7NzVVlC/JWYpo05Ha9c0YV0vX4jOfov2EUAphM0WwkD/MRhz4dq3kCU5+cjHxR3aTSb+sbiElsCpaciaPkyrns+0wT5MCMO29Lpnq2qBLc1ePR1ey5aTWC/VibgFJOL7H/3wyvukL6ZaCnktYk="

pika-0.10.0/CHANGELOG.rst:

0.10.0 2015-09-02
-----------------

- a9bf96d - LibevConnection: Fixed dict chgd size during iteration (Michael Laing)
- 388c55d - SelectConnection: Fixed KeyError exceptions in IOLoop timeout executions (Shinji Suzuki)
- 4780de3 - BlockingConnection: Add support to make BlockingConnection a Context Manager (@reddec)

0.10.0b2 2015-07-15
-------------------

- f72b58f - Fixed failure to purge _ConsumerCancellationEvt from BlockingChannel._pending_events during basic_cancel. (Vitaly Kruglikov)

0.10.0b1 2015-07-10
-------------------

High-level summary of notable changes:

- Change to 3-Clause BSD License
- Python 3.x support
- Over 150 commits from 19 contributors
- Refactoring of SelectConnection ioloop
- This major release contains certain non-backward-compatible API changes as well as significant performance improvements in the `BlockingConnection` adapter.
- Non-backward-compatible changes in the `Channel.add_on_return_callback` callback's signature.
- The `AsyncoreConnection` adapter was retired

**Details**

Python 3.x: this release introduces Python 3.x support. Tested on Python 3.3 and 3.4.

`AsyncoreConnection`: Retired this legacy adapter to reduce maintenance burden; the recommended replacement is the `SelectConnection` adapter.

`SelectConnection`: the ioloop was refactored for compatibility with other ioloops.

`Channel.add_on_return_callback`: The callback is now passed the individual parameters channel, method, properties, and body instead of a tuple of those values, for congruence with other similar callbacks.

`BlockingConnection`: This adapter underwent a makeover under the hood and gained significant performance improvements as well as enhanced timer resolution.
It is now implemented as a client of the `SelectConnection` adapter.

Below is an overview of the `BlockingConnection` and `BlockingChannel` API changes:

- Recursion: the new implementation eliminates callback recursion that sometimes blew out the stack in the legacy implementation (e.g., publish -> consumer_callback -> publish -> consumer_callback, etc.). While `BlockingConnection.process_data_events` and `BlockingConnection.sleep` may still be called from the scope of the blocking adapter's callbacks in order to process pending I/O, additional callbacks will be suppressed whenever `BlockingConnection.process_data_events` and `BlockingConnection.sleep` are nested in any combination; in that case, the callback information will be buffered and dispatched once nesting unwinds and control returns to the level-zero dispatcher.
- `BlockingConnection.connect`: this method was removed in favor of the constructor as the only way to establish connections; this reduces maintenance burden while improving reliability of the adapter.
- `BlockingConnection.process_data_events`: added the optional parameter `time_limit`.
- `BlockingConnection.add_on_close_callback`: removed; the legacy method raised `NotImplementedError`.
- `BlockingConnection.add_on_open_callback`: removed; the legacy method raised `NotImplementedError`.
- `BlockingConnection.add_on_open_error_callback`: removed; the legacy method raised `NotImplementedError`.
- `BlockingConnection.add_backpressure_callback`: not supported
- `BlockingConnection.set_backpressure_multiplier`: not supported
- `BlockingChannel.add_on_flow_callback`: not supported; per the docstring in channel.py: "Note that newer versions of RabbitMQ will not issue this but instead use TCP backpressure".
- `BlockingChannel.flow`: not supported
- `BlockingChannel.force_data_events`: removed, as it is no longer necessary following the redesign of the adapter.
- Removed the `nowait` parameter from `BlockingChannel` methods, forcing `nowait=False` (the former API default) in the implementation; this is more suitable for the blocking nature of the adapter and its error-reporting strategy; this concerns the following methods: `basic_cancel`, `confirm_delivery`, `exchange_bind`, `exchange_declare`, `exchange_delete`, `exchange_unbind`, `queue_bind`, `queue_declare`, `queue_delete`, and `queue_purge`.
- `BlockingChannel.basic_cancel`: returns a sequence instead of None; for a `no_ack=True` consumer, `basic_cancel` returns a sequence of pending messages that arrived before the broker confirmed the cancellation.
- `BlockingChannel.consume`: added the new optional kwargs `arguments` and `inactivity_timeout`. Also raises ValueError if the consumer-creation parameters don't match those used to create the existing queue consumer generator, if any; this happens when you break out of the consume loop, then call `BlockingChannel.consume` again with different consumer-creation args without first cancelling the previous queue consumer generator via `BlockingChannel.cancel`. The legacy implementation would silently resume consuming from the existing queue consumer generator even if the subsequent `BlockingChannel.consume` was invoked with a different queue name, etc.
- `BlockingChannel.cancel`: returns 0; the legacy implementation tried to return the number of requeued messages, but this number was not accurate, as it didn't include the messages returned by the Channel class; this count is not generally useful, so returning 0 is a reasonable replacement.
- `BlockingChannel.open`: removed in favor of having a single mechanism for creating a channel (`BlockingConnection.channel`); this reduces maintenance burden while improving reliability of the adapter.
- `BlockingChannel.basic_publish`: always returns True when delivery confirmation is not enabled (publisher-acks = off); the legacy implementation returned a bool in this case if `mandatory=True` to indicate whether the message was delivered; however, this was non-deterministic, because Basic.Return is asynchronous and there is no way to know how long to wait for it or its absence. The legacy implementation returned None when publishing with publisher-acks = off and `mandatory=False`. The new implementation always returns True when publishing while publisher-acks = off.
- `BlockingChannel.publish`: a new alternate method (vs. `basic_publish`) for publishing a message with more detailed error reporting via the UnroutableError and NackError exceptions.
- `BlockingChannel.start_consuming`: raises pika.exceptions.RecursionError if called from the scope of a `BlockingConnection` or `BlockingChannel` callback.
- `BlockingChannel.get_waiting_message_count`: new method; returns the number of messages that may be retrieved from the current queue consumer generator via `BlockingChannel.consume` without blocking.

**Commits**

- 5aaa753 - Fixed SSL import and removed no_ack=True in favor of explicit AMQP message handling based on deferreds (skftn)
- 7f222c2 - Add checkignore for codeclimate (Gavin M. Roy)
- 4dec370 - Implemented BlockingChannel.flow; Implemented BlockingConnection.add_on_connection_blocked_callback; Implemented BlockingConnection.add_on_connection_unblocked_callback. (Vitaly Kruglikov)
- 4804200 - Implemented blocking adapter acceptance test for exchange-to-exchange binding. Added rudimentary validation of BasicProperties passthru in blocking adapter publish tests. Updated CHANGELOG. (Vitaly Kruglikov)
- 4ec07fd - Fixed sending of data in TwistedProtocolConnection (Vitaly Kruglikov)
- a747fb3 - Remove my copyright from forward_server.py test utility. (Vitaly Kruglikov)
- 94246d2 - Return True from basic_publish when pubacks is off.
  Implemented more blocking adapter acceptance tests. (Vitaly Kruglikov)
- 3ce013d - PIKA-609 Wait for broker to dispatch all messages to client before cancelling consumer in TestBasicCancelWithNonAckableConsumer and TestBasicCancelWithAckableConsumer (Vitaly Kruglikov)
- 293f778 - Created CHANGELOG entry for release 0.10.0. Fixed up callback documentation for basic_get, basic_consume, and add_on_return_callback. (Vitaly Kruglikov)
- 16d360a - Removed the legacy AsyncoreConnection adapter in favor of the recommended SelectConnection adapter. (Vitaly Kruglikov)
- 240a82c - Defer creation of poller's event loop interrupt socket pair until start is called, because some SelectConnection users (e.g., BlockingConnection adapter) don't use the event loop, and these sockets would just get reported as resource leaks. (Vitaly Kruglikov)
- aed5cae - Added EINTR loops in select_connection pollers. Addressed some pylint findings, including an error or two. Wrap socket.send and socket.recv calls in EINTR loops. Use the correct exception for socket.error and select.error and get errno depending on python version. (Vitaly Kruglikov)
- 498f1be - Allow passing exchange, queue and routing_key as text, handle short strings as text in python3 (saarni)
- 9f7f243 - Restored basic_consume, basic_cancel, and add_on_cancel_callback (Vitaly Kruglikov)
- 18c9909 - Reintroduced BlockingConnection.process_data_events. (Vitaly Kruglikov)
- 4b25cb6 - Fixed BlockingConnection/BlockingChannel acceptance and unit tests (Vitaly Kruglikov)
- bfa932f - Facilitate proper connection state after BasicConnection._adapter_disconnect (Vitaly Kruglikov)
- 9a09268 - Fixed BlockingConnection test that was failing with ConnectionClosed error. (Vitaly Kruglikov)
- 5a36934 - Copied synchronous_connection.py from pika-synchronous branch. Fixed pylint findings. Integrated SynchronousConnection with the new ioloop in SelectConnection. Defined dedicated message classes PolledMessage and ConsumerMessage and moved from BlockingChannel to module-global scope. Got rid of nowait args from BlockingChannel public API methods. Signal unroutable messages via UnroutableError exception. Signal Nack'ed messages via NackError exception. These expose more information about the failure than legacy basic_publish API. Removed set_timeout and backpressure callback methods. Restored legacy `is_open`, etc. property names (Vitaly Kruglikov)
- 6226dc0 - Remove deprecated --use-mirrors (Gavin M. Roy)
- 1a7112f - Raise ConnectionClosed when sending a frame with no connection (#439) (Gavin M. Roy)
- 9040a14 - Make delivery_tag non-optional (#498) (Gavin M. Roy)
- 86aabc2 - Bump version (Gavin M. Roy)
- 562075a - Update a few testing things (Gavin M. Roy)
- 4954d38 - use unicode_type in blocking_connection.py (Antti Haapala)
- 133d6bc - Let Travis install ordereddict for Python 2.6, and test 3.3, 3.4 too. (Antti Haapala)
- 0d2287d - Pika Python 3 support (Antti Haapala)
- 3125c79 - SSLWantRead is not supported before python 2.7.9 and 3.3 (Will)
- 9a9c46c - Fixed TestDisconnectDuringConnectionStart: it turns out that depending on callback order, it might get either ProbableAuthenticationError or ProbableAccessDeniedError. (Vitaly Kruglikov)
- cd8c9b0 - A fix for the write starvation problem that we see with tornado and pika (Will)
- 8654fbc - SelectConnection - make interrupt socketpair non-blocking (Will)
- 4f3666d - Added copyright in forward_server.py and fixed NameError bug (Vitaly Kruglikov)
- f8ebbbc - ignore docs (Gavin M. Roy)
- a344f78 - Updated codeclimate config (Gavin M. Roy)
- 373c970 - Try and fix pathing issues in codeclimate (Gavin M. Roy)
- 228340d - Ignore codegen (Gavin M. Roy)
- 4db0740 - Add a codeclimate config (Gavin M. Roy)
- 7e989f9 - Slight code re-org, usage comment and better naming of test file. (Will)
- 287be36 - Set up _kqueue member of KQueuePoller before calling super constructor to avoid exception due to missing _kqueue member. Call `self._map_event(event)` instead of `self._map_event(event.filter)`, because `KQueuePoller._map_event()` assumes it's getting an event, not an event filter. (Vitaly Kruglikov)
- 62810fb - Fix issue #412: reset BlockingConnection._read_poller in BlockingConnection._adapter_disconnect() to guard against accidental access to old file descriptor. (Vitaly Kruglikov)
- 03400ce - Rationalise adapter acceptance tests (Will)
- 9414153 - Fix bug selecting non epoll poller (Will)
- 4f063df - Use user heartbeat setting if server proposes none (Pau Gargallo)
- 9d04d6e - Deactivate heartbeats when heartbeat_interval is 0 (Pau Gargallo)
- a52a608 - Bug fix and review comments. (Will)
- e3ebb6f - Fix incorrect x-expires argument in acceptance tests (Will)
- 294904e - Get BlockingConnection into consistent state upon loss of TCP/IP connection with broker and implement acceptance tests for those cases. (Vitaly Kruglikov)
- 7f91a68 - Make SelectConnection behave like an ioloop (Will)
- dc9db2b - Perhaps 5 seconds is too agressive for travis (Gavin M. Roy)
- c23e532 - Lower the stuck test timeout (Gavin M. Roy)
- 1053ebc - Late night bug (Gavin M. Roy)
- cd6c1bf - More BaseConnection._handle_error cleanup (Gavin M. Roy)
- a0ff21c - Fix the test to work with Python 2.6 (Gavin M. Roy)
- 748e8aa - Remove pypy for now (Gavin M. Roy)
- 1c921c1 - Socket close/shutdown cleanup (Gavin M. Roy)
- 5289125 - Formatting update from PR (Gavin M. Roy)
- d235989 - Be more specific when calling getaddrinfo (Gavin M. Roy)
- b5d1b31 - Reflect the method name change in pika.callback (Gavin M. Roy)
- df7d3b7 - Cleanup BlockingConnection in a few places (Gavin M. Roy)
- cd99e1c - Rename method due to use in BlockingConnection (Gavin M. Roy)
- 7e0d1b3 - Use google style with yapf instead of pep8 (Gavin M. Roy)
- 7dc9bab - Refactor socket writing to not use sendall #481 (Gavin M. Roy)
- 4838789 - Dont log the fd #521 (Gavin M. Roy)
- 765107d - Add Connection.Blocked callback registration methods #476 (Gavin M. Roy)
- c15b5c1 - Fix _blocking typo pointed out in #513 (Gavin M. Roy)
- 759ac2c - yapf of codegen (Gavin M. Roy)
- 9dadd77 - yapf cleanup of codegen and spec (Gavin M. Roy)
- ddba7ce - Do not reject consumers with no_ack=True #486 #530 (Gavin M. Roy)
- 4528a1a - yapf reformatting of tests (Gavin M. Roy)
- e7b6d73 - Remove catching AttributError (#531) (Gavin M. Roy)
- 41ea5ea - Update README badges [skip ci] (Gavin M. Roy)
- 6af987b - Add note on contributing (Gavin M. Roy)
- 161fc0d - yapf formatting cleanup (Gavin M. Roy)
- edcb619 - Add PYPY to travis testing (Gavin M. Roy)
- 2225771 - Change the coverage badge (Gavin M. Roy)
- 8f7d451 - Move to codecov from coveralls (Gavin M. Roy)
- b80407e - Add confirm_delivery to example (Andrew Smith)
- 6637212 - Update base_connection.py (bstemshorn)
- 1583537 - #544 get_waiting_message_count() (markcf)
- 0c9be99 - Fix #535: pass expected reply_code and reply_text from method frame to Connection._on_disconnect from Connection._on_connection_closed (Vitaly Kruglikov)
- d11e73f - Propagate ConnectionClosed exception out of BlockingChannel._send_method() and log ConnectionClosed in BlockingConnection._on_connection_closed() (Vitaly Kruglikov)
- 63d2951 - Fix #541 - make sure connection state is properly reset when BlockingConnection._check_state_on_disconnect raises ConnectionClosed. This supplements the previously-merged PR #450 by getting the connection into consistent state.
  (Vitaly Kruglikov)
- 71bc0eb - Remove unused self.fd attribute from BaseConnection (Vitaly Kruglikov)
- 8c08f93 - PIKA-532 Removed unnecessary params (Vitaly Kruglikov)
- 6052ecf - PIKA-532 Fix bug in BlockingConnection._handle_timeout that was preventing _on_connection_closed from being called when not closing. (Vitaly Kruglikov)
- 562aa15 - pika: callback: Display exception message when callback fails. (Stuart Longland)
- 452995c - Typo fix in connection.py (Andrew)
- 361c0ad - Added some missing yields (Robert Weidlich)
- 0ab5a60 - Added complete example for python twisted service (Robert Weidlich)
- 4429110 - Add deployment and webhooks (Gavin M. Roy)
- 7e50302 - Fix has_content style in codegen (Andrew Grigorev)
- 28c2214 - Fix the trove categorization (Gavin M. Roy)
- de8b545 - Ensure frames can not be interspersed on send (Gavin M. Roy)
- 8fe6bdd - Fix heartbeat behaviour after connection failure. (Kyösti Herrala)
- c123472 - Updating BlockingChannel.basic_get doc (it does not receive a callback like the rest of the adapters) (Roberto Decurnex)
- b5f52fb - Fix number of arguments passed to _on_return callback (Axel Eirola)
- 765139e - Lower default TIMEOUT to 0.01 (bra-fsn)
- 6cc22a5 - Fix confirmation on reconnects (bra-fsn)
- f4faf0a - asynchronous publisher and subscriber examples refactored to follow the StepDown rule (Riccardo Cirimelli)

0.9.14 - 2014-07-11
-------------------

- 57fe43e - fix test to generate a correct range of random ints (ml)
- 0d68dee - fix async watcher for libev_connection (ml)
- 01710ad - Use default username and password if not specified in URLParameters (Sean Dwyer)
- fae328e - documentation typo (Jeff Fein-Worton)
- afbc9e0 - libev_connection: reset_io_watcher (ml)
- 24332a2 - Fix the manifest (Gavin M. Roy)
- acdfdef - Remove useless test (Gavin M. Roy)
- 7918e1a - Skip libev tests if pyev is not installed or if they are being run in pypy (Gavin M. Roy)
- bb583bf - Remove the deprecated test (Gavin M. Roy)
- aecf3f2 - Don't reject a message if the channel is not open (Gavin M. Roy)
- e37f336 - Remove UTF-8 decoding in spec (Gavin M. Roy)
- ddc35a9 - Update the unittest to reflect removal of force binary (Gavin M. Roy)
- fea2476 - PEP8 cleanup (Gavin M. Roy)
- 9b97956 - Remove force_binary (Gavin M. Roy)
- a42dd90 - Whitespace required (Gavin M. Roy)
- 85867ea - Update the content_frame_dispatcher tests to reflect removal of auto-cast utf-8 (Gavin M. Roy)
- 5a4bd5d - Remove unicode casting (Gavin M. Roy)
- efea53d - Remove force binary and unicode casting (Gavin M. Roy)
- e918d15 - Add methods to remove deprecation warnings from asyncore (Gavin M. Roy)
- 117f62d - Add a coveragerc to ignore the auto generated pika.spec (Gavin M. Roy)
- 52f4485 - Remove pypy tests from travis for now (Gavin M. Roy)
- c3aa958 - Update README.rst (Gavin M. Roy)
- 3e2319f - Delete README.md (Gavin M. Roy)
- c12b0f1 - Move to RST (Gavin M. Roy)
- 704f5be - Badging updates (Gavin M. Roy)
- 7ae33ca - Update for coverage info (Gavin M. Roy)
- ae7ca86 - add libev_adapter_tests.py; modify .travis.yml to install libev and pyev (ml)
- f86aba5 - libev_connection: add **kwargs to _handle_event; suppress default_ioloop reuse warning (ml)
- 603f1cf - async_test_base: add necessary args to _on_cconn_closed (ml)
- 3422007 - add libev_adapter_tests.py (ml)
- 6cbab0c - removed relative imports and importing urlparse from urllib.parse for py3+ (a-tal)
- f808464 - libev_connection: add async watcher; add optional parameters to add_timeout (ml)
- c041c80 - Remove ev all together for now (Gavin M. Roy)
- 9408388 - Update the test descriptions and timeout (Gavin M. Roy)
- 1b552e0 - Increase timeout (Gavin M. Roy)
- 69a1f46 - Remove the pyev requirement for 2.6 testing (Gavin M. Roy)
- fe062d2 - Update package name (Gavin M. Roy)
- 611ad0e - Distribute the LICENSE and README.md (#350) (Gavin M. Roy)
- df5e1d8 - Ensure that the entire frame is written using socket.sendall (#349) (Gavin M. Roy)
- 69ec8cf - Move the libev install to before_install (Gavin M. Roy)
- a75f693 - Update test structure (Gavin M. Roy)
- 636b424 - Update things to ignore (Gavin M. Roy)
- b538c68 - Add tox, nose.cfg, update testing config (Gavin M. Roy)
- a0e7063 - add some tests to increase coverage of pika.connection (Charles Law)
- c76d9eb - Address issue #459 (Gavin M. Roy)
- 86ad2db - Raise exception if positional arg for parameters isn't an instance of Parameters (Gavin M. Roy)
- 14d08e1 - Fix for python 2.6 (Gavin M. Roy)
- bd388a3 - Use the first unused channel number addressing #404, #460 (Gavin M. Roy)
- e7676e6 - removing a debug that was left in last commit (James Mutton)
- 6c93b38 - Fixing connection-closed behavior to detect on attempt to publish (James Mutton)
- c3f0356 - Initialize bytes_written in _handle_write() (Jonathan Kirsch)
- 4510e95 - Fix _handle_write() may not send full frame (Jonathan Kirsch)
- 12b793f - fixed Tornado Consumer example to successfully reconnect (Yang Yang)
- f074444 - remove forgotten import of ordereddict (Pedro Abranches)
- 1ba0aea - fix last merge (Pedro Abranches)
- 10490a6 - change timeouts structure to list to maintain scheduling order (Pedro Abranches)
- 7958394 - save timeouts in ordered dict instead of dict (Pedro Abranches)
- d2746bf - URLParameters and ConnectionParameters accept unicode strings (Allard Hoeve)
- 596d145 - previous fix for AttributeError made parent and child class methods identical, remove duplication (James Mutton)
- 42940dd - UrlParameters Docs: fixed amqps scheme examples (Riccardo Cirimelli)
- 43904ff - Dont test this in PyPy due to sort order issue (Gavin M. Roy)
- d7d293e - Don't leave __repr__ sorting up to chance (Gavin M. Roy)
- 848c594 - Add integration test to travis and fix invocation (Gavin M. Roy)
- 2678275 - Add pypy to travis tests (Gavin M. Roy)
- 1877f3d - Also addresses issue #419 (Gavin M. Roy)
- 470c245 - Address issue #419 (Gavin M. Roy)
- ca3cb59 - Address issue #432 (Gavin M. Roy)
- a3ff6f2 - Default frame max should be AMQP FRAME_MAX (Gavin M. Roy)
- ff3d5cb - Remove max consumer tag test due to change in code. (Gavin M. Roy)
- 6045dda - Catch KeyError (#437) to ensure that an exception is not raised in a race condition (Gavin M. Roy)
- 0b4d53a - Address issue #441 (Gavin M. Roy)
- 180e7c4 - Update license and related files (Gavin M. Roy)
- 256ed3d - Added Jython support. (Erik Olof Gunnar Andersson)
- f73c141 - experimental work around for recursion issue. (Erik Olof Gunnar Andersson)
- a623f69 - Prevent #436 by iterating the keys and not the dict (Gavin M. Roy)
- 755fcae - Add support for authentication_failure_close, connection.blocked (Gavin M. Roy)
- c121243 - merge upstream master (Michael Laing)
- a08dc0d - add arg to channel.basic_consume (Pedro Abranches)
- 10b136d - Documentation fix (Anton Ryzhov)
- 9313307 - Fixed minor markup errors. (Jorge Puente Sarrín)
- fb3e3cf - Fix the spelling of UnsupportedAMQPFieldException (Garrett Cooper)
- 03d5da3 - connection.py: Propagate the force_channel keyword parameter to methods involved in channel creation (Michael Laing)
- 7bbcff5 - Documentation fix for basic_publish (JuhaS)
- 01dcea7 - Expose no_ack and exclusive to BlockingChannel.consume (Jeff Tang)
- d39b6aa - Fix BlockingChannel.basic_consume does not block on non-empty queues (Juhyeong Park)
- 6e1d295 - fix for issue 391 and issue 307 (Qi Fan)
- d9ffce9 - Update parameters.rst (cacovsky)
- 6afa41e - Add additional badges (Gavin M. Roy)
- a255925 - Fix return value on dns resolution issue (Laurent Eschenauer)
- 3f7466c - libev_connection: tweak docs (Michael Laing)
- 0aaed93 - libev_connection: Fix varable naming (Michael Laing)
- 0562d08 - libev_connection: Fix globals warning (Michael Laing)
- 22ada59 - libev_connection: use globals to track sigint and sigterm watchers as they are created globally within libev (Michael Laing)
- 2649b31 - Move badge [skip ci] (Gavin M. Roy)
- f70eea1 - Remove pypy and installation attempt of pyev (Gavin M. Roy)
- f32e522 - Conditionally skip external connection adapters if lib is not installed (Gavin M. Roy)
- cce97c5 - Only install pyev on python 2.7 (Gavin M. Roy)
- ff84462 - Add travis ci support (Gavin M. Roy)
- cf971da - lib_evconnection: improve signal handling; add callback (Michael Laing)
- 9adb269 - bugfix in returning a list in Py3k (Alex Chandel)
- c41d5b9 - update exception syntax for Py3k (Alex Chandel)
- c8506f1 - fix _adapter_connect (Michael Laing)
- 67cb660 - Add LibevConnection to README (Michael Laing)
- 1f9e72b - Propagate low-level connection errors to the AMQPConnectionError. (Bjorn Sandberg)
- e1da447 - Avoid race condition in _on_getok on successive basic_get() when clearing out callbacks (Jeff)
- 7a09979 - Add support for upcoming Connection.Blocked/Unblocked (Gavin M. Roy)
- 53cce88 - TwistedChannel correctly handles multi-argument deferreds. (eivanov)
- 66f8ace - Use uuid when creating unique consumer tag (Perttu Ranta-aho)
- 4ee2738 - Limit the growth of Channel._cancelled, use deque instead of list. (Perttu Ranta-aho)
- 0369aed - fix adapter references and tweak docs (Michael Laing)
- 1738c23 - retry select.select() on EINTR (Cenk Alti)
- 1e55357 - libev_connection: reset internal state on reconnect (Michael Laing)
- 708559e - libev adapter (Michael Laing)
- a6b7c8b - Prioritize EPollPoller and KQueuePoller over PollPoller and SelectPoller (Anton Ryzhov)
- 53400d3 - Handle socket errors in PollPoller and EPollPoller. Correctly check 'select.poll' availability (Anton Ryzhov)
- a6dc969 - Use dict.keys & items instead of iterkeys & iteritems (Alex Chandel)
- 5c1b0d0 - Use print function syntax, in examples (Alex Chandel)
- ac9f87a - Fixed a typo in the name of the Asyncore Connection adapter (Guruprasad)
- dfbba50 - Fixed bug mentioned in Issue #357 (Erik Andersson)
- c906a2d - Drop additional flags when getting info for the hostnames, log errors (#352) (Gavin M. Roy)
- baf23dd - retry poll() on EINTR (Cenk Alti)
- 7cd8762 - Address ticket #352 catching an error when socket.getprotobyname fails (Gavin M. Roy)
- 6c3ec75 - Prep for 0.9.14 (Gavin M. Roy)
- dae7a99 - Bump to 0.9.14p0 (Gavin M. Roy)
- 620edc7 - Use default port and virtual host if omitted in URLParameters (Issue #342) (Gavin M. Roy)
- 42a8787 - Move the exception handling inside the while loop (Gavin M. Roy)
- 10e0264 - Fix connection back pressure detection issue #347 (Gavin M. Roy)
- 0bfd670 - Fixed mistake in commit 3a19d65. (Erik Andersson)
- da04bc0 - Fixed Unknown state on disconnect error message generated when closing connections. (Erik Andersson)
- 3a19d65 - Alternative solution to fix #345. (Erik Andersson)
- abf9fa8 - switch to sendall to send entire frame (Dustin Koupal)
- 9ce8ce4 - Fixed the async publisher example to work with reconnections (Raphaël De Giusti)
- 511028a - Fix typo in TwistedChannel docstring (cacovsky)
- 8b69e5a - calls self._adapter_disconnect() instead of self.disconnect() which doesn't actually exist #294 (Mark Unsworth)
- 06a5cf8 - add NullHandler to prevent logging warnings (Cenk Alti)
- f404a9a - Fix #337 cannot start ioloop after stop (Ralf Nyren)

0.9.13 - 2013-05-15
-------------------

**Major Changes**

- IPv6 Support with thanks to Alessandro Tagliapietra for initial prototype
- Officially remove support for <= Python 2.5 even though it was broken already
- Drop pika.simplebuffer.SimpleBuffer in favor of the Python stdlib collections.deque object
- New default object for receiving content is a "bytes" object, which is a str wrapper in Python 2 but paves the way for Python 3 support
- New "Raw" mode for frame decoding content frames (#334) addresses issues #331, #229; added by Garth Williamson
- Connection and Disconnection logic refactored, allowing for cleaner separation of protocol logic and socket handling logic as well as connection state management
- New "on_open_error_callback" argument in creating connection objects and new Connection.add_on_open_error_callback method
- New Connection.connect method to cleanly allow for reconnection code
- Support for all AMQP field types, using protocol specified signed/unsigned unpacking

**Backwards Incompatible Changes**

- Method signature for creating connection objects has new argument "on_open_error_callback" which is positionally before "on_close_callback"
- Internal callback variable names in connection.Connection have been renamed and constants used. If you relied on any of these callbacks outside of their internal use, make sure to check out the new constants.
- Connection._connect method, which was an internal-only method, is now deprecated and will raise a DeprecationWarning. If you relied on this method, your code needs to change.
- pika.simplebuffer has been removed

**Bugfixes**

- BlockingConnection consumer generator does not free buffer when exited (#328)
- Unicode body payloads in the blocking adapter raises exception (#333)
- Support "b" short-short-int AMQP data type (#318)
- Docstring type fix in adapters/select_connection (#316) fix by Rikard Hultén
- IPv6 not supported (#309)
- Stop the HeartbeatChecker when connection is closed (#307)
- Unittest fix for SelectConnection (#336) fix by Erik Andersson
- Handle condition where no connection or socket exists but SelectConnection needs a timeout for retrying a connection (#322)
- TwistedAdapter lagging behind BaseConnection changes (#321) fix by Jan Urbański

**Other**

- Refactored documentation
- Added Twisted Adapter example (#314) by nolinksoft

0.9.12 - 2013-03-18
-------------------

**Bugfixes**

- New timeout id hashing was not unique

0.9.11 - 2013-03-17
-------------------

**Bugfixes**

- Address inconsistent channel close callback documentation and add the signature change to the TwistedChannel class (#305)
- Address a missed timeout-related internal data structure name change introduced in the SelectConnection 0.9.10 release. Update all connection adapters to use same signature and docstring (#306).

0.9.10 - 2013-03-16
-------------------

**Bugfixes**

- Fix timeout in twisted adapter (Submitted by cellscape)
- Fix blocking_connection poll timer resolution to milliseconds (Submitted by cellscape)
- Fix channel._on_close() without a method frame (Submitted by Richard Boulton)
- Addressed exception on close (Issue #279 - fix by patcpsc)
- 'messages' not initialized in BlockingConnection.cancel() (Issue #289 - fix by Mik Kocikowski)
- Make queue_unbind behave like queue_bind (Issue #277)
- Address closing behavioral issues for connections and channels (Issue #275)
- Pass a Method frame to Channel._on_close in Connection._on_disconnect (Submitted by Jan Urbański)
- Fix channel closed callback signature in the Twisted adapter (Submitted by Jan Urbański)
- Don't stop the IOLoop on connection close in the Twisted adapter (Submitted by Jan Urbański)
- Update the asynchronous examples to fix reconnecting and have it work
- Warn if the socket was closed, such as if RabbitMQ dies without a Close frame
- Fix URLParameters ssl_options (Issue #296)
- Add state to BlockingConnection addressing (Issue #301)
- Encode unicode body content prior to publishing (Issue #282)
- Fix an issue with unicode keys in BasicProperties headers key (Issue #280)
- Change how timeout ids are generated (Issue #254)
- Address post close state issues in Channel (Issue #302)

**Behavior changes**

- Change core connection communication behavior to prefer outbound writes over reads, addressing a recursion issue
- Update connection on close callbacks, changing callback method signature
- Update channel on close callbacks, changing callback method signature
- Give more info in the ChannelClosed exception
- Change the constructor signature for BlockingConnection, block open/close callbacks
- Disable the use of add_on_open_callback/add_on_close_callback methods in BlockingConnection

0.9.9 - 2013-01-29
------------------
**Bugfixes** - Only remove the tornado_connection.TornadoConnection file descriptor from the IOLoop if it's still open (Issue #221) - Allow messages with no body (Issue #227) - Allow for empty routing keys (Issue #224) - Don't raise an exception when trying to send a frame to a closed connection (Issue #229) - Only send a Connection.CloseOk if the connection is still open. (Issue #236 - Fix by noleaf) - Fix timeout threshold in blocking connection - (Issue #232 - Fix by Adam Flynn) - Fix closing connection while a channel is still open (Issue #230 - Fix by Adam Flynn) - Fixed misleading warning and exception messages in BaseConnection (Issue #237 - Fix by Tristan Penman) - Pluralised and altered the wording of the AMQPConnectionError exception (Issue #237 - Fix by Tristan Penman) - Fixed _adapter_disconnect in TornadoConnection class (Issue #237 - Fix by Tristan Penman) - Fixing hang when closing connection without any channel in BlockingConnection (Issue #244 - Fix by Ales Teska) - Remove the process_timeouts() call in SelectConnection (Issue #239) - Change the string validation to basestring for host connection parameters (Issue #231) - Add a poller to the BlockingConnection to address latency issues introduced in Pika 0.9.8 (Issue #242) - reply_code and reply_text is not set in ChannelException (Issue #250) - Add the missing constraint parameter for Channel._on_return callback processing (Issue #257 - Fix by patcpsc) - Channel callbacks not being removed from callback manager when channel is closed or deleted (Issue #261) 0.9.8 - 2012-11-18 ------------------ **Bugfixes** - Channel.queue_declare/BlockingChannel.queue_declare not setting up callbacks property for empty queue name (Issue #218) - Channel.queue_bind/BlockingChannel.queue_bind not allowing empty routing key - Connection._on_connection_closed calling wrong method in Channel (Issue #219) - Fix tx_commit and tx_rollback bugs in BlockingChannel (Issue #217) 0.9.7 - 2012-11-11 ------------------ **New 
features** - generator based consumer in BlockingChannel (See :doc:`examples/blocking_consumer_generator` for example) **Changes** - BlockingChannel._send_method will only wait if explicitly told to **Bugfixes** - Added the exchange "type" parameter back but issue a DeprecationWarning - Don't require a queue name in Channel.queue_declare() - Fixed KeyError when processing timeouts (Issue #215 - Fix by Raphael De Giusti) - Don't try to close channels when the connection is closed (Issue #216 - Fix by Charles Law) - Don't raise UnexpectedFrame exceptions, log them instead - Handle multiple synchronous RPC calls made without waiting for the call result (Issues #192, #204, #211) - Typo in docs (Issue #207 - Fix by Luca Wehrstedt) - Only sleep on connection failure when retry attempts are > 0 (Issue #200) - Bypass _rpc method and just send frames for Basic.Ack, Basic.Nack, Basic.Reject (Issue #205) 0.9.6 - 2012-10-29 ------------------ **New features** - URLParameters - BlockingChannel.start_consuming() and BlockingChannel.stop_consuming() - Delivery Confirmations - Improved unittests **Major bugfix areas** - Connection handling - Blocking functionality in the BlockingConnection - SSL - UTF-8 Handling **Removals** - pika.reconnection_strategies - pika.channel.ChannelTransport - pika.log - pika.template - examples directory 0.9.5 - 2011-03-29 ------------------ **Changelog** - Scope changes with adapter IOLoops and CallbackManager allowing for cleaner, multi-threaded operation - Add support for Confirm.Select with channel.Channel.confirm_delivery() - Add examples of delivery confirmation to examples (demo_send_confirmed.py) - Update uses of log.warn with warnings.warn for TCP Back-pressure alerting - License boilerplate updated to simplify license text in source files - Increment the timeout in select_connection.SelectPoller reducing CPU utilization - Bug fix in Heartbeat frame delivery addressing issue #35 - Remove abuse of pika.log.method_call through a majority of the
code - Rename of key modules: table to data, frames to frame - Cleanup of frame module and related classes - Restructure of tests and test runner - Update functional tests to respect RABBITMQ_HOST, RABBITMQ_PORT environment variables - Bug fixes to reconnection_strategies module - Fix the scale of timeout for PollPoller to be specified in milliseconds - Remove mutable default arguments in RPC calls - Add data type validation to RPC calls - Move optional credentials erasing out of connection.Connection into credentials module - Add support to allow for additional external credential types - Add a NullHandler to prevent the 'No handlers could be found for logger "pika"' error message when not using pika.log in a client app at all. - Clean up all examples to make them easier to read and use - Move documentation into its own repository https://github.com/pika/documentation - channel.py - Move channel.MAX_CHANNELS constant from connection.CHANNEL_MAX - Add default value of None to ChannelTransport.rpc - Validate callback and acceptable replies parameters in ChannelTransport.RPC - Remove unused connection attribute from Channel - connection.py - Remove unused import of struct - Remove direct import of pika.credentials.PlainCredentials - Change to import pika.credentials - Move CHANNEL_MAX to channel.MAX_CHANNELS - Change ConnectionParameters initialization parameter heartbeat to boolean - Validate all inbound parameter types in ConnectionParameters - Remove the Connection._erase_credentials stub method in favor of letting the Credentials object deal with that itself. - Warn if the credentials object intends on erasing the credentials and a reconnection strategy other than NullReconnectionStrategy is specified. 
- Change the default types for callback and acceptable_replies in Connection._rpc - Validate the callback and acceptable_replies data types in Connection._rpc - adapters.blocking_connection.BlockingConnection - Addition of _adapter_disconnect to blocking_connection.BlockingConnection - Add timeout methods to BlockingConnection addressing issue #41 - BlockingConnection didn't allow you to register more than one consumer callback because basic_consume was overridden to block immediately. New behavior allows you to do so. - Removed overriding of base basic_consume and basic_cancel methods. Now uses underlying Channel versions of those methods. - Added start_consuming() method to BlockingChannel to start the consumption loop. - Updated stop_consuming() to iterate through all the registered consumers in self._consumers and issue a basic_cancel. pika-0.10.0/LICENSE Copyright (c) 2009-2015, Tony Garnock-Jones, Gavin M. Roy, Pivotal and others. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the Pika project nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. pika-0.10.0/MANIFEST.in000066400000000000000000000000421257163076400142610ustar00rootroot00000000000000include LICENSE include README.rstpika-0.10.0/README.rst000066400000000000000000000066071257163076400142270ustar00rootroot00000000000000Pika, an AMQP 0-9-1 client library for Python ============================================= |Version| |Downloads| |Status| |Coverage| |License| Introduction ------------- Pika is a pure-Python implementation of the AMQP 0-9-1 protocol that tries to stay fairly independent of the underlying network support library. - Python 2.6+ and 3.3+ are supported. - Since threads aren't appropriate to every situation, it doesn't require threads. It takes care not to forbid them, either. The same goes for greenlets, callbacks, continuations and generators. It is not necessarily thread-safe however, and your mileage will vary. - People may be using direct sockets, plain old `select()`, or any of the wide variety of ways of getting network events to and from a python application. Pika tries to stay compatible with all of these, and to make adapting it to a new environment as simple as possible. Documentation ------------- Pika's documentation can be found at `https://pika.readthedocs.org `_ Example ------- Here is the most simple example of use, sending a message with the BlockingConnection adapter: .. 
code :: python import pika connection = pika.BlockingConnection() channel = connection.channel() channel.basic_publish(exchange='example', routing_key='test', body='Test Message') connection.close() And an example of writing a blocking consumer: .. code :: python import pika connection = pika.BlockingConnection() channel = connection.channel() for method_frame, properties, body in channel.consume('test'): # Display the message parts and ack the message print method_frame, properties, body channel.basic_ack(method_frame.delivery_tag) # Escape out of the loop after 10 messages if method_frame.delivery_tag == 10: break # Cancel the consumer and return any pending messages requeued_messages = channel.cancel() print 'Requeued %i messages' % requeued_messages connection.close() Pika provides the following adapters ------------------------------------ - BlockingConnection - enables blocking, synchronous operation on top of library for simple uses - LibevConnection - adapter for use with the libev event loop http://libev.schmorp.de - SelectConnection - fast asynchronous adapter - TornadoConnection - adapter for use with the Tornado IO Loop http://tornadoweb.org - TwistedConnection - adapter for use with the Twisted asynchronous package http://twistedmatrix.com/ Contributing ------------ To contribute to pika, please make sure that any new features or changes to existing functionality include test coverage. Additionally, please format your code using `yapf `_ with ``google`` style prior to issuing your pull request. .. |Version| image:: https://img.shields.io/pypi/v/pika.svg? :target: http://badge.fury.io/py/pika .. |Status| image:: https://img.shields.io/travis/pika/pika.svg? :target: https://travis-ci.org/pika/pika .. |Coverage| image:: https://img.shields.io/codecov/c/github/pika/pika.svg? :target: https://codecov.io/github/pika/pika?branch=master .. |Downloads| image:: https://img.shields.io/pypi/dm/pika.svg? :target: https://pypi.python.org/pypi/pika .. 
|License| image:: https://img.shields.io/pypi/l/pika.svg? :target: https://pika.readthedocs.org pika-0.10.0/docs/ pika-0.10.0/docs/Makefile # Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf
$(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/pika.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/pika.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/pika" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/pika" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." 
latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." 
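The Sphinx Makefile above composes every builder invocation from `ALLSPHINXOPTS`, using the computed variable name `PAPEROPT_$(PAPER)` to select a paper-size flag. The following is a minimal, self-contained sketch of that mechanism; `echo` stands in for `sphinx-build` so the expansion is visible without Sphinx installed, and the temp-file path is arbitrary:

```shell
# Reproduce the option-composition pattern from the Sphinx Makefile above.
# '.RECIPEPREFIX = >' lets recipe lines start with '>' instead of a tab (GNU make 3.82+).
cat > /tmp/demo_sphinx.mk <<'EOF'
.RECIPEPREFIX = >
SPHINXOPTS    =
SPHINXBUILD   = echo sphinx-build
PAPER         =
BUILDDIR      = _build

# PAPEROPT_$(PAPER) expands to PAPEROPT_a4 or PAPEROPT_letter (or nothing).
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

html:
> $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
EOF

# Command-line variables override assignments in the makefile:
make -s -f /tmp/demo_sphinx.mk html PAPER=a4
```

In the real tree the equivalent is simply `cd docs && make html` (or, e.g., `make latexpdf PAPER=a4`), which invokes `sphinx-build` with the same composed options.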
pika-0.10.0/docs/conf.py # -*- coding: utf-8 -*- import sys sys.path.insert(0, '../') #needs_sphinx = '1.0' extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.intersphinx'] intersphinx_mapping = {'python': ('https://docs.python.org/3/', 'https://docs.python.org/3/objects.inv'), 'tornado': ('http://www.tornadoweb.org/en/stable/', 'http://www.tornadoweb.org/en/stable/objects.inv')} templates_path = ['_templates'] source_suffix = '.rst' master_doc = 'index' project = 'pika' copyright = '2009-2015, Tony Garnock-Jones, Gavin M. Roy, Pivotal and others.' import pika release = pika.__version__ version = '.'.join(release.split('.')[0:1]) exclude_patterns = ['_build'] add_function_parentheses = True add_module_names = True show_authors = True pygments_style = 'sphinx' modindex_common_prefix = ['pika'] html_theme = 'default' html_static_path = ['_static'] htmlhelp_basename = 'pikadoc' pika-0.10.0/docs/contributors.rst Contributors ============ The following people have directly contributed code by way of new features and/or bug fixes to Pika: - Gavin M. Roy - Tony Garnock-Jones - Vitaly Kruglikov - Michael Laing - Marek Majkowski - Jan Urbański - Brian K. Jones - Ask Solem - ml - Will - atatsu - Fredrik Svensson - Pedro Abranches - Kyösti Herrala - Erik Andersson - Charles Law - Alex Chandel - Tristan Penman - Raphaël De Giusti - Jozef Van Eenbergen - Josh Braegger - Jason J. W. Williams - James Mutton - Cenk Alti - Asko Soukka - Antti Haapala - Anton Ryzhov - cellscape - cacovsky - bra-fsn - ateska - Roey Berman - Robert Weidlich - Riccardo Cirimelli - Perttu Ranta-aho - Pau Gargallo - Kane - Kamil Kisiel - Jonty Wareing - Jonathan Kirsch - Jacek 'Forger' Całusiński - Garth Williamson - Erik Olof Gunnar Andersson - David Strauss - Anton V.
Yanchenko - Alexey Myasnikov - Alessandro Tagliapietra - Adam Flynn - skftn - saarni - pavlobaron - nonleaf - markcf - george y - eivanov - bstemshorn - a-tal - Yang Yang - Stuart Longland - Sigurd Høgsbro - Sean Dwyer - Samuel Stauffer - Roberto Decurnex - Rikard Hultén - Richard Boulton - Ralf Nyren - Qi Fan - Peter Magnusson - Pankrat - Olivier Le Thanh Duong - Njal Karevoll - Milan Skuhra - Mik Kocikowski - Michael Kenney - Mark Unsworth - Luca Wehrstedt - Laurent Eschenauer - Lars van de Kerkhof - Kyösti Herrala - Juhyeong Park - JuhaS - Josh Hansen - Jorge Puente Sarrín - Jeff Tang - Jeff Fein-Worton - Jeff - Hunter Morris - Guruprasad - Garrett Cooper - Frank Slaughter - Dustin Koupal - Bjorn Sandberg - Axel Eirola - Andrew Smith - Andrew Grigorev - Andrew - Allard Hoeve - A.Shaposhnikov *Contributors listed by commit count.* pika-0.10.0/docs/examples.rst Usage Examples ============== Pika can be used in several ways, from the synchronous BlockingConnection adapter to the various asynchronous connection adapters. The following examples illustrate the various ways that you can use Pika in your projects. ..
toctree:: :glob: :maxdepth: 1 examples/using_urlparameters examples/connecting_async examples/blocking_basic_get examples/blocking_consume examples/blocking_consumer_generator examples/comparing_publishing_sync_async examples/blocking_delivery_confirmations examples/blocking_publish_mandatory examples/asynchronous_consumer_example examples/asynchronous_publisher_example examples/twisted_example examples/tornado_consumer pika-0.10.0/docs/examples/ pika-0.10.0/docs/examples/asynchronous_consumer_example.rst Asynchronous consumer example ============================= The following example implements a consumer that will respond to RPC commands sent from RabbitMQ. For example, it will reconnect if RabbitMQ closes the connection and will shut down if RabbitMQ cancels the consumer or closes the channel. While it may look intimidating, each method is very short and represents an individual action that a consumer can perform. consumer.py:: # -*- coding: utf-8 -*- import logging import pika LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) class ExampleConsumer(object): """This is an example consumer that will handle unexpected interactions with RabbitMQ such as channel and connection closures. If RabbitMQ closes the connection, it will reopen it. You should look at the output, as there are limited reasons why the connection may be closed, which usually are tied to permission related issues or socket timeouts. If the channel is closed, it will indicate a problem with one of the commands that were issued and that should surface in the output as well.
""" EXCHANGE = 'message' EXCHANGE_TYPE = 'topic' QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Create a new instance of the consumer class, passing in the AMQP URL used to connect to RabbitMQ. :param str amqp_url: The AMQP url to connect with """ self._connection = None self._channel = None self._closing = False self._consumer_tag = None self._url = amqp_url def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. :rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return pika.SelectConnection(pika.URLParameters(self._url), self.on_connection_open, stop_ioloop_on_close=False) def on_connection_open(self, unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :type unused_connection: pika.SelectConnection """ LOGGER.info('Connection opened') self.add_on_connection_close_callback() self.open_channel() def add_on_connection_close_callback(self): """This method adds an on close callback that will be invoked by pika when RabbitMQ closes the connection to the publisher unexpectedly. """ LOGGER.info('Adding connection close callback') self._connection.add_on_close_callback(self.on_connection_closed) def on_connection_closed(self, connection, reply_code, reply_text): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. 
:param pika.connection.Connection connection: The closed connection obj :param int reply_code: The server provided reply_code if given :param str reply_text: The server provided reply_text if given """ self._channel = None if self._closing: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reopening in 5 seconds: (%s) %s', reply_code, reply_text) self._connection.add_timeout(5, self.reconnect) def reconnect(self): """Will be invoked by the IOLoop timer if the connection is closed. See the on_connection_closed method. """ # This is the old connection IOLoop instance, stop its ioloop self._connection.ioloop.stop() if not self._closing: # Create a new connection self._connection = self.connect() # There is now a new connection, needs a new ioloop to run self._connection.ioloop.start() def open_channel(self): """Open a new channel with RabbitMQ by issuing the Channel.Open RPC command. When RabbitMQ responds that the channel is open, the on_channel_open callback will be invoked by pika. """ LOGGER.info('Creating a new channel') self._connection.channel(on_open_callback=self.on_channel_open) def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. :param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reply_code, reply_text): """Invoked by pika when RabbitMQ unexpectedly closes the channel. 
Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. :param pika.channel.Channel: The closed channel :param int reply_code: The numeric reason the channel was closed :param str reply_text: The text reason the channel was closed """ LOGGER.warning('Channel %i was closed: (%s) %s', channel, reply_code, reply_text) self._connection.close() def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. :param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) self._channel.exchange_declare(self.on_exchange_declareok, exchange_name, self.EXCHANGE_TYPE) def on_exchange_declareok(self, unused_frame): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame """ LOGGER.info('Exchange declared') self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. :param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) self._channel.queue_declare(self.on_queue_declareok, queue_name) def on_queue_declareok(self, method_frame): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. 
:param pika.frame.Method method_frame: The Queue.DeclareOk frame """ LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, self.QUEUE, self.ROUTING_KEY) self._channel.queue_bind(self.on_bindok, self.QUEUE, self.EXCHANGE, self.ROUTING_KEY) def on_bindok(self, unused_frame): """Invoked by pika when the Queue.Bind method has completed. At this point we will start consuming messages by calling start_consuming which will invoke the needed RPC commands to start the process. :param pika.frame.Method unused_frame: The Queue.BindOk response frame """ LOGGER.info('Queue bound') self.start_consuming() def start_consuming(self): """This method sets up the consumer by first calling add_on_cancel_callback so that the object is notified if RabbitMQ cancels the consumer. It then issues the Basic.Consume RPC command which returns the consumer tag that is used to uniquely identify the consumer with RabbitMQ. We keep the value to use it when we want to cancel consuming. The on_message method is passed in as a callback pika will invoke when a message is fully received. """ LOGGER.info('Issuing consumer related RPC commands') self.add_on_cancel_callback() self._consumer_tag = self._channel.basic_consume(self.on_message, self.QUEUE) def add_on_cancel_callback(self): """Add a callback that will be invoked if RabbitMQ cancels the consumer for some reason. If RabbitMQ does cancel the consumer, on_consumer_cancelled will be invoked by pika. """ LOGGER.info('Adding consumer cancellation callback') self._channel.add_on_cancel_callback(self.on_consumer_cancelled) def on_consumer_cancelled(self, method_frame): """Invoked by pika when RabbitMQ sends a Basic.Cancel for a consumer receiving messages. 
:param pika.frame.Method method_frame: The Basic.Cancel frame """ LOGGER.info('Consumer was cancelled remotely, shutting down: %r', method_frame) if self._channel: self._channel.close() def on_message(self, unused_channel, basic_deliver, properties, body): """Invoked by pika when a message is delivered from RabbitMQ. The channel is passed for your convenience. The basic_deliver object that is passed in carries the exchange, routing key, delivery tag and a redelivered flag for the message. The properties passed in is an instance of BasicProperties with the message properties and the body is the message that was sent. :param pika.channel.Channel unused_channel: The channel object :param pika.Spec.Basic.Deliver: basic_deliver method :param pika.Spec.BasicProperties: properties :param str|unicode body: The message body """ LOGGER.info('Received message # %s from %s: %s', basic_deliver.delivery_tag, properties.app_id, body) self.acknowledge_message(basic_deliver.delivery_tag) def acknowledge_message(self, delivery_tag): """Acknowledge the message delivery from RabbitMQ by sending a Basic.Ack RPC method for the delivery tag. :param int delivery_tag: The delivery tag from the Basic.Deliver frame """ LOGGER.info('Acknowledging message %s', delivery_tag) self._channel.basic_ack(delivery_tag) def stop_consuming(self): """Tell RabbitMQ that you would like to stop consuming by sending the Basic.Cancel RPC command. """ if self._channel: LOGGER.info('Sending a Basic.Cancel RPC command to RabbitMQ') self._channel.basic_cancel(self.on_cancelok, self._consumer_tag) def on_cancelok(self, unused_frame): """This method is invoked by pika when RabbitMQ acknowledges the cancellation of a consumer. At this point we will close the channel. This will invoke the on_channel_closed method once the channel has been closed, which will in-turn close the connection. 
:param pika.frame.Method unused_frame: The Basic.CancelOk frame """ LOGGER.info('RabbitMQ acknowledged the cancellation of the consumer') self.close_channel() def close_channel(self): """Call to close the channel with RabbitMQ cleanly by issuing the Channel.Close RPC command. """ LOGGER.info('Closing the channel') self._channel.close() def run(self): """Run the example consumer by connecting to RabbitMQ and then starting the IOLoop to block and allow the SelectConnection to operate. """ self._connection = self.connect() self._connection.ioloop.start() def stop(self): """Cleanly shut down the connection to RabbitMQ by stopping the consumer with RabbitMQ. When RabbitMQ confirms the cancellation, on_cancelok will be invoked by pika, which will then close the channel and connection. The IOLoop is started again because this method is invoked when CTRL-C is pressed raising a KeyboardInterrupt exception. This exception stops the IOLoop which needs to be running for pika to communicate with RabbitMQ. All of the commands issued prior to starting the IOLoop will be buffered but not processed. """ LOGGER.info('Stopping') self._closing = True self.stop_consuming() self._connection.ioloop.start() LOGGER.info('Stopped') def close_connection(self): """This method closes the connection to RabbitMQ.""" LOGGER.info('Closing connection') self._connection.close() def main(): logging.basicConfig(level=logging.INFO, format=LOG_FORMAT) example = ExampleConsumer('amqp://guest:guest@localhost:5672/%2F') try: example.run() except KeyboardInterrupt: example.stop() if __name__ == '__main__': main() pika-0.10.0/docs/examples/asynchronous_publisher_example.rst Asynchronous publisher example ============================== The following example implements a publisher that will respond to RPC commands sent from RabbitMQ and uses delivery confirmations.
It will reconnect if RabbitMQ closes the connection and will shut down if RabbitMQ closes the channel. While it may look intimidating, each method is very short and represents an individual action that a publisher can perform. publisher.py:: # -*- coding: utf-8 -*- import logging import pika import json LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) class ExamplePublisher(object): """This is an example publisher that will handle unexpected interactions with RabbitMQ such as channel and connection closures. If RabbitMQ closes the connection, it will reopen it. You should look at the output, as there are limited reasons why the connection may be closed, which usually are tied to permission related issues or socket timeouts. It uses delivery confirmations and illustrates one way to keep track of messages that have been sent and if they've been confirmed by RabbitMQ. """ EXCHANGE = 'message' EXCHANGE_TYPE = 'topic' PUBLISH_INTERVAL = 1 QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Setup the example publisher object, passing in the URL we will use to connect to RabbitMQ. :param str amqp_url: The URL for connecting to RabbitMQ """ self._connection = None self._channel = None self._deliveries = [] self._acked = 0 self._nacked = 0 self._message_number = 0 self._stopping = False self._url = amqp_url self._closing = False def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. If you want the reconnection to work, make sure you set stop_ioloop_on_close to False, which is not the default behavior of this adapter.
:rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return pika.SelectConnection(pika.URLParameters(self._url), self.on_connection_open, stop_ioloop_on_close=False) def on_connection_open(self, unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :type unused_connection: pika.SelectConnection """ LOGGER.info('Connection opened') self.add_on_connection_close_callback() self.open_channel() def add_on_connection_close_callback(self): """This method adds an on close callback that will be invoked by pika when RabbitMQ closes the connection to the publisher unexpectedly. """ LOGGER.info('Adding connection close callback') self._connection.add_on_close_callback(self.on_connection_closed) def on_connection_closed(self, connection, reply_code, reply_text): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. :param pika.connection.Connection connection: The closed connection obj :param int reply_code: The server provided reply_code if given :param str reply_text: The server provided reply_text if given """ self._channel = None if self._closing: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reopening in 5 seconds: (%s) %s', reply_code, reply_text) self._connection.add_timeout(5, self.reconnect) def reconnect(self): """Will be invoked by the IOLoop timer if the connection is closed. See the on_connection_closed method. 
""" self._deliveries = [] self._acked = 0 self._nacked = 0 self._message_number = 0 # This is the old connection IOLoop instance, stop its ioloop self._connection.ioloop.stop() # Create a new connection self._connection = self.connect() # There is now a new connection, needs a new ioloop to run self._connection.ioloop.start() def open_channel(self): """This method will open a new channel with RabbitMQ by issuing the Channel.Open RPC command. When RabbitMQ confirms the channel is open by sending the Channel.OpenOK RPC reply, the on_channel_open method will be invoked. """ LOGGER.info('Creating a new channel') self._connection.channel(on_open_callback=self.on_channel_open) def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. :param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reply_code, reply_text): """Invoked by pika when RabbitMQ unexpectedly closes the channel. Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. 
:param pika.channel.Channel: The closed channel :param int reply_code: The numeric reason the channel was closed :param str reply_text: The text reason the channel was closed """ LOGGER.warning('Channel was closed: (%s) %s', reply_code, reply_text) if not self._closing: self._connection.close() def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. :param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) self._channel.exchange_declare(self.on_exchange_declareok, exchange_name, self.EXCHANGE_TYPE) def on_exchange_declareok(self, unused_frame): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame """ LOGGER.info('Exchange declared') self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. :param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) self._channel.queue_declare(self.on_queue_declareok, queue_name) def on_queue_declareok(self, method_frame): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. 
:param pika.frame.Method method_frame: The Queue.DeclareOk frame """ LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, self.QUEUE, self.ROUTING_KEY) self._channel.queue_bind(self.on_bindok, self.QUEUE, self.EXCHANGE, self.ROUTING_KEY) def on_bindok(self, unused_frame): """This method is invoked by pika when it receives the Queue.BindOk response from RabbitMQ. Since we know we're now setup and bound, it's time to start publishing.""" LOGGER.info('Queue bound') self.start_publishing() def start_publishing(self): """This method will enable delivery confirmations and schedule the first message to be sent to RabbitMQ """ LOGGER.info('Issuing consumer related RPC commands') self.enable_delivery_confirmations() self.schedule_next_message() def enable_delivery_confirmations(self): """Send the Confirm.Select RPC method to RabbitMQ to enable delivery confirmations on the channel. The only way to turn this off is to close the channel and create a new one. When the message is confirmed from RabbitMQ, the on_delivery_confirmation method will be invoked passing in a Basic.Ack or Basic.Nack method from RabbitMQ that will indicate which messages it is confirming or rejecting. """ LOGGER.info('Issuing Confirm.Select RPC command') self._channel.confirm_delivery(self.on_delivery_confirmation) def on_delivery_confirmation(self, method_frame): """Invoked by pika when RabbitMQ responds to a Basic.Publish RPC command, passing in either a Basic.Ack or Basic.Nack frame with the delivery tag of the message that was published. The delivery tag is an integer counter indicating the message number that was sent on the channel via Basic.Publish. Here we're just doing house keeping to keep track of stats and remove message numbers that we expect a delivery confirmation of from the list used to keep track of messages that are pending confirmation. 
:param pika.frame.Method method_frame: Basic.Ack or Basic.Nack frame """ confirmation_type = method_frame.method.NAME.split('.')[1].lower() LOGGER.info('Received %s for delivery tag: %i', confirmation_type, method_frame.method.delivery_tag) if confirmation_type == 'ack': self._acked += 1 elif confirmation_type == 'nack': self._nacked += 1 self._deliveries.remove(method_frame.method.delivery_tag) LOGGER.info('Published %i messages, %i have yet to be confirmed, ' '%i were acked and %i were nacked', self._message_number, len(self._deliveries), self._acked, self._nacked) def schedule_next_message(self): """If we are not closing our connection to RabbitMQ, schedule another message to be delivered in PUBLISH_INTERVAL seconds. """ if self._stopping: return LOGGER.info('Scheduling next message for %0.1f seconds', self.PUBLISH_INTERVAL) self._connection.add_timeout(self.PUBLISH_INTERVAL, self.publish_message) def publish_message(self): """If the class is not stopping, publish a message to RabbitMQ, appending a list of deliveries with the message number that was sent. This list will be used to check for delivery confirmations in the on_delivery_confirmations method. Once the message has been sent, schedule another message to be sent. The main reason I put scheduling in was just so you can get a good idea of how the process is flowing by slowing down and speeding up the delivery intervals by changing the PUBLISH_INTERVAL constant in the class. 
""" if self._stopping: return message = {u'مفتاح': u' قيمة', u'键': u'值', u'キー': u'値'} properties = pika.BasicProperties(app_id='example-publisher', content_type='application/json', headers=message) self._channel.basic_publish(self.EXCHANGE, self.ROUTING_KEY, json.dumps(message, ensure_ascii=False), properties) self._message_number += 1 self._deliveries.append(self._message_number) LOGGER.info('Published message # %i', self._message_number) self.schedule_next_message() def close_channel(self): """Invoke this command to close the channel with RabbitMQ by sending the Channel.Close RPC command. """ LOGGER.info('Closing the channel') if self._channel: self._channel.close() def run(self): """Run the example code by connecting and then starting the IOLoop. """ self._connection = self.connect() self._connection.ioloop.start() def stop(self): """Stop the example by closing the channel and connection. We set a flag here so that we stop scheduling new messages to be published. The IOLoop is started because this method is invoked by the Try/Catch below when KeyboardInterrupt is caught. Starting the IOLoop again will allow the publisher to cleanly disconnect from RabbitMQ. 
""" LOGGER.info('Stopping') self._stopping = True self.close_channel() self.close_connection() self._connection.ioloop.start() LOGGER.info('Stopped') def close_connection(self): """This method closes the connection to RabbitMQ.""" LOGGER.info('Closing connection') self._closing = True self._connection.close() def main(): logging.basicConfig(level=logging.DEBUG, format=LOG_FORMAT) # Connect to localhost:5672 as guest with the password guest and virtual host "/" (%2F) example = ExamplePublisher('amqp://guest:guest@localhost:5672/%2F?connection_attempts=3&heartbeat_interval=3600') try: example.run() except KeyboardInterrupt: example.stop() if __name__ == '__main__': main() pika-0.10.0/docs/examples/blocking_basic_get.rst Using the Blocking Connection to get a message from RabbitMQ ============================================================ .. _example_blocking_basic_get: The :py:meth:`BlockingChannel.basic_get <pika.adapters.blocking_connection.BlockingChannel.basic_get>` method returns a three-member tuple. If the server returns a message, the first item in the tuple will be a :class:`pika.spec.Basic.GetOk` object with the current message count, the redelivered flag, the routing key that was used to put the message in the queue, and the exchange the message was published to. The second item will be a :py:class:`~pika.spec.BasicProperties` object and the third will be the message body. If the server did not return a message, a tuple of None, None, None will be returned.
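The three-member contract can be illustrated without a running broker. The sketch below uses a hypothetical stand-in function (`fake_basic_get` and the `GetOk` namedtuple here are illustrations only, not pika APIs) to show the unpacking and empty-queue pattern; real code would call `channel.basic_get` and also acknowledge each message:

```python
from collections import namedtuple

# Illustrative stand-in for pika.spec.Basic.GetOk (not the real class).
GetOk = namedtuple('GetOk', ['delivery_tag', 'redelivered', 'routing_key',
                             'exchange', 'message_count'])

def fake_basic_get(queue):
    """Mimic the documented BlockingChannel.basic_get contract: a
    (method, properties, body) tuple, or (None, None, None) when the
    queue is empty."""
    if not queue:
        return None, None, None
    body = queue.pop(0)
    method = GetOk(delivery_tag=1, redelivered=False, routing_key='test',
                   exchange='', message_count=len(queue))
    return method, {'content_type': 'text/plain'}, body

pending = ['first message', 'second message']
received = []
while True:
    method_frame, header_frame, body = fake_basic_get(pending)
    if method_frame is None:  # empty queue: all three members are None
        break
    received.append(body)

print(received)  # ['first message', 'second message']
```

Testing the method frame for truthiness, as done here, is the same check the real example below performs before acknowledging the delivery tag.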
Example of getting a message and acknowledging it:: import pika connection = pika.BlockingConnection() channel = connection.channel() method_frame, header_frame, body = channel.basic_get('test') if method_frame: print method_frame, header_frame, body channel.basic_ack(method_frame.delivery_tag) else: print 'No message returned' pika-0.10.0/docs/examples/blocking_consume.rst Using the Blocking Connection to consume messages from RabbitMQ =============================================================== .. _example_blocking_basic_consume: The :py:meth:`BlockingChannel.basic_consume <pika.adapters.blocking_connection.BlockingChannel.basic_consume>` method assigns a callback method to be called every time that RabbitMQ delivers messages to your consuming application. When pika calls your method, it will pass in the channel, a :py:class:`pika.spec.Basic.Deliver` object with the delivery tag, the redelivered flag, the routing key that was used to put the message in the queue, and the exchange the message was published to. The third argument will be a :py:class:`pika.spec.BasicProperties` object and the last will be the message body. Example of consuming messages and acknowledging them:: import pika def on_message(channel, method_frame, header_frame, body): print method_frame.delivery_tag print body print channel.basic_ack(delivery_tag=method_frame.delivery_tag) connection = pika.BlockingConnection() channel = connection.channel() channel.basic_consume(on_message, 'test') try: channel.start_consuming() except KeyboardInterrupt: channel.stop_consuming() connection.close() pika-0.10.0/docs/examples/blocking_consumer_generator.rst Using the BlockingChannel.consume generator to consume messages =============================================================== ..
_example_blocking_consumer_generator: The :py:meth:`BlockingChannel.consume <pika.adapters.blocking_connection.BlockingChannel.consume>` method is a generator that will return a tuple of method, properties and body. When you escape out of the loop, be sure to call channel.cancel() to return any unprocessed messages. Example of consuming messages and acknowledging them:: import pika connection = pika.BlockingConnection() channel = connection.channel() # Get ten messages and break out for method_frame, properties, body in channel.consume('test'): # Display the message parts print method_frame print properties print body # Acknowledge the message channel.basic_ack(method_frame.delivery_tag) # Escape out of the loop after 10 messages if method_frame.delivery_tag == 10: break # Cancel the consumer and return any pending messages requeued_messages = channel.cancel() print 'Requeued %i messages' % requeued_messages # Close the channel and the connection channel.close() connection.close() If you have pending messages in the test queue, your output should look something like:: (pika)gmr-0x02:pika gmr$ python blocking_nack.py Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World!
Requeued 1894 messages pika-0.10.0/docs/examples/blocking_delivery_confirmations.rst000066400000000000000000000020731257163076400244570ustar00rootroot00000000000000Using Delivery Confirmations with the BlockingConnection ======================================================== The following code demonstrates how to turn on delivery confirmations with the BlockingConnection and how to check for confirmation from RabbitMQ:: import pika # Open a connection to RabbitMQ on localhost using all default parameters connection = pika.BlockingConnection() # Open the channel channel = connection.channel() # Declare the queue channel.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False) # Turn on delivery confirmations channel.confirm_delivery() # Send a message if channel.basic_publish(exchange='test', routing_key='test', body='Hello World!', properties=pika.BasicProperties(content_type='text/plain', delivery_mode=1)): print 'Message publish was confirmed' else: print 'Message could not be confirmed' pika-0.10.0/docs/examples/blocking_publish_mandatory.rst000066400000000000000000000021431257163076400234230ustar00rootroot00000000000000Ensuring message delivery with the mandatory flag ================================================= The following example demonstrates how to check if a message is delivered by setting the mandatory flag and checking the return result when using the BlockingConnection:: import pika # Open a connection to RabbitMQ on localhost using all default parameters connection = pika.BlockingConnection() # Open the channel channel = connection.channel() # Declare the queue channel.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False) # Enabled delivery confirmations channel.confirm_delivery() # Send a message if channel.basic_publish(exchange='test', routing_key='test', body='Hello World!', properties=pika.BasicProperties(content_type='text/plain', delivery_mode=1), mandatory=True): print 'Message was published' 
else: print 'Message was returned' pika-0.10.0/docs/examples/comparing_publishing_sync_async.rst000066400000000000000000000054031257163076400244650ustar00rootroot00000000000000Comparing Message Publishing with BlockingConnection and SelectConnection ========================================================================= For those doing simple, non-asynchronous programing, :py:meth:`pika.adapters.blocking_connection.BlockingConnection` proves to be the easiest way to get up and running with Pika to publish messages. In the following example, a connection is made to RabbitMQ listening to port *5672* on *localhost* using the username *guest* and password *guest* and virtual host */*. Once connected, a channel is opened and a message is published to the *test_exchange* exchange using the *test_routing_key* routing key. The BasicProperties value passed in sets the message to delivery mode *1* (non-persisted) with a content-type of *text/plain*. Once the message is published, the connection is closed:: import pika parameters = pika.URLParameters('amqp://guest:guest@localhost:5672/%2F') connection = pika.BlockingConnection(parameters) channel = connection.channel() channel.basic_publish('test_exchange', 'test_routing_key', 'message body value', pika.BasicProperties(content_type='text/plain', delivery_mode=1)) connection.close() In contrast, using :py:meth:`pika.adapters.select_connection.SelectConnection` and the other asynchronous adapters is more complicated and less pythonic, but when used with other asynchronous services can have tremendous performance improvements. 
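The control-flow difference can be sketched without RabbitMQ at all. The toy event loop below is an illustration only (none of these names are pika APIs): it shows why, in callback-passing style, nothing happens until the loop is started and each step runs only when the loop invokes the callback registered by the previous step:

```python
import collections

# A toy "IOLoop": callbacks only run when the loop is drained.
pending = collections.deque()
log = []

def on_open(connection):        # comparable to "Step #3" in the example
    pending.append(lambda: on_channel_open('channel'))

def on_channel_open(channel):   # comparable to "Step #4": safe to publish here
    log.append('published on %s' % channel)

# "Connecting" merely schedules the first callback...
pending.append(lambda: on_open('connection'))
assert log == []                # ...nothing has run yet

# Blocking on the loop is what actually drives every callback.
while pending:
    callback = pending.popleft()
    callback()

print(log)  # ['published on channel']
```

This is the property the asynchronous adapters exploit: because all work is driven by the loop, one process can interleave many connections and timers instead of blocking on each operation in turn.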
In the following code example, all of the same parameters and values are used as were used in the previous example:: import pika # Step #3 def on_open(connection): connection.channel(on_channel_open) # Step #4 def on_channel_open(channel): channel.basic_publish('test_exchange', 'test_routing_key', 'message body value', pika.BasicProperties(content_type='text/plain', delivery_mode=1)) connection.close() # Step #1: Connect to RabbitMQ parameters = pika.URLParameters('amqp://guest:guest@localhost:5672/%2F') connection = pika.SelectConnection(parameters=parameters, on_open_callback=on_open) try: # Step #2 - Block on the IOLoop connection.ioloop.start() # Catch a Keyboard Interrupt to make sure that the connection is closed cleanly except KeyboardInterrupt: # Gracefully close the connection connection.close() # Start the IOLoop again so Pika can communicate, it will stop on its own when the connection is closed connection.ioloop.start() pika-0.10.0/docs/examples/connecting_async.rst Connecting to RabbitMQ with Callback-Passing Style ================================================== When you connect to RabbitMQ with an asynchronous adapter, you are writing event-oriented code. The connection adapter will block on the IOLoop that is watching to see when pika should read data from and write data to RabbitMQ. Because you're now blocking on the IOLoop, you will receive callback notifications when specific events happen. Example Code ------------ In the example, there are four steps that take place: 1. Setup the connection to RabbitMQ 2. Start the IOLoop 3. Once connected, the on_open method will be called by Pika with a handle to the connection. In this method, a new channel will be opened on the connection. 4.
Once the channel is opened, you can do your other actions, whether they be publishing messages, consuming messages or other RabbitMQ related activities:: import pika # Step #3 def on_open(connection): connection.channel(on_channel_open) # Step #4 def on_channel_open(channel): channel.basic_publish('exchange_name', 'routing_key', 'Test Message', pika.BasicProperties(content_type='text/plain', type='example')) # Step #1: Connect to RabbitMQ connection = pika.SelectConnection(on_open_callback=on_open) try: # Step #2 - Block on the IOLoop connection.ioloop.start() # Catch a Keyboard Interrupt to make sure that the connection is closed cleanly except KeyboardInterrupt: # Gracefully close the connection connection.close() # Start the IOLoop again so Pika can communicate, it will stop on its own when the connection is closed connection.ioloop.start() pika-0.10.0/docs/examples/tornado_consumer.rst Tornado Consumer ================ The following example implements a consumer using the :class:`Tornado adapter <pika.adapters.tornado_connection.TornadoConnection>` for the `Tornado framework <http://www.tornadoweb.org>`_ that will respond to RPC commands sent from RabbitMQ. It will reconnect if RabbitMQ closes the connection and will shut down if RabbitMQ cancels the consumer or closes the channel. While it may look intimidating, each method is very short and represents an individual action that a consumer can perform. consumer.py:: from pika import adapters import pika import logging LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) class ExampleConsumer(object): """This is an example consumer that will handle unexpected interactions with RabbitMQ such as channel and connection closures. If RabbitMQ closes the connection, it will reopen it.
You should look at the output, as there are limited reasons why the connection may be closed, which usually are tied to permission related issues or socket timeouts. If the channel is closed, it will indicate a problem with one of the commands that were issued and that should surface in the output as well. """ EXCHANGE = 'message' EXCHANGE_TYPE = 'topic' QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Create a new instance of the consumer class, passing in the AMQP URL used to connect to RabbitMQ. :param str amqp_url: The AMQP url to connect with """ self._connection = None self._channel = None self._closing = False self._consumer_tag = None self._url = amqp_url def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. :rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return adapters.TornadoConnection(pika.URLParameters(self._url), self.on_connection_open) def close_connection(self): """This method closes the connection to RabbitMQ.""" LOGGER.info('Closing connection') self._connection.close() def add_on_connection_close_callback(self): """This method adds an on close callback that will be invoked by pika when RabbitMQ closes the connection to the publisher unexpectedly. """ LOGGER.info('Adding connection close callback') self._connection.add_on_close_callback(self.on_connection_closed) def on_connection_closed(self, connection, reply_code, reply_text): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. 
:param pika.connection.Connection connection: The closed connection obj :param int reply_code: The server provided reply_code if given :param str reply_text: The server provided reply_text if given """ self._channel = None if self._closing: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reopening in 5 seconds: (%s) %s', reply_code, reply_text) self._connection.add_timeout(5, self.reconnect) def on_connection_open(self, unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :type unused_connection: pika.SelectConnection """ LOGGER.info('Connection opened') self.add_on_connection_close_callback() self.open_channel() def reconnect(self): """Will be invoked by the IOLoop timer if the connection is closed. See the on_connection_closed method. """ if not self._closing: # Create a new connection self._connection = self.connect() def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reply_code, reply_text): """Invoked by pika when RabbitMQ unexpectedly closes the channel. Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. 
:param pika.channel.Channel: The closed channel :param int reply_code: The numeric reason the channel was closed :param str reply_text: The text reason the channel was closed """ LOGGER.warning('Channel %i was closed: (%s) %s', channel, reply_code, reply_text) self._connection.close() def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. :param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. :param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) self._channel.exchange_declare(self.on_exchange_declareok, exchange_name, self.EXCHANGE_TYPE) def on_exchange_declareok(self, unused_frame): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame """ LOGGER.info('Exchange declared') self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. :param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) self._channel.queue_declare(self.on_queue_declareok, queue_name) def on_queue_declareok(self, method_frame): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. 
In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. :param pika.frame.Method method_frame: The Queue.DeclareOk frame """ LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, self.QUEUE, self.ROUTING_KEY) self._channel.queue_bind(self.on_bindok, self.QUEUE, self.EXCHANGE, self.ROUTING_KEY) def add_on_cancel_callback(self): """Add a callback that will be invoked if RabbitMQ cancels the consumer for some reason. If RabbitMQ does cancel the consumer, on_consumer_cancelled will be invoked by pika. """ LOGGER.info('Adding consumer cancellation callback') self._channel.add_on_cancel_callback(self.on_consumer_cancelled) def on_consumer_cancelled(self, method_frame): """Invoked by pika when RabbitMQ sends a Basic.Cancel for a consumer receiving messages. :param pika.frame.Method method_frame: The Basic.Cancel frame """ LOGGER.info('Consumer was cancelled remotely, shutting down: %r', method_frame) if self._channel: self._channel.close() def acknowledge_message(self, delivery_tag): """Acknowledge the message delivery from RabbitMQ by sending a Basic.Ack RPC method for the delivery tag. :param int delivery_tag: The delivery tag from the Basic.Deliver frame """ LOGGER.info('Acknowledging message %s', delivery_tag) self._channel.basic_ack(delivery_tag) def on_message(self, unused_channel, basic_deliver, properties, body): """Invoked by pika when a message is delivered from RabbitMQ. The channel is passed for your convenience. The basic_deliver object that is passed in carries the exchange, routing key, delivery tag and a redelivered flag for the message. The properties passed in is an instance of BasicProperties with the message properties and the body is the message that was sent. 
:param pika.channel.Channel unused_channel: The channel object :param pika.spec.Basic.Deliver basic_deliver: The Basic.Deliver method :param pika.spec.BasicProperties properties: The message properties :param str|unicode body: The message body """ LOGGER.info('Received message # %s from %s: %s', basic_deliver.delivery_tag, properties.app_id, body) self.acknowledge_message(basic_deliver.delivery_tag) def on_cancelok(self, unused_frame): """This method is invoked by pika when RabbitMQ acknowledges the cancellation of a consumer. At this point we will close the channel. This will invoke the on_channel_closed method once the channel has been closed, which will in turn close the connection. :param pika.frame.Method unused_frame: The Basic.CancelOk frame """ LOGGER.info('RabbitMQ acknowledged the cancellation of the consumer') self.close_channel() def stop_consuming(self): """Tell RabbitMQ that you would like to stop consuming by sending the Basic.Cancel RPC command. """ if self._channel: LOGGER.info('Sending a Basic.Cancel RPC command to RabbitMQ') self._channel.basic_cancel(self.on_cancelok, self._consumer_tag) def start_consuming(self): """This method sets up the consumer by first calling add_on_cancel_callback so that the object is notified if RabbitMQ cancels the consumer. It then issues the Basic.Consume RPC command which returns the consumer tag that is used to uniquely identify the consumer with RabbitMQ. We keep the value to use it when we want to cancel consuming. The on_message method is passed in as a callback pika will invoke when a message is fully received. """ LOGGER.info('Issuing consumer related RPC commands') self.add_on_cancel_callback() self._consumer_tag = self._channel.basic_consume(self.on_message, self.QUEUE) def on_bindok(self, unused_frame): """Invoked by pika when the Queue.Bind method has completed. At this point we will start consuming messages by calling start_consuming which will invoke the needed RPC commands to start the process.
        :param pika.frame.Method unused_frame: The Queue.BindOk response frame

        """
        LOGGER.info('Queue bound')
        self.start_consuming()

    def close_channel(self):
        """Call to close the channel with RabbitMQ cleanly by issuing the
        Channel.Close RPC command.

        """
        LOGGER.info('Closing the channel')
        self._channel.close()

    def open_channel(self):
        """Open a new channel with RabbitMQ by issuing the Channel.Open RPC
        command. When RabbitMQ responds that the channel is open, the
        on_channel_open callback will be invoked by pika.

        """
        LOGGER.info('Creating a new channel')
        self._connection.channel(on_open_callback=self.on_channel_open)

    def run(self):
        """Run the example consumer by connecting to RabbitMQ and then
        starting the IOLoop to block and allow the SelectConnection to
        operate.

        """
        self._connection = self.connect()
        self._connection.ioloop.start()

    def stop(self):
        """Cleanly shutdown the connection to RabbitMQ by stopping the
        consumer with RabbitMQ. When RabbitMQ confirms the cancellation,
        on_cancelok will be invoked by pika, which will then close the
        channel and connection. The IOLoop is started again because this
        method is invoked when CTRL-C is pressed raising a KeyboardInterrupt
        exception. This exception stops the IOLoop which needs to be running
        for pika to communicate with RabbitMQ. All of the commands issued
        prior to starting the IOLoop will be buffered but not processed.
""" LOGGER.info('Stopping') self._closing = True self.stop_consuming() self._connection.ioloop.start() LOGGER.info('Stopped') def main(): logging.basicConfig(level=logging.INFO, format=LOG_FORMAT) example = ExampleConsumer('amqp://guest:guest@localhost:5672/%2F') try: example.run() except KeyboardInterrupt: example.stop() if __name__ == '__main__': main() pika-0.10.0/docs/examples/twisted_example.rst000066400000000000000000000026771257163076400212410ustar00rootroot00000000000000Twisted Consumer Example ======================== Example of writing a consumer using the :py:class:`Twisted connection adapter `:: # -*- coding:utf-8 -*- import pika from pika import exceptions from pika.adapters import twisted_connection from twisted.internet import defer, reactor, protocol,task @defer.inlineCallbacks def run(connection): channel = yield connection.channel() exchange = yield channel.exchange_declare(exchange='topic_link',type='topic') queue = yield channel.queue_declare(queue='hello', auto_delete=False, exclusive=False) yield channel.queue_bind(exchange='topic_link',queue='hello',routing_key='hello.world') yield channel.basic_qos(prefetch_count=1) queue_object, consumer_tag = yield channel.basic_consume(queue='hello',no_ack=False) l = task.LoopingCall(read, queue_object) l.start(0.01) @defer.inlineCallbacks def read(queue_object): ch,method,properties,body = yield queue_object.get() if body: print body yield ch.basic_ack(delivery_tag=method.delivery_tag) parameters = pika.ConnectionParameters() cc = protocol.ClientCreator(reactor, twisted_connection.TwistedProtocolConnection, parameters) d = cc.connectTCP('hostname', 5672) d.addCallback(lambda protocol: protocol.ready) d.addCallback(run) reactor.run() pika-0.10.0/docs/examples/using_urlparameters.rst000066400000000000000000000101551257163076400221240ustar00rootroot00000000000000Using URLParameters =================== Pika has two methods of encapsulating the data that lets it know how to connect to RabbitMQ, 
:py:class:`pika.connection.ConnectionParameters` and
:py:class:`pika.connection.URLParameters`.

.. note:: If you're connecting to RabbitMQ on localhost on port 5672, with
          the default virtual host of */* and the default username and
          password of *guest* and *guest*, you do not need to specify
          connection parameters when connecting.

Using :py:class:`pika.connection.URLParameters` is an easy way to minimize
the variables required to connect to RabbitMQ and it supports all of the
directives that :py:class:`pika.connection.ConnectionParameters` supports.

The following is the format for the URLParameters connection value::

    scheme://username:password@host:port/virtual_host?key=value&key=value

As you can see, by default, the scheme (amqp, amqps), username, password,
host, port and virtual host make up the core of the URL and any other
parameter is passed in as query string values.

Example Connection URLs
-----------------------

The default connection URL connects to the / virtual host as guest using the
guest password on localhost port 5672. Note that the forward slash in the
URL is encoded to %2F::

    amqp://guest:guest@localhost:5672/%2F

Connect to a host *rabbit1* as the user *www-data* using the password
*rabbit_pwd* on the virtual host *web_messages*::

    amqp://www-data:rabbit_pwd@rabbit1/web_messages

Connecting via SSL is pretty easy too. To connect via SSL for the previous
example, simply change the scheme to *amqps*. If you do not specify a port,
Pika will use the default SSL port of 5671::

    amqps://www-data:rabbit_pwd@rabbit1/web_messages

If you're looking to tweak other parameters, such as enabling heartbeats,
simply add the key/value pair as a query string value. The following builds
upon the SSL connection, enabling heartbeats every 30 seconds::

    amqps://www-data:rabbit_pwd@rabbit1/web_messages?heartbeat_interval=30

Options that are available as query string values:

- backpressure_detection: Pass in a value of *t* to enable backpressure
  detection; it is disabled by default.
- channel_max: Alter the default channel maximum by passing in a 32-bit
  integer value here.
- connection_attempts: Alter the default of 1 connection attempt by passing
  in an integer value here [#f1]_.
- frame_max: Alter the default frame maximum size value by passing in a long
  integer value [#f2]_.
- heartbeat_interval: Pass a value greater than zero to enable heartbeats
  between the server and your application. The integer value you pass here
  will be the number of seconds between heartbeats.
- locale: Set the locale of the client using an underscore-delimited POSIX
  locale code in ll_CC format (en_US, pt_BR, de_DE).
- retry_delay: The number of seconds to wait before attempting to reconnect
  on a failed connection, if connection_attempts is > 0.
- socket_timeout: Change the default socket timeout duration from 0.25
  seconds to another integer or float value. Adjust with caution.
- ssl_options: A url-encoded dict of values for the SSL connection. The
  available keys are:

  - ca_certs
  - cert_reqs
  - certfile
  - keyfile
  - ssl_version

For information on what the ssl_options can be set to, reference the
`official Python documentation `_. Here is an example of setting the client
certificate and key::

    amqp://www-data:rabbit_pwd@rabbit1/web_messages?heartbeat_interval=30&ssl_options=%7B%27keyfile%27%3A+%27%2Fetc%2Fssl%2Fmykey.pem%27%2C+%27certfile%27%3A+%27%2Fetc%2Fssl%2Fmycert.pem%27%7D

The following example demonstrates how to generate the ssl_options string
with `Python's urllib `_::

    import urllib
    urllib.urlencode({'ssl_options': {'certfile': '/etc/ssl/mycert.pem',
                                      'keyfile': '/etc/ssl/mykey.pem'}})

.. rubric:: Footnotes

.. [#f1] The :py:class:`pika.adapters.blocking_connection.BlockingConnection`
   adapter does not respect the *connection_attempts* parameter.

.. [#f2] The AMQP specification states that a server can reject a request for
   a frame size larger than the value it passes during content negotiation.
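On Python 3, `urllib.urlencode` moved to `urllib.parse.urlencode`. The
following is a standard-library-only sketch (the host, credentials, and
certificate paths are just the illustrative values from the examples above)
of building the same kind of URL and round-tripping its query string to
verify the encoding::

```python
import ast
from urllib.parse import urlencode, urlparse, parse_qs

# urlencode() percent-encodes str(dict) for the ssl_options value,
# which is the same trick the Python 2 urllib example above relies on
ssl_options = {'certfile': '/etc/ssl/mycert.pem',
               'keyfile': '/etc/ssl/mykey.pem'}
query = urlencode({'heartbeat_interval': 30, 'ssl_options': ssl_options})
url = 'amqps://www-data:rabbit_pwd@rabbit1/web_messages?' + query

# Round-trip: split the URL, decode the query string, and recover the dict
parts = urlparse(url)
params = parse_qs(parts.query)
recovered = ast.literal_eval(params['ssl_options'][0])
```

Since the query value is just the percent-encoded `repr()` of a dict,
`ast.literal_eval` on the decoded string recovers the original mapping,
which is a quick way to confirm the URL was assembled correctly before
handing it to `URLParameters`.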
pika-0.10.0/docs/faq.rst

Frequently Asked Questions
--------------------------

- Is Pika thread safe?

  Pika does not have any notion of threading in the code. If you want to use
  Pika with threading, make sure you have a Pika connection per thread,
  created in that thread. It is not safe to share one Pika connection across
  threads.

- How do I report a bug with Pika?

  The `main Pika repository `_ is hosted on `Github `_ and we use the Issue
  tracker at `https://github.com/pika/pika/issues `_.

- Is there a mailing list for Pika?

  Yes, Pika's mailing list is available `on Google Groups `_ and the email
  address is pika-python@googlegroups.com, though traditionally questions
  about Pika have been asked on the `RabbitMQ-Discuss mailing list `_.

- How can I contribute to Pika?

  You can `fork the project on Github `_ and issue `Pull Requests `_ when
  you believe you have something solid to be added to the main repository.

pika-0.10.0/docs/index.rst

Introduction to Pika
====================

Pika is a pure-Python implementation of the AMQP 0-9-1 protocol that tries
to stay fairly independent of the underlying network support library. If you
have not developed with Pika or RabbitMQ before, the :doc:`intro`
documentation is a good place to get started.

Installing Pika
---------------

Pika is available for download via PyPI and may be installed using
easy_install or pip::

    pip install pika

or::

    easy_install pika

To install from source, run "python setup.py install" in the root source
directory.

Using Pika
----------

.. toctree::
   :glob:
   :maxdepth: 1

   intro
   modules/index
   examples
   faq
   contributors
   version_history

Indices and tables
------------------

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

pika-0.10.0/docs/intro.rst

Introduction to Pika
====================

IO and Event Looping
--------------------

As AMQP is a two-way RPC protocol where the client can send requests to the
server and the server can send requests to a client, Pika implements or
extends IO loops in each of its asynchronous connection adapters. These IO
loops are blocking methods which loop and listen for events. Each
asynchronous adapter follows the same standard for invoking the IO loop. The
IO loop is created when the connection adapter is created. To start an IO
loop for any given adapter, call the ``connection.ioloop.start()`` method.

If you are using an external IO loop such as Tornado's
:class:`~tornado.ioloop.IOLoop`, you invoke it as you normally would and
then add the adapter to it.

Example::

    import pika

    def on_open(connection):
        # Invoked when the connection is open
        pass

    # Create our connection object, passing in the on_open method
    connection = pika.SelectConnection(on_open_callback=on_open)

    try:
        # Loop so we can communicate with RabbitMQ
        connection.ioloop.start()
    except KeyboardInterrupt:
        # Gracefully close the connection
        connection.close()
        # Loop until we're fully closed, will stop on its own
        connection.ioloop.start()

.. _intro_to_cps:

Continuation-Passing Style
--------------------------

Interfacing with Pika asynchronously is done by passing in callback methods
you would like to have invoked when a certain event has completed. For
example, if you are going to declare a queue, you pass in a method that will
be called when the RabbitMQ server returns a `Queue.DeclareOk `_ response.

In our example below we use the following five easy steps:

#. We start by creating our connection object, then starting our event loop.
#. When we are connected, the *on_connected* method is called. In that
   method we create a channel.
#. When the channel is created, the *on_channel_open* method is called. In
   that method we declare a queue.
#. When the queue is declared successfully, *on_queue_declared* is called.
   In that method we call :py:meth:`channel.basic_consume ` telling it to
   call the handle_delivery for each message RabbitMQ delivers to us.
#. When RabbitMQ has a message to send us, it calls the handle_delivery
   method passing the AMQP Method frame, Header frame and Body.

.. NOTE:: Step #1 is on line #28 and Step #2 is on line #6. This is so that
          Python knows about the functions we'll call in Steps #2 through #5.

.. _cps_example:

Example::

    import pika

    # Create a global channel variable to hold our channel object in
    channel = None

    # Step #2
    def on_connected(connection):
        """Called when we are fully connected to RabbitMQ"""
        # Open a channel
        connection.channel(on_channel_open)

    # Step #3
    def on_channel_open(new_channel):
        """Called when our channel has opened"""
        global channel
        channel = new_channel
        channel.queue_declare(queue="test", durable=True, exclusive=False,
                              auto_delete=False, callback=on_queue_declared)

    # Step #4
    def on_queue_declared(frame):
        """Called when RabbitMQ has told us our Queue has been declared,
        frame is the response from RabbitMQ"""
        channel.basic_consume(handle_delivery, queue='test')

    # Step #5
    def handle_delivery(channel, method, header, body):
        """Called when we receive a message from RabbitMQ"""
        print body

    # Step #1: Connect to RabbitMQ using the default parameters
    parameters = pika.ConnectionParameters()
    connection = pika.SelectConnection(parameters, on_connected)

    try:
        # Loop so we can communicate with RabbitMQ
        connection.ioloop.start()
    except KeyboardInterrupt:
        # Gracefully close the connection
        connection.close()
        # Loop until we're fully closed, will stop on its own
        connection.ioloop.start()

Credentials
-----------

The :mod:`pika.credentials` module provides the mechanism by
which you pass the username and password to the
:py:class:`ConnectionParameters ` class when it is created.

Example::

    import pika
    credentials = pika.PlainCredentials('username', 'password')
    parameters = pika.ConnectionParameters(credentials=credentials)

.. _connection_parameters:

Connection Parameters
---------------------

There are two types of connection parameter classes in Pika to allow you to
pass the connection information into a connection adapter,
:class:`ConnectionParameters ` and :class:`URLParameters `. Both classes
share the same default connection values.

.. _intro_to_backpressure:

TCP Backpressure
----------------

As of RabbitMQ 2.0, client side `Channel.Flow `_ has been removed [#f1]_.
Instead, the RabbitMQ broker uses TCP Backpressure to slow your client if it
is delivering messages too fast. If you pass in backpressure_detection into
your connection parameters, Pika attempts to help you handle this situation
by providing a mechanism by which you may be notified if Pika has noticed
too many frames have yet to be delivered. By registering a callback function
with the :py:meth:`add_backpressure_callback ` method of any connection
adapter, your function will be called when Pika sees that a backlog of 10
times the average frame size you have been sending has been exceeded. You
may tweak the notification multiplier value by calling the
:py:meth:`set_backpressure_multiplier ` method passing any integer value.

Example::

    import pika
    parameters = pika.URLParameters('amqp://guest:guest@rabbit-server1:5672/%2F?backpressure_detection=t')

.. rubric:: Footnotes

.. [#f1] "more effective flow control mechanism that does not require
   cooperation from clients and reacts quickly to prevent the broker from
   exhausting memory - see http://www.rabbitmq.com/extensions.html#memsup"
   from
   http://lists.rabbitmq.com/pipermail/rabbitmq-announce/attachments/20100825/2c672695/attachment.txt

pika-0.10.0/docs/modules/adapters/blocking.rst

BlockingConnection
------------------

.. automodule:: pika.adapters.blocking_connection

Be sure to check out examples in :doc:`/examples`.

.. autoclass:: pika.adapters.blocking_connection.BlockingConnection
   :members:
   :inherited-members:

.. autoclass:: pika.adapters.blocking_connection.BlockingChannel
   :members:
   :inherited-members:

pika-0.10.0/docs/modules/adapters/index.rst

Connection Adapters
===================

Pika uses connection adapters to provide a flexible method for adapting
pika's core communication to different IOLoop implementations. In addition
to asynchronous adapters, there is the :class:`BlockingConnection ` adapter
that provides a more idiomatic procedural approach to using Pika.

Adapters
--------

.. toctree::
   :glob:
   :maxdepth: 1

   blocking
   select
   tornado
   twisted

pika-0.10.0/docs/modules/adapters/select.rst

Select Connection Adapter
=========================

.. automodule:: pika.adapters.select_connection

.. autoclass:: pika.adapters.select_connection.SelectConnection
   :members:
   :inherited-members:

pika-0.10.0/docs/modules/adapters/tornado.rst

Tornado Connection Adapter
==========================

.. automodule:: pika.adapters.tornado_connection

Be sure to check out the :doc:`asynchronous examples ` including the Tornado
specific :doc:`consumer ` example.

.. autoclass:: pika.adapters.tornado_connection.TornadoConnection
   :members:
   :inherited-members:

pika-0.10.0/docs/modules/adapters/twisted.rst

Twisted Connection Adapter
==========================

.. automodule:: pika.adapters.twisted_connection

.. autoclass:: pika.adapters.twisted_connection.TwistedConnection
   :members:
   :inherited-members:

.. autoclass:: pika.adapters.twisted_connection.TwistedProtocolConnection
   :members:
   :inherited-members:

.. autoclass:: pika.adapters.twisted_connection.TwistedChannel
   :members:
   :inherited-members:

pika-0.10.0/docs/modules/channel.rst

Channel
=======

.. automodule:: pika.channel

Channel
-------

.. autoclass:: Channel
   :members:
   :inherited-members:
   :member-order: bysource

pika-0.10.0/docs/modules/connection.rst

Connection
----------

The :class:`~pika.connection.Connection` class implements the base behavior
that all connection adapters extend.

.. autoclass:: pika.connection.Connection
   :members:

pika-0.10.0/docs/modules/credentials.rst

Authentication Credentials
==========================

.. automodule:: pika.credentials

PlainCredentials
----------------

.. autoclass:: PlainCredentials
   :members:
   :inherited-members:
   :noindex:

ExternalCredentials
-------------------

.. autoclass:: ExternalCredentials
   :members:
   :inherited-members:
   :noindex:

pika-0.10.0/docs/modules/exceptions.rst

Exceptions
==========

.. automodule:: pika.exceptions
   :members:
   :undoc-members:

pika-0.10.0/docs/modules/index.rst

Core Class and Module Documentation
===================================

For the end user, Pika is organized into a small set of objects for all
communication with RabbitMQ.

- A :doc:`connection adapter ` is used to connect to RabbitMQ and manages
  the connection.
- :doc:`Connection parameters ` are used to instruct the
  :class:`~pika.connection.Connection` object how to connect to RabbitMQ.
- :doc:`credentials` are used to encapsulate all authentication information
  for the :class:`~pika.connection.ConnectionParameters` class.
- A :class:`~pika.channel.Channel` object is used to communicate with
  RabbitMQ via the AMQP RPC methods.
- :doc:`exceptions` are raised at various points when using Pika when
  something goes wrong.

.. toctree::
   :hidden:
   :maxdepth: 1

   adapters/index
   channel
   connection
   credentials
   exceptions
   parameters
   spec

pika-0.10.0/docs/modules/parameters.rst

Connection Parameters
=====================

To maintain flexibility in how you specify the connection information
required for your applications to properly connect to RabbitMQ, pika
implements two classes for encapsulating the information,
:class:`~pika.connection.ConnectionParameters` and
:class:`~pika.connection.URLParameters`.

ConnectionParameters
--------------------

The classic object for specifying all of the connection parameters required
to connect to RabbitMQ, :class:`~pika.connection.ConnectionParameters`
provides attributes for tweaking every possible connection option.
Example::

    import pika

    # Set the connection parameters to connect to rabbit-server1 on port 5672
    # on the / virtual host using the username "guest" and password "guest"
    credentials = pika.PlainCredentials('guest', 'guest')
    parameters = pika.ConnectionParameters('rabbit-server1',
                                           5672,
                                           '/',
                                           credentials)

.. autoclass:: pika.connection.ConnectionParameters
   :members:
   :inherited-members:
   :member-order: bysource

URLParameters
-------------

The :class:`~pika.connection.URLParameters` class allows you to pass in an
AMQP URL when creating the object and supports the host, port, virtual host,
ssl, username and password in the base URL, and other options are passed in
via query parameters.

Example::

    import pika

    # Set the connection parameters to connect to rabbit-server1 on port 5672
    # on the / virtual host using the username "guest" and password "guest"
    parameters = pika.URLParameters('amqp://guest:guest@rabbit-server1:5672/%2F')

.. autoclass:: pika.connection.URLParameters
   :members:
   :inherited-members:
   :member-order: bysource

pika-0.10.0/docs/modules/spec.rst

pika.spec
=========

.. automodule:: pika.spec
   :members:
   :inherited-members:
   :member-order: bysource
   :undoc-members:

pika-0.10.0/docs/version_history.rst

Version History
===============

0.10.0 2015-09-02
-----------------

- LibevConnection: Fixed dict chgd size during iteration (Michael Laing)
- SelectConnection: Fixed KeyError exceptions in IOLoop timeout executions
  (Shinji Suzuki)
- BlockingConnection: Add support to make BlockingConnection a Context
  Manager (@reddec)

0.10.0b2 2015-07-15
-------------------

- f72b58f - Fixed failure to purge _ConsumerCancellationEvt from
  BlockingChannel._pending_events during basic_cancel. (Vitaly Kruglikov)

0.10.0b1 2015-07-10
-------------------

High-level summary of notable changes:

- Change to 3-Clause BSD License
- Python 3.x support
- Over 150 commits from 19 contributors
- Refactoring of SelectConnection ioloop
- This major release contains certain non-backward-compatible API changes as
  well as significant performance improvements in the `BlockingConnection`
  adapter.
- Non-backward-compatible changes in `Channel.add_on_return_callback`
  callback's signature.
- The `AsyncoreConnection` adapter was retired

**Details**

Python 3.x: this release introduces python 3.x support. Tested on Python 3.3
and 3.4.

`AsyncoreConnection`: Retired this legacy adapter to reduce maintenance
burden; the recommended replacement is the `SelectConnection` adapter.

`SelectConnection`: ioloop was refactored for compatibility with other
ioloops.

`Channel.add_on_return_callback`: The callback is now passed the individual
parameters channel, method, properties, and body instead of a tuple of those
values for congruence with other similar callbacks.

`BlockingConnection`: This adapter underwent a makeover under the hood and
gained significant performance improvements as well as enhanced timer
resolution. It is now implemented as a client of the `SelectConnection`
adapter.

Below is an overview of the `BlockingConnection` and `BlockingChannel` API
changes:

- Recursion: the new implementation eliminates callback recursion that
  sometimes blew out the stack in the legacy implementation (e.g.,
  publish -> consumer_callback -> publish -> consumer_callback, etc.).
  While `BlockingConnection.process_data_events` and
  `BlockingConnection.sleep` may still be called from the scope of the
  blocking adapter's callbacks in order to process pending I/O, additional
  callbacks will be suppressed whenever
  `BlockingConnection.process_data_events` and `BlockingConnection.sleep`
  are nested in any combination; in that case, the callback information will
  be buffered and dispatched once nesting unwinds and control returns to the
  level-zero dispatcher.
- `BlockingConnection.connect`: this method was removed in favor of the
  constructor as the only way to establish connections; this reduces
  maintenance burden, while improving reliability of the adapter.
- `BlockingConnection.process_data_events`: added the optional parameter
  `time_limit`.
- `BlockingConnection.add_on_close_callback`: removed; legacy raised
  `NotImplementedError`.
- `BlockingConnection.add_on_open_callback`: removed; legacy raised
  `NotImplementedError`.
- `BlockingConnection.add_on_open_error_callback`: removed; legacy raised
  `NotImplementedError`.
- `BlockingConnection.add_backpressure_callback`: not supported
- `BlockingConnection.set_backpressure_multiplier`: not supported
- `BlockingChannel.add_on_flow_callback`: not supported; per docstring in
  channel.py: "Note that newer versions of RabbitMQ will not issue this but
  instead use TCP backpressure".
- `BlockingChannel.flow`: not supported
- `BlockingChannel.force_data_events`: removed as it is no longer necessary
  following redesign of the adapter.
- Removed the `nowait` parameter from `BlockingChannel` methods, forcing
  `nowait=False` (former API default) in the implementation; this is more
  suitable for the blocking nature of the adapter and its error-reporting
  strategy; this concerns the following methods: `basic_cancel`,
  `confirm_delivery`, `exchange_bind`, `exchange_declare`,
  `exchange_delete`, `exchange_unbind`, `queue_bind`, `queue_declare`,
  `queue_delete`, and `queue_purge`.
- `BlockingChannel.basic_cancel`: returns a sequence instead of None; for a
  `no_ack=True` consumer, `basic_cancel` returns a sequence of pending
  messages that arrived before broker confirmed the cancellation.
- `BlockingChannel.consume`: added new optional kwargs `arguments` and
  `inactivity_timeout`. Also, raises ValueError if the consumer creation
  parameters don't match those used to create the existing queue consumer
  generator, if any; this happens when you break out of the consume loop,
  then call `BlockingChannel.consume` again with different consumer-creation
  args without first cancelling the previous queue consumer generator via
  `BlockingChannel.cancel`. The legacy implementation would silently resume
  consuming from the existing queue consumer generator even if the
  subsequent `BlockingChannel.consume` was invoked with a different queue
  name, etc.
- `BlockingChannel.cancel`: returns 0; the legacy implementation tried to
  return the number of requeued messages, but this number was not accurate
  as it didn't include the messages returned by the Channel class; this
  count is not generally useful, so returning 0 is a reasonable replacement.
- `BlockingChannel.open`: removed in favor of having a single mechanism for
  creating a channel (`BlockingConnection.channel`); this reduces
  maintenance burden, while improving reliability of the adapter.
- `BlockingChannel.confirm_delivery`: raises UnroutableError when unroutable
  messages that were sent prior to this call are returned before we receive
  Confirm.Select-ok.
- `BlockingChannel.basic_publish`: always returns True when delivery
  confirmation is not enabled (publisher-acks = off); the legacy
  implementation returned a bool in this case if `mandatory=True` to
  indicate whether the message was delivered; however, this was
  non-deterministic, because Basic.Return is asynchronous and there is no
  way to know how long to wait for it or its absence.
The legacy implementation returned None when publishing with publisher-acks = off and `mandatory=False`. The new implementation always returns True when publishing while publisher-acks = off. - `BlockingChannel.publish`: a new alternate method (vs. `basic_publish`) for publishing a message with more detailed error reporting via UnroutableError and NackError exceptions. - `BlockingChannel.start_consuming`: raises pika.exceptions.RecursionError if called from the scope of a `BlockingConnection` or `BlockingChannel` callback. - `BlockingChannel.get_waiting_message_count`: new method; returns the number of messages that may be retrieved from the current queue consumer generator via `BasicChannel.consume` without blocking. **Commits** - 5aaa753 - Fixed SSL import and removed no_ack=True in favor of explicit AMQP message handling based on deferreds (skftn) - 7f222c2 - Add checkignore for codeclimate (Gavin M. Roy) - 4dec370 - Implemented BlockingChannel.flow; Implemented BlockingConnection.add_on_connection_blocked_callback; Implemented BlockingConnection.add_on_connection_unblocked_callback. (Vitaly Kruglikov) - 4804200 - Implemented blocking adapter acceptance test for exchange-to-exchange binding. Added rudimentary validation of BasicProperties passthru in blocking adapter publish tests. Updated CHANGELOG. (Vitaly Kruglikov) - 4ec07fd - Fixed sending of data in TwistedProtocolConnection (Vitaly Kruglikov) - a747fb3 - Remove my copyright from forward_server.py test utility. (Vitaly Kruglikov) - 94246d2 - Return True from basic_publish when pubacks is off. Implemented more blocking adapter accceptance tests. (Vitaly Kruglikov) - 3ce013d - PIKA-609 Wait for broker to dispatch all messages to client before cancelling consumer in TestBasicCancelWithNonAckableConsumer and TestBasicCancelWithAckableConsumer (Vitaly Kruglikov) - 293f778 - Created CHANGELOG entry for release 0.10.0. Fixed up callback documentation for basic_get, basic_consume, and add_on_return_callback. 
  (Vitaly Kruglikov)
- 16d360a - Removed the legacy AsyncoreConnection adapter in favor of the recommended SelectConnection adapter. (Vitaly Kruglikov)
- 240a82c - Defer creation of poller's event loop interrupt socket pair until start is called, because some SelectConnection users (e.g., BlockingConnection adapter) don't use the event loop, and these sockets would just get reported as resource leaks. (Vitaly Kruglikov)
- aed5cae - Added EINTR loops in select_connection pollers. Addressed some pylint findings, including an error or two. Wrap socket.send and socket.recv calls in EINTR loops. Use the correct exception for socket.error and select.error and get errno depending on python version. (Vitaly Kruglikov)
- 498f1be - Allow passing exchange, queue and routing_key as text, handle short strings as text in python3 (saarni)
- 9f7f243 - Restored basic_consume, basic_cancel, and add_on_cancel_callback (Vitaly Kruglikov)
- 18c9909 - Reintroduced BlockingConnection.process_data_events. (Vitaly Kruglikov)
- 4b25cb6 - Fixed BlockingConnection/BlockingChannel acceptance and unit tests (Vitaly Kruglikov)
- bfa932f - Facilitate proper connection state after BasicConnection._adapter_disconnect (Vitaly Kruglikov)
- 9a09268 - Fixed BlockingConnection test that was failing with ConnectionClosed error. (Vitaly Kruglikov)
- 5a36934 - Copied synchronous_connection.py from pika-synchronous branch. Fixed pylint findings. Integrated SynchronousConnection with the new ioloop in SelectConnection. Defined dedicated message classes PolledMessage and ConsumerMessage and moved from BlockingChannel to module-global scope. Got rid of nowait args from BlockingChannel public API methods. Signal unroutable messages via UnroutableError exception. Signal Nack'ed messages via NackError exception. These expose more information about the failure than the legacy basic_publish API. Removed set_timeout and backpressure callback methods. Restored legacy `is_open`, etc. property names (Vitaly Kruglikov)
- 6226dc0 - Remove deprecated --use-mirrors (Gavin M. Roy)
- 1a7112f - Raise ConnectionClosed when sending a frame with no connection (#439) (Gavin M. Roy)
- 9040a14 - Make delivery_tag non-optional (#498) (Gavin M. Roy)
- 86aabc2 - Bump version (Gavin M. Roy)
- 562075a - Update a few testing things (Gavin M. Roy)
- 4954d38 - use unicode_type in blocking_connection.py (Antti Haapala)
- 133d6bc - Let Travis install ordereddict for Python 2.6, and test 3.3, 3.4 too. (Antti Haapala)
- 0d2287d - Pika Python 3 support (Antti Haapala)
- 3125c79 - SSLWantRead is not supported before python 2.7.9 and 3.3 (Will)
- 9a9c46c - Fixed TestDisconnectDuringConnectionStart: it turns out that depending on callback order, it might get either ProbableAuthenticationError or ProbableAccessDeniedError. (Vitaly Kruglikov)
- cd8c9b0 - Fix the write starvation problem that we see with tornado and pika (Will)
- 8654fbc - SelectConnection - make interrupt socketpair non-blocking (Will)
- 4f3666d - Added copyright in forward_server.py and fixed NameError bug (Vitaly Kruglikov)
- f8ebbbc - ignore docs (Gavin M. Roy)
- a344f78 - Updated codeclimate config (Gavin M. Roy)
- 373c970 - Try and fix pathing issues in codeclimate (Gavin M. Roy)
- 228340d - Ignore codegen (Gavin M. Roy)
- 4db0740 - Add a codeclimate config (Gavin M. Roy)
- 7e989f9 - Slight code re-org, usage comment and better naming of test file. (Will)
- 287be36 - Set up _kqueue member of KQueuePoller before calling super constructor to avoid exception due to missing _kqueue member. Call `self._map_event(event)` instead of `self._map_event(event.filter)`, because `KQueuePoller._map_event()` assumes it's getting an event, not an event filter. (Vitaly Kruglikov)
- 62810fb - Fix issue #412: reset BlockingConnection._read_poller in BlockingConnection._adapter_disconnect() to guard against accidental access to old file descriptor. (Vitaly Kruglikov)
- 03400ce - Rationalise adapter acceptance tests (Will)
- 9414153 - Fix bug selecting non epoll poller (Will)
- 4f063df - Use user heartbeat setting if server proposes none (Pau Gargallo)
- 9d04d6e - Deactivate heartbeats when heartbeat_interval is 0 (Pau Gargallo)
- a52a608 - Bug fix and review comments. (Will)
- e3ebb6f - Fix incorrect x-expires argument in acceptance tests (Will)
- 294904e - Get BlockingConnection into consistent state upon loss of TCP/IP connection with broker and implement acceptance tests for those cases. (Vitaly Kruglikov)
- 7f91a68 - Make SelectConnection behave like an ioloop (Will)
- dc9db2b - Perhaps 5 seconds is too aggressive for travis (Gavin M. Roy)
- c23e532 - Lower the stuck test timeout (Gavin M. Roy)
- 1053ebc - Late night bug (Gavin M. Roy)
- cd6c1bf - More BaseConnection._handle_error cleanup (Gavin M. Roy)
- a0ff21c - Fix the test to work with Python 2.6 (Gavin M. Roy)
- 748e8aa - Remove pypy for now (Gavin M. Roy)
- 1c921c1 - Socket close/shutdown cleanup (Gavin M. Roy)
- 5289125 - Formatting update from PR (Gavin M. Roy)
- d235989 - Be more specific when calling getaddrinfo (Gavin M. Roy)
- b5d1b31 - Reflect the method name change in pika.callback (Gavin M. Roy)
- df7d3b7 - Cleanup BlockingConnection in a few places (Gavin M. Roy)
- cd99e1c - Rename method due to use in BlockingConnection (Gavin M. Roy)
- 7e0d1b3 - Use google style with yapf instead of pep8 (Gavin M. Roy)
- 7dc9bab - Refactor socket writing to not use sendall #481 (Gavin M. Roy)
- 4838789 - Dont log the fd #521 (Gavin M. Roy)
- 765107d - Add Connection.Blocked callback registration methods #476 (Gavin M. Roy)
- c15b5c1 - Fix _blocking typo pointed out in #513 (Gavin M. Roy)
- 759ac2c - yapf of codegen (Gavin M. Roy)
- 9dadd77 - yapf cleanup of codegen and spec (Gavin M. Roy)
- ddba7ce - Do not reject consumers with no_ack=True #486 #530 (Gavin M. Roy)
- 4528a1a - yapf reformatting of tests (Gavin M. Roy)
- e7b6d73 - Remove catching AttributError (#531) (Gavin M. Roy)
- 41ea5ea - Update README badges [skip ci] (Gavin M. Roy)
- 6af987b - Add note on contributing (Gavin M. Roy)
- 161fc0d - yapf formatting cleanup (Gavin M. Roy)
- edcb619 - Add PYPY to travis testing (Gavin M. Roy)
- 2225771 - Change the coverage badge (Gavin M. Roy)
- 8f7d451 - Move to codecov from coveralls (Gavin M. Roy)
- b80407e - Add confirm_delivery to example (Andrew Smith)
- 6637212 - Update base_connection.py (bstemshorn)
- 1583537 - #544 get_waiting_message_count() (markcf)
- 0c9be99 - Fix #535: pass expected reply_code and reply_text from method frame to Connection._on_disconnect from Connection._on_connection_closed (Vitaly Kruglikov)
- d11e73f - Propagate ConnectionClosed exception out of BlockingChannel._send_method() and log ConnectionClosed in BlockingConnection._on_connection_closed() (Vitaly Kruglikov)
- 63d2951 - Fix #541 - make sure connection state is properly reset when BlockingConnection._check_state_on_disconnect raises ConnectionClosed. This supplements the previously-merged PR #450 by getting the connection into consistent state. (Vitaly Kruglikov)
- 71bc0eb - Remove unused self.fd attribute from BaseConnection (Vitaly Kruglikov)
- 8c08f93 - PIKA-532 Removed unnecessary params (Vitaly Kruglikov)
- 6052ecf - PIKA-532 Fix bug in BlockingConnection._handle_timeout that was preventing _on_connection_closed from being called when not closing. (Vitaly Kruglikov)
- 562aa15 - pika: callback: Display exception message when callback fails. (Stuart Longland)
- 452995c - Typo fix in connection.py (Andrew)
- 361c0ad - Added some missing yields (Robert Weidlich)
- 0ab5a60 - Added complete example for python twisted service (Robert Weidlich)
- 4429110 - Add deployment and webhooks (Gavin M. Roy)
- 7e50302 - Fix has_content style in codegen (Andrew Grigorev)
- 28c2214 - Fix the trove categorization (Gavin M. Roy)
- de8b545 - Ensure frames can not be interspersed on send (Gavin M. Roy)
- 8fe6bdd - Fix heartbeat behaviour after connection failure. (Kyösti Herrala)
- c123472 - Updating BlockingChannel.basic_get doc (it does not receive a callback like the rest of the adapters) (Roberto Decurnex)
- b5f52fb - Fix number of arguments passed to _on_return callback (Axel Eirola)
- 765139e - Lower default TIMEOUT to 0.01 (bra-fsn)
- 6cc22a5 - Fix confirmation on reconnects (bra-fsn)
- f4faf0a - asynchronous publisher and subscriber examples refactored to follow the StepDown rule (Riccardo Cirimelli)

0.9.14 - 2014-07-11
-------------------

- 57fe43e - fix test to generate a correct range of random ints (ml)
- 0d68dee - fix async watcher for libev_connection (ml)
- 01710ad - Use default username and password if not specified in URLParameters (Sean Dwyer)
- fae328e - documentation typo (Jeff Fein-Worton)
- afbc9e0 - libev_connection: reset_io_watcher (ml)
- 24332a2 - Fix the manifest (Gavin M. Roy)
- acdfdef - Remove useless test (Gavin M. Roy)
- 7918e1a - Skip libev tests if pyev is not installed or if they are being run in pypy (Gavin M. Roy)
- bb583bf - Remove the deprecated test (Gavin M. Roy)
- aecf3f2 - Don't reject a message if the channel is not open (Gavin M. Roy)
- e37f336 - Remove UTF-8 decoding in spec (Gavin M. Roy)
- ddc35a9 - Update the unittest to reflect removal of force binary (Gavin M. Roy)
- fea2476 - PEP8 cleanup (Gavin M. Roy)
- 9b97956 - Remove force_binary (Gavin M. Roy)
- a42dd90 - Whitespace required (Gavin M. Roy)
- 85867ea - Update the content_frame_dispatcher tests to reflect removal of auto-cast utf-8 (Gavin M. Roy)
- 5a4bd5d - Remove unicode casting (Gavin M. Roy)
- efea53d - Remove force binary and unicode casting (Gavin M. Roy)
- e918d15 - Add methods to remove deprecation warnings from asyncore (Gavin M. Roy)
- 117f62d - Add a coveragerc to ignore the auto generated pika.spec (Gavin M. Roy)
- 52f4485 - Remove pypy tests from travis for now (Gavin M. Roy)
- c3aa958 - Update README.rst (Gavin M. Roy)
- 3e2319f - Delete README.md (Gavin M. Roy)
- c12b0f1 - Move to RST (Gavin M. Roy)
- 704f5be - Badging updates (Gavin M. Roy)
- 7ae33ca - Update for coverage info (Gavin M. Roy)
- ae7ca86 - add libev_adapter_tests.py; modify .travis.yml to install libev and pyev (ml)
- f86aba5 - libev_connection: add **kwargs to _handle_event; suppress default_ioloop reuse warning (ml)
- 603f1cf - async_test_base: add necessary args to _on_cconn_closed (ml)
- 3422007 - add libev_adapter_tests.py (ml)
- 6cbab0c - removed relative imports and importing urlparse from urllib.parse for py3+ (a-tal)
- f808464 - libev_connection: add async watcher; add optional parameters to add_timeout (ml)
- c041c80 - Remove ev all together for now (Gavin M. Roy)
- 9408388 - Update the test descriptions and timeout (Gavin M. Roy)
- 1b552e0 - Increase timeout (Gavin M. Roy)
- 69a1f46 - Remove the pyev requirement for 2.6 testing (Gavin M. Roy)
- fe062d2 - Update package name (Gavin M. Roy)
- 611ad0e - Distribute the LICENSE and README.md (#350) (Gavin M. Roy)
- df5e1d8 - Ensure that the entire frame is written using socket.sendall (#349) (Gavin M. Roy)
- 69ec8cf - Move the libev install to before_install (Gavin M. Roy)
- a75f693 - Update test structure (Gavin M. Roy)
- 636b424 - Update things to ignore (Gavin M. Roy)
- b538c68 - Add tox, nose.cfg, update testing config (Gavin M. Roy)
- a0e7063 - add some tests to increase coverage of pika.connection (Charles Law)
- c76d9eb - Address issue #459 (Gavin M. Roy)
- 86ad2db - Raise exception if positional arg for parameters isn't an instance of Parameters (Gavin M. Roy)
- 14d08e1 - Fix for python 2.6 (Gavin M. Roy)
- bd388a3 - Use the first unused channel number addressing #404, #460 (Gavin M. Roy)
- e7676e6 - removing a debug that was left in last commit (James Mutton)
- 6c93b38 - Fixing connection-closed behavior to detect on attempt to publish (James Mutton)
- c3f0356 - Initialize bytes_written in _handle_write() (Jonathan Kirsch)
- 4510e95 - Fix _handle_write() may not send full frame (Jonathan Kirsch)
- 12b793f - fixed Tornado Consumer example to successfully reconnect (Yang Yang)
- f074444 - remove forgotten import of ordereddict (Pedro Abranches)
- 1ba0aea - fix last merge (Pedro Abranches)
- 10490a6 - change timeouts structure to list to maintain scheduling order (Pedro Abranches)
- 7958394 - save timeouts in ordered dict instead of dict (Pedro Abranches)
- d2746bf - URLParameters and ConnectionParameters accept unicode strings (Allard Hoeve)
- 596d145 - previous fix for AttributeError made parent and child class methods identical, remove duplication (James Mutton)
- 42940dd - UrlParameters Docs: fixed amqps scheme examples (Riccardo Cirimelli)
- 43904ff - Dont test this in PyPy due to sort order issue (Gavin M. Roy)
- d7d293e - Don't leave __repr__ sorting up to chance (Gavin M. Roy)
- 848c594 - Add integration test to travis and fix invocation (Gavin M. Roy)
- 2678275 - Add pypy to travis tests (Gavin M. Roy)
- 1877f3d - Also addresses issue #419 (Gavin M. Roy)
- 470c245 - Address issue #419 (Gavin M. Roy)
- ca3cb59 - Address issue #432 (Gavin M. Roy)
- a3ff6f2 - Default frame max should be AMQP FRAME_MAX (Gavin M. Roy)
- ff3d5cb - Remove max consumer tag test due to change in code. (Gavin M. Roy)
- 6045dda - Catch KeyError (#437) to ensure that an exception is not raised in a race condition (Gavin M. Roy)
- 0b4d53a - Address issue #441 (Gavin M. Roy)
- 180e7c4 - Update license and related files (Gavin M. Roy)
- 256ed3d - Added Jython support. (Erik Olof Gunnar Andersson)
- f73c141 - experimental work around for recursion issue. (Erik Olof Gunnar Andersson)
- a623f69 - Prevent #436 by iterating the keys and not the dict (Gavin M. Roy)
- 755fcae - Add support for authentication_failure_close, connection.blocked (Gavin M. Roy)
- c121243 - merge upstream master (Michael Laing)
- a08dc0d - add arg to channel.basic_consume (Pedro Abranches)
- 10b136d - Documentation fix (Anton Ryzhov)
- 9313307 - Fixed minor markup errors. (Jorge Puente Sarrín)
- fb3e3cf - Fix the spelling of UnsupportedAMQPFieldException (Garrett Cooper)
- 03d5da3 - connection.py: Propagate the force_channel keyword parameter to methods involved in channel creation (Michael Laing)
- 7bbcff5 - Documentation fix for basic_publish (JuhaS)
- 01dcea7 - Expose no_ack and exclusive to BlockingChannel.consume (Jeff Tang)
- d39b6aa - Fix BlockingChannel.basic_consume does not block on non-empty queues (Juhyeong Park)
- 6e1d295 - fix for issue 391 and issue 307 (Qi Fan)
- d9ffce9 - Update parameters.rst (cacovsky)
- 6afa41e - Add additional badges (Gavin M. Roy)
- a255925 - Fix return value on dns resolution issue (Laurent Eschenauer)
- 3f7466c - libev_connection: tweak docs (Michael Laing)
- 0aaed93 - libev_connection: Fix variable naming (Michael Laing)
- 0562d08 - libev_connection: Fix globals warning (Michael Laing)
- 22ada59 - libev_connection: use globals to track sigint and sigterm watchers as they are created globally within libev (Michael Laing)
- 2649b31 - Move badge [skip ci] (Gavin M. Roy)
- f70eea1 - Remove pypy and installation attempt of pyev (Gavin M. Roy)
- f32e522 - Conditionally skip external connection adapters if lib is not installed (Gavin M. Roy)
- cce97c5 - Only install pyev on python 2.7 (Gavin M. Roy)
- ff84462 - Add travis ci support (Gavin M. Roy)
- cf971da - lib_evconnection: improve signal handling; add callback (Michael Laing)
- 9adb269 - bugfix in returning a list in Py3k (Alex Chandel)
- c41d5b9 - update exception syntax for Py3k (Alex Chandel)
- c8506f1 - fix _adapter_connect (Michael Laing)
- 67cb660 - Add LibevConnection to README (Michael Laing)
- 1f9e72b - Propagate low-level connection errors to the AMQPConnectionError. (Bjorn Sandberg)
- e1da447 - Avoid race condition in _on_getok on successive basic_get() when clearing out callbacks (Jeff)
- 7a09979 - Add support for upcoming Connection.Blocked/Unblocked (Gavin M. Roy)
- 53cce88 - TwistedChannel correctly handles multi-argument deferreds. (eivanov)
- 66f8ace - Use uuid when creating unique consumer tag (Perttu Ranta-aho)
- 4ee2738 - Limit the growth of Channel._cancelled, use deque instead of list. (Perttu Ranta-aho)
- 0369aed - fix adapter references and tweak docs (Michael Laing)
- 1738c23 - retry select.select() on EINTR (Cenk Alti)
- 1e55357 - libev_connection: reset internal state on reconnect (Michael Laing)
- 708559e - libev adapter (Michael Laing)
- a6b7c8b - Prioritize EPollPoller and KQueuePoller over PollPoller and SelectPoller (Anton Ryzhov)
- 53400d3 - Handle socket errors in PollPoller and EPollPoller; correctly check 'select.poll' availability (Anton Ryzhov)
- a6dc969 - Use dict.keys & items instead of iterkeys & iteritems (Alex Chandel)
- 5c1b0d0 - Use print function syntax, in examples (Alex Chandel)
- ac9f87a - Fixed a typo in the name of the Asyncore Connection adapter (Guruprasad)
- dfbba50 - Fixed bug mentioned in Issue #357 (Erik Andersson)
- c906a2d - Drop additional flags when getting info for the hostnames, log errors (#352) (Gavin M. Roy)
- baf23dd - retry poll() on EINTR (Cenk Alti)
- 7cd8762 - Address ticket #352 catching an error when socket.getprotobyname fails (Gavin M. Roy)
- 6c3ec75 - Prep for 0.9.14 (Gavin M. Roy)
- dae7a99 - Bump to 0.9.14p0 (Gavin M. Roy)
- 620edc7 - Use default port and virtual host if omitted in URLParameters (Issue #342) (Gavin M. Roy)
- 42a8787 - Move the exception handling inside the while loop (Gavin M. Roy)
- 10e0264 - Fix connection back pressure detection issue #347 (Gavin M. Roy)
- 0bfd670 - Fixed mistake in commit 3a19d65. (Erik Andersson)
- da04bc0 - Fixed Unknown state on disconnect error message generated when closing connections. (Erik Andersson)
- 3a19d65 - Alternative solution to fix #345. (Erik Andersson)
- abf9fa8 - switch to sendall to send entire frame (Dustin Koupal)
- 9ce8ce4 - Fixed the async publisher example to work with reconnections (Raphaël De Giusti)
- 511028a - Fix typo in TwistedChannel docstring (cacovsky)
- 8b69e5a - calls self._adapter_disconnect() instead of self.disconnect() which doesn't actually exist #294 (Mark Unsworth)
- 06a5cf8 - add NullHandler to prevent logging warnings (Cenk Alti)
- f404a9a - Fix #337 cannot start ioloop after stop (Ralf Nyren)

0.9.13 - 2013-05-15
-------------------

**Major Changes**

- IPv6 Support with thanks to Alessandro Tagliapietra for initial prototype
- Officially remove support for <= Python 2.5 even though it was broken already
- Drop pika.simplebuffer.SimpleBuffer in favor of the Python stdlib collections.deque object
- New default object for receiving content is a "bytes" object which is a str wrapper in Python 2, but paves way for Python 3 support
- New "Raw" mode for frame decoding content frames (#334) addresses issues #331, #229 added by Garth Williamson
- Connection and Disconnection logic refactored, allowing for cleaner separation of protocol logic and socket handling logic as well as connection state management
- New "on_open_error_callback" argument in creating connection objects and new Connection.add_on_open_error_callback method
- New Connection.connect method to cleanly allow for reconnection code
- Support for all AMQP field types, using protocol specified signed/unsigned unpacking

**Backwards Incompatible Changes**

- Method signature for creating connection objects has new argument "on_open_error_callback" which is positionally before "on_close_callback"
- Internal callback variable names in connection.Connection have been renamed and constants used. If you relied on any of these callbacks outside of their internal use, make sure to check out the new constants.
- Connection._connect method, which was an internal only method, is now deprecated and will raise a DeprecationWarning. If you relied on this method, your code needs to change.
- pika.simplebuffer has been removed

**Bugfixes**

- BlockingConnection consumer generator does not free buffer when exited (#328)
- Unicode body payloads in the blocking adapter raises exception (#333)
- Support "b" short-short-int AMQP data type (#318)
- Docstring type fix in adapters/select_connection (#316) fix by Rikard Hultén
- IPv6 not supported (#309)
- Stop the HeartbeatChecker when connection is closed (#307)
- Unittest fix for SelectConnection (#336) fix by Erik Andersson
- Handle condition where no connection or socket exists but SelectConnection needs a timeout for retrying a connection (#322)
- TwistedAdapter lagging behind BaseConnection changes (#321) fix by Jan Urbański

**Other**

- Refactored documentation
- Added Twisted Adapter example (#314) by nolinksoft

0.9.12 - 2013-03-18
-------------------

**Bugfixes**

- New timeout id hashing was not unique

0.9.11 - 2013-03-17
-------------------

**Bugfixes**

- Address inconsistent channel close callback documentation and add the signature change to the TwistedChannel class (#305)
- Address a missed timeout related internal data structure name change introduced in the SelectConnection 0.9.10 release. Update all connection adapters to use same signature and docstring (#306).

0.9.10 - 2013-03-16
-------------------

**Bugfixes**

- Fix timeout in twisted adapter (Submitted by cellscape)
- Fix blocking_connection poll timer resolution to milliseconds (Submitted by cellscape)
- Fix channel._on_close() without a method frame (Submitted by Richard Boulton)
- Addressed exception on close (Issue #279 - fix by patcpsc)
- 'messages' not initialized in BlockingConnection.cancel() (Issue #289 - fix by Mik Kocikowski)
- Make queue_unbind behave like queue_bind (Issue #277)
- Address closing behavioral issues for connections and channels (Issue #275)
- Pass a Method frame to Channel._on_close in Connection._on_disconnect (Submitted by Jan Urbański)
- Fix channel closed callback signature in the Twisted adapter (Submitted by Jan Urbański)
- Don't stop the IOLoop on connection close in the Twisted adapter (Submitted by Jan Urbański)
- Update the asynchronous examples to fix reconnecting and have it work
- Warn if the socket was closed such as if RabbitMQ dies without a Close frame
- Fix URLParameters ssl_options (Issue #296)
- Add state to BlockingConnection addressing (Issue #301)
- Encode unicode body content prior to publishing (Issue #282)
- Fix an issue with unicode keys in BasicProperties headers key (Issue #280)
- Change how timeout ids are generated (Issue #254)
- Address post close state issues in Channel (Issue #302)

**Behavior changes**

- Change core connection communication behavior to prefer outbound writes over reads, addressing a recursion issue
- Update connection on close callbacks, changing callback method signature
- Update channel on close callbacks, changing callback method signature
- Give more info in the ChannelClosed exception
- Change the constructor signature for BlockingConnection, block open/close callbacks
- Disable the use of add_on_open_callback/add_on_close_callback methods in BlockingConnection

0.9.9 - 2013-01-29
------------------

**Bugfixes**

- Only remove the tornado_connection.TornadoConnection file descriptor from the IOLoop if it's still open (Issue #221)
- Allow messages with no body (Issue #227)
- Allow for empty routing keys (Issue #224)
- Don't raise an exception when trying to send a frame to a closed connection (Issue #229)
- Only send a Connection.CloseOk if the connection is still open. (Issue #236 - Fix by noleaf)
- Fix timeout threshold in blocking connection - (Issue #232 - Fix by Adam Flynn)
- Fix closing connection while a channel is still open (Issue #230 - Fix by Adam Flynn)
- Fixed misleading warning and exception messages in BaseConnection (Issue #237 - Fix by Tristan Penman)
- Pluralised and altered the wording of the AMQPConnectionError exception (Issue #237 - Fix by Tristan Penman)
- Fixed _adapter_disconnect in TornadoConnection class (Issue #237 - Fix by Tristan Penman)
- Fixing hang when closing connection without any channel in BlockingConnection (Issue #244 - Fix by Ales Teska)
- Remove the process_timeouts() call in SelectConnection (Issue #239)
- Change the string validation to basestring for host connection parameters (Issue #231)
- Add a poller to the BlockingConnection to address latency issues introduced in Pika 0.9.8 (Issue #242)
- reply_code and reply_text is not set in ChannelException (Issue #250)
- Add the missing constraint parameter for Channel._on_return callback processing (Issue #257 - Fix by patcpsc)
- Channel callbacks not being removed from callback manager when channel is closed or deleted (Issue #261)

0.9.8 - 2012-11-18
------------------

**Bugfixes**

- Channel.queue_declare/BlockingChannel.queue_declare not setting up callbacks property for empty queue name (Issue #218)
- Channel.queue_bind/BlockingChannel.queue_bind not allowing empty routing key
- Connection._on_connection_closed calling wrong method in Channel (Issue #219)
- Fix tx_commit and tx_rollback bugs in BlockingChannel (Issue #217)

0.9.7 - 2012-11-11
------------------

**New features**

- generator based consumer in BlockingChannel (See :doc:`examples/blocking_consumer_generator` for example)

**Changes**

- BlockingChannel._send_method will only wait if explicitly told to

**Bugfixes**

- Added the exchange "type" parameter back but issue a DeprecationWarning
- Dont require a queue name in Channel.queue_declare()
- Fixed KeyError when processing timeouts (Issue #215 - Fix by Raphael De Giusti)
- Don't try and close channels when the connection is closed (Issue #216 - Fix by Charles Law)
- Dont raise UnexpectedFrame exceptions, log them instead
- Handle multiple synchronous RPC calls made without waiting for the call result (Issues #192, #204, #211)
- Typo in docs (Issue #207 Fix by Luca Wehrstedt)
- Only sleep on connection failure when retry attempts are > 0 (Issue #200)
- Bypass _rpc method and just send frames for Basic.Ack, Basic.Nack, Basic.Reject (Issue #205)

0.9.6 - 2012-10-29
------------------

**New features**

- URLParameters
- BlockingChannel.start_consuming() and BlockingChannel.stop_consuming()
- Delivery Confirmations
- Improved unittests

**Major bugfix areas**

- Connection handling
- Blocking functionality in the BlockingConnection
- SSL
- UTF-8 Handling

**Removals**

- pika.reconnection_strategies
- pika.channel.ChannelTransport
- pika.log
- pika.template
- examples directory

0.9.5 - 2011-03-29
------------------

**Changelog**

- Scope changes with adapter IOLoops and CallbackManager allowing for cleaner, multi-threaded operation
- Add support for Confirm.Select with channel.Channel.confirm_delivery()
- Add examples of delivery confirmation to examples (demo_send_confirmed.py)
- Update uses of log.warn with warning.warn for TCP Back-pressure alerting
- License boilerplate updated to simplify license text in source files
- Increment the timeout in select_connection.SelectPoller reducing CPU utilization
- Bug fix in Heartbeat frame delivery addressing issue #35
- Remove abuse of pika.log.method_call through a majority of the code
- Rename of key modules: table to data, frames to frame
- Cleanup of frame module and related classes
- Restructure of tests and test runner
- Update functional tests to respect RABBITMQ_HOST, RABBITMQ_PORT environment variables
- Bug fixes to reconnection_strategies module
- Fix the scale of timeout for PollPoller to be specified in milliseconds
- Remove mutable default arguments in RPC calls
- Add data type validation to RPC calls
- Move optional credentials erasing out of connection.Connection into credentials module
- Add support to allow for additional external credential types
- Add a NullHandler to prevent the 'No handlers could be found for logger "pika"' error message when not using pika.log in a client app at all.
- Clean up all examples to make them easier to read and use
- Move documentation into its own repository https://github.com/pika/documentation

- channel.py

  - Move channel.MAX_CHANNELS constant from connection.CHANNEL_MAX
  - Add default value of None to ChannelTransport.rpc
  - Validate callback and acceptable replies parameters in ChannelTransport.RPC
  - Remove unused connection attribute from Channel

- connection.py

  - Remove unused import of struct
  - Remove direct import of pika.credentials.PlainCredentials; change to import pika.credentials
  - Move CHANNEL_MAX to channel.MAX_CHANNELS
  - Change ConnectionParameters initialization parameter heartbeat to boolean
  - Validate all inbound parameter types in ConnectionParameters
  - Remove the Connection._erase_credentials stub method in favor of letting the Credentials object deal with that itself.
  - Warn if the credentials object intends on erasing the credentials and a reconnection strategy other than NullReconnectionStrategy is specified.
  - Change the default types for callback and acceptable_replies in Connection._rpc
  - Validate the callback and acceptable_replies data types in Connection._rpc

- adapters.blocking_connection.BlockingConnection

  - Addition of _adapter_disconnect to blocking_connection.BlockingConnection
  - Add timeout methods to BlockingConnection addressing issue #41
  - BlockingConnection didn't allow you to register more than one consumer callback because basic_consume was overridden to block immediately. New behavior allows you to do so.
  - Removed overriding of base basic_consume and basic_cancel methods. Now uses underlying Channel versions of those methods.
  - Added start_consuming() method to BlockingChannel to start the consumption loop.
  - Updated stop_consuming() to iterate through all the registered consumers in self._consumers and issue a basic_cancel.

pika-0.10.0/examples/
pika-0.10.0/examples/confirmation.py

import pika
from pika import spec
import logging

ITERATIONS = 100

logging.basicConfig(level=logging.INFO)

confirmed = 0
errors = 0
published = 0


def on_open(connection):
    connection.channel(on_channel_open)


def on_channel_open(channel):
    global published
    channel.confirm_delivery(on_delivery_confirmation)
    for iteration in xrange(0, ITERATIONS):
        channel.basic_publish('test', 'test.confirm', 'message body value',
                              pika.BasicProperties(content_type='text/plain',
                                                   delivery_mode=1))
        published += 1


def on_delivery_confirmation(frame):
    global confirmed, errors
    if isinstance(frame.method, spec.Basic.Ack):
        confirmed += 1
        logging.info('Received confirmation: %r', frame.method)
    else:
        logging.error('Received negative confirmation: %r', frame.method)
        errors += 1
    if (confirmed + errors) == ITERATIONS:
        logging.info('All confirmations received, published %i, confirmed %i with %i errors',
                     published, confirmed, errors)
        connection.close()
parameters = pika.URLParameters('amqp://guest:guest@localhost:5672/%2F?connection_attempts=50')
connection = pika.SelectConnection(parameters=parameters,
                                   on_open_callback=on_open)

try:
    connection.ioloop.start()
except KeyboardInterrupt:
    connection.close()
    connection.ioloop.start()

pika-0.10.0/examples/consume.py

import pika


def on_message(channel, method_frame, header_frame, body):
    channel.queue_declare(queue=body, auto_delete=True)

    if body.startswith("queue:"):
        queue = body.replace("queue:", "")
        key = body + "_key"
        print("Declaring queue %s bound with key %s" % (queue, key))
        channel.queue_declare(queue=queue, auto_delete=True)
        channel.queue_bind(queue=queue, exchange="test_exchange", routing_key=key)
    else:
        print("Message body", body)

    channel.basic_ack(delivery_tag=method_frame.delivery_tag)


credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters('localhost', credentials=credentials)
connection = pika.BlockingConnection(parameters)

channel = connection.channel()
channel.exchange_declare(exchange="test_exchange",
                         exchange_type="direct",
                         passive=False,
                         durable=True,
                         auto_delete=False)
channel.queue_declare(queue="standard", auto_delete=True)
channel.queue_bind(queue="standard", exchange="test_exchange", routing_key="standard_key")
channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_message, 'standard')

try:
    channel.start_consuming()
except KeyboardInterrupt:
    channel.stop_consuming()

connection.close()

pika-0.10.0/examples/consumer_queued.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

import pika
import json
import threading


buffer = []
lock = threading.Lock()

print('pika version: %s' % pika.__version__)

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))

main_channel = connection.channel()
consumer_channel = connection.channel()
bind_channel = connection.channel()

if pika.__version__ == '0.9.5':
    main_channel.exchange_declare(exchange='com.micex.sten', type='direct')
    main_channel.exchange_declare(exchange='com.micex.lasttrades', type='direct')
else:
    main_channel.exchange_declare(exchange='com.micex.sten', exchange_type='direct')
    main_channel.exchange_declare(exchange='com.micex.lasttrades', exchange_type='direct')

queue = main_channel.queue_declare(exclusive=True).method.queue
queue_tickers = main_channel.queue_declare(exclusive=True).method.queue

main_channel.queue_bind(exchange='com.micex.sten',
                        queue=queue,
                        routing_key='order.stop.create')


def process_buffer():
    if not lock.acquire(False):
        print('locked!')
        return
    try:
        while len(buffer):
            body = buffer.pop(0)

            ticker = None
            if 'ticker' in body['data']['params']['condition']:
                ticker = body['data']['params']['condition']['ticker']
            if not ticker:
                continue

            print('got ticker %s, gonna bind it...' % ticker)
            bind_channel.queue_bind(exchange='com.micex.lasttrades',
                                    queue=queue_tickers,
                                    routing_key=str(ticker))
            print('ticker %s binded ok' % ticker)
    finally:
        lock.release()


def callback(ch, method, properties, body):
    body = json.loads(body)['order.stop.create']
    buffer.append(body)
    process_buffer()


consumer_channel.basic_consume(callback, queue=queue, no_ack=True)

try:
    consumer_channel.start_consuming()
finally:
    connection.close()

pika-0.10.0/examples/consumer_simple.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

import pika
import json

print(('pika version: %s') % pika.__version__)

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))

main_channel = connection.channel()
consumer_channel = connection.channel()
bind_channel = connection.channel()

if pika.__version__ == '0.9.5':
    main_channel.exchange_declare(exchange='com.micex.sten', type='direct')
    main_channel.exchange_declare(exchange='com.micex.lasttrades', type='direct')
else:
    main_channel.exchange_declare(exchange='com.micex.sten', exchange_type='direct')
    main_channel.exchange_declare(exchange='com.micex.lasttrades', exchange_type='direct')

queue = main_channel.queue_declare(exclusive=True).method.queue
queue_tickers = main_channel.queue_declare(exclusive=True).method.queue

main_channel.queue_bind(exchange='com.micex.sten',
                        queue=queue,
                        routing_key='order.stop.create')


def hello():
    print('Hello world')

connection.add_timeout(5, hello)


def callback(ch, method, properties, body):
    body = json.loads(body)['order.stop.create']

    ticker = None
    if 'ticker' in body['data']['params']['condition']:
        ticker = body['data']['params']['condition']['ticker']
    if not ticker:
        return

    print('got ticker %s, gonna bind it...' % ticker)
    bind_channel.queue_bind(exchange='com.micex.lasttrades',
                            queue=queue_tickers,
                            routing_key=str(ticker))
    print('ticker %s binded ok' % ticker)

import logging
logging.basicConfig(level=logging.INFO)

consumer_channel.basic_consume(callback, queue=queue, no_ack=True)

try:
    consumer_channel.start_consuming()
finally:
    connection.close()

pika-0.10.0/examples/producer.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

import pika
import json
import random

print(('pika version: %s') % pika.__version__)

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))

main_channel = connection.channel()

if pika.__version__ == '0.9.5':
    main_channel.exchange_declare(exchange='com.micex.sten', type='direct')
    main_channel.exchange_declare(exchange='com.micex.lasttrades', type='direct')
else:
    main_channel.exchange_declare(exchange='com.micex.sten', exchange_type='direct')
    main_channel.exchange_declare(exchange='com.micex.lasttrades', exchange_type='direct')

tickers = {}
tickers['MXSE.EQBR.LKOH'] = (1933, 1940)
tickers['MXSE.EQBR.MSNG'] = (1.35, 1.45)
tickers['MXSE.EQBR.SBER'] = (90, 92)
tickers['MXSE.EQNE.GAZP'] = (156, 162)
tickers['MXSE.EQNE.PLZL'] = (1025, 1040)
tickers['MXSE.EQNL.VTBR'] = (0.05,0.06)


def getticker():
    # randrange's upper bound is exclusive, so len(tickers) covers every index
    return list(tickers.keys())[random.randrange(0, len(tickers))]

_COUNT_ = 10

for i in range(0, _COUNT_):
    ticker = getticker()
    msg = {'order.stop.create': {'data': {'params': {'condition': {'ticker': ticker}}}}}
    main_channel.basic_publish(exchange='com.micex.sten',
                               routing_key='order.stop.create',
                               body=json.dumps(msg),
                               properties=pika.BasicProperties(content_type='application/json'))
    print('send ticker %s' % ticker)

connection.close()
pika-0.10.0/examples/publish.py000066400000000000000000000023141257163076400163650ustar00rootroot00000000000000
import pika
import logging

logging.basicConfig(level=logging.DEBUG)

credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters('localhost', credentials=credentials)

connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.exchange_declare(exchange="test_exchange",
                         exchange_type="direct",
                         passive=False,
                         durable=True,
                         auto_delete=False)

print("Sending message to create a queue")
channel.basic_publish('test_exchange', 'standard_key', 'queue:group',
                      pika.BasicProperties(content_type='text/plain',
                                           delivery_mode=1))

connection.sleep(5)

print("Sending text message to group")
channel.basic_publish('test_exchange', 'group_key', 'Message to group_key',
                      pika.BasicProperties(content_type='text/plain',
                                           delivery_mode=1))

connection.sleep(5)

print("Sending text message")
channel.basic_publish('test_exchange', 'standard_key', 'Message to standard_key',
                      pika.BasicProperties(content_type='text/plain',
                                           delivery_mode=1))

connection.close()
pika-0.10.0/examples/send.py000066400000000000000000000021731257163076400156530ustar00rootroot00000000000000
import pika
import time
import logging

logging.basicConfig(level=logging.DEBUG)

ITERATIONS = 100

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost:5672/%2F?heartbeat_interval=1'))

channel = connection.channel()


def closeit():
    print('Close it')
connection.close() connection.add_timeout(5, closeit) connection.sleep(100) """ channel.confirm_delivery() start_time = time.time() for x in range(0, ITERATIONS): if not channel.basic_publish(exchange='test', routing_key='', body='Test 123', properties=pika.BasicProperties(content_type='text/plain', app_id='test', delivery_mode=1)): print 'Delivery not confirmed' else: print 'Confirmed delivery' channel.close() connection.close() duration = time.time() - start_time print "Published %i messages in %.4f seconds (%.2f messages per second)" % (ITERATIONS, duration, (ITERATIONS/duration)) """ pika-0.10.0/examples/tmp.py000066400000000000000000000334741257163076400155320ustar00rootroot00000000000000# -*- coding: utf-8 -*- import logging import pika import json LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) class ExamplePublisher(object): """This is an example publisher that will handle unexpected interactions with RabbitMQ such as channel and connection closures. If RabbitMQ closes the connection, it will reopen it. You should look at the output, as there are limited reasons why the connection may be closed, which usually are tied to permission related issues or socket timeouts. It uses delivery confirmations and illustrates one way to keep track of messages that have been sent and if they've been confirmed by RabbitMQ. """ EXCHANGE = 'message' EXCHANGE_TYPE = 'topic' PUBLISH_INTERVAL = 1 QUEUE = 'text' ROUTING_KEY = 'example.text' URLS = ['amqp://test:test@localhost:5672/%2F', 'amqp://guest:guest@localhost:5672/%2F'] def __init__(self): """Setup the example publisher object, passing in the URL we will use to connect to RabbitMQ. 
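send.py above passes `heartbeat_interval` through the AMQP URL. The query-string handling can be illustrated with the standard library alone — this is plain stdlib URL parsing, not pika's `URLParameters` implementation:

```python
from urllib.parse import urlparse, parse_qs

# The same URL send.py hands to pika.URLParameters
url = 'amqp://guest:guest@localhost:5672/%2F?heartbeat_interval=1'

parts = urlparse(url)
query = parse_qs(parts.query)

host = parts.hostname                             # 'localhost'
port = parts.port                                 # 5672
heartbeat = int(query['heartbeat_interval'][0])   # 1
```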
""" self._connection = None self._channel = None self._deliveries = [] self._acked = 0 self._nacked = 0 self._message_number = 0 self._stopping = False self._closing = False self._url_offset = 0 def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. :rtype: pika.SelectConnection """ url = self.URLS[self._url_offset] self._url_offset += 1 if self._url_offset == len(self.URLS): self._url_offset = 0 LOGGER.info('Connecting to %s', url) return pika.SelectConnection(pika.URLParameters(url), self.on_connection_open, False) def close_connection(self): """This method closes the connection to RabbitMQ.""" LOGGER.info('Closing connection') self._closing = True self._connection.close() def add_on_connection_close_callback(self): """This method adds an on close callback that will be invoked by pika when RabbitMQ closes the connection to the publisher unexpectedly. """ LOGGER.info('Adding connection close callback') self._connection.add_on_close_callback(self.on_connection_closed) def on_connection_closed(self, connection, reply_code, reply_text): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. :param pika.connection.Connection connection: The closed connection obj :param int reply_code: The server provided reply_code if given :param str reply_text: The server provided reply_text if given """ self._channel = None if self._closing: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reopening in 5 seconds: (%s) %s', reply_code, reply_text) self._connection.add_timeout(5, self.reconnect) def on_connection_open(self, unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. 
It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :type unused_connection: pika.SelectConnection """ LOGGER.info('Connection opened') self.add_on_connection_close_callback() self.open_channel() def reconnect(self): """Will be invoked by the IOLoop timer if the connection is closed. See the on_connection_closed method. """ # This is the old connection IOLoop instance, stop its ioloop self._connection.ioloop.stop() # Create a new connection self._connection = self.connect() # There is now a new connection, needs a new ioloop to run self._connection.ioloop.start() def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reply_code, reply_text): """Invoked by pika when RabbitMQ unexpectedly closes the channel. Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. :param pika.channel.Channel: The closed channel :param int reply_code: The numeric reason the channel was closed :param str reply_text: The text reason the channel was closed """ LOGGER.warning('Channel was closed: (%s) %s', reply_code, reply_text) self._deliveries = [] self._message_number = 0 if not self._closing: self._connection.close() def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. 
:param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. :param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) self._channel.exchange_declare(self.on_exchange_declareok, exchange_name, self.EXCHANGE_TYPE) def on_exchange_declareok(self, unused_frame): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame """ LOGGER.info('Exchange declared') self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. :param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) self._channel.queue_declare(self.on_queue_declareok, queue_name) def on_queue_declareok(self, method_frame): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. 
:param pika.frame.Method method_frame: The Queue.DeclareOk frame """ LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, self.QUEUE, self.ROUTING_KEY) self._channel.queue_bind(self.on_bindok, self.QUEUE, self.EXCHANGE, self.ROUTING_KEY) def on_delivery_confirmation(self, method_frame): """Invoked by pika when RabbitMQ responds to a Basic.Publish RPC command, passing in either a Basic.Ack or Basic.Nack frame with the delivery tag of the message that was published. The delivery tag is an integer counter indicating the message number that was sent on the channel via Basic.Publish. Here we're just doing house keeping to keep track of stats and remove message numbers that we expect a delivery confirmation of from the list used to keep track of messages that are pending confirmation. :param pika.frame.Method method_frame: Basic.Ack or Basic.Nack frame """ confirmation_type = method_frame.method.NAME.split('.')[1].lower() LOGGER.info('Received %s for delivery tag: %i', confirmation_type, method_frame.method.delivery_tag) if confirmation_type == 'ack': self._acked += 1 elif confirmation_type == 'nack': self._nacked += 1 self._deliveries.remove(method_frame.method.delivery_tag) LOGGER.info('Published %i messages, %i have yet to be confirmed, ' '%i were acked and %i were nacked', self._message_number, len(self._deliveries), self._acked, self._nacked) def enable_delivery_confirmations(self): """Send the Confirm.Select RPC method to RabbitMQ to enable delivery confirmations on the channel. The only way to turn this off is to close the channel and create a new one. When the message is confirmed from RabbitMQ, the on_delivery_confirmation method will be invoked passing in a Basic.Ack or Basic.Nack method from RabbitMQ that will indicate which messages it is confirming or rejecting. 
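`on_delivery_confirmation` above derives `'ack'`/`'nack'` from the frame's `NAME` and prunes the pending-delivery list. The pure bookkeeping can be lifted out and unit-tested without a broker (the helper and its argument names are ours):

```python
def record_confirmation(method_name, delivery_tag, deliveries, counters):
    """Mirror the stats/pruning done in on_delivery_confirmation above:
    method_name is a frame NAME such as 'Basic.Ack' or 'Basic.Nack'."""
    confirmation_type = method_name.split('.')[1].lower()
    counters[confirmation_type] = counters.get(confirmation_type, 0) + 1
    deliveries.remove(delivery_tag)   # this tag is no longer pending
    return confirmation_type
```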
""" LOGGER.info('Issuing Confirm.Select RPC command') self._channel.confirm_delivery(self.on_delivery_confirmation) def publish_message(self): """If the class is not stopping, publish a message to RabbitMQ, appending a list of deliveries with the message number that was sent. This list will be used to check for delivery confirmations in the on_delivery_confirmations method. Once the message has been sent, schedule another message to be sent. The main reason I put scheduling in was just so you can get a good idea of how the process is flowing by slowing down and speeding up the delivery intervals by changing the PUBLISH_INTERVAL constant in the class. """ if self._stopping: return message = {u'مفتاح': u' قيمة', u'键': u'值', u'キー': u'値'} properties = pika.BasicProperties(app_id='example-publisher', content_type='text/plain', headers=message) self._channel.basic_publish(self.EXCHANGE, self.ROUTING_KEY, json.dumps(message, ensure_ascii=False), properties) self._message_number += 1 self._deliveries.append(self._message_number) LOGGER.info('Published message # %i', self._message_number) self.schedule_next_message() def schedule_next_message(self): """If we are not closing our connection to RabbitMQ, schedule another message to be delivered in PUBLISH_INTERVAL seconds. """ if self._stopping: return LOGGER.info('Scheduling next message for %0.1f seconds', self.PUBLISH_INTERVAL) self._connection.add_timeout(self.PUBLISH_INTERVAL, self.publish_message) def start_publishing(self): """This method will enable delivery confirmations and schedule the first message to be sent to RabbitMQ """ LOGGER.info('Issuing consumer related RPC commands') self.enable_delivery_confirmations() self.schedule_next_message() def on_bindok(self, unused_frame): """This method is invoked by pika when it receives the Queue.BindOk response from RabbitMQ. 
Since we know we're now setup and bound, it's time to start publishing.""" LOGGER.info('Queue bound') self.start_publishing() def close_channel(self): """Invoke this command to close the channel with RabbitMQ by sending the Channel.Close RPC command. """ LOGGER.info('Closing the channel') if self._channel: self._channel.close() def open_channel(self): """This method will open a new channel with RabbitMQ by issuing the Channel.Open RPC command. When RabbitMQ confirms the channel is open by sending the Channel.OpenOK RPC reply, the on_channel_open method will be invoked. """ LOGGER.info('Creating a new channel') self._connection.channel(on_open_callback=self.on_channel_open) def run(self): """Run the example code by connecting and then starting the IOLoop. """ self._connection = self.connect() self._connection.ioloop.start() def stop(self): """Stop the example by closing the channel and connection. We set a flag here so that we stop scheduling new messages to be published. The IOLoop is started because this method is invoked by the Try/Catch below when KeyboardInterrupt is caught. Starting the IOLoop again will allow the publisher to cleanly disconnect from RabbitMQ. 
""" LOGGER.info('Stopping') self._stopping = True self.close_channel() self.close_connection() self._connection.ioloop.start() LOGGER.info('Stopped') def main(): logging.basicConfig(level=logging.DEBUG, format=LOG_FORMAT) example = ExamplePublisher() try: example.run() except KeyboardInterrupt: example.stop() if __name__ == '__main__': main() pika-0.10.0/examples/twisted_service.py000066400000000000000000000173731257163076400201350ustar00rootroot00000000000000""" # -*- coding:utf-8 -*- # based on: # - txamqp-helpers by Dan Siemon (March 2010) # http://git.coverfire.com/?p=txamqp-twistd.git;a=tree # - Post by Brian Chandler # https://groups.google.com/forum/#!topic/pika-python/o_deVmGondk # - Pika Documentation # http://pika.readthedocs.org/en/latest/examples/twisted_example.html Fire up this test application via `twistd -ny twisted_service.py` The application will answer to requests to exchange "foobar" and any of the routing_key values: "request1", "request2", or "request3" with messages to the same exchange, but with routing_key "response" When a routing_key of "task" is used on the exchange "foobar", the application can asynchronously run a maximum of 2 tasks at once as defined by PREFETCH_COUNT """ import pika from pika import spec from pika import exceptions from pika.adapters import twisted_connection from twisted.internet import protocol from twisted.application import internet from twisted.application import service from twisted.internet.defer import inlineCallbacks from twisted.internet import ssl, defer, task from twisted.python import log from twisted.internet import reactor PREFETCH_COUNT = 2 class PikaService(service.MultiService): name = 'amqp' def __init__(self, parameter): service.MultiService.__init__(self) self.parameters = parameter def startService(self): self.connect() service.MultiService.startService(self) def getFactory(self): if len(self.services) > 0: return self.services[0].factory def connect(self): f = PikaFactory(self.parameters) if 
self.parameters.ssl:
            s = ssl.ClientContextFactory()
            serv = internet.SSLClient(host=self.parameters.host, port=self.parameters.port, factory=f, contextFactory=s)
        else:
            serv = internet.TCPClient(host=self.parameters.host, port=self.parameters.port, factory=f)

        serv.factory = f
        f.service = serv
        name = '%s%s:%d' % ('ssl:' if self.parameters.ssl else '', self.parameters.host, self.parameters.port)
        serv.__repr__ = lambda: '<PikaService: %s>' % name
        serv.setName(name)
        serv.parent = self
        self.addService(serv)


class PikaProtocol(twisted_connection.TwistedProtocolConnection):

    connected = False
    name = 'AMQP:Protocol'

    @inlineCallbacks
    def connected(self, connection):
        self.channel = yield connection.channel()
        yield self.channel.basic_qos(prefetch_count=PREFETCH_COUNT)
        self.connected = True
        for (exchange, routing_key, callback,) in self.factory.read_list:
            yield self.setup_read(exchange, routing_key, callback)

        self.send()

    @inlineCallbacks
    def read(self, exchange, routing_key, callback):
        """Add an exchange to the list of exchanges to read from."""
        if self.connected:
            yield self.setup_read(exchange, routing_key, callback)

    @inlineCallbacks
    def setup_read(self, exchange, routing_key, callback):
        """This function does the work to read from an exchange."""
        if not exchange == '':
            yield self.channel.exchange_declare(exchange=exchange, type='topic', durable=True, auto_delete=False)

        self.channel.queue_declare(queue=routing_key, durable=True)
        (queue, consumer_tag,) = yield self.channel.basic_consume(queue=routing_key, no_ack=False)
        d = queue.get()
        d.addCallback(self._read_item, queue, callback)
        d.addErrback(self._read_item_err)

    def _read_item(self, item, queue, callback):
        """Callback function which is called when an item is read."""
        d = queue.get()
        d.addCallback(self._read_item, queue, callback)
        d.addErrback(self._read_item_err)
        (channel, deliver, props, msg,) = item
        log.msg('%s (%s): %s' % (deliver.exchange, deliver.routing_key, repr(msg)), system='Pika:<=')
        d = defer.maybeDeferred(callback, item)
d.addCallbacks( lambda _: channel.basic_ack(deliver.delivery_tag), lambda _: channel.basic_nack(deliver.delivery_tag) ) def _read_item_err(self, error): print(error) def send(self): """If connected, send all waiting messages.""" if self.connected: while len(self.factory.queued_messages) > 0: (exchange, r_key, message,) = self.factory.queued_messages.pop(0) self.send_message(exchange, r_key, message) @inlineCallbacks def send_message(self, exchange, routing_key, msg): """Send a single message.""" log.msg('%s (%s): %s' % (exchange, routing_key, repr(msg)), system='Pika:=>') yield self.channel.exchange_declare(exchange=exchange, type='topic', durable=True, auto_delete=False) prop = spec.BasicProperties(delivery_mode=2) try: yield self.channel.basic_publish(exchange=exchange, routing_key=routing_key, body=msg, properties=prop) except Exception as error: log.msg('Error while sending message: %s' % error, system=self.name) class PikaFactory(protocol.ReconnectingClientFactory): name = 'AMQP:Factory' def __init__(self, parameters): self.parameters = parameters self.client = None self.queued_messages = [] self.read_list = [] def startedConnecting(self, connector): log.msg('Started to connect.', system=self.name) def buildProtocol(self, addr): self.resetDelay() log.msg('Connected', system=self.name) self.client = PikaProtocol(self.parameters) self.client.factory = self self.client.ready.addCallback(self.client.connected) return self.client def clientConnectionLost(self, connector, reason): log.msg('Lost connection. Reason: %s' % reason, system=self.name) protocol.ReconnectingClientFactory.clientConnectionLost(self, connector, reason) def clientConnectionFailed(self, connector, reason): log.msg('Connection failed. 
Reason: %s' % reason, system=self.name)
        protocol.ReconnectingClientFactory.clientConnectionFailed(self, connector, reason)

    def send_message(self, exchange=None, routing_key=None, message=None):
        self.queued_messages.append((exchange, routing_key, message))
        if self.client is not None:
            self.client.send()

    def read_messages(self, exchange, routing_key, callback):
        """Configure an exchange to be read from."""
        self.read_list.append((exchange, routing_key, callback))
        if self.client is not None:
            self.client.read(exchange, routing_key, callback)


application = service.Application("pikaapplication")

ps = PikaService(pika.ConnectionParameters(host="localhost", virtual_host="/", credentials=pika.PlainCredentials("guest", "guest")))
ps.setServiceParent(application)


class TestService(service.Service):

    def task(self, msg):
        """
        Method for a time-consuming task.

        This function must return a deferred. If it is successful, a
        `basic.ack` will be sent to AMQP. If the task was not completed,
        a `basic.nack` will be sent. In this example it will always
        return successfully after a 2 second pause.
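`PikaFactory` above buffers `send_message`/`read_messages` calls in plain lists until a client protocol exists, then flushes them on connect. A minimal, pika-free model of that queue-until-connected behavior (class and attribute names are ours):

```python
class QueueingFactory:
    """Toy model of the buffering done by PikaFactory above."""

    def __init__(self):
        self.queued_messages = []
        self.client = None      # set once a connection is built
        self.sent = []          # what a real client would transmit

    def send_message(self, exchange, routing_key, message):
        self.queued_messages.append((exchange, routing_key, message))
        if self.client is not None:
            self._flush()

    def client_connected(self):
        """Called when buildProtocol succeeds; drain the backlog."""
        self.client = object()
        self._flush()

    def _flush(self):
        while self.queued_messages:
            self.sent.append(self.queued_messages.pop(0))
```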
""" return task.deferLater(reactor, 2, lambda: log.msg("task completed")) def respond(self, msg): self.amqp.send_message('foobar', 'response', msg[3]) def startService(self): self.amqp = self.parent.getServiceNamed("amqp").getFactory() self.amqp.read_messages("foobar", "request1", self.respond) self.amqp.read_messages("foobar", "request2", self.respond) self.amqp.read_messages("foobar", "request3", self.respond) self.amqp.read_messages("foobar", "task", self.task) ts = TestService() ts.setServiceParent(application) pika-0.10.0/nose.cfg000066400000000000000000000000721257163076400141530ustar00rootroot00000000000000[nosetests] verbosity=3 tests=tests/unit,tests/acceptance pika-0.10.0/pika/000077500000000000000000000000001257163076400134535ustar00rootroot00000000000000pika-0.10.0/pika/__init__.py000066400000000000000000000013721257163076400155670ustar00rootroot00000000000000__version__ = '0.10.0' import logging try: # not available in python 2.6 from logging import NullHandler except ImportError: class NullHandler(logging.Handler): def emit(self, record): pass # Add NullHandler to prevent logging warnings logging.getLogger(__name__).addHandler(NullHandler()) from pika.connection import ConnectionParameters from pika.connection import URLParameters from pika.credentials import PlainCredentials from pika.spec import BasicProperties from pika.adapters import BaseConnection from pika.adapters import BlockingConnection from pika.adapters import SelectConnection from pika.adapters import TornadoConnection from pika.adapters import TwistedConnection from pika.adapters import LibevConnection pika-0.10.0/pika/adapters/000077500000000000000000000000001257163076400152565ustar00rootroot00000000000000pika-0.10.0/pika/adapters/__init__.py000066400000000000000000000030521257163076400173670ustar00rootroot00000000000000# ***** BEGIN LICENSE BLOCK ***** # # For copyright and licensing please refer to COPYING. 
# # ***** END LICENSE BLOCK ***** """Pika provides multiple adapters to connect to RabbitMQ: - adapters.select_connection.SelectConnection: A native event based connection adapter that implements select, kqueue, poll and epoll. - adapters.tornado_connection.TornadoConnection: Connection adapter for use with the Tornado web framework. - adapters.blocking_connection.BlockingConnection: Enables blocking, synchronous operation on top of library for simple uses. - adapters.twisted_connection.TwistedConnection: Connection adapter for use with the Twisted framework - adapters.libev_connection.LibevConnection: Connection adapter for use with the libev event loop and employing nonblocking IO """ from pika.adapters.base_connection import BaseConnection from pika.adapters.blocking_connection import BlockingConnection from pika.adapters.select_connection import SelectConnection from pika.adapters.select_connection import IOLoop # Dynamically handle 3rd party library dependencies for optional imports try: from pika.adapters.tornado_connection import TornadoConnection except ImportError: TornadoConnection = None try: from pika.adapters.twisted_connection import TwistedConnection from pika.adapters.twisted_connection import TwistedProtocolConnection except ImportError: TwistedConnection = None TwistedProtocolConnection = None try: from pika.adapters.libev_connection import LibevConnection except ImportError: LibevConnection = None pika-0.10.0/pika/adapters/base_connection.py000066400000000000000000000430251257163076400207650ustar00rootroot00000000000000"""Base class extended by connection adapters. This extends the connection.Connection class to encapsulate connection behavior but still isolate socket and low level communication. 
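The adapters package above guards optional backends with `try`/`except ImportError`, leaving the name bound to `None` when the dependency is missing. The same pattern as a small generic helper (the helper itself is illustrative, not part of pika):

```python
import importlib

def optional_import(module_name):
    """Return the named module, or None if it cannot be imported --
    the same fallback style used for TornadoConnection et al. above."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None
```

Callers can then test the name against `None` before use, exactly as code checking `pika.adapters.TornadoConnection` would.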
""" import errno import logging import socket import ssl import pika.compat from pika import connection from pika import exceptions try: SOL_TCP = socket.SOL_TCP except AttributeError: SOL_TCP = 6 if pika.compat.PY2: _SOCKET_ERROR = socket.error else: # socket.error was deprecated and replaced by OSError in python 3.3 _SOCKET_ERROR = OSError LOGGER = logging.getLogger(__name__) class BaseConnection(connection.Connection): """BaseConnection class that should be extended by connection adapters""" # Use epoll's constants to keep life easy READ = 0x0001 WRITE = 0x0004 ERROR = 0x0008 ERRORS_TO_ABORT = [errno.EBADF, errno.ECONNABORTED, errno.EPIPE] ERRORS_TO_IGNORE = [errno.EWOULDBLOCK, errno.EAGAIN, errno.EINTR] DO_HANDSHAKE = True WARN_ABOUT_IOLOOP = False def __init__(self, parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, ioloop=None, stop_ioloop_on_close=True): """Create a new instance of the Connection object. :param pika.connection.Parameters parameters: Connection parameters :param method on_open_callback: Method to call on connection open :param on_open_error_callback: Method to call if the connection cant be opened :type on_open_error_callback: method :param method on_close_callback: Method to call on connection close :param object ioloop: IOLoop object to use :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected :raises: RuntimeError :raises: ValueError """ if parameters and not isinstance(parameters, connection.Parameters): raise ValueError('Expected instance of Parameters, not %r' % parameters) # Let the developer know we could not import SSL if parameters and parameters.ssl and not ssl: raise RuntimeError("SSL specified but it is not available") self.base_events = self.READ | self.ERROR self.event_state = self.base_events self.ioloop = ioloop self.socket = None self.stop_ioloop_on_close = stop_ioloop_on_close self.write_buffer = None super(BaseConnection, self).__init__(parameters, on_open_callback, 
on_open_error_callback, on_close_callback) def add_timeout(self, deadline, callback_method): """Add the callback_method to the IOLoop timer to fire after deadline seconds. Returns a handle to the timeout :param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :rtype: str """ return self.ioloop.add_timeout(deadline, callback_method) def close(self, reply_code=200, reply_text='Normal shutdown'): """Disconnect from RabbitMQ. If there are any open channels, it will attempt to close them prior to fully disconnecting. Channels which have active consumers will attempt to send a Basic.Cancel to RabbitMQ to cleanly stop the delivery of messages prior to closing the channel. :param int reply_code: The code number for the close :param str reply_text: The text reason for the close """ super(BaseConnection, self).close(reply_code, reply_text) self._handle_ioloop_stop() def remove_timeout(self, timeout_id): """Remove the timeout from the IOLoop by the ID returned from add_timeout. :rtype: str """ self.ioloop.remove_timeout(timeout_id) def _adapter_connect(self): """Connect to the RabbitMQ broker, returning True if connected. 
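`add_timeout`/`remove_timeout` above hand back an opaque handle that can later cancel the callback. A toy, clock-injected model of that contract — this is not pika's IOLoop implementation, just an illustration of the handle/cancel pattern:

```python
import heapq

class TimeoutRegistry:
    """Toy model of add_timeout/remove_timeout (not pika's IOLoop)."""

    def __init__(self):
        self._heap = []          # (due_time, handle, callback)
        self._cancelled = set()
        self._next_id = 0

    def add_timeout(self, deadline, callback, now=0.0):
        """Schedule callback deadline seconds after now; return a handle."""
        self._next_id += 1
        heapq.heappush(self._heap, (now + deadline, self._next_id, callback))
        return self._next_id

    def remove_timeout(self, timeout_id):
        """Cancel by handle; cancelled entries are skipped when due."""
        self._cancelled.add(timeout_id)

    def run_due(self, now):
        """Fire every non-cancelled callback whose deadline has passed."""
        fired = []
        while self._heap and self._heap[0][0] <= now:
            _, tid, cb = heapq.heappop(self._heap)
            if tid not in self._cancelled:
                fired.append(cb())
        return fired
```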
:returns: error string or exception instance on error; None on success """ # Get the addresses for the socket, supporting IPv4 & IPv6 while True: try: addresses = socket.getaddrinfo(self.params.host, self.params.port, 0, socket.SOCK_STREAM, socket.IPPROTO_TCP) break except _SOCKET_ERROR as error: if error.errno == errno.EINTR: continue LOGGER.critical('Could not get addresses to use: %s (%s)', error, self.params.host) return error # If the socket is created and connected, continue on error = "No socket addresses available" for sock_addr in addresses: error = self._create_and_connect_to_socket(sock_addr) if not error: # Make the socket non-blocking after the connect self.socket.setblocking(0) return None self._cleanup_socket() # Failed to connect return error def _adapter_disconnect(self): """Invoked if the connection is being told to disconnect""" try: self._remove_heartbeat() self._cleanup_socket() self._check_state_on_disconnect() finally: # Ensure proper cleanup since _check_state_on_disconnect may raise # an exception self._handle_ioloop_stop() self._init_connection_state() def _check_state_on_disconnect(self): """Checks to see if we were in opening a connection with RabbitMQ when we were disconnected and raises exceptions for the anticipated exception types. 
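`_adapter_connect` above retries `socket.getaddrinfo` when it is interrupted by a signal (EINTR). That retry loop generalizes to a tiny wrapper, shown here with a stand-in flaky function (the wrapper is ours, not pika API):

```python
import errno

def retry_on_eintr(func, *args, **kwargs):
    """Call func, retrying for as long as it fails with EINTR -- the
    same loop _adapter_connect uses around socket.getaddrinfo above."""
    while True:
        try:
            return func(*args, **kwargs)
        except OSError as error:
            if error.errno == errno.EINTR:
                continue    # interrupted by a signal; just try again
            raise           # any other error propagates
```

(Python 3.5+ retries EINTR automatically for most stdlib calls per PEP 475; this code targeted 2.6-3.4.)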
""" if self.connection_state == self.CONNECTION_PROTOCOL: LOGGER.error('Incompatible Protocol Versions') raise exceptions.IncompatibleProtocolError elif self.connection_state == self.CONNECTION_START: LOGGER.error("Socket closed while authenticating indicating a " "probable authentication error") raise exceptions.ProbableAuthenticationError elif self.connection_state == self.CONNECTION_TUNE: LOGGER.error("Socket closed while tuning the connection indicating " "a probable permission error when accessing a virtual " "host") raise exceptions.ProbableAccessDeniedError elif self.is_open: LOGGER.warning("Socket closed when connection was open") elif not self.is_closed and not self.is_closing: LOGGER.warning('Unknown state on disconnect: %i', self.connection_state) def _cleanup_socket(self): """Close the socket cleanly""" if self.socket: try: self.socket.shutdown(socket.SHUT_RDWR) except _SOCKET_ERROR: pass self.socket.close() self.socket = None def _create_and_connect_to_socket(self, sock_addr_tuple): """Create socket and connect to it, using SSL if enabled. 
:returns: error string on failure; None on success """ self.socket = socket.socket(sock_addr_tuple[0], socket.SOCK_STREAM, 0) self.socket.setsockopt(SOL_TCP, socket.TCP_NODELAY, 1) self.socket.settimeout(self.params.socket_timeout) # Wrap socket if using SSL if self.params.ssl: self.socket = self._wrap_socket(self.socket) ssl_text = " with SSL" else: ssl_text = "" LOGGER.info('Connecting to %s:%s%s', sock_addr_tuple[4][0], sock_addr_tuple[4][1], ssl_text) # Connect to the socket try: self.socket.connect(sock_addr_tuple[4]) except socket.timeout: error = 'Connection to %s:%s failed: timeout' % ( sock_addr_tuple[4][0], sock_addr_tuple[4][1] ) LOGGER.error(error) return error except _SOCKET_ERROR as error: error = 'Connection to %s:%s failed: %s' % (sock_addr_tuple[4][0], sock_addr_tuple[4][1], error) LOGGER.warning(error) return error # Handle SSL Connection Negotiation if self.params.ssl and self.DO_HANDSHAKE: try: self._do_ssl_handshake() except ssl.SSLError as error: error = 'SSL connection to %s:%s failed: %s' % ( sock_addr_tuple[4][0], sock_addr_tuple[4][1], error ) LOGGER.error(error) return error # Made it this far return None def _do_ssl_handshake(self): """Perform SSL handshaking, copied from python stdlib test_ssl.py. """ if not self.DO_HANDSHAKE: return while True: try: self.socket.do_handshake() break except ssl.SSLError as err: if err.args[0] == ssl.SSL_ERROR_WANT_READ: self.event_state = self.READ elif err.args[0] == ssl.SSL_ERROR_WANT_WRITE: self.event_state = self.WRITE else: raise self._manage_event_state() @staticmethod def _get_error_code(error_value): """Get the error code from the error_value accounting for Python version differences. :rtype: int """ if not error_value: return None if hasattr(error_value, 'errno'): # Python >= 2.6 return error_value.errno elif error_value is not None: return error_value[0] # Python <= 2.5 return None def _flush_outbound(self): """write early, if the socket will take the data why not get it out there asap. 
""" self._handle_write() self._manage_event_state() def _handle_disconnect(self): """Called internally when the socket is disconnected already """ self._adapter_disconnect() self._on_connection_closed(None, True) def _handle_ioloop_stop(self): """Invoked when the connection is closed to determine if the IOLoop should be stopped or not. """ if self.stop_ioloop_on_close and self.ioloop: self.ioloop.stop() elif self.WARN_ABOUT_IOLOOP: LOGGER.warning('Connection is closed but not stopping IOLoop') def _handle_error(self, error_value): """Internal error handling method. Here we expect a socket.error coming in and will handle different socket errors differently. :param int|object error_value: The inbound error """ if 'timed out' in str(error_value): raise socket.timeout error_code = self._get_error_code(error_value) if not error_code: LOGGER.critical("Tried to handle an error where no error existed") return # Ok errors, just continue what we were doing before if error_code in self.ERRORS_TO_IGNORE: LOGGER.debug("Ignoring %s", error_code) return # Socket is no longer connected, abort elif error_code in self.ERRORS_TO_ABORT: LOGGER.error("Fatal Socket Error: %r", error_value) elif self.params.ssl and isinstance(error_value, ssl.SSLError): if error_value.args[0] == ssl.SSL_ERROR_WANT_READ: self.event_state = self.READ elif error_value.args[0] == ssl.SSL_ERROR_WANT_WRITE: self.event_state = self.WRITE else: LOGGER.error("SSL Socket error: %r", error_value) else: # Haven't run into this one yet, log it. LOGGER.error("Socket Error: %s", error_code) # Disconnect from our IOLoop and let Connection know what's up self._handle_disconnect() def _handle_timeout(self): """Handle a socket timeout in read or write. We don't do anything in the non-blocking handlers because we only have the socket in a blocking state during connect.""" pass def _handle_events(self, fd, events, error=None, write_only=False): """Handle IO/Event loop events, processing them. 
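`_handle_error` above routes socket errors by errno: transient codes are ignored, fatal ones force a disconnect. That decision table can be written out and checked directly (the constants are copied from `BaseConnection` above; the helper name is ours):

```python
import errno

ERRORS_TO_ABORT = (errno.EBADF, errno.ECONNABORTED, errno.EPIPE)
ERRORS_TO_IGNORE = (errno.EWOULDBLOCK, errno.EAGAIN, errno.EINTR)

def classify_socket_errno(error_code):
    """Return the action _handle_error above would take for this errno."""
    if error_code in ERRORS_TO_IGNORE:
        return 'ignore'     # transient; retry the operation later
    if error_code in ERRORS_TO_ABORT:
        return 'abort'      # socket is unusable; disconnect
    return 'unknown'        # logged, then disconnected as well
```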
        :param int fd: The file descriptor for the events
        :param int events: Events from the IO/Event loop
        :param int error: Was an error specified
        :param bool write_only: Only handle write events

        """
        if not self.socket:
            LOGGER.error('Received events on closed socket: %r', fd)
            return

        if self.socket and (events & self.WRITE):
            self._handle_write()
            self._manage_event_state()

        if self.socket and not write_only and (events & self.READ):
            self._handle_read()

        if (self.socket and write_only and (events & self.READ) and
                (events & self.ERROR)):
            LOGGER.error('BAD libc: Write-Only but Read+Error. '
                         'Assume socket disconnected.')
            self._handle_disconnect()

        if self.socket and (events & self.ERROR):
            LOGGER.error('Error event %r, %r', events, error)
            self._handle_error(error)

    def _handle_read(self):
        """Read from the socket and call our on_data_available with the
        data."""
        try:
            while True:
                try:
                    if self.params.ssl:
                        data = self.socket.read(self._buffer_size)
                    else:
                        data = self.socket.recv(self._buffer_size)
                    break
                except _SOCKET_ERROR as error:
                    if error.errno == errno.EINTR:
                        continue
                    else:
                        raise
        except socket.timeout:
            self._handle_timeout()
            return 0
        except ssl.SSLError as error:
            if error.args[0] == ssl.SSL_ERROR_WANT_READ:
                # ssl wants more data but there is nothing currently
                # available in the socket, wait for it to become readable.
                return 0
            return self._handle_error(error)
        except _SOCKET_ERROR as error:
            if error.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
                return 0
            return self._handle_error(error)

        # Empty data, should disconnect
        if not data or data == 0:
            LOGGER.error('Read empty data, calling disconnect')
            return self._handle_disconnect()

        # Pass the data into our top level frame dispatching method
        self._on_data_available(data)
        return len(data)

    def _handle_write(self):
        """Try and write as much as we can; if we get blocked, requeue
        what's left"""
        bytes_written = 0
        try:
            while self.outbound_buffer:
                frame = self.outbound_buffer.popleft()
                while True:
                    try:
                        bw = self.socket.send(frame)
                        break
                    except _SOCKET_ERROR as error:
                        if error.errno == errno.EINTR:
                            continue
                        else:
                            raise
                bytes_written += bw
                if bw < len(frame):
                    LOGGER.debug("Partial write, requeuing remaining data")
                    self.outbound_buffer.appendleft(frame[bw:])
                    break
        except socket.timeout:
            # Will only come here if the socket is blocking
            LOGGER.debug("socket timeout, requeuing frame")
            self.outbound_buffer.appendleft(frame)
            self._handle_timeout()
        except _SOCKET_ERROR as error:
            if error.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
                LOGGER.debug("Would block, requeuing frame")
                self.outbound_buffer.appendleft(frame)
            else:
                return self._handle_error(error)

        return bytes_written

    def _init_connection_state(self):
        """Initialize or reset all of our internal state variables for a given
        connection. If we disconnect and reconnect, all of our state needs to
        be wiped.

        """
        super(BaseConnection, self)._init_connection_state()
        self.base_events = self.READ | self.ERROR
        self.event_state = self.base_events
        self.socket = None

    def _manage_event_state(self):
        """Manage the bitmask for reading/writing/error which is used by the
        io/event handler to specify when there is an event such as a read or
        write.
""" if self.outbound_buffer: if not self.event_state & self.WRITE: self.event_state |= self.WRITE self.ioloop.update_handler(self.socket.fileno(), self.event_state) elif self.event_state & self.WRITE: self.event_state = self.base_events self.ioloop.update_handler(self.socket.fileno(), self.event_state) def _wrap_socket(self, sock): """Wrap the socket for connecting over SSL. :rtype: ssl.SSLSocket """ return ssl.wrap_socket(sock, do_handshake_on_connect=self.DO_HANDSHAKE, **self.params.ssl_options) pika-0.10.0/pika/adapters/blocking_connection.py000066400000000000000000003054131257163076400216450ustar00rootroot00000000000000"""The blocking connection adapter module implements blocking semantics on top of Pika's core AMQP driver. While most of the asynchronous expectations are removed when using the blocking connection adapter, it attempts to remain true to the asynchronous RPC nature of the AMQP protocol, supporting server sent RPC commands. The user facing classes in the module consist of the :py:class:`~pika.adapters.blocking_connection.BlockingConnection` and the :class:`~pika.adapters.blocking_connection.BlockingChannel` classes. """ # Disable "access to protected member warnings: this wrapper implementation is # a friend of those instances # pylint: disable=W0212 from collections import namedtuple, deque import contextlib import functools import logging import time import pika.channel from pika import compat from pika import exceptions import pika.spec # NOTE: import SelectConnection after others to avoid circular depenency from pika.adapters.select_connection import SelectConnection LOGGER = logging.getLogger(__name__) class _CallbackResult(object): """ CallbackResult is a non-thread-safe implementation for receiving callback results; INTERNAL USE ONLY! 
""" __slots__ = ('_value_class', '_ready', '_values') def __init__(self, value_class=None): """ :param callable value_class: only needed if the CallbackResult instance will be used with `set_value_once` and `append_element`. *args and **kwargs of the value setter methods will be passed to this class. """ self._value_class = value_class self._ready = None self._values = None self.reset() def reset(self): """Reset value, but not _value_class""" self._ready = False self._values = None def __bool__(self): """ Called by python runtime to implement truth value testing and the built-in operation bool(); NOTE: python 3.x """ return self.is_ready() # python 2.x version of __bool__ __nonzero__ = __bool__ def __enter__(self): """ Entry into context manager that automatically resets the object on exit; this usage pattern helps garbage-collection by eliminating potential circular references. """ return self def __exit__(self, *args, **kwargs): """Reset value""" self.reset() def is_ready(self): """ :returns: True if the object is in a signaled state """ return self._ready @property def ready(self): """True if the object is in a signaled state""" return self._ready def signal_once(self, *_args, **_kwargs): # pylint: disable=W0613 """ Set as ready :raises AssertionError: if result was already signalled """ assert not self._ready, '_CallbackResult was already set' self._ready = True def set_value_once(self, *args, **kwargs): """ Set as ready with value; the value may be retrived via the `value` property getter :raises AssertionError: if result was already set """ self.signal_once() try: self._values = (self._value_class(*args, **kwargs),) except Exception: LOGGER.error( "set_value_once failed: value_class=%r; args=%r; kwargs=%r", self._value_class, args, kwargs) raise def append_element(self, *args, **kwargs): """Append an element to values""" assert not self._ready or isinstance(self._values, list), ( '_CallbackResult state is incompatible with append_element: ' 'ready=%r; 
values=%r' % (self._ready, self._values))

        try:
            value = self._value_class(*args, **kwargs)
        except Exception:
            LOGGER.error(
                "append_element failed: value_class=%r; args=%r; kwargs=%r",
                self._value_class, args, kwargs)
            raise

        if self._values is None:
            self._values = [value]
        else:
            self._values.append(value)

        self._ready = True

    @property
    def value(self):
        """
        :returns: a reference to the value that was set via `set_value_once`
        :raises AssertionError: if result was not set or value is incompatible
                                with `set_value_once`
        """
        assert self._ready, '_CallbackResult was not set'
        assert isinstance(self._values, tuple) and len(self._values) == 1, (
            '_CallbackResult value is incompatible with set_value_once: %r'
            % (self._values,))

        return self._values[0]

    @property
    def elements(self):
        """
        :returns: a reference to the list containing one or more elements that
            were added via `append_element`
        :raises AssertionError: if result was not set or value is incompatible
                                with `append_element`
        """
        assert self._ready, '_CallbackResult was not set'
        assert isinstance(self._values, list) and len(self._values) > 0, (
            '_CallbackResult value is incompatible with append_element: %r'
            % (self._values,))

        return self._values


class _IoloopTimerContext(object):  # pylint: disable=R0903
    """Context manager for registering and safely unregistering a
    SelectConnection ioloop-based timer
    """

    def __init__(self, duration, connection):
        """
        :param float duration: non-negative timer duration in seconds
        :param SelectConnection connection:
        """
        assert hasattr(connection, 'add_timeout'), connection
        self._duration = duration
        self._connection = connection
        self._callback_result = _CallbackResult()
        self._timer_id = None

    def __enter__(self):
        """Register a timer"""
        self._timer_id = self._connection.add_timeout(
            self._duration,
            self._callback_result.signal_once)
        return self

    def __exit__(self, *_args, **_kwargs):
        """Unregister timer if it hasn't fired yet"""
        if not self._callback_result:
            self._connection.remove_timeout(self._timer_id)
def is_ready(self): """ :returns: True if timer has fired, False otherwise """ return self._callback_result.is_ready() class _TimerEvt(object): # pylint: disable=R0903 """Represents a timer created via `BlockingConnection.add_timeout`""" __slots__ = ('timer_id', '_callback') def __init__(self, callback): """ :param callback: see callback_method in `BlockingConnection.add_timeout` """ self._callback = callback # Will be set to timer id returned from the underlying implementation's # `add_timeout` method self.timer_id = None def __repr__(self): return '%s(timer_id=%s, callback=%s)' % (self.__class__.__name__, self.timer_id, self._callback) def dispatch(self): """Dispatch the user's callback method""" self._callback() class _ConnectionBlockedUnblockedEvtBase(object): # pylint: disable=R0903 """Base class for `_ConnectionBlockedEvt` and `_ConnectionUnblockedEvt`""" __slots__ = ('_callback', '_method_frame') def __init__(self, callback, method_frame): """ :param callback: see callback_method parameter in `BlockingConnection.add_on_connection_blocked_callback` and `BlockingConnection.add_on_connection_unblocked_callback` :param pika.frame.Method method_frame: with method_frame.method of type `pika.spec.Connection.Blocked` or `pika.spec.Connection.Unblocked` """ self._callback = callback self._method_frame = method_frame def __repr__(self): return '%s(callback=%s, frame=%s)' % (self.__class__.__name__, self._callback, self._method_frame) def dispatch(self): """Dispatch the user's callback method""" self._callback(self._method_frame) class _ConnectionBlockedEvt( # pylint: disable=R0903 _ConnectionBlockedUnblockedEvtBase): """Represents a Connection.Blocked notification from RabbitMQ broker`""" pass class _ConnectionUnblockedEvt( # pylint: disable=R0903 _ConnectionBlockedUnblockedEvtBase): """Represents a Connection.Unblocked notification from RabbitMQ broker`""" pass class BlockingConnection(object): # pylint: disable=R0902 """The BlockingConnection creates a layer on top 
of Pika's asynchronous core providing methods that will block until their expected response has returned. Due to the asynchronous nature of the `Basic.Deliver` and `Basic.Return` calls from RabbitMQ to your application, you can still implement continuation-passing style asynchronous methods if you'd like to receive messages from RabbitMQ using :meth:`basic_consume ` or if you want to be notified of a delivery failure when using :meth:`basic_publish ` . For more information about communicating with the blocking_connection adapter, be sure to check out the :class:`BlockingChannel ` class which implements the :class:`Channel ` based communication for the blocking_connection adapter. """ # Connection-opened callback args _OnOpenedArgs = namedtuple('BlockingConnection__OnOpenedArgs', 'connection') # Connection-establishment error callback args _OnOpenErrorArgs = namedtuple('BlockingConnection__OnOpenErrorArgs', 'connection error_text') # Connection-closing callback args _OnClosedArgs = namedtuple('BlockingConnection__OnClosedArgs', 'connection reason_code reason_text') # Channel-opened callback args _OnChannelOpenedArgs = namedtuple( 'BlockingConnection__OnChannelOpenedArgs', 'channel') def __init__(self, parameters=None, _impl_class=None): """Create a new instance of the Connection object. 
:param pika.connection.Parameters parameters: Connection parameters :param _impl_class: for tests/debugging only; implementation class; None=default :raises RuntimeError: """ # Used by the _acquire_event_dispatch decorator; when already greater # than 0, event dispatch is already acquired higher up the call stack self._event_dispatch_suspend_depth = 0 # Connection-specific events that are ready for dispatch: _TimerEvt, # _ConnectionBlockedEvt, _ConnectionUnblockedEvt self._ready_events = deque() # Channel numbers of channels that are requesting a call to their # BlockingChannel._dispatch_events method; See # `_request_channel_dispatch` self._channels_pending_dispatch = set() # Receives on_open_callback args from Connection self._opened_result = _CallbackResult(self._OnOpenedArgs) # Receives on_open_error_callback args from Connection self._open_error_result = _CallbackResult(self._OnOpenErrorArgs) # Receives on_close_callback args from Connection self._closed_result = _CallbackResult(self._OnClosedArgs) # Set to True when when user calls close() on the connection # NOTE: this is a workaround to detect socket error because # on_close_callback passes reason_code=0 when called due to socket error self._user_initiated_close = False impl_class = _impl_class or SelectConnection self._impl = impl_class( parameters=parameters, on_open_callback=self._opened_result.set_value_once, on_open_error_callback=self._open_error_result.set_value_once, on_close_callback=self._closed_result.set_value_once, stop_ioloop_on_close=False) self._process_io_for_connection_setup() def _cleanup(self): """Clean up members that might inhibit garbage collection""" self._ready_events.clear() self._opened_result.reset() self._open_error_result.reset() self._closed_result.reset() @contextlib.contextmanager def _acquire_event_dispatch(self): """ Context manager that controls access to event dispatcher for preventing reentrancy. 
The "as" value is True if the managed code block owns the event dispatcher and False if caller higher up in the call stack already owns it. Only managed code that gets ownership (got True) is permitted to dispatch """ try: # __enter__ part self._event_dispatch_suspend_depth += 1 yield self._event_dispatch_suspend_depth == 1 finally: # __exit__ part self._event_dispatch_suspend_depth -= 1 def _process_io_for_connection_setup(self): # pylint: disable=C0103 """ Perform follow-up processing for connection setup request: flush connection output and process input while waiting for connection-open or connection-error. :raises AMQPConnectionError: on connection open error """ self._flush_output(self._opened_result.is_ready, self._open_error_result.is_ready) if self._open_error_result.ready: raise exceptions.AMQPConnectionError( self._open_error_result.value.error_text) assert self._opened_result.ready assert self._opened_result.value.connection is self._impl def _flush_output(self, *waiters): """ Flush output and process input while waiting for any of the given callbacks to return true. The wait is aborted upon connection-close. Otherwise, processing continues until the output is flushed AND at least one of the callbacks returns true. If there are no callbacks, then processing ends when all output is flushed. :param waiters: sequence of zero or more callables taking no args and returning true when it's time to stop processing. Their results are OR'ed together. 
""" if self._impl.is_closed: raise exceptions.ConnectionClosed() # Conditions for terminating the processing loop: # connection closed # OR # empty outbound buffer and no waiters # OR # empty outbound buffer and any waiter is ready is_done = (lambda: self._closed_result.ready or (not self._impl.outbound_buffer and (not waiters or any(ready() for ready in waiters)))) # Process I/O until our completion condition is satisified while not is_done(): self._impl.ioloop.poll() self._impl.ioloop.process_timeouts() if self._closed_result.ready: try: result = self._closed_result.value if result.reason_code not in [0, 200]: LOGGER.critical('Connection close detected; result=%r', result) raise exceptions.ConnectionClosed(result.reason_code, result.reason_text) elif not self._user_initiated_close: # NOTE: unfortunately, upon socket error, on_close_callback # presently passes reason_code=0, so we don't detect that as # an error LOGGER.critical('Connection close detected') raise exceptions.ConnectionClosed() else: LOGGER.info('Connection closed; result=%r', result) finally: self._cleanup() def _request_channel_dispatch(self, channel_number): """Called by BlockingChannel instances to request a call to their _dispatch_events method or to terminate `process_data_events`; BlockingConnection will honor these requests from a safe context. 
:param int channel_number: positive channel number to request a call to the channel's `_dispatch_events`; a negative channel number to request termination of `process_data_events` """ self._channels_pending_dispatch.add(channel_number) def _dispatch_channel_events(self): """Invoke the `_dispatch_events` method on open channels that requested it """ if not self._channels_pending_dispatch: return with self._acquire_event_dispatch() as dispatch_acquired: if not dispatch_acquired: # Nested dispatch or dispatch blocked higher in call stack return candidates = list(self._channels_pending_dispatch) self._channels_pending_dispatch.clear() for channel_number in candidates: if channel_number < 0: # This was meant to terminate process_data_events continue try: impl_channel = self._impl._channels[channel_number] except KeyError: continue if impl_channel.is_open: impl_channel._get_cookie()._dispatch_events() def _on_timer_ready(self, evt): """Handle expiry of a timer that was registered via `add_timeout` :param _TimerEvt evt: """ self._ready_events.append(evt) def _on_connection_blocked(self, user_callback, method_frame): """Handle Connection.Blocked notification from RabbitMQ broker :param callable user_callback: callback_method passed to `add_on_connection_blocked_callback` :param pika.frame.Method method_frame: method frame having `method` member of type `pika.spec.Connection.Blocked` """ self._ready_events.append( _ConnectionBlockedEvt(user_callback, method_frame)) def _on_connection_unblocked(self, user_callback, method_frame): """Handle Connection.Unblocked notification from RabbitMQ broker :param callable user_callback: callback_method passed to `add_on_connection_unblocked_callback` :param pika.frame.Method method_frame: method frame having `method` member of type `pika.spec.Connection.Blocked` """ self._ready_events.append( _ConnectionUnblockedEvt(user_callback, method_frame)) def _dispatch_connection_events(self): """Dispatch ready connection events""" if not 
self._ready_events: return with self._acquire_event_dispatch() as dispatch_acquired: if not dispatch_acquired: # Nested dispatch or dispatch blocked higher in call stack return # Limit dispatch to the number of currently ready events to avoid # getting stuck in this loop for _ in compat.xrange(len(self._ready_events)): try: evt = self._ready_events.popleft() except IndexError: # Some events (e.g., timers) must have been cancelled break evt.dispatch() def add_on_connection_blocked_callback(self, # pylint: disable=C0103 callback_method): """Add a callback to be notified when RabbitMQ has sent a `Connection.Blocked` frame indicating that RabbitMQ is low on resources. Publishers can use this to voluntarily suspend publishing, instead of relying on back pressure throttling. The callback will be passed the `Connection.Blocked` method frame. :param method callback_method: Callback to call on `Connection.Blocked`, having the signature callback_method(pika.frame.Method), where the method frame's `method` member is of type `pika.spec.Connection.Blocked` """ self._impl.add_on_connection_blocked_callback( functools.partial(self._on_connection_blocked, callback_method)) def add_on_connection_unblocked_callback(self, # pylint: disable=C0103 callback_method): """Add a callback to be notified when RabbitMQ has sent a `Connection.Unblocked` frame letting publishers know it's ok to start publishing again. The callback will be passed the `Connection.Unblocked` method frame. :param method callback_method: Callback to call on `Connection.Unblocked`, having the signature callback_method(pika.frame.Method), where the method frame's `method` member is of type `pika.spec.Connection.Unblocked` """ self._impl.add_on_connection_unblocked_callback( functools.partial(self._on_connection_unblocked, callback_method)) def add_timeout(self, deadline, callback_method): """Create a single-shot timer to fire after deadline seconds. 
Do not confuse with Tornado's timeout where you pass in the time you want to have your callback called. Only pass in the seconds until it's to be called. NOTE: the timer callbacks are dispatched only in the scope of specially-designated methods: see `BlockingConnection.process_data_events` and `BlockingChannel.start_consuming`. :param float deadline: The number of seconds to wait to call callback :param callable callback_method: The callback method with the signature callback_method() :returns: opaque timer id """ if not callable(callback_method): raise ValueError( 'callback_method parameter must be callable, but got %r' % (callback_method,)) evt = _TimerEvt(callback=callback_method) timer_id = self._impl.add_timeout( deadline, functools.partial(self._on_timer_ready, evt)) evt.timer_id = timer_id return timer_id def remove_timeout(self, timeout_id): """Remove a timer if it's still in the timeout stack :param timeout_id: The opaque timer id to remove """ # Remove from the impl's timeout stack self._impl.remove_timeout(timeout_id) # Remove from ready events, if the timer fired already for i, evt in enumerate(self._ready_events): if isinstance(evt, _TimerEvt) and evt.timer_id == timeout_id: index_to_remove = i break else: # Not found return del self._ready_events[index_to_remove] def close(self, reply_code=200, reply_text='Normal shutdown'): """Disconnect from RabbitMQ. If there are any open channels, it will attempt to close them prior to fully disconnecting. Channels which have active consumers will attempt to send a Basic.Cancel to RabbitMQ to cleanly stop the delivery of messages prior to closing the channel. 
:param int reply_code: The code number for the close :param str reply_text: The text reason for the close """ LOGGER.info('Closing connection (%s): %s', reply_code, reply_text) self._user_initiated_close = True # Close channels that remain opened for impl_channel in pika.compat.dictvalues(self._impl._channels): channel = impl_channel._get_cookie() if channel.is_open: channel.close(reply_code, reply_text) # Close the connection self._impl.close(reply_code, reply_text) self._flush_output(self._closed_result.is_ready) def process_data_events(self, time_limit=0): """Will make sure that data events are processed. Dispatches timer and channel callbacks if not called from the scope of BlockingConnection or BlockingChannel callback. Your app can block on this method. :param float time_limit: suggested upper bound on processing time in seconds. The actual blocking time depends on the granularity of the underlying ioloop. Zero means return as soon as possible. None means there is no limit on processing time and the function will block until I/O produces actionalable events. Defaults to 0 for backward compatibility. This parameter is NEW in pika 0.10.0. """ common_terminator = lambda: bool( self._channels_pending_dispatch or self._ready_events) if time_limit is None: self._flush_output(common_terminator) else: with _IoloopTimerContext(time_limit, self._impl) as timer: self._flush_output(timer.is_ready, common_terminator) if self._ready_events: self._dispatch_connection_events() if self._channels_pending_dispatch: self._dispatch_channel_events() def sleep(self, duration): """A safer way to sleep than calling time.sleep() directly that would keep the adapter from ignoring frames sent from the broker. The connection will "sleep" or block the number of seconds specified in duration in small intervals. 
:param float duration: The time to sleep in seconds """ assert duration >= 0, duration deadline = time.time() + duration time_limit = duration # Process events at least once while True: self.process_data_events(time_limit) time_limit = deadline - time.time() if time_limit <= 0: break def channel(self, channel_number=None): """Create a new channel with the next available channel number or pass in a channel number to use. Must be non-zero if you would like to specify but it is recommended that you let Pika manage the channel numbers. :rtype: pika.synchronous_connection.BlockingChannel """ with _CallbackResult(self._OnChannelOpenedArgs) as opened_args: impl_channel = self._impl.channel( on_open_callback=opened_args.set_value_once, channel_number=channel_number) # Create our proxy channel channel = BlockingChannel(impl_channel, self) # Link implementation channel with our proxy channel impl_channel._set_cookie(channel) # Drive I/O until Channel.Open-ok channel._flush_output(opened_args.is_ready) return channel def __enter__(self): # Prepare `with` context return self def __exit__(self, tp, value, traceback): # Close connection after `with` context self.close() # # Connections state properties # @property def is_closed(self): """ Returns a boolean reporting the current connection state. """ return self._impl.is_closed @property def is_closing(self): """ Returns a boolean reporting the current connection state. """ return self._impl.is_closing @property def is_open(self): """ Returns a boolean reporting the current connection state. """ return self._impl.is_open # # Properties that reflect server capabilities for the current connection # @property def basic_nack_supported(self): """Specifies if the server supports basic.nack on the active connection. :rtype: bool """ return self._impl.basic_nack @property def consumer_cancel_notify_supported(self): # pylint: disable=C0103 """Specifies if the server supports consumer cancel notification on the active connection. 
        :rtype: bool

        """
        return self._impl.consumer_cancel_notify

    @property
    def exchange_exchange_bindings_supported(self):  # pylint: disable=C0103
        """Specifies if the active connection supports exchange to exchange
        bindings.

        :rtype: bool

        """
        return self._impl.exchange_exchange_bindings

    @property
    def publisher_confirms_supported(self):
        """Specifies if the active connection can use publisher confirmations.

        :rtype: bool

        """
        return self._impl.publisher_confirms

    # Legacy property names for backward compatibility
    basic_nack = basic_nack_supported
    consumer_cancel_notify = consumer_cancel_notify_supported
    exchange_exchange_bindings = exchange_exchange_bindings_supported
    publisher_confirms = publisher_confirms_supported


class _ChannelPendingEvt(object):  # pylint: disable=R0903
    """Base class for BlockingChannel pending events"""
    pass


class _ConsumerDeliveryEvt(_ChannelPendingEvt):  # pylint: disable=R0903
    """This event represents consumer message delivery `Basic.Deliver`; it
    contains method, properties, and body of the delivered message.
    """

    __slots__ = ('method', 'properties', 'body')

    def __init__(self, method, properties, body):
        """
        :param spec.Basic.Deliver method: NOTE: consumer_tag and delivery_tag
            are valid only within source channel
        :param spec.BasicProperties properties: message properties
        :param body: message body; empty string if no body
        :type body: str or unicode
        """
        self.method = method
        self.properties = properties
        self.body = body


class _ConsumerCancellationEvt(_ChannelPendingEvt):  # pylint: disable=R0903
    """This event represents server-initiated consumer cancellation delivered
    to client via Basic.Cancel.
After receiving Basic.Cancel, there will be no further deliveries for the consumer identified by `consumer_tag` in `Basic.Cancel` """ __slots__ = ('method_frame') def __init__(self, method_frame): """ :param pika.frame.Method method_frame: method frame with method of type `spec.Basic.Cancel` """ self.method_frame = method_frame def __repr__(self): return '%s(method_frame=%r)' % (self.__class__.__name__, self.method_frame) @property def method(self): """method of type spec.Basic.Cancel""" return self.method_frame.method class _ReturnedMessageEvt(_ChannelPendingEvt): # pylint: disable=R0903 """This event represents a message returned by broker via `Basic.Return`""" __slots__ = ('callback', 'channel', 'method', 'properties', 'body') def __init__(self, callback, channel, method, properties, body): # pylint: disable=R0913 """ :param callable callback: user's callback, having the signature callback(channel, method, properties, body), where channel: pika.Channel method: pika.spec.Basic.Return properties: pika.spec.BasicProperties body: str, unicode, or bytes (python 3.x) :param pika.Channel channel: :param pika.spec.Basic.Return method: :param pika.spec.BasicProperties properties: :param body: str, unicode, or bytes (python 3.x) """ self.callback = callback self.channel = channel self.method = method self.properties = properties self.body = body def __repr__(self): return ('%s(callback=%r, channel=%r, method=%r, properties=%r, ' 'body=%.300r') % (self.__class__.__name__, self.callback, self.channel, self.method, self.properties, self.body) def dispatch(self): """Dispatch user's callback""" self.callback(self.channel, self.method, self.properties, self.body) class ReturnedMessage(object): # pylint: disable=R0903 """Represents a message returned via Basic.Return in publish-acknowledgments mode """ __slots__ = ('method', 'properties', 'body') def __init__(self, method, properties, body): """ :param spec.Basic.Return method: :param spec.BasicProperties properties: message 
            properties
        :param body: message body; empty string if no body
        :type body: str or unicode
        """
        self.method = method
        self.properties = properties
        self.body = body


class _ConsumerInfo(object):
    """Information about an active consumer"""

    __slots__ = ('consumer_tag', 'no_ack', 'consumer_cb',
                 'alternate_event_sink', 'state')

    # Consumer states
    SETTING_UP = 1
    ACTIVE = 2
    TEARING_DOWN = 3
    CANCELLED_BY_BROKER = 4

    def __init__(self, consumer_tag, no_ack, consumer_cb=None,
                 alternate_event_sink=None):
        """
        NOTE: exactly one of consumer_cb/alternate_event_sink must be
        non-None.

        :param str consumer_tag:
        :param bool no_ack: the no-ack value for the consumer
        :param callable consumer_cb: The function for dispatching messages to
            user, having the signature:
            consumer_callback(channel, method, properties, body)
                channel: BlockingChannel
                method: spec.Basic.Deliver
                properties: spec.BasicProperties
                body: str or unicode
        :param callable alternate_event_sink: if specified,
            _ConsumerDeliveryEvt and _ConsumerCancellationEvt objects will be
            diverted to this callback instead of being deposited in the
            channel's `_pending_events` container.
Signature: alternate_event_sink(evt) """ assert (consumer_cb is None) != (alternate_event_sink is None), ( 'exactly one of consumer_cb/alternate_event_sink must be non-None', consumer_cb, alternate_event_sink) self.consumer_tag = consumer_tag self.no_ack = no_ack self.consumer_cb = consumer_cb self.alternate_event_sink = alternate_event_sink self.state = self.SETTING_UP @property def setting_up(self): """True if in SETTING_UP state""" return self.state == self.SETTING_UP @property def active(self): """True if in ACTIVE state""" return self.state == self.ACTIVE @property def tearing_down(self): """True if in TEARING_DOWN state""" return self.state == self.TEARING_DOWN @property def cancelled_by_broker(self): """True if in CANCELLED_BY_BROKER state""" return self.state == self.CANCELLED_BY_BROKER class _QueueConsumerGeneratorInfo(object): # pylint: disable=R0903 """Container for information about the active queue consumer generator """ __slots__ = ('params', 'consumer_tag', 'pending_events') def __init__(self, params, consumer_tag): """ :params tuple params: a three-tuple (queue, no_ack, exclusive) that were used to create the queue consumer :param str consumer_tag: consumer tag """ self.params = params self.consumer_tag = consumer_tag #self.messages = deque() # Holds pending events of types _ConsumerDeliveryEvt and # _ConsumerCancellationEvt self.pending_events = deque() def __repr__(self): return '%s(params=%r, consumer_tag=%r)' % ( self.__class__.__name__, self.params, self.consumer_tag) class BlockingChannel(object): # pylint: disable=R0904,R0902 """The BlockingChannel implements blocking semantics for most things that one would use callback-passing-style for with the :py:class:`~pika.channel.Channel` class. In addition, the `BlockingChannel` class implements a :term:`generator` that allows you to :doc:`consume messages ` without using callbacks. 
Example of creating a BlockingChannel:: import pika # Create our connection object connection = pika.BlockingConnection() # The returned object will be a synchronous channel channel = connection.channel() """ # Used as value_class with _CallbackResult for receiving Basic.GetOk args _RxMessageArgs = namedtuple( 'BlockingChannel__RxMessageArgs', [ 'channel', # implementation pika.Channel instance 'method', # Basic.GetOk 'properties', # pika.spec.BasicProperties 'body' # str, unicode, or bytes (python 3.x) ]) # For use as value_class with any _CallbackResult that expects method_frame # as the only arg _MethodFrameCallbackResultArgs = namedtuple( 'BlockingChannel__MethodFrameCallbackResultArgs', 'method_frame') # Broker's basic-ack/basic-nack args when delivery confirmation is enabled; # may concern a single or multiple messages _OnMessageConfirmationReportArgs = namedtuple( # pylint: disable=C0103 'BlockingChannel__OnMessageConfirmationReportArgs', 'method_frame') # Parameters for broker-initiated Channel.Close request: reply_code # holds the broker's non-zero error code and reply_text holds the # corresponding error message text. _OnChannelClosedByBrokerArgs = namedtuple( 'BlockingChannel__OnChannelClosedByBrokerArgs', 'method_frame') # For use as value_class with _CallbackResult expecting Channel.Flow # confirmation.
_FlowOkCallbackResultArgs = namedtuple( 'BlockingChannel__FlowOkCallbackResultArgs', 'active' # True if broker will start or continue sending; False if not ) _CONSUMER_CANCELLED_CB_KEY = 'blocking_channel_consumer_cancelled' def __init__(self, channel_impl, connection): """Create a new instance of the Channel :param channel_impl: Channel implementation object as returned from SelectConnection.channel() :param BlockingConnection connection: The connection object """ self._impl = channel_impl self._connection = connection # A mapping of consumer tags to _ConsumerInfo for active consumers self._consumer_infos = dict() # Queue consumer generator generator info of type # _QueueConsumerGeneratorInfo created by BlockingChannel.consume self._queue_consumer_generator = None # Whether RabbitMQ delivery confirmation has been enabled self._delivery_confirmation = False # Receives message delivery confirmation report (Basic.ack or # Basic.nack) from broker when delivery confirmations are enabled self._message_confirmation_result = _CallbackResult( self._OnMessageConfirmationReportArgs) # deque of pending events: _ConsumerDeliveryEvt and # _ConsumerCancellationEvt objects that will be returned by # `BlockingChannel.get_event()` self._pending_events = deque() # Holds a ReturnedMessage object representing a message received via # Basic.Return in publisher-acknowledgments mode. 
self._puback_return = None # Receives Basic.ConsumeOk reply from server self._basic_consume_ok_result = _CallbackResult() # Receives the broker-initiated Channel.Close parameters self._channel_closed_by_broker_result = _CallbackResult( # pylint: disable=C0103 self._OnChannelClosedByBrokerArgs) # Receives args from Basic.GetEmpty response # http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.get self._basic_getempty_result = _CallbackResult( self._MethodFrameCallbackResultArgs) self._impl.add_on_cancel_callback(self._on_consumer_cancelled_by_broker) self._impl.add_callback( self._basic_consume_ok_result.signal_once, replies=[pika.spec.Basic.ConsumeOk], one_shot=False) self._impl.add_callback( self._channel_closed_by_broker_result.set_value_once, replies=[pika.spec.Channel.Close], one_shot=True) self._impl.add_callback( self._basic_getempty_result.set_value_once, replies=[pika.spec.Basic.GetEmpty], one_shot=False) LOGGER.info("Created channel=%s", self.channel_number) def _cleanup(self): """Clean up members that might inhibit garbage collection""" self._message_confirmation_result.reset() self._pending_events = deque() self._consumer_infos = dict() def __int__(self): """Return the channel object as its channel number :rtype: int """ return self.channel_number @property def channel_number(self): """Channel number""" return self._impl.channel_number @property def connection(self): """The channel's BlockingConnection instance""" return self._connection @property def is_closed(self): """Returns True if the channel is closed. :rtype: bool """ return self._impl.is_closed @property def is_closing(self): """Returns True if the channel is closing. :rtype: bool """ return self._impl.is_closing @property def is_open(self): """Returns True if the channel is open.
:rtype: bool """ return self._impl.is_open _ALWAYS_READY_WAITERS = ((lambda: True), ) def _flush_output(self, *waiters): """ Flush output and process input while waiting for any of the given callbacks to return true. The wait is aborted upon channel-close or connection-close. Otherwise, processing continues until the output is flushed AND at least one of the callbacks returns true. If there are no callbacks, then processing ends when all output is flushed. :param waiters: sequence of zero or more callables taking no args and returning true when it's time to stop processing. Their results are OR'ed together. """ if self._impl.is_closed: raise exceptions.ChannelClosed() if not waiters: waiters = self._ALWAYS_READY_WAITERS self._connection._flush_output( self._channel_closed_by_broker_result.is_ready, *waiters) if self._channel_closed_by_broker_result: # Channel was force-closed by broker self._cleanup() method = ( self._channel_closed_by_broker_result.value.method_frame.method) raise exceptions.ChannelClosed(method.reply_code, method.reply_text) def _on_puback_message_returned(self, channel, method, properties, body): """Called as the result of Basic.Return from broker in publisher-acknowledgements mode. Saves the info as a ReturnedMessage instance in self._puback_return. 
:param pika.Channel channel: our self._impl channel :param pika.spec.Basic.Return method: :param pika.spec.BasicProperties properties: message properties :param body: returned message body; empty string if no body :type body: str, unicode """ assert channel is self._impl, ( channel.channel_number, self.channel_number) assert isinstance(method, pika.spec.Basic.Return), method assert isinstance(properties, pika.spec.BasicProperties), ( properties) LOGGER.warn( "Published message was returned: _delivery_confirmation=%s; " "channel=%s; method=%r; properties=%r; body_size=%d; " "body_prefix=%.255r", self._delivery_confirmation, channel.channel_number, method, properties, len(body) if body is not None else None, body) self._puback_return = ReturnedMessage(method, properties, body) def _add_pending_event(self, evt): """Append an event to the channel's list of events that are ready for dispatch to user and signal our connection that this channel is ready for event dispatch :param _ChannelPendingEvt evt: an event derived from _ChannelPendingEvt """ self._pending_events.append(evt) self.connection._request_channel_dispatch(self.channel_number) def _on_consumer_cancelled_by_broker(self, # pylint: disable=C0103 method_frame): """Called by impl when broker cancels consumer via Basic.Cancel. This is a RabbitMQ-specific feature. The circumstances include deletion of queue being consumed as well as failure of a HA node responsible for the queue being consumed. 
:param pika.frame.Method method_frame: method frame with the `spec.Basic.Cancel` method """ evt = _ConsumerCancellationEvt(method_frame) consumer = self._consumer_infos[method_frame.method.consumer_tag] # Don't interfere with client-initiated cancellation flow if not consumer.tearing_down: consumer.state = _ConsumerInfo.CANCELLED_BY_BROKER if consumer.alternate_event_sink is not None: consumer.alternate_event_sink(evt) else: self._add_pending_event(evt) def _on_consumer_message_delivery(self, channel, # pylint: disable=W0613 method, properties, body): """Called by impl when a message is delivered for a consumer :param Channel channel: The implementation channel object :param spec.Basic.Deliver method: :param pika.spec.BasicProperties properties: message properties :param body: delivered message body; empty string if no body :type body: str, unicode, or bytes (python 3.x) """ evt = _ConsumerDeliveryEvt(method, properties, body) consumer = self._consumer_infos[method.consumer_tag] if consumer.alternate_event_sink is not None: consumer.alternate_event_sink(evt) else: self._add_pending_event(evt) def _on_consumer_generator_event(self, evt): """Sink for the queue consumer generator's consumer events; append the event to queue consumer generator's pending events buffer. :param evt: an object of type _ConsumerDeliveryEvt or _ConsumerCancellationEvt """ self._queue_consumer_generator.pending_events.append(evt) # Schedule termination of connection.process_data_events using a # negative channel number self.connection._request_channel_dispatch(-self.channel_number) def _cancel_all_consumers(self): """Cancel all consumers. NOTE: pending non-ackable messages will be lost; pending ackable messages will be rejected. 
""" if self._consumer_infos: LOGGER.debug('Cancelling %i consumers', len(self._consumer_infos)) if self._queue_consumer_generator is not None: # Cancel queue consumer generator self.cancel() # Cancel consumers created via basic_consume for consumer_tag in pika.compat.dictkeys(self._consumer_infos): self.basic_cancel(consumer_tag) def _dispatch_events(self): """Called by BlockingConnection to dispatch pending events. `BlockingChannel` schedules this callback via `BlockingConnection._request_channel_dispatch` """ while self._pending_events: evt = self._pending_events.popleft() if type(evt) is _ConsumerDeliveryEvt: consumer_info = self._consumer_infos[evt.method.consumer_tag] consumer_info.consumer_cb(self, evt.method, evt.properties, evt.body) elif type(evt) is _ConsumerCancellationEvt: del self._consumer_infos[evt.method_frame.method.consumer_tag] self._impl.callbacks.process(self.channel_number, self._CONSUMER_CANCELLED_CB_KEY, self, evt.method_frame) else: evt.dispatch() def close(self, reply_code=0, reply_text="Normal Shutdown"): """Will invoke a clean shutdown of the channel with the AMQP Broker. :param int reply_code: The reply code to close the channel with :param str reply_text: The reply text to close the channel with """ LOGGER.info('Channel.close(%s, %s)', reply_code, reply_text) # Cancel remaining consumers self._cancel_all_consumers() # Close the channel try: with _CallbackResult() as close_ok_result: self._impl.add_callback(callback=close_ok_result.signal_once, replies=[pika.spec.Channel.CloseOk], one_shot=True) self._impl.close(reply_code=reply_code, reply_text=reply_text) self._flush_output(close_ok_result.is_ready) finally: self._cleanup() def flow(self, active): """Turn Channel flow control off and on. NOTE: RabbitMQ doesn't support active=False; per https://www.rabbitmq.com/specification.html: "active=false is not supported by the server. 
Limiting prefetch with basic.qos provides much better control" For more information, please reference: http://www.rabbitmq.com/amqp-0-9-1-reference.html#channel.flow :param bool active: Turn flow on (True) or off (False) :returns: True if broker will start or continue sending; False if not :rtype: bool """ with _CallbackResult(self._FlowOkCallbackResultArgs) as flow_ok_result: self._impl.flow(callback=flow_ok_result.set_value_once, active=active) self._flush_output(flow_ok_result.is_ready) return flow_ok_result.value.active def add_on_cancel_callback(self, callback): """Pass a callback function that will be called when Basic.Cancel is sent by the broker. The callback function should receive a method frame parameter. :param callable callback: a callable for handling broker's Basic.Cancel notification with the call signature: callback(method_frame) where method_frame is of type `pika.frame.Method` with method of type `spec.Basic.Cancel` """ self._impl.callbacks.add(self.channel_number, self._CONSUMER_CANCELLED_CB_KEY, callback, one_shot=False) def add_on_return_callback(self, callback): """Pass a callback function that will be called when a published message is rejected and returned by the server via `Basic.Return`. :param callable callback: The method to call on callback with the signature callback(channel, method, properties, body), where channel: pika.Channel method: pika.spec.Basic.Return properties: pika.spec.BasicProperties body: str, unicode, or bytes (python 3.x) """ self._impl.add_on_return_callback( lambda _channel, method, properties, body: ( self._add_pending_event( _ReturnedMessageEvt( callback, self, method, properties, body)))) def basic_consume(self, # pylint: disable=R0913 consumer_callback, queue, no_ack=False, exclusive=False, consumer_tag=None, arguments=None): """Sends the AMQP command Basic.Consume to the broker and binds messages for the consumer_tag to the consumer callback. 
If you do not pass in a consumer_tag, one will be automatically generated for you. Returns the consumer tag. NOTE: the consumer callbacks are dispatched only in the scope of specially-designated methods: see `BlockingConnection.process_data_events` and `BlockingChannel.start_consuming`. For more information about Basic.Consume, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.consume :param callable consumer_callback: The function for dispatching messages to user, having the signature: consumer_callback(channel, method, properties, body) channel: BlockingChannel method: spec.Basic.Deliver properties: spec.BasicProperties body: str or unicode :param queue: The queue to consume from :type queue: str or unicode :param bool no_ack: Tell the broker to not expect a response (i.e., no ack/nack) :param bool exclusive: Don't allow other consumers on the queue :param consumer_tag: You may specify your own consumer tag; if left empty, a consumer tag will be generated automatically :type consumer_tag: str or unicode :param dict arguments: Custom key/value pair arguments for the consumer :returns: consumer tag :rtype: str :raises pika.exceptions.DuplicateConsumerTag: if consumer with given consumer_tag is already present. """ if not callable(consumer_callback): raise ValueError('consumer callback must be callable; got %r' % consumer_callback) return self._basic_consume_impl( queue=queue, no_ack=no_ack, exclusive=exclusive, consumer_tag=consumer_tag, arguments=arguments, consumer_callback=consumer_callback) def _basic_consume_impl(self, # pylint: disable=R0913 queue, no_ack, exclusive, consumer_tag, arguments=None, consumer_callback=None, alternate_event_sink=None): """The low-level implementation used by `basic_consume` and `consume`. See `basic_consume` docstring for more info. NOTE: exactly one of consumer_callback/alternate_event_sink must be non-None. This method has one additional parameter alternate_event_sink over the args described in `basic_consume`.
:param callable alternate_event_sink: if specified, _ConsumerDeliveryEvt and _ConsumerCancellationEvt objects will be diverted to this callback instead of being deposited in the channel's `_pending_events` container. Signature: alternate_event_sink(evt) :raises pika.exceptions.DuplicateConsumerTag: if consumer with given consumer_tag is already present. """ if (consumer_callback is None) == (alternate_event_sink is None): raise ValueError( ('exactly one of consumer_callback/alternate_event_sink must ' 'be non-None', consumer_callback, alternate_event_sink)) if not consumer_tag: # Need a consumer tag to register consumer info before sending # request to broker, because I/O might dispatch incoming messages # immediately following Basic.Consume-ok before _flush_output # returns consumer_tag = self._impl._generate_consumer_tag() if consumer_tag in self._consumer_infos: raise exceptions.DuplicateConsumerTag(consumer_tag) # Create new consumer self._consumer_infos[consumer_tag] = _ConsumerInfo( consumer_tag, no_ack=no_ack, consumer_cb=consumer_callback, alternate_event_sink=alternate_event_sink) try: with self._basic_consume_ok_result as ok_result: tag = self._impl.basic_consume( consumer_callback=self._on_consumer_message_delivery, queue=queue, no_ack=no_ack, exclusive=exclusive, consumer_tag=consumer_tag, arguments=arguments) assert tag == consumer_tag, (tag, consumer_tag) self._flush_output(ok_result.is_ready) except Exception: # If channel was closed, self._consumer_infos will be empty if consumer_tag in self._consumer_infos: del self._consumer_infos[consumer_tag] raise # NOTE: Consumer could get cancelled by broker immediately after opening # (e.g., queue getting deleted externally) if self._consumer_infos[consumer_tag].setting_up: self._consumer_infos[consumer_tag].state = _ConsumerInfo.ACTIVE return consumer_tag def basic_cancel(self, consumer_tag): """This method cancels a consumer. 
This does not affect already delivered messages, but it does mean the server will not send any more messages for that consumer. The client may receive an arbitrary number of messages in between sending the cancel method and receiving the cancel-ok reply. NOTE: When cancelling a no_ack=False consumer, this implementation automatically Nacks and suppresses any incoming messages that have not yet been dispatched to the consumer's callback. However, when cancelling a no_ack=True consumer, this method will return any pending messages that arrived before broker confirmed the cancellation. :param str consumer_tag: Identifier for the consumer; the result of passing a consumer_tag that was created on another channel is undefined (bad things will happen) :returns: (NEW IN pika 0.10.0) empty sequence for a no_ack=False consumer; for a no_ack=True consumer, returns a (possibly empty) sequence of pending messages that arrived before broker confirmed the cancellation (this is done instead of via consumer's callback in order to prevent reentrancy/recursion).
Each message is a three-tuple: (method, properties, body) method: spec.Basic.Deliver properties: spec.BasicProperties body: str or unicode """ try: consumer_info = self._consumer_infos[consumer_tag] except KeyError: LOGGER.warn("User is attempting to cancel an unknown consumer=%s; " "already cancelled by user or broker?", consumer_tag) return [] try: # Assertion failure here is most likely due to reentrance assert consumer_info.active or consumer_info.cancelled_by_broker, ( consumer_info.state) # Assertion failure here signals disconnect between consumer state # in BlockingConnection and Connection assert (consumer_info.cancelled_by_broker or consumer_tag in self._impl._consumers), consumer_tag no_ack = consumer_info.no_ack consumer_info.state = _ConsumerInfo.TEARING_DOWN with _CallbackResult() as cancel_ok_result: # Nack pending messages for no_ack=False consumer if not no_ack: pending_messages = self._remove_pending_deliveries( consumer_tag) if pending_messages: # NOTE: we use impl's basic_reject to avoid the # possibility of redelivery before basic_cancel takes # control of nacking. # NOTE: we can't use basic_nack with the multiple option # to avoid nacking messages already held by our client.
for message in pending_messages: self._impl.basic_reject(message.method.delivery_tag, requeue=True) # Cancel the consumer; impl takes care of rejecting any # additional deliveries that arrive for a no_ack=False # consumer self._impl.basic_cancel( callback=cancel_ok_result.signal_once, consumer_tag=consumer_tag, nowait=False) # Flush output and wait for Basic.Cancel-ok or # broker-initiated Basic.Cancel self._flush_output( cancel_ok_result.is_ready, lambda: consumer_tag not in self._impl._consumers) if no_ack: # Return pending messages for no_ack=True consumer return [ (evt.method, evt.properties, evt.body) for evt in self._remove_pending_deliveries(consumer_tag)] else: # impl takes care of rejecting any incoming deliveries during # cancellation messages = self._remove_pending_deliveries(consumer_tag) assert not messages, messages return [] finally: # NOTE: The entry could be purged if channel or connection closes if consumer_tag in self._consumer_infos: del self._consumer_infos[consumer_tag] def _remove_pending_deliveries(self, consumer_tag): """Extract _ConsumerDeliveryEvt objects destined for the given consumer from pending events, discarding the _ConsumerCancellationEvt, if any :param str consumer_tag: :returns: a (possibly empty) sequence of _ConsumerDeliveryEvt destined for the given consumer tag """ remaining_events = deque() unprocessed_messages = [] while self._pending_events: evt = self._pending_events.popleft() if type(evt) is _ConsumerDeliveryEvt: if evt.method.consumer_tag == consumer_tag: unprocessed_messages.append(evt) continue if type(evt) is _ConsumerCancellationEvt: if evt.method_frame.method.consumer_tag == consumer_tag: # A broker-initiated Basic.Cancel must have arrived # before our cancel request completed continue remaining_events.append(evt) self._pending_events = remaining_events return unprocessed_messages def start_consuming(self): """Processes I/O events and dispatches timers and `basic_consume` callbacks until all consumers are 
cancelled. NOTE: this blocking function may not be called from the scope of a pika callback, because dispatching `basic_consume` callbacks from this context would constitute recursion. :raises pika.exceptions.RecursionError: if called from the scope of a `BlockingConnection` or `BlockingChannel` callback """ # Check if called from the scope of an event dispatch callback with self.connection._acquire_event_dispatch() as dispatch_allowed: if not dispatch_allowed: raise exceptions.RecursionError( 'start_consuming may not be called from the scope of ' 'another BlockingConnection or BlockingChannel callback') # Process events as long as consumers exist on this channel while self._consumer_infos: self.connection.process_data_events(time_limit=None) def stop_consuming(self, consumer_tag=None): """ Cancels all consumers (or a single consumer, if consumer_tag is given), signalling the `start_consuming` loop to exit. NOTE: pending non-ackable messages will be lost; pending ackable messages will be rejected. :param str consumer_tag: if specified, cancel only the consumer with the given tag; by default, all consumers on this channel are cancelled. """ if consumer_tag: self.basic_cancel(consumer_tag) else: self._cancel_all_consumers() def consume(self, queue, no_ack=False, # pylint: disable=R0913 exclusive=False, arguments=None, inactivity_timeout=None): """Blocking consumption of a queue instead of via a callback. This method is a generator that yields each message as a tuple of method, properties, and body. The active generator iterator terminates when the consumer is cancelled by client or broker. Example: for method, properties, body in channel.consume('queue'): print body channel.basic_ack(method.delivery_tag) You should call `BlockingChannel.cancel()` when you escape out of the generator loop. If you don't cancel this consumer, then the next call on the same channel to `consume()` with the exact same (queue, no_ack, exclusive) parameters will resume the existing consumer generator; however, calling with different parameters will result in an exception.
:param queue: The queue name to consume :type queue: str or unicode :param bool no_ack: Tell the broker to not expect an ack/nack response :param bool exclusive: Don't allow other consumers on the queue :param dict arguments: Custom key/value pair arguments for the consumer :param float inactivity_timeout: if a number is given (in seconds), will cause the method to yield None after the given period of inactivity; this permits pseudo-regular maintenance activities to be carried out by the user while waiting for messages to arrive. If None is given (default), then the method blocks until the next event arrives. NOTE that timing granularity is limited by the timer resolution of the underlying implementation. NEW in pika 0.10.0. :yields: tuple(spec.Basic.Deliver, spec.BasicProperties, str or unicode) :raises ValueError: if consumer-creation parameters don't match those of the existing queue consumer generator, if any. NEW in pika 0.10.0 """ params = (queue, no_ack, exclusive) if self._queue_consumer_generator is not None: if params != self._queue_consumer_generator.params: raise ValueError( 'Consume with different params not allowed on existing ' 'queue consumer generator; previous params: %r; ' 'new params: %r' % (self._queue_consumer_generator.params, (queue, no_ack, exclusive))) else: LOGGER.debug('Creating new queue consumer generator; params: %r', params) # Need a consumer tag to register consumer info before sending # request to broker, because I/O might pick up incoming messages # in addition to Basic.Consume-ok consumer_tag = self._impl._generate_consumer_tag() self._queue_consumer_generator = _QueueConsumerGeneratorInfo( params, consumer_tag) try: self._basic_consume_impl( queue=queue, no_ack=no_ack, exclusive=exclusive, consumer_tag=consumer_tag, arguments=arguments, alternate_event_sink=self._on_consumer_generator_event) except Exception: self._queue_consumer_generator = None raise LOGGER.info('Created new queue consumer generator %r',
self._queue_consumer_generator) while self._queue_consumer_generator is not None: if self._queue_consumer_generator.pending_events: evt = self._queue_consumer_generator.pending_events.popleft() if type(evt) is _ConsumerCancellationEvt: # Consumer was cancelled by broker self._queue_consumer_generator = None break else: yield (evt.method, evt.properties, evt.body) continue # Wait for a message to arrive if inactivity_timeout is None: self.connection.process_data_events(time_limit=None) continue # Wait with inactivity timeout wait_start_time = time.time() wait_deadline = wait_start_time + inactivity_timeout delta = inactivity_timeout while (self._queue_consumer_generator is not None and not self._queue_consumer_generator.pending_events): self.connection.process_data_events(time_limit=delta) if not self._queue_consumer_generator: # Consumer was cancelled by client break if self._queue_consumer_generator.pending_events: # Got message(s) break delta = wait_deadline - time.time() if delta <= 0.0: # Signal inactivity timeout yield None break def get_waiting_message_count(self): """Returns the number of messages that may be retrieved from the current queue consumer generator via `BlockingChannel.consume` without blocking. NEW in pika 0.10.0 :rtype: int """ if self._queue_consumer_generator is not None: pending_events = self._queue_consumer_generator.pending_events count = len(pending_events) if count and type(pending_events[-1]) is _ConsumerCancellationEvt: count -= 1 else: count = 0 return count def cancel(self): """Cancel the queue consumer created by `BlockingChannel.consume`, rejecting all pending ackable messages. NOTE: If you're looking to cancel a consumer issued with BlockingChannel.basic_consume then you should call BlockingChannel.basic_cancel. :return int: The number of messages requeued by Basic.Nack.
NEW in 0.10.0: returns 0 """ if self._queue_consumer_generator is None: LOGGER.warning('cancel: queue consumer generator is inactive ' '(already cancelled by client or broker?)') return 0 try: _, no_ack, _ = self._queue_consumer_generator.params if not no_ack: # Reject messages held by queue consumer generator; NOTE: we # can't use basic_nack with the multiple option to avoid nacking # messages already held by our client. pending_events = self._queue_consumer_generator.pending_events for _ in compat.xrange(self.get_waiting_message_count()): evt = pending_events.popleft() self._impl.basic_reject(evt.method.delivery_tag, requeue=True) self.basic_cancel(self._queue_consumer_generator.consumer_tag) finally: self._queue_consumer_generator = None # Return 0 for compatibility with legacy implementation; the number of # nacked messages is not meaningful since only messages consumed with # no_ack=False may be nacked, and those arriving after calling # basic_cancel will be rejected automatically by impl channel, so we'll # never know how many of those were nacked. return 0 def basic_ack(self, delivery_tag=0, multiple=False): """Acknowledge one or more messages. When sent by the client, this method acknowledges one or more messages delivered via the Deliver or Get-Ok methods. When sent by server, this method acknowledges one or more messages published with the Publish method on a channel in confirm mode. The acknowledgement can be for a single message or a set of messages up to and including a specific message. :param int delivery-tag: The server-assigned delivery tag :param bool multiple: If set to True, the delivery tag is treated as "up to and including", so that multiple messages can be acknowledged with a single method. If set to False, the delivery tag refers to a single message. If the multiple field is 1, and the delivery tag is zero, this indicates acknowledgement of all outstanding messages. 
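The "multiple" semantics above can be modeled in pure Python (a sketch of the broker's bookkeeping, not a live broker call; the `settled_by` helper is hypothetical):

```python
def settled_by(delivery_tag, multiple, outstanding):
    """Model which outstanding delivery tags a single ack settles."""
    if not multiple:
        return [delivery_tag]
    if delivery_tag == 0:
        # multiple=True with delivery_tag=0 settles all outstanding messages
        return list(outstanding)
    # "up to and including" the given delivery tag
    return [tag for tag in outstanding if tag <= delivery_tag]

print(settled_by(3, True, [1, 2, 3, 4, 5]))   # [1, 2, 3]
print(settled_by(3, False, [1, 2, 3, 4, 5]))  # [3]
print(settled_by(0, True, [1, 2, 3]))         # [1, 2, 3]
```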
""" self._impl.basic_ack(delivery_tag=delivery_tag, multiple=multiple) self._flush_output() def basic_nack(self, delivery_tag=None, multiple=False, requeue=True): """This method allows a client to reject one or more incoming messages. It can be used to interrupt and cancel large incoming messages, or return untreatable messages to their original queue. :param int delivery-tag: The server-assigned delivery tag :param bool multiple: If set to True, the delivery tag is treated as "up to and including", so that multiple messages can be acknowledged with a single method. If set to False, the delivery tag refers to a single message. If the multiple field is 1, and the delivery tag is zero, this indicates acknowledgement of all outstanding messages. :param bool requeue: If requeue is true, the server will attempt to requeue the message. If requeue is false or the requeue attempt fails the messages are discarded or dead-lettered. """ self._impl.basic_nack(delivery_tag=delivery_tag, multiple=multiple, requeue=requeue) self._flush_output() def basic_get(self, queue=None, no_ack=False): """Get a single message from the AMQP broker. Returns a sequence with the method frame, message properties, and body. 
:param queue: Name of queue to get a message from :type queue: str or unicode :param bool no_ack: Tell the broker to not expect a reply :returns: a three-tuple; (None, None, None) if the queue was empty; otherwise (method, properties, body); NOTE: body may be None :rtype: (None, None, None)|(spec.Basic.GetOk, spec.BasicProperties, str or unicode or None) """ assert not self._basic_getempty_result # NOTE: nested with for python 2.6 compatibility with _CallbackResult(self._RxMessageArgs) as get_ok_result: with self._basic_getempty_result: self._impl.basic_get(callback=get_ok_result.set_value_once, queue=queue, no_ack=no_ack) self._flush_output(get_ok_result.is_ready, self._basic_getempty_result.is_ready) if get_ok_result: evt = get_ok_result.value return (evt.method, evt.properties, evt.body) else: assert self._basic_getempty_result, ( "wait completed without GetOk and GetEmpty") return None, None, None def basic_publish(self, exchange, routing_key, body, # pylint: disable=R0913 properties=None, mandatory=False, immediate=False): """Publish to the channel with the given exchange, routing key and body. Returns a boolean value indicating the success of the operation. This is the legacy BlockingChannel method for publishing. See also `BlockingChannel.publish` that provides more information about failures. For more information on basic_publish and what the parameters do, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.publish NOTE: mandatory and immediate may be enabled even without delivery confirmation, but in the absence of delivery confirmation the synchronous implementation has no way to know how long to wait for the Basic.Return or lack thereof.
:param exchange: The exchange to publish to :type exchange: str or unicode :param routing_key: The routing key to bind on :type routing_key: str or unicode :param body: The message body; empty string if no body :type body: str or unicode :param pika.spec.BasicProperties properties: message properties :param bool mandatory: The mandatory flag :param bool immediate: The immediate flag :returns: True if delivery confirmation is not enabled (NEW in pika 0.10.0); otherwise returns False if the message could not be delivered (Basic.nack and/or Basic.Return) and True if the message was delivered (Basic.ack and no Basic.Return) """ try: self.publish(exchange, routing_key, body, properties, mandatory, immediate) except (exceptions.NackError, exceptions.UnroutableError): return False else: return True def publish(self, exchange, routing_key, body, # pylint: disable=R0913 properties=None, mandatory=False, immediate=False): """Publish to the channel with the given exchange, routing key, and body. Unlike the legacy `BlockingChannel.basic_publish`, this method provides more information about failures via exceptions. For more information on basic_publish and what the parameters do, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.publish NOTE: mandatory and immediate may be enabled even without delivery confirmation, but in the absence of delivery confirmation the synchronous implementation has no way to know how long to wait for the Basic.Return.
:param exchange: The exchange to publish to :type exchange: str or unicode :param routing_key: The routing key to bind on :type routing_key: str or unicode :param body: The message body; empty string if no body :type body: str or unicode :param pika.spec.BasicProperties properties: message properties :param bool mandatory: The mandatory flag :param bool immediate: The immediate flag :raises UnroutableError: raised when a message published in publisher-acknowledgments mode (see `BlockingChannel.confirm_delivery`) is returned via `Basic.Return` followed by `Basic.Ack`. :raises NackError: raised when a message published in publisher-acknowledgements mode is Nack'ed by the broker. See `BlockingChannel.confirm_delivery`. """ if self._delivery_confirmation: # In publisher-acknowledgments mode with self._message_confirmation_result: self._impl.basic_publish(exchange=exchange, routing_key=routing_key, body=body, properties=properties, mandatory=mandatory, immediate=immediate) self._flush_output(self._message_confirmation_result.is_ready) conf_method = (self._message_confirmation_result.value .method_frame .method) if isinstance(conf_method, pika.spec.Basic.Nack): # Broker was unable to process the message due to internal # error LOGGER.warn( "Message was Nack'ed by broker: nack=%r; channel=%s; " "exchange=%s; routing_key=%s; mandatory=%r; " "immediate=%r", conf_method, self.channel_number, exchange, routing_key, mandatory, immediate) if self._puback_return is not None: returned_messages = [self._puback_return] self._puback_return = None else: returned_messages = [] raise exceptions.NackError(returned_messages) else: assert isinstance(conf_method, pika.spec.Basic.Ack), ( conf_method) if self._puback_return is not None: # Unroutable message was returned messages = [self._puback_return] self._puback_return = None raise exceptions.UnroutableError(messages) else: # In non-publisher-acknowledgments mode self._impl.basic_publish(exchange=exchange, routing_key=routing_key, 
body=body, properties=properties, mandatory=mandatory, immediate=immediate) self._flush_output() def basic_qos(self, prefetch_size=0, prefetch_count=0, all_channels=False): """Specify quality of service. This method requests a specific quality of service. The QoS can be specified for the current channel or for all channels on the connection. The client can request that messages be sent in advance so that when the client finishes processing a message, the following message is already held locally, rather than needing to be sent down the channel. Prefetching gives a performance improvement. :param int prefetch_size: This field specifies the prefetch window size. The server will send a message in advance if it is equal to or smaller in size than the available prefetch size (and also falls into other prefetch limits). May be set to zero, meaning "no specific limit", although other prefetch limits may still apply. The prefetch-size is ignored if the no-ack option is set in the consumer. :param int prefetch_count: Specifies a prefetch window in terms of whole messages. This field may be used in combination with the prefetch-size field; a message will only be sent in advance if both prefetch windows (and those at the channel and connection level) allow it. The prefetch-count is ignored if the no-ack option is set in the consumer. :param bool all_channels: Should the QoS apply to all channels """ with _CallbackResult() as qos_ok_result: self._impl.basic_qos(callback=qos_ok_result.signal_once, prefetch_size=prefetch_size, prefetch_count=prefetch_count, all_channels=all_channels) self._flush_output(qos_ok_result.is_ready) def basic_recover(self, requeue=False): """This method asks the server to redeliver all unacknowledged messages on a specified channel. Zero or more messages may be redelivered. This method replaces the asynchronous Recover. :param bool requeue: If False, the message will be redelivered to the original recipient. 
            If True, the server will attempt to requeue the message,
            potentially then delivering it to an alternative subscriber.
        """
        with _CallbackResult() as recover_ok_result:
            self._impl.basic_recover(callback=recover_ok_result.signal_once,
                                     requeue=requeue)
            self._flush_output(recover_ok_result.is_ready)

    def basic_reject(self, delivery_tag=None, requeue=True):
        """Reject an incoming message. This method allows a client to reject
        a message. It can be used to interrupt and cancel large incoming
        messages, or return untreatable messages to their original queue.

        :param int delivery_tag: The server-assigned delivery tag
        :param bool requeue: If requeue is true, the server will attempt to
                             requeue the message. If requeue is false or the
                             requeue attempt fails the messages are discarded
                             or dead-lettered.

        """
        self._impl.basic_reject(delivery_tag=delivery_tag, requeue=requeue)
        self._flush_output()

    def confirm_delivery(self):
        """Turn on RabbitMQ-proprietary Confirm mode in the channel.

        For more information see:
            http://www.rabbitmq.com/extensions.html#confirms
        """
        if self._delivery_confirmation:
            LOGGER.error('confirm_delivery: confirmation was already enabled '
                         'on channel=%s', self.channel_number)
            return

        with _CallbackResult() as select_ok_result:
            self._impl.add_callback(callback=select_ok_result.signal_once,
                                    replies=[pika.spec.Confirm.SelectOk],
                                    one_shot=True)

            self._impl.confirm_delivery(
                callback=self._message_confirmation_result.set_value_once,
                nowait=False)

            self._flush_output(select_ok_result.is_ready)

        self._delivery_confirmation = True

        # Unroutable messages returned after this point will be in the context
        # of publisher acknowledgments
        self._impl.add_on_return_callback(self._on_puback_message_returned)

    def exchange_declare(self, exchange=None,  # pylint: disable=R0913
                         exchange_type='direct', passive=False, durable=False,
                         auto_delete=False, internal=False,
                         arguments=None, **kwargs):
        """This method creates an exchange if it does not already exist, and
        if the exchange exists, verifies
        that it is of the correct and expected class.

        If passive is set, the server will reply with Declare-Ok if the
        exchange already exists with the same name; if the exchange does not
        already exist, the server MUST raise a channel exception with reply
        code 404 (not found).

        :param exchange: The exchange name consists of a non-empty sequence of
                         these characters: letters, digits, hyphen, underscore,
                         period, or colon.
        :type exchange: str or unicode
        :param str exchange_type: The exchange type to use
        :param bool passive: Perform a declare or just check to see if it
            exists
        :param bool durable: Survive a reboot of RabbitMQ
        :param bool auto_delete: Remove when no more queues are bound to it
        :param bool internal: Can only be published to by other exchanges
        :param dict arguments: Custom key/value pair arguments for the exchange
        :param str type: via kwargs: the deprecated exchange type parameter

        :returns: Method frame from the Exchange.Declare-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Exchange.DeclareOk`

        """
        assert len(kwargs) <= 1, kwargs

        with _CallbackResult(
                self._MethodFrameCallbackResultArgs) as declare_ok_result:
            self._impl.exchange_declare(
                callback=declare_ok_result.set_value_once,
                exchange=exchange,
                exchange_type=exchange_type,
                passive=passive,
                durable=durable,
                auto_delete=auto_delete,
                internal=internal,
                nowait=False,
                arguments=arguments,
                type=kwargs["type"] if kwargs else None)

            self._flush_output(declare_ok_result.is_ready)

        return declare_ok_result.value.method_frame

    def exchange_delete(self, exchange=None, if_unused=False):
        """Delete the exchange.
:param exchange: The exchange name :type exchange: str or unicode :param bool if_unused: only delete if the exchange is unused :returns: Method frame from the Exchange.Delete-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.DeleteOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as delete_ok_result: self._impl.exchange_delete( callback=delete_ok_result.set_value_once, exchange=exchange, if_unused=if_unused, nowait=False) self._flush_output(delete_ok_result.is_ready) return delete_ok_result.value.method_frame def exchange_bind(self, destination=None, source=None, routing_key='', arguments=None): """Bind an exchange to another exchange. :param destination: The destination exchange to bind :type destination: str or unicode :param source: The source exchange to bind to :type source: str or unicode :param routing_key: The routing key to bind on :type routing_key: str or unicode :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Exchange.Bind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.BindOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as bind_ok_result: self._impl.exchange_bind( callback=bind_ok_result.set_value_once, destination=destination, source=source, routing_key=routing_key, nowait=False, arguments=arguments) self._flush_output(bind_ok_result.is_ready) return bind_ok_result.value.method_frame def exchange_unbind(self, destination=None, source=None, routing_key='', arguments=None): """Unbind an exchange from another exchange. 
:param destination: The destination exchange to unbind :type destination: str or unicode :param source: The source exchange to unbind from :type source: str or unicode :param routing_key: The routing key to unbind :type routing_key: str or unicode :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Exchange.Unbind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.UnbindOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as unbind_ok_result: self._impl.exchange_unbind( callback=unbind_ok_result.set_value_once, destination=destination, source=source, routing_key=routing_key, nowait=False, arguments=arguments) self._flush_output(unbind_ok_result.is_ready) return unbind_ok_result.value.method_frame def queue_declare(self, queue='', passive=False, durable=False, # pylint: disable=R0913 exclusive=False, auto_delete=False, arguments=None): """Declare queue, create if needed. This method creates or checks a queue. When creating a new queue the client can specify various properties that control the durability of the queue and its contents, and the level of sharing for the queue. 
        Leave the queue name empty for an auto-named queue in RabbitMQ

        :param queue: The queue name
        :type queue: str or unicode; if empty string, the broker will create a
            unique queue name
        :param bool passive: Only check to see if the queue exists
        :param bool durable: Survive reboots of the broker
        :param bool exclusive: Only allow access by the current connection
        :param bool auto_delete: Delete after consumer cancels or disconnects
        :param dict arguments: Custom key/value arguments for the queue

        :returns: Method frame from the Queue.Declare-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Queue.DeclareOk`

        """
        with _CallbackResult(
                self._MethodFrameCallbackResultArgs) as declare_ok_result:
            self._impl.queue_declare(
                callback=declare_ok_result.set_value_once,
                queue=queue,
                passive=passive,
                durable=durable,
                exclusive=exclusive,
                auto_delete=auto_delete,
                nowait=False,
                arguments=arguments)

            self._flush_output(declare_ok_result.is_ready)

        return declare_ok_result.value.method_frame

    def queue_delete(self, queue='', if_unused=False, if_empty=False):
        """Delete a queue from the broker.
:param queue: The queue to delete :type queue: str or unicode :param bool if_unused: only delete if it's unused :param bool if_empty: only delete if the queue is empty :returns: Method frame from the Queue.Delete-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.DeleteOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as delete_ok_result: self._impl.queue_delete(callback=delete_ok_result.set_value_once, queue=queue, if_unused=if_unused, if_empty=if_empty, nowait=False) self._flush_output(delete_ok_result.is_ready) return delete_ok_result.value.method_frame def queue_purge(self, queue=''): """Purge all of the messages from the specified queue :param queue: The queue to purge :type queue: str or unicode :returns: Method frame from the Queue.Purge-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.PurgeOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as purge_ok_result: self._impl.queue_purge(callback=purge_ok_result.set_value_once, queue=queue, nowait=False) self._flush_output(purge_ok_result.is_ready) return purge_ok_result.value.method_frame def queue_bind(self, queue, exchange, routing_key=None, arguments=None): """Bind the queue to the specified exchange :param queue: The queue to bind to the exchange :type queue: str or unicode :param exchange: The source exchange to bind to :type exchange: str or unicode :param routing_key: The routing key to bind on :type routing_key: str or unicode :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Queue.Bind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.BindOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as bind_ok_result: self._impl.queue_bind(callback=bind_ok_result.set_value_once, queue=queue, exchange=exchange, routing_key=routing_key, nowait=False, arguments=arguments) 
self._flush_output(bind_ok_result.is_ready) return bind_ok_result.value.method_frame def queue_unbind(self, queue='', exchange=None, routing_key=None, arguments=None): """Unbind a queue from an exchange. :param queue: The queue to unbind from the exchange :type queue: str or unicode :param exchange: The source exchange to bind from :type exchange: str or unicode :param routing_key: The routing key to unbind :type routing_key: str or unicode :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Queue.Unbind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.UnbindOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as unbind_ok_result: self._impl.queue_unbind(callback=unbind_ok_result.set_value_once, queue=queue, exchange=exchange, routing_key=routing_key, arguments=arguments) self._flush_output(unbind_ok_result.is_ready) return unbind_ok_result.value.method_frame def tx_select(self): """Select standard transaction mode. This method sets the channel to use standard transactions. The client must use this method at least once on a channel before using the Commit or Rollback methods. :returns: Method frame from the Tx.Select-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Tx.SelectOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as select_ok_result: self._impl.tx_select(select_ok_result.set_value_once) self._flush_output(select_ok_result.is_ready) return select_ok_result.value.method_frame def tx_commit(self): """Commit a transaction. 
        :returns: Method frame from the Tx.Commit-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Tx.CommitOk`

        """
        with _CallbackResult(
                self._MethodFrameCallbackResultArgs) as commit_ok_result:
            self._impl.tx_commit(commit_ok_result.set_value_once)
            self._flush_output(commit_ok_result.is_ready)

        return commit_ok_result.value.method_frame

    def tx_rollback(self):
        """Rollback a transaction.

        :returns: Method frame from the Tx.Rollback-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Tx.RollbackOk`

        """
        with _CallbackResult(
                self._MethodFrameCallbackResultArgs) as rollback_ok_result:
            self._impl.tx_rollback(rollback_ok_result.set_value_once)
            self._flush_output(rollback_ok_result.is_ready)

        return rollback_ok_result.value.method_frame

pika-0.10.0/pika/adapters/libev_connection.py

"""Use pika with the libev IOLoop via pyev"""
import pyev
import signal
import array
import logging
import warnings

from collections import deque

from pika.adapters.base_connection import BaseConnection

LOGGER = logging.getLogger(__name__)

global_sigint_watcher, global_sigterm_watcher = None, None


class LibevConnection(BaseConnection):
    """The LibevConnection runs on the libev IOLoop. If you're running the
    connection in a web app, make sure you set stop_ioloop_on_close to False,
    which is the default behavior for this adapter, otherwise the web app
    will stop taking requests.

    You should be familiar with pyev and libev to use this adapter, esp. with
    regard to the use of libev ioloops.

    If an on_signal_callback method is provided, the adapter creates signal
    watchers the first time; subsequent instantiations with a provided method
    reuse the same watchers but will call the new method upon receiving a
    signal. See pyev/libev signal handling to understand why this is done.
:param pika.connection.Parameters parameters: Connection parameters :param on_open_callback: The method to call when the connection is open :type on_open_callback: method :param on_open_error_callback: Method to call if the connection can't be opened :type on_open_error_callback: method :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected :param custom_ioloop: Override using the default_loop in libev :param on_signal_callback: Method to call if SIGINT or SIGTERM occur :type on_signal_callback: method """ WARN_ABOUT_IOLOOP = True # use static arrays to translate masks between pika and libev _PIKA_TO_LIBEV_ARRAY = array.array('i', [0] * ( (BaseConnection.READ | BaseConnection.WRITE | BaseConnection.ERROR) + 1 )) _PIKA_TO_LIBEV_ARRAY[BaseConnection.READ] = pyev.EV_READ _PIKA_TO_LIBEV_ARRAY[BaseConnection.WRITE] = pyev.EV_WRITE _PIKA_TO_LIBEV_ARRAY[BaseConnection.READ | BaseConnection.WRITE] = pyev.EV_READ | pyev.EV_WRITE _PIKA_TO_LIBEV_ARRAY[BaseConnection.READ | BaseConnection.ERROR] = pyev.EV_READ _PIKA_TO_LIBEV_ARRAY[BaseConnection.WRITE | BaseConnection.ERROR] = pyev.EV_WRITE _PIKA_TO_LIBEV_ARRAY[BaseConnection.READ | BaseConnection.WRITE | BaseConnection.ERROR] = pyev.EV_READ | pyev.EV_WRITE _LIBEV_TO_PIKA_ARRAY = array.array('i', [0] * ((pyev.EV_READ | pyev.EV_WRITE) + 1)) _LIBEV_TO_PIKA_ARRAY[pyev.EV_READ] = BaseConnection.READ _LIBEV_TO_PIKA_ARRAY[pyev.EV_WRITE] = BaseConnection.WRITE _LIBEV_TO_PIKA_ARRAY[pyev.EV_READ | pyev.EV_WRITE] = \ BaseConnection.READ | BaseConnection.WRITE def __init__(self, parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, stop_ioloop_on_close=False, custom_ioloop=None, on_signal_callback=None): """Create a new instance of the LibevConnection class, connecting to RabbitMQ automatically :param pika.connection.Parameters parameters: Connection parameters :param on_open_callback: The method to call when the connection is open :type on_open_callback: method :param 
on_open_error_callback: Method to call if the connection cannot be opened :type on_open_error_callback: method :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected :param custom_ioloop: Override using the default IOLoop in libev :param on_signal_callback: Method to call if SIGINT or SIGTERM occur :type on_signal_callback: method """ if custom_ioloop: self.ioloop = custom_ioloop else: with warnings.catch_warnings(): warnings.simplefilter("ignore", RuntimeWarning) self.ioloop = pyev.default_loop() self.async = None self._on_signal_callback = on_signal_callback self._io_watcher = None self._active_timers = {} self._stopped_timers = deque() super(LibevConnection, self).__init__(parameters, on_open_callback, on_open_error_callback, on_close_callback, self.ioloop, stop_ioloop_on_close) def _adapter_connect(self): """Connect to the remote socket, adding the socket to the IOLoop if connected :rtype: bool """ LOGGER.debug('init io and signal watchers if any') # reuse existing signal watchers, can only be declared for 1 ioloop global global_sigint_watcher, global_sigterm_watcher error = super(LibevConnection, self)._adapter_connect() if not error: if self._on_signal_callback and not global_sigterm_watcher: global_sigterm_watcher = \ self.ioloop.signal(signal.SIGTERM, self._handle_sigterm) if self._on_signal_callback and not global_sigint_watcher: global_sigint_watcher = self.ioloop.signal(signal.SIGINT, self._handle_sigint) if not self._io_watcher: self._io_watcher = \ self.ioloop.io(self.socket.fileno(), self._PIKA_TO_LIBEV_ARRAY[self.event_state], self._handle_events) self.async = pyev.Async(self.ioloop, self._noop_callable) self.async.start() if self._on_signal_callback: global_sigterm_watcher.start() if self._on_signal_callback: global_sigint_watcher.start() self._io_watcher.start() return error def _noop_callable(self, *args, **kwargs): pass def _init_connection_state(self): """Initialize or reset all of our internal state variables for a given 
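The static `_PIKA_TO_LIBEV_ARRAY`/`_LIBEV_TO_PIKA_ARRAY` tables defined earlier in this class replace per-event bit twiddling with O(1) array indexing, keyed directly by the event bitmask. A self-contained sketch of the same technique, using stand-in constants rather than the real `BaseConnection`/`pyev` values:

```python
import array

# Stand-in constants; the real adapter uses BaseConnection.READ/WRITE/ERROR
# and pyev.EV_READ/EV_WRITE.
READ, WRITE, ERROR = 0x0001, 0x0004, 0x0008
EV_READ, EV_WRITE = 0x01, 0x02

# Build a translation table indexed by the pika event mask. ERROR has no
# libev equivalent, so any combination containing it maps as if it were
# absent.
PIKA_TO_LIBEV = array.array('i', [0] * ((READ | WRITE | ERROR) + 1))
for mask in range(len(PIKA_TO_LIBEV)):
    translated = 0
    if mask & READ:
        translated |= EV_READ
    if mask & WRITE:
        translated |= EV_WRITE
    PIKA_TO_LIBEV[mask] = translated
```

Translating a mask then costs a single indexed load, e.g. `PIKA_TO_LIBEV[READ | WRITE]`, which is why the adapter precomputes the tables at class-definition time instead of branching per event.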
        connection. If we disconnect and reconnect, all of our state needs
        to be wiped.
        """
        # Iterate over a copy: remove_timeout() mutates _active_timers, and
        # mutating a dict while iterating over it raises RuntimeError on
        # Python 3.
        for timer in list(self._active_timers.keys()):
            self.remove_timeout(timer)
        if global_sigint_watcher:
            global_sigint_watcher.stop()
        if global_sigterm_watcher:
            global_sigterm_watcher.stop()
        if self._io_watcher:
            self._io_watcher.stop()
        super(LibevConnection, self)._init_connection_state()

    def _handle_sigint(self, signal_watcher, libev_events):
        """If an on_signal_callback has been defined, call it returning the
        string 'SIGINT'.
        """
        LOGGER.debug('SIGINT')
        self._on_signal_callback('SIGINT')

    def _handle_sigterm(self, signal_watcher, libev_events):
        """If an on_signal_callback has been defined, call it returning the
        string 'SIGTERM'.
        """
        LOGGER.debug('SIGTERM')
        self._on_signal_callback('SIGTERM')

    def _handle_events(self, io_watcher, libev_events, **kwargs):
        """Handle IO events by efficiently translating to BaseConnection
        events and calling super.
        """
        super(LibevConnection, self)._handle_events(
            io_watcher.fd,
            self._LIBEV_TO_PIKA_ARRAY[libev_events],
            **kwargs)

    def _reset_io_watcher(self):
        """Reset the IO watcher; retry as necessary
        """
        self._io_watcher.stop()

        retries = 0
        while True:
            try:
                self._io_watcher.set(
                    self._io_watcher.fd,
                    self._PIKA_TO_LIBEV_ARRAY[self.event_state])

                break
            except Exception:  # sometimes the stop() doesn't complete in time
                if retries > 5:
                    raise
                self._io_watcher.stop()  # so try it again
                retries += 1

        self._io_watcher.start()

    def _manage_event_state(self):
        """Manage the bitmask for reading/writing/error which is used by the
        io/event handler to specify when there is an event such as a read or
        write.
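The `_manage_event_state` logic described by the docstring above toggles the WRITE bit only when the outbound buffer transitions between empty and non-empty, so the io watcher is reset only when the interest set actually changes. A pure-function sketch of that decision (stand-in constants, not the real adapter):

```python
# Stand-in event constants mirroring BaseConnection's bitmask values.
READ, WRITE, ERROR = 0x0001, 0x0004, 0x0008

def next_event_state(event_state, base_events, have_outbound_data):
    """Return (new_event_state, watcher_needs_reset)."""
    if have_outbound_data:
        if not event_state & WRITE:
            # Data appeared: start watching for writability.
            return event_state | WRITE, True
    elif event_state & WRITE:
        # Buffer drained: fall back to the base READ/ERROR interest set.
        return base_events, True
    # No transition; leave the watcher alone.
    return event_state, False
```

Resetting a watcher is comparatively expensive, so returning `False` in the steady state (buffer stays empty, or stays non-empty) is the point of the two-sided check.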
""" if self.outbound_buffer: if not self.event_state & self.WRITE: self.event_state |= self.WRITE self._reset_io_watcher() elif self.event_state & self.WRITE: self.event_state = self.base_events self._reset_io_watcher() def _timer_callback(self, timer, libev_events): """Manage timer callbacks indirectly.""" if timer in self._active_timers: (callback_method, callback_timeout, kwargs) = self._active_timers[timer] if callback_timeout: callback_method(timeout=timer, **kwargs) else: callback_method(**kwargs) self.remove_timeout(timer) else: LOGGER.warning('Timer callback_method not found') def _get_timer(self, deadline): """Get a timer from the pool or allocate a new one.""" if self._stopped_timers: timer = self._stopped_timers.pop() timer.set(deadline, 0.0) else: timer = self.ioloop.timer(deadline, 0.0, self._timer_callback) return timer def add_timeout(self, deadline, callback_method, callback_timeout=False, **callback_kwargs): """Add the callback_method indirectly to the IOLoop timer to fire after deadline seconds. Returns the timer handle. :param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :param callback_timeout: Whether timeout kwarg is passed on callback :type callback_timeout: boolean :param kwargs callback_kwargs: additional kwargs to pass on callback :rtype: timer instance handle. """ LOGGER.debug('deadline: {0}'.format(deadline)) timer = self._get_timer(deadline) self._active_timers[timer] = (callback_method, callback_timeout, callback_kwargs) timer.start() return timer def remove_timeout(self, timer): """Remove the timer from the IOLoop using the handle returned from add_timeout. 
        :param timer: timer instance handle

        """
        LOGGER.debug('stop')
        self._active_timers.pop(timer, None)
        timer.stop()
        self._stopped_timers.append(timer)

    def _create_and_connect_to_socket(self, sock_addr_tuple):
        """Call super and then set the socket to nonblocking."""
        result = super(LibevConnection,
                       self)._create_and_connect_to_socket(sock_addr_tuple)

        if result:
            self.socket.setblocking(0)

        return result

pika-0.10.0/pika/adapters/select_connection.py

"""A connection adapter that tries to use the best polling method for the
platform pika is running on.

"""
import os
import logging
import socket
import select
import errno
import time
from collections import defaultdict
import threading

import pika.compat
from pika.compat import dictkeys

from pika.adapters.base_connection import BaseConnection

LOGGER = logging.getLogger(__name__)

# One of select, epoll, kqueue or poll
SELECT_TYPE = None

# Use epoll's constants to keep life easy
READ = 0x0001
WRITE = 0x0004
ERROR = 0x0008

if pika.compat.PY2:
    _SELECT_ERROR = select.error
else:
    # select.error was deprecated and replaced by OSError in python 3.3
    _SELECT_ERROR = OSError


def _get_select_errno(error):
    if pika.compat.PY2:
        assert isinstance(error, select.error), repr(error)
        return error.args[0]
    else:
        assert isinstance(error, OSError), repr(error)
        return error.errno


class SelectConnection(BaseConnection):
    """An asynchronous connection adapter that attempts to use the fastest
    event loop adapter for the given platform.

    """

    def __init__(self,
                 parameters=None,
                 on_open_callback=None,
                 on_open_error_callback=None,
                 on_close_callback=None,
                 stop_ioloop_on_close=True,
                 custom_ioloop=None):
        """Create a new instance of the Connection object.
        :param pika.connection.Parameters parameters: Connection parameters
        :param method on_open_callback: Method to call on connection open
        :param on_open_error_callback: Method to call if the connection can't
            be opened
        :type on_open_error_callback: method
        :param method on_close_callback: Method to call on connection close
        :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected
        :param custom_ioloop: Override using the global IOLoop in Tornado
        :raises: RuntimeError

        """
        ioloop = custom_ioloop or IOLoop()
        super(SelectConnection, self).__init__(parameters,
                                               on_open_callback,
                                               on_open_error_callback,
                                               on_close_callback,
                                               ioloop,
                                               stop_ioloop_on_close)

    def _adapter_connect(self):
        """Connect to the RabbitMQ broker, returning True on success, False
        on failure.

        :rtype: bool

        """
        error = super(SelectConnection, self)._adapter_connect()
        if not error:
            self.ioloop.add_handler(self.socket.fileno(),
                                    self._handle_events,
                                    self.event_state)
        return error

    def _adapter_disconnect(self):
        """Disconnect from the RabbitMQ broker"""
        if self.socket:
            self.ioloop.remove_handler(self.socket.fileno())
        super(SelectConnection, self)._adapter_disconnect()


class IOLoop(object):
    """Singleton wrapper that decides which type of poller to use, creates an
    instance of it in start_poller and keeps the invoking application in a
    blocking state by calling the poller's start method. Poller should keep
    looping until IOLoop.instance().stop() is called or there is a socket
    error.

    Passes through all operations to the loaded poller object.
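`IOLoop`'s "passes through all operations" behavior is implemented with `__getattr__` delegation: any attribute not found on the wrapper is looked up on the wrapped poller. A minimal standalone sketch of that facade pattern (the class names here are illustrative, not pika's):

```python
class Poller(object):
    """Stand-in for a concrete poller implementation."""

    def poll(self):
        return 'polled'


class LoopFacade(object):
    """Forward unknown attribute lookups to the wrapped poller."""

    def __init__(self, poller):
        self._poller = poller

    def __getattr__(self, attr):
        # __getattr__ is only invoked when normal attribute lookup fails,
        # so wrapper-local attributes such as _poller are unaffected.
        return getattr(self._poller, attr)
```

Missing attributes still fail naturally: `getattr` on the poller raises `AttributeError`, which propagates to the caller as if the wrapper itself lacked the name.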
""" def __init__(self): self._poller = self._get_poller() def __getattr__(self, attr): return getattr(self._poller, attr) def _get_poller(self): """Determine the best poller to use for this enviroment.""" poller = None if hasattr(select, 'epoll'): if not SELECT_TYPE or SELECT_TYPE == 'epoll': LOGGER.debug('Using EPollPoller') poller = EPollPoller() if not poller and hasattr(select, 'kqueue'): if not SELECT_TYPE or SELECT_TYPE == 'kqueue': LOGGER.debug('Using KQueuePoller') poller = KQueuePoller() if (not poller and hasattr(select, 'poll') and hasattr(select.poll(), 'modify')): # pylint: disable=E1101 if not SELECT_TYPE or SELECT_TYPE == 'poll': LOGGER.debug('Using PollPoller') poller = PollPoller() if not poller: LOGGER.debug('Using SelectPoller') poller = SelectPoller() return poller class SelectPoller(object): """Default behavior is to use Select since it's the widest supported and has all of the methods we need for child classes as well. One should only need to override the update_handler and start methods for additional types. """ # Drop out of the poll loop every POLL_TIMEOUT secs as a worst case, this # is only a backstop value. We will run timeouts when they are scheduled. POLL_TIMEOUT = 5 # if the poller uses MS specify 1000 POLL_TIMEOUT_MULT = 1 def __init__(self): """Create an instance of the SelectPoller """ # fd-to-handler function mappings self._fd_handlers = dict() # event-to-fdset mappings self._fd_events = {READ: set(), WRITE: set(), ERROR: set()} self._stopping = False self._timeouts = {} self._next_timeout = None self._processing_fd_event_map = {} # Mutex for controlling critical sections where ioloop-interrupt sockets # are created, used, and destroyed. Needed in case `stop()` is called # from a thread. 
        self._mutex = threading.Lock()

        # ioloop-interrupt socket pair; initialized in start()
        self._r_interrupt = None
        self._w_interrupt = None

    def get_interrupt_pair(self):
        """Use a socketpair to be able to interrupt the ioloop if called from
        another thread. Socketpair() is not supported on some OS (Win) so use
        a pair of simple UDP sockets instead. The sockets will be closed and
        garbage collected by Python when the ioloop itself is.

        """
        try:
            read_sock, write_sock = socket.socketpair()
        except AttributeError:
            LOGGER.debug("Using custom socketpair for interrupt")
            read_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            read_sock.bind(('localhost', 0))
            write_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            write_sock.connect(read_sock.getsockname())

        read_sock.setblocking(0)
        write_sock.setblocking(0)
        return read_sock, write_sock

    def read_interrupt(self, interrupt_sock,
                       events, write_only):  # pylint: disable=W0613
        """Read the interrupt byte(s). We ignore the event mask and write_only
        flag as we can only get here if there's data to be read on our fd.

        :param int interrupt_sock: The file descriptor to read from
        :param int events: (unused) The events generated for this fd
        :param bool write_only: (unused) True if poll was called to trigger a
            write

        """
        try:
            os.read(interrupt_sock, 512)
        except OSError as err:
            if err.errno != errno.EAGAIN:
                raise

    def add_timeout(self, deadline, callback_method):
        """Add the callback_method to the IOLoop timer to fire after deadline
        seconds. Returns a handle to the timeout. Do not confuse with
        Tornado's timeout where you pass in the time you want to have your
        callback called. Only pass in the seconds until it's to be called.
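`get_interrupt_pair` above falls back to a pair of connected UDP sockets where `socket.socketpair()` is unavailable (e.g., Windows on Python 2). A runnable sketch of that fallback, together with the one-byte wake-up it exists for, sending a byte through the pair makes the read side pop out of `select()`:

```python
import select
import socket

def udp_interrupt_pair():
    """Build a socketpair substitute from two connected UDP sockets."""
    read_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    read_sock.bind(('localhost', 0))       # let the OS pick a free port
    write_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    write_sock.connect(read_sock.getsockname())
    read_sock.setblocking(0)
    write_sock.setblocking(0)
    return read_sock, write_sock

def wake(read_sock, write_sock):
    """Send the interrupt byte and confirm select() reports readability."""
    write_sock.send(b'X')                  # the interrupt byte
    readable, _, _ = select.select([read_sock], [], [], 5.0)
    return read_sock in readable
```

The ioloop's `stop()` uses exactly this trick: a thread that cannot touch the loop directly writes one byte, which the loop's poller reports as a read event on the interrupt fd.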
        :param int deadline: The number of seconds to wait to call callback
        :param method callback_method: The callback method
        :rtype: int

        """
        timeout_at = time.time() + deadline
        value = {'deadline': timeout_at, 'callback': callback_method}
        timeout_id = hash(frozenset(value.items()))
        self._timeouts[timeout_id] = value

        if not self._next_timeout or timeout_at < self._next_timeout:
            self._next_timeout = timeout_at

        return timeout_id

    def remove_timeout(self, timeout_id):
        """Remove a timeout if it's still in the timeout stack

        :param int timeout_id: The timeout id to remove

        """
        try:
            timeout = self._timeouts.pop(timeout_id)
            if timeout['deadline'] == self._next_timeout:
                self._next_timeout = None
        except KeyError:
            pass

    def get_next_deadline(self):
        """Get the interval to the next timeout event, or a default interval
        """
        if self._next_timeout:
            timeout = max((self._next_timeout - time.time(), 0))

        elif self._timeouts:
            deadlines = [t['deadline'] for t in self._timeouts.values()]
            self._next_timeout = min(deadlines)
            timeout = max((self._next_timeout - time.time(), 0))

        else:
            timeout = SelectPoller.POLL_TIMEOUT

        timeout = min((timeout, SelectPoller.POLL_TIMEOUT))
        return timeout * SelectPoller.POLL_TIMEOUT_MULT

    def process_timeouts(self):
        """Process the self._timeouts event stack"""
        now = time.time()
        # Run the timeouts in order of deadlines. Although this shouldn't
        # be strictly necessary it preserves old behaviour when timeouts
        # were only run periodically.
        to_run = sorted([(k, timer) for (k, timer) in self._timeouts.items()
                         if timer['deadline'] <= now],
                        key=lambda item: item[1]['deadline'])

        for k, timer in to_run:
            if k not in self._timeouts:
                # Previous invocation(s) should have deleted the timer.
                continue
            try:
                timer['callback']()
            finally:
                # Don't do 'del self._timeout[k]' as the key might
                # have been deleted just now.
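The timeout bookkeeping above has two ideas worth isolating: the id is derived by hashing the frozen `(deadline, callback)` pair, and `_next_timeout` caches the nearest deadline so the poll timeout is cheap to compute (and capped at `POLL_TIMEOUT`). A condensed sketch with an explicit `now` parameter for testability (the class and parameter names are illustrative, not pika's):

```python
POLL_TIMEOUT = 5  # backstop poll interval, as in SelectPoller

class Timeouts(object):
    """Sketch of SelectPoller's add_timeout/get_next_deadline bookkeeping."""

    def __init__(self):
        self._timeouts = {}
        self._next_timeout = None  # cached nearest absolute deadline

    def add(self, deadline, callback, now):
        timeout_at = now + deadline
        value = {'deadline': timeout_at, 'callback': callback}
        # NOTE: as in the original, two identical (deadline, callback)
        # pairs would hash to the same id and collide.
        timeout_id = hash(frozenset(value.items()))
        self._timeouts[timeout_id] = value
        if self._next_timeout is None or timeout_at < self._next_timeout:
            self._next_timeout = timeout_at
        return timeout_id

    def next_deadline(self, now):
        """Seconds until the nearest timeout, capped at POLL_TIMEOUT."""
        if self._next_timeout is not None:
            return min(max(self._next_timeout - now, 0), POLL_TIMEOUT)
        return POLL_TIMEOUT
```

The cap matters: even with a timeout far in the future (or none at all), the poll loop wakes at least every `POLL_TIMEOUT` seconds as a backstop.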
                if self._timeouts.pop(k, None) is not None:
                    self._next_timeout = None

    def add_handler(self, fileno, handler, events):
        """Add a new fileno to the set to be monitored

        :param int fileno: The file descriptor
        :param method handler: What is called when an event happens
        :param int events: The event mask

        """
        self._fd_handlers[fileno] = handler
        self.update_handler(fileno, events)

    def update_handler(self, fileno, events):
        """Set the events to the current events

        :param int fileno: The file descriptor
        :param int events: The event mask

        """
        for ev in (READ, WRITE, ERROR):
            if events & ev:
                self._fd_events[ev].add(fileno)
            else:
                self._fd_events[ev].discard(fileno)

    def remove_handler(self, fileno):
        """Remove a file descriptor from the set

        :param int fileno: The file descriptor

        """
        try:
            del self._processing_fd_event_map[fileno]
        except KeyError:
            pass

        self.update_handler(fileno, 0)
        del self._fd_handlers[fileno]

    def start(self):
        """Start the main poller loop. It will loop here until self._stopping"""
        LOGGER.debug('Starting IOLoop')
        self._stopping = False

        with self._mutex:
            # Watch out for reentry
            if self._r_interrupt is None:
                # Create ioloop-interrupt socket pair and register read handler.
                # NOTE: we defer their creation because some users (e.g.,
                # BlockingConnection adapter) don't use the event loop and these
                # sockets would get reported as leaks
                self._r_interrupt, self._w_interrupt = self.get_interrupt_pair()
                self.add_handler(self._r_interrupt.fileno(),
                                 self.read_interrupt,
                                 READ)
                interrupt_sockets_created = True
            else:
                interrupt_sockets_created = False
        try:
            # Run event loop
            while not self._stopping:
                self.poll()
                self.process_timeouts()
        finally:
            # Unregister and close ioloop-interrupt socket pair
            if interrupt_sockets_created:
                with self._mutex:
                    self.remove_handler(self._r_interrupt.fileno())
                    self._r_interrupt.close()
                    self._r_interrupt = None
                    self._w_interrupt.close()
                    self._w_interrupt = None

    def stop(self):
        """Request exit from the ioloop."""
        LOGGER.debug('Stopping IOLoop')
        self._stopping = True

        with self._mutex:
            if self._w_interrupt is None:
                return

            try:
                # Send byte to interrupt the poll loop, use write() for
                # consistency.
                os.write(self._w_interrupt.fileno(), b'X')
            except OSError as err:
                if err.errno != errno.EWOULDBLOCK:
                    raise
            except Exception as err:
                # There's nothing sensible to do here, we'll exit the interrupt
                # loop after POLL_TIMEOUT secs in worst case anyway.
                LOGGER.warning("Failed to send ioloop interrupt: %s", err)
                raise

    def poll(self, write_only=False):
        """Wait for events on interested filedescriptors.

        :param bool write_only: Passed through to the handlers to indicate
            that they should only process write events.
""" while True: try: read, write, error = select.select(self._fd_events[READ], self._fd_events[WRITE], self._fd_events[ERROR], self.get_next_deadline()) break except _SELECT_ERROR as error: if _get_select_errno(error) == errno.EINTR: continue else: raise # Build an event bit mask for each fileno we've recieved an event for fd_event_map = defaultdict(int) for fd_set, ev in zip((read, write, error), (READ, WRITE, ERROR)): for fileno in fd_set: fd_event_map[fileno] |= ev self._process_fd_events(fd_event_map, write_only) def _process_fd_events(self, fd_event_map, write_only): """ Processes the callbacks for each fileno we've recieved events. Before doing so we re-calculate the event mask based on what is currently set in case it has been changed under our feet by a previous callback. We also take a store a refernce to the fd_event_map in the class so that we can detect removal of an fileno during processing of another callback and not generate spurious callbacks on it. :param dict fd_event_map: Map of fds to events recieved on them. """ self._processing_fd_event_map = fd_event_map for fileno in dictkeys(fd_event_map): if fileno not in fd_event_map: # the fileno has been removed from the map under our feet. 
continue events = fd_event_map[fileno] for ev in [READ, WRITE, ERROR]: if fileno not in self._fd_events[ev]: events &= ~ev if events: handler = self._fd_handlers[fileno] handler(fileno, events, write_only=write_only) class KQueuePoller(SelectPoller): """KQueuePoller works on BSD based systems and is faster than select""" def __init__(self): """Create an instance of the KQueuePoller :param int fileno: The file descriptor to check events for :param method handler: What is called when an event happens :param int events: The events to look for """ self._kqueue = select.kqueue() super(KQueuePoller, self).__init__() def update_handler(self, fileno, events): """Set the events to the current events :param int fileno: The file descriptor :param int events: The event mask """ kevents = list() if not events & READ: if fileno in self._fd_events[READ]: kevents.append(select.kevent(fileno, filter=select.KQ_FILTER_READ, flags=select.KQ_EV_DELETE)) else: if fileno not in self._fd_events[READ]: kevents.append(select.kevent(fileno, filter=select.KQ_FILTER_READ, flags=select.KQ_EV_ADD)) if not events & WRITE: if fileno in self._fd_events[WRITE]: kevents.append(select.kevent(fileno, filter=select.KQ_FILTER_WRITE, flags=select.KQ_EV_DELETE)) else: if fileno not in self._fd_events[WRITE]: kevents.append(select.kevent(fileno, filter=select.KQ_FILTER_WRITE, flags=select.KQ_EV_ADD)) for event in kevents: self._kqueue.control([event], 0) super(KQueuePoller, self).update_handler(fileno, events) def _map_event(self, kevent): """return the event type associated with a kevent object :param kevent kevent: a kevent object as returned by kqueue.control() """ if kevent.filter == select.KQ_FILTER_READ: return READ elif kevent.filter == select.KQ_FILTER_WRITE: return WRITE elif kevent.flags & select.KQ_EV_ERROR: return ERROR def poll(self, write_only=False): """Check to see if the events that are cared about have fired. 
        :param bool write_only: Don't look at self.events, just look to see
            if the adapter can write.

        """
        while True:
            try:
                kevents = self._kqueue.control(None, 1000,
                                               self.get_next_deadline())
                break
            except _SELECT_ERROR as error:
                if _get_select_errno(error) == errno.EINTR:
                    continue
                else:
                    raise

        fd_event_map = defaultdict(int)
        for event in kevents:
            fileno = event.ident
            fd_event_map[fileno] |= self._map_event(event)

        self._process_fd_events(fd_event_map, write_only)


class PollPoller(SelectPoller):
    """Poll works on Linux and can have better performance than EPoll in
    certain scenarios.  Both are faster than select.

    """
    POLL_TIMEOUT_MULT = 1000

    def __init__(self):
        """Create an instance of the PollPoller

        :param int fileno: The file descriptor to check events for
        :param method handler: What is called when an event happens
        :param int events: The events to look for

        """
        self._poll = self.create_poller()
        super(PollPoller, self).__init__()

    def create_poller(self):
        return select.poll()  # pylint: disable=E1101

    def add_handler(self, fileno, handler, events):
        """Add a file descriptor to the poll set

        :param int fileno: The file descriptor to check events for
        :param method handler: What is called when an event happens
        :param int events: The events to look for

        """
        self._poll.register(fileno, events)
        super(PollPoller, self).add_handler(fileno, handler, events)

    def update_handler(self, fileno, events):
        """Set the events to the current events

        :param int fileno: The file descriptor
        :param int events: The event mask

        """
        super(PollPoller, self).update_handler(fileno, events)
        self._poll.modify(fileno, events)

    def remove_handler(self, fileno):
        """Remove a fileno from the set

        :param int fileno: The file descriptor

        """
        super(PollPoller, self).remove_handler(fileno)
        self._poll.unregister(fileno)

    def poll(self, write_only=False):
        """Poll until the next timeout waiting for an event

        :param bool write_only: Only process write events

        """
        while True:
            try:
                events = self._poll.poll(self.get_next_deadline())
                break
            except _SELECT_ERROR as error:
                if _get_select_errno(error) == errno.EINTR:
                    continue
                else:
                    raise

        fd_event_map = defaultdict(int)
        for fileno, event in events:
            fd_event_map[fileno] |= event

        self._process_fd_events(fd_event_map, write_only)


class EPollPoller(PollPoller):
    """EPoll works on Linux and can have better performance than Poll in
    certain scenarios.  Both are faster than select.

    """
    POLL_TIMEOUT_MULT = 1

    def create_poller(self):
        return select.epoll()  # pylint: disable=E1101


pika-0.10.0/pika/adapters/tornado_connection.py

"""Use pika with the Tornado IOLoop"""
from tornado import ioloop

import logging
import time

from pika.adapters import base_connection

LOGGER = logging.getLogger(__name__)


class TornadoConnection(base_connection.BaseConnection):
    """The TornadoConnection runs on the Tornado IOLoop. If you're running the
    connection in a web app, make sure you set stop_ioloop_on_close to False,
    which is the default behavior for this adapter, otherwise the web app will
    stop taking requests.
    :param pika.connection.Parameters parameters: Connection parameters
    :param on_open_callback: The method to call when the connection is open
    :type on_open_callback: method
    :param on_open_error_callback: Method to call if the connection can't
        be opened
    :type on_open_error_callback: method
    :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected
    :param custom_ioloop: Override using the global IOLoop in Tornado

    """
    WARN_ABOUT_IOLOOP = True

    def __init__(self,
                 parameters=None,
                 on_open_callback=None,
                 on_open_error_callback=None,
                 on_close_callback=None,
                 stop_ioloop_on_close=False,
                 custom_ioloop=None):
        """Create a new instance of the TornadoConnection class, connecting
        to RabbitMQ automatically

        :param pika.connection.Parameters parameters: Connection parameters
        :param on_open_callback: The method to call when the connection is open
        :type on_open_callback: method
        :param on_open_error_callback: Method to call if the connection can't
            be opened
        :type on_open_error_callback: method
        :param bool stop_ioloop_on_close: Call ioloop.stop() if disconnected
        :param custom_ioloop: Override using the global IOLoop in Tornado

        """
        self.sleep_counter = 0
        self.ioloop = custom_ioloop or ioloop.IOLoop.instance()
        super(TornadoConnection, self).__init__(parameters,
                                                on_open_callback,
                                                on_open_error_callback,
                                                on_close_callback,
                                                self.ioloop,
                                                stop_ioloop_on_close)

    def _adapter_connect(self):
        """Connect to the remote socket, adding the socket to the IOLoop if
        connected.

        :rtype: bool

        """
        error = super(TornadoConnection, self)._adapter_connect()
        if not error:
            self.ioloop.add_handler(self.socket.fileno(),
                                    self._handle_events,
                                    self.event_state)
        return error

    def _adapter_disconnect(self):
        """Disconnect from the RabbitMQ broker"""
        if self.socket:
            self.ioloop.remove_handler(self.socket.fileno())
        super(TornadoConnection, self)._adapter_disconnect()

    def add_timeout(self, deadline, callback_method):
        """Add the callback_method to the IOLoop timer to fire after deadline
        seconds. Returns a handle to the timeout. Do not confuse with
        Tornado's timeout where you pass in the time you want to have your
        callback called. Only pass in the seconds until it's to be called.

        :param int deadline: The number of seconds to wait to call callback
        :param method callback_method: The callback method
        :rtype: str

        """
        return self.ioloop.add_timeout(time.time() + deadline, callback_method)

    def remove_timeout(self, timeout_id):
        """Remove the timeout from the IOLoop by the ID returned from
        add_timeout.

        :rtype: str

        """
        return self.ioloop.remove_timeout(timeout_id)


pika-0.10.0/pika/adapters/twisted_connection.py

"""Using Pika with a Twisted reactor.

Supports two methods of establishing the connection, using TwistedConnection
or TwistedProtocolConnection. For details about each method, see the
docstrings of the corresponding classes.

The interfaces in this module are Deferred-based when possible. This means
that the connection.channel() method and most of the channel methods return
Deferreds instead of taking a callback argument and that basic_consume()
returns a Twisted DeferredQueue where messages from the server will be
stored. Refer to the docstrings for TwistedConnection.channel() and the
TwistedChannel class for details.

"""
import functools
from twisted.internet import defer, error, reactor
from twisted.python import log

from pika import exceptions
from pika.adapters import base_connection


class ClosableDeferredQueue(defer.DeferredQueue):
    """
    Like the normal Twisted DeferredQueue, but after close() is called with an
    Exception instance all pending Deferreds are errbacked and further attempts
    to call get() or put() return a Failure wrapping that exception.
""" def __init__(self, size=None, backlog=None): self.closed = None super(ClosableDeferredQueue, self).__init__(size, backlog) def put(self, obj): if self.closed: return defer.fail(self.closed) return defer.DeferredQueue.put(self, obj) def get(self): if self.closed: return defer.fail(self.closed) return defer.DeferredQueue.get(self) def close(self, reason): self.closed = reason while self.waiting: self.waiting.pop().errback(reason) self.pending = [] class TwistedChannel(object): """A wrapper wround Pika's Channel. Channel methods that normally take a callback argument are wrapped to return a Deferred that fires with whatever would be passed to the callback. If the channel gets closed, all pending Deferreds are errbacked with a ChannelClosed exception. The returned Deferreds fire with whatever arguments the callback to the original method would receive. The basic_consume method is wrapped in a special way, see its docstring for details. """ WRAPPED_METHODS = ('exchange_declare', 'exchange_delete', 'queue_declare', 'queue_bind', 'queue_purge', 'queue_unbind', 'basic_qos', 'basic_get', 'basic_recover', 'tx_select', 'tx_commit', 'tx_rollback', 'flow', 'basic_cancel') def __init__(self, channel): self.__channel = channel self.__closed = None self.__calls = set() self.__consumers = {} channel.add_on_close_callback(self.channel_closed) def channel_closed(self, channel, reply_code, reply_text): # enter the closed state self.__closed = exceptions.ChannelClosed(reply_code, reply_text) # errback all pending calls for d in self.__calls: d.errback(self.__closed) # close all open queues for consumers in self.__consumers.values(): for c in consumers: c.close(self.__closed) # release references to stored objects self.__calls = set() self.__consumers = {} def basic_consume(self, *args, **kwargs): """Consume from a server queue. Returns a Deferred that fires with a tuple: (queue_object, consumer_tag). 
The queue object is an instance of ClosableDeferredQueue, where data received from the queue will be stored. Clients should use its get() method to fetch individual message. """ if self.__closed: return defer.fail(self.__closed) queue = ClosableDeferredQueue() queue_name = kwargs['queue'] kwargs['consumer_callback'] = lambda *args: queue.put(args) self.__consumers.setdefault(queue_name, set()).add(queue) try: consumer_tag = self.__channel.basic_consume(*args, **kwargs) except: return defer.fail() return defer.succeed((queue, consumer_tag)) def queue_delete(self, *args, **kwargs): """Wraps the method the same way all the others are wrapped, but removes the reference to the queue object after it gets deleted on the server. """ wrapped = self.__wrap_channel_method('queue_delete') queue_name = kwargs['queue'] d = wrapped(*args, **kwargs) return d.addCallback(self.__clear_consumer, queue_name) def basic_publish(self, *args, **kwargs): """Make sure the channel is not closed and then publish. Return a Deferred that fires with the result of the channel's basic_publish. """ if self.__closed: return defer.fail(self.__closed) return defer.succeed(self.__channel.basic_publish(*args, **kwargs)) def __wrap_channel_method(self, name): """Wrap Pika's Channel method to make it return a Deferred that fires when the method completes and errbacks if the channel gets closed. If the original method's callback would receive more than one argument, the Deferred fires with a tuple of argument values. """ method = getattr(self.__channel, name) @functools.wraps(method) def wrapped(*args, **kwargs): if self.__closed: return defer.fail(self.__closed) d = defer.Deferred() self.__calls.add(d) d.addCallback(self.__clear_call, d) def single_argument(*args): """ Make sure that the deferred is called with a single argument. In case the original callback fires with more than one, convert to a tuple. 
""" if len(args) > 1: d.callback(tuple(args)) else: d.callback(*args) kwargs['callback'] = single_argument try: method(*args, **kwargs) except: return defer.fail() return d return wrapped def __clear_consumer(self, ret, queue_name): self.__consumers.pop(queue_name, None) return ret def __clear_call(self, ret, d): self.__calls.discard(d) return ret def __getattr__(self, name): # Wrap methods defined in WRAPPED_METHODS, forward the rest of accesses # to the channel. if name in self.WRAPPED_METHODS: return self.__wrap_channel_method(name) return getattr(self.__channel, name) class IOLoopReactorAdapter(object): """An adapter providing Pika's IOLoop interface using a Twisted reactor. Accepts a TwistedConnection object and a Twisted reactor object. """ def __init__(self, connection, reactor): self.connection = connection self.reactor = reactor self.started = False def add_timeout(self, deadline, callback_method): """Add the callback_method to the IOLoop timer to fire after deadline seconds. Returns a handle to the timeout. Do not confuse with Tornado's timeout where you pass in the time you want to have your callback called. Only pass in the seconds until it's to be called. :param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method :rtype: twisted.internet.interfaces.IDelayedCall """ return self.reactor.callLater(deadline, callback_method) def remove_timeout(self, call): """Remove a call :param twisted.internet.interfaces.IDelayedCall call: The call to cancel """ call.cancel() def stop(self): # Guard against stopping the reactor multiple times if not self.started: return self.started = False self.reactor.stop() def start(self): # Guard against starting the reactor multiple times if self.started: return self.started = True self.reactor.run() def remove_handler(self, _): # The fileno is irrelevant, as it's the connection's job to provide it # to the reactor when asked to do so. 
Removing the handler from the # ioloop is removing it from the reactor in Twisted's parlance. self.reactor.removeReader(self.connection) self.reactor.removeWriter(self.connection) def update_handler(self, _, event_state): # Same as in remove_handler, the fileno is irrelevant. First remove the # connection entirely from the reactor, then add it back depending on # the event state. self.reactor.removeReader(self.connection) self.reactor.removeWriter(self.connection) if event_state & self.connection.READ: self.reactor.addReader(self.connection) if event_state & self.connection.WRITE: self.reactor.addWriter(self.connection) class TwistedConnection(base_connection.BaseConnection): """A standard Pika connection adapter. You instantiate the class passing the connection parameters and the connected callback and when it gets called you can start using it. The problem is that connection establishing is done using the blocking socket module. For instance, if the host you are connecting to is behind a misconfigured firewall that just drops packets, the whole process will freeze until the connection timeout passes. To work around that problem, use TwistedProtocolConnection, but read its docstring first. Objects of this class get put in the Twisted reactor which will notify them when the socket connection becomes readable or writable, so apart from implementing the BaseConnection interface, they also provide Twisted's IReadWriteDescriptor interface. """ def __init__(self, parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, stop_ioloop_on_close=False): super(TwistedConnection, self).__init__( parameters=parameters, on_open_callback=on_open_callback, on_open_error_callback=on_open_error_callback, on_close_callback=on_close_callback, ioloop=IOLoopReactorAdapter(self, reactor), stop_ioloop_on_close=stop_ioloop_on_close) def _adapter_connect(self): """Connect to the RabbitMQ broker""" # Connect (blockignly!) 
to the server error = super(TwistedConnection, self)._adapter_connect() if not error: # Set the I/O events we're waiting for (see IOLoopReactorAdapter # docstrings for why it's OK to pass None as the file descriptor) self.ioloop.update_handler(None, self.event_state) return error def _adapter_disconnect(self): """Called when the adapter should disconnect""" self.ioloop.remove_handler(None) self._cleanup_socket() def _handle_disconnect(self): """Do not stop the reactor, this would cause the entire process to exit, just fire the disconnect callbacks """ self._on_connection_closed(None, True) def _on_connected(self): """Call superclass and then update the event state to flush the outgoing frame out. Commit 50d842526d9f12d32ad9f3c4910ef60b8c301f59 removed a self._flush_outbound call that was in _send_frame which previously made this step unnecessary. """ super(TwistedConnection, self)._on_connected() self._manage_event_state() def channel(self, channel_number=None): """Return a Deferred that fires with an instance of a wrapper around the Pika Channel class. """ d = defer.Deferred() base_connection.BaseConnection.channel(self, d.callback, channel_number) return d.addCallback(TwistedChannel) # IReadWriteDescriptor methods def fileno(self): return self.socket.fileno() def logPrefix(self): return "twisted-pika" def connectionLost(self, reason): # If the connection was not closed cleanly, log the error if not reason.check(error.ConnectionDone): log.err(reason) self._handle_disconnect() def doRead(self): self._handle_read() def doWrite(self): self._handle_write() self._manage_event_state() class TwistedProtocolConnection(base_connection.BaseConnection): """A hybrid between a Pika Connection and a Twisted Protocol. Allows using Twisted's non-blocking connectTCP/connectSSL methods for connecting to the server. 
    It has one caveat: TwistedProtocolConnection objects have a ready
    instance variable that's a Deferred which fires when the connection is
    ready to be used (the initial AMQP handshaking has been done). You *have*
    to wait for this Deferred to fire before requesting a channel.

    Since it's Twisted handling connection establishing it does not accept
    connect callbacks, you have to implement that within Twisted. Also remember
    that the host, port and ssl values of the connection parameters are ignored
    because, yet again, it's Twisted who manages the connection.

    """

    def __init__(self, parameters):
        self.ready = defer.Deferred()
        super(TwistedProtocolConnection, self).__init__(
            parameters=parameters,
            on_open_callback=self.connectionReady,
            on_open_error_callback=self.connectionFailed,
            on_close_callback=None,
            ioloop=IOLoopReactorAdapter(self, reactor),
            stop_ioloop_on_close=False)

    def connect(self):
        # The connection is opened asynchronously by Twisted, so skip the whole
        # connect() part, except for setting the connection state
        self._set_connection_state(self.CONNECTION_INIT)

    def _adapter_connect(self):
        # Should never be called, as we override connect() and leave the
        # building of a TCP connection to Twisted, but implement anyway to keep
        # the interface
        return False

    def _adapter_disconnect(self):
        # Disconnect from the server
        self.transport.loseConnection()

    def _flush_outbound(self):
        """Override BaseConnection._flush_outbound to send all buffered data
        the Twisted way, by writing to the transport. No need for buffering,
        Twisted handles that for us.

        """
        while self.outbound_buffer:
            self.transport.write(self.outbound_buffer.popleft())

    def channel(self, channel_number=None):
        """Create a new channel with the next available channel number or pass
        in a channel number to use. Must be non-zero if you would like to
        specify but it is recommended that you let Pika manage the channel
        numbers.

        Return a Deferred that fires with an instance of a wrapper around the
        Pika Channel class.

        :param int channel_number: The channel number to use, defaults to the
                                   next available.

        """
        d = defer.Deferred()
        base_connection.BaseConnection.channel(self, d.callback, channel_number)
        return d.addCallback(TwistedChannel)

    # IProtocol methods

    def dataReceived(self, data):
        # Pass the bytes to Pika for parsing
        self._on_data_available(data)

    def connectionLost(self, reason):
        # Let the caller know there's been an error
        d, self.ready = self.ready, None
        if d:
            d.errback(reason)

    def makeConnection(self, transport):
        self.transport = transport
        self.connectionMade()

    def connectionMade(self):
        # Tell everyone we're connected
        self._on_connected()

    # Our own methods

    def connectionReady(self, res):
        d, self.ready = self.ready, None
        if d:
            d.callback(res)

    def connectionFailed(self, connection_unused, error_message=None):
        d, self.ready = self.ready, None
        if d:
            attempts = self.params.connection_attempts
            exc = exceptions.AMQPConnectionError(attempts)
            d.errback(exc)


pika-0.10.0/pika/amqp_object.py

"""Base classes that are extended by low level AMQP frames and higher level
AMQP classes and methods.

"""


class AMQPObject(object):
    """Base object that is extended by AMQP low level frames and AMQP classes
    and methods.

    """
    NAME = 'AMQPObject'
    INDEX = None

    def __repr__(self):
        items = list()
        for key, value in self.__dict__.items():
            if getattr(self.__class__, key, None) != value:
                items.append('%s=%s' % (key, value))
        if not items:
            return "<%s>" % self.NAME
        return "<%s(%s)>" % (self.NAME, sorted(items))


class Class(AMQPObject):
    """Is extended by AMQP classes"""
    NAME = 'Unextended Class'


class Method(AMQPObject):
    """Is extended by AMQP methods"""
    NAME = 'Unextended Method'

    synchronous = False

    def _set_content(self, properties, body):
        """If the method is a content frame, set the properties and body to
        be carried as attributes of the class.
        :param pika.frame.Properties properties: AMQP Basic Properties
        :param body: The message body
        :type body: str or unicode

        """
        self._properties = properties
        self._body = body

    def get_properties(self):
        """Return the properties if they are set.

        :rtype: pika.frame.Properties

        """
        return self._properties

    def get_body(self):
        """Return the message body if it is set.

        :rtype: str|unicode

        """
        return self._body


class Properties(AMQPObject):
    """Class to encompass message properties (AMQP Basic.Properties)"""
    NAME = 'Unextended Properties'


pika-0.10.0/pika/callback.py

"""Callback management class, common area for keeping track of all callbacks in
the Pika stack.

"""
import functools
import logging

from pika import frame
from pika import amqp_object
from pika.compat import xrange, canonical_str

LOGGER = logging.getLogger(__name__)


def name_or_value(value):
    """Will take Frame objects, classes, etc and attempt to return a valid
    string identifier for them.

    :param value: The value to sanitize
    :type value:  pika.amqp_object.AMQPObject|pika.frame.Frame|int|unicode|str
    :rtype: str

    """
    # Is it a subclass of AMQPObject?
    try:
        if issubclass(value, amqp_object.AMQPObject):
            return value.NAME
    except TypeError:
        pass

    # Is it a Pika frame object?
    if isinstance(value, frame.Method):
        return value.method.NAME

    # Is it a Pika frame object (go after Method since Method extends this)?
    if isinstance(value, amqp_object.AMQPObject):
        return value.NAME

    # Cast the value to a str (python 2 and python 3); encoding as UTF-8 on
    # Python 2
    return canonical_str(value)


def sanitize_prefix(function):
    """Automatically call name_or_value on the prefix passed in."""

    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        args = list(args)
        offset = 1
        if 'prefix' in kwargs:
            kwargs['prefix'] = name_or_value(kwargs['prefix'])
        elif len(args) - 1 >= offset:
            args[offset] = name_or_value(args[offset])
            offset += 1
        if 'key' in kwargs:
            kwargs['key'] = name_or_value(kwargs['key'])
        elif len(args) - 1 >= offset:
            args[offset] = name_or_value(args[offset])

        return function(*tuple(args), **kwargs)

    return wrapper


def check_for_prefix_and_key(function):
    """Automatically return false if the key or prefix is not in the callbacks
    for the instance.

    """

    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        offset = 1
        # Sanitize the prefix
        if 'prefix' in kwargs:
            prefix = name_or_value(kwargs['prefix'])
        else:
            prefix = name_or_value(args[offset])
            offset += 1

        # Make sure to sanitize the key as well
        if 'key' in kwargs:
            key = name_or_value(kwargs['key'])
        else:
            key = name_or_value(args[offset])

        # Make sure prefix and key are in the stack
        if prefix not in args[0]._stack or key not in args[0]._stack[prefix]:
            return False

        # Execute the method
        return function(*args, **kwargs)

    return wrapper


class CallbackManager(object):
    """CallbackManager is a global callback system designed to be a single
    place where Pika can manage callbacks and process them. It should be
    referenced by the CallbackManager.instance() method instead of
    constructing new instances of it.
""" CALLS = 'calls' ARGUMENTS = 'arguments' DUPLICATE_WARNING = 'Duplicate callback found for "%s:%s"' CALLBACK = 'callback' ONE_SHOT = 'one_shot' ONLY_CALLER = 'only' def __init__(self): """Create an instance of the CallbackManager""" self._stack = dict() @sanitize_prefix def add(self, prefix, key, callback, one_shot=True, only_caller=None, arguments=None): """Add a callback to the stack for the specified key. If the call is specified as one_shot, it will be removed after being fired The prefix is usually the channel number but the class is generic and prefix and key may be any value. If you pass in only_caller CallbackManager will restrict processing of the callback to only the calling function/object that you specify. :param prefix: Categorize the callback :type prefix: str or int :param key: The key for the callback :type key: object or str or dict :param method callback: The callback to call :param bool one_shot: Remove this callback after it is called :param object only_caller: Only allow one_caller value to call the event that fires the callback. 
:param dict arguments: Arguments to validate when processing :rtype: tuple(prefix, key) """ # Prep the stack if prefix not in self._stack: self._stack[prefix] = dict() if key not in self._stack[prefix]: self._stack[prefix][key] = list() # Check for a duplicate for callback_dict in self._stack[prefix][key]: if (callback_dict[self.CALLBACK] == callback and callback_dict[self.ARGUMENTS] == arguments and callback_dict[self.ONLY_CALLER] == only_caller): if callback_dict[self.ONE_SHOT] is True: callback_dict[self.CALLS] += 1 LOGGER.debug('Incremented callback reference counter: %r', callback_dict) else: LOGGER.warning(self.DUPLICATE_WARNING, prefix, key) return prefix, key # Create the callback dictionary callback_dict = self._callback_dict(callback, one_shot, only_caller, arguments) self._stack[prefix][key].append(callback_dict) LOGGER.debug('Added: %r', callback_dict) return prefix, key def clear(self): """Clear all the callbacks if there are any defined.""" self._stack = dict() LOGGER.debug('Callbacks cleared') @sanitize_prefix def cleanup(self, prefix): """Remove all callbacks from the stack by a prefix. Returns True if keys were there to be removed :param str or int prefix: The prefix for keeping track of callbacks with :rtype: bool """ LOGGER.debug('Clearing out %r from the stack', prefix) if prefix not in self._stack or not self._stack[prefix]: return False del self._stack[prefix] return True @sanitize_prefix def pending(self, prefix, key): """Return count of callbacks for a given prefix or key or None :param prefix: Categorize the callback :type prefix: str or int :param key: The key for the callback :type key: object or str or dict :rtype: None or int """ if not prefix in self._stack or not key in self._stack[prefix]: return None return len(self._stack[prefix][key]) @sanitize_prefix @check_for_prefix_and_key def process(self, prefix, key, caller, *args, **keywords): """Run through and process all the callbacks for the specified keys. 
        Caller should be specified at all times so that callbacks which
        require a specific function to call CallbackManager.process will
        not be processed.

        :param prefix: Categorize the callback
        :type prefix: str or int
        :param key: The key for the callback
        :type key: object or str or dict
        :param object caller: Who is firing the event
        :param list args: Any optional arguments
        :param dict keywords: Optional keyword arguments
        :rtype: bool

        """
        LOGGER.debug('Processing %s:%s', prefix, key)
        if prefix not in self._stack or key not in self._stack[prefix]:
            return False

        callbacks = list()
        # Check each callback, append it to the list if it should be called
        for callback_dict in list(self._stack[prefix][key]):
            if self._should_process_callback(callback_dict, caller,
                                             list(args)):
                callbacks.append(callback_dict[self.CALLBACK])
                if callback_dict[self.ONE_SHOT]:
                    self._use_one_shot_callback(prefix, key, callback_dict)

        # Call each callback
        for callback in callbacks:
            LOGGER.debug('Calling %s for "%s:%s"', callback, prefix, key)
            try:
                callback(*args, **keywords)
            except:
                LOGGER.exception('Calling %s for "%s:%s" failed', callback,
                                 prefix, key)
                raise
        return True

    @sanitize_prefix
    @check_for_prefix_and_key
    def remove(self, prefix, key, callback_value=None, arguments=None):
        """Remove a callback from the stack by prefix, key and optionally
        the callback itself. If you only pass in prefix and key, all
        callbacks for that prefix and key will be removed.
        :param str or int prefix: The prefix for keeping track of callbacks
                                  with
        :param str key: The callback key
        :param method callback_value: The method defined to call on callback
        :param dict arguments: Optional arguments to check
        :rtype: bool

        """
        if callback_value:
            offsets_to_remove = list()
            for offset in xrange(len(self._stack[prefix][key]), 0, -1):
                callback_dict = self._stack[prefix][key][offset - 1]
                if (callback_dict[self.CALLBACK] == callback_value and
                        self._arguments_match(callback_dict, [arguments])):
                    offsets_to_remove.append(offset - 1)

            for offset in offsets_to_remove:
                try:
                    LOGGER.debug('Removing callback #%i: %r', offset,
                                 self._stack[prefix][key][offset])
                    del self._stack[prefix][key][offset]
                except KeyError:
                    pass

        self._cleanup_callback_dict(prefix, key)
        return True

    @sanitize_prefix
    @check_for_prefix_and_key
    def remove_all(self, prefix, key):
        """Remove all callbacks for the specified prefix and key.

        :param str prefix: The prefix for keeping track of callbacks with
        :param str key: The callback key

        """
        del self._stack[prefix][key]
        self._cleanup_callback_dict(prefix, key)

    def _arguments_match(self, callback_dict, args):
        """Validate if the arguments passed in match the expected arguments in
        the callback_dict. We expect this to be a frame passed in to *args for
        process or passed in as a list from remove.

        :param dict callback_dict: The callback dictionary to evaluate against
        :param list args: The arguments passed in as a list

        """
        if callback_dict[self.ARGUMENTS] is None:
            return True
        if not args:
            return False
        if isinstance(args[0], dict):
            return self._dict_arguments_match(args[0],
                                              callback_dict[self.ARGUMENTS])
        return self._obj_arguments_match(args[0].method
                                         if hasattr(args[0], 'method') else
                                         args[0],
                                         callback_dict[self.ARGUMENTS])

    def _callback_dict(self, callback, one_shot, only_caller, arguments):
        """Return the callback dictionary.
        :param method callback: The callback to call
        :param bool one_shot: Remove this callback after it is called
        :param object only_caller: Only allow one_caller value to call the
                                   event that fires the callback.
        :rtype: dict

        """
        value = {
            self.CALLBACK: callback,
            self.ONE_SHOT: one_shot,
            self.ONLY_CALLER: only_caller,
            self.ARGUMENTS: arguments
        }
        if one_shot:
            value[self.CALLS] = 1
        return value

    def _cleanup_callback_dict(self, prefix, key=None):
        """Remove empty dict nodes in the callback stack.

        :param str or int prefix: The prefix for keeping track of callbacks
                                  with
        :param str key: The callback key

        """
        if key and key in self._stack[prefix] and not self._stack[prefix][key]:
            del self._stack[prefix][key]
        if prefix in self._stack and not self._stack[prefix]:
            del self._stack[prefix]

    @staticmethod
    def _dict_arguments_match(value, expectation):
        """Checks a dict to see if it has attributes that meet the
        expectation.

        :param dict value: The dict to evaluate
        :param dict expectation: The values to check against
        :rtype: bool

        """
        LOGGER.debug('Comparing %r to %r', value, expectation)
        for key in expectation:
            if value.get(key) != expectation[key]:
                LOGGER.debug('Values in dict do not match for %s', key)
                return False
        return True

    @staticmethod
    def _obj_arguments_match(value, expectation):
        """Checks an object to see if it has attributes that meet the
        expectation.

        :param object value: The object to evaluate
        :param dict expectation: The values to check against
        :rtype: bool

        """
        for key in expectation:
            if not hasattr(value, key):
                LOGGER.debug('%r does not have required attribute: %s',
                             type(value), key)
                return False
            if getattr(value, key) != expectation[key]:
                LOGGER.debug('Values in %s do not match for %s', type(value),
                             key)
                return False
        return True

    def _should_process_callback(self, callback_dict, caller, args):
        """Returns True if the callback should be processed.
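        The two matching helpers above compare expected arguments either
        against a dict's keys or against an object's attributes. A standalone
        sketch of the same idea (names here are illustrative, not pika's
        actual code):

        ```python
        # Illustrative sketch of the two matching strategies used by
        # _dict_arguments_match and _obj_arguments_match.
        def dict_match(value, expectation):
            # Every expected key must be present with an equal value
            return all(value.get(k) == v for k, v in expectation.items())

        def obj_match(value, expectation):
            # Every expected key must exist as an attribute, with equal value
            return all(hasattr(value, k) and getattr(value, k) == v
                       for k, v in expectation.items())

        class Frame(object):
            consumer_tag = 'ctag1.abc'

        assert dict_match({'consumer_tag': 'ctag1.abc', 'extra': 1},
                          {'consumer_tag': 'ctag1.abc'})
        assert not dict_match({}, {'consumer_tag': 'ctag1.abc'})
        assert obj_match(Frame(), {'consumer_tag': 'ctag1.abc'})
        ```

        Extra keys on the value side are ignored; only the expectation's
        keys are checked.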
        :param dict callback_dict: The callback configuration
        :param object caller: Who is firing the event
        :param list args: Any optional arguments
        :rtype: bool

        """
        if not self._arguments_match(callback_dict, args):
            LOGGER.debug('Arguments do not match for %r, %r', callback_dict,
                         args)
            return False
        return (callback_dict[self.ONLY_CALLER] is None or
                (callback_dict[self.ONLY_CALLER] and
                 callback_dict[self.ONLY_CALLER] == caller))

    def _use_one_shot_callback(self, prefix, key, callback_dict):
        """Process the one-shot callback, decrementing the use counter and
        removing it from the stack if it's now been fully used.

        :param str or int prefix: The prefix for keeping track of callbacks
                                  with
        :param str key: The callback key
        :param dict callback_dict: The callback dict to process

        """
        LOGGER.debug('Processing use of oneshot callback')
        callback_dict[self.CALLS] -= 1
        LOGGER.debug('%i registered uses left', callback_dict[self.CALLS])
        if callback_dict[self.CALLS] <= 0:
            self.remove(prefix, key, callback_dict[self.CALLBACK],
                        callback_dict[self.ARGUMENTS])


# pika-0.10.0/pika/channel.py

"""The Channel class provides a wrapper for interacting with RabbitMQ
implementing the methods and behaviors for an AMQP Channel.

"""
import collections
import logging
import warnings
import uuid

import pika.frame as frame
import pika.exceptions as exceptions
import pika.spec as spec
from pika.utils import is_callable
from pika.compat import unicode_type, dictkeys, as_bytes

LOGGER = logging.getLogger(__name__)

MAX_CHANNELS = 32768


class Channel(object):
    """A Channel is the primary communication method for interacting with
    RabbitMQ. It is recommended that you do not directly invoke the creation
    of a channel object in your application code but rather construct a
    channel by calling the active connection's channel() method.
""" CLOSED = 0 OPENING = 1 OPEN = 2 CLOSING = 3 _ON_CHANNEL_CLEANUP_CB_KEY = '_on_channel_cleanup' def __init__(self, connection, channel_number, on_open_callback=None): """Create a new instance of the Channel :param pika.connection.Connection connection: The connection :param int channel_number: The channel number for this instance :param method on_open_callback: The method to call on channel open """ if not isinstance(channel_number, int): raise exceptions.InvalidChannelNumber self.channel_number = channel_number self.callbacks = connection.callbacks self.connection = connection # The frame-handler changes depending on the type of frame processed self.frame_dispatcher = ContentFrameDispatcher() self._blocked = collections.deque(list()) self._blocking = None self._has_on_flow_callback = False self._cancelled = set() self._consumers = dict() self._consumers_with_noack = set() self._on_flowok_callback = None self._on_getok_callback = None self._on_openok_callback = on_open_callback self._pending = dict() self._state = self.CLOSED # opaque cookie value set by wrapper layer (e.g., BlockingConnection) # via _set_cookie self._cookie = None def __int__(self): """Return the channel object as its channel number :rtype: int """ return self.channel_number def add_callback(self, callback, replies, one_shot=True): """Pass in a callback handler and a list replies from the RabbitMQ broker which you'd like the callback notified of. Callbacks should allow for the frame parameter to be passed in. :param method callback: The method to call :param list replies: The replies to get a callback for :param bool one_shot: Only handle the first type callback """ for reply in replies: self.callbacks.add(self.channel_number, reply, callback, one_shot) def add_on_cancel_callback(self, callback): """Pass a callback function that will be called when the basic_cancel is sent by the server. The callback function should receive a frame parameter. 
        :param method callback: The method to call on callback

        """
        self.callbacks.add(self.channel_number, spec.Basic.Cancel, callback,
                           False)

    def add_on_close_callback(self, callback):
        """Pass a callback function that will be called when the channel is
        closed. The callback function will receive the channel, the
        reply_code (int) and the reply_text (str) sent by the server
        describing why the channel was closed.

        :param method callback: The method to call on callback

        """
        self.callbacks.add(self.channel_number, '_on_channel_close', callback,
                           False, self)

    def add_on_flow_callback(self, callback):
        """Pass a callback function that will be called when Channel.Flow is
        called by the remote server. Note that newer versions of RabbitMQ
        will not issue this but instead use TCP backpressure

        :param method callback: The method to call on callback

        """
        self._has_on_flow_callback = True
        self.callbacks.add(self.channel_number, spec.Channel.Flow, callback,
                           False)

    def add_on_return_callback(self, callback):
        """Pass a callback function that will be called when basic_publish has
        sent a message that has been rejected and returned by the server.

        :param method callback: The method to call on callback with the
                                signature callback(channel, method, properties,
                                body), where
                                channel: pika.Channel
                                method: pika.spec.Basic.Return
                                properties: pika.spec.BasicProperties
                                body: str, unicode, or bytes (python 3.x)

        """
        self.callbacks.add(self.channel_number, '_on_return', callback, False)

    def basic_ack(self, delivery_tag=0, multiple=False):
        """Acknowledge one or more messages. When sent by the client, this
        method acknowledges one or more messages delivered via the Deliver or
        Get-Ok methods. When sent by server, this method acknowledges one or
        more messages published with the Publish method on a channel in
        confirm mode. The acknowledgement can be for a single message or a
        set of messages up to and including a specific message.
        :param int delivery_tag: The server-assigned delivery tag
        :param bool multiple: If set to True, the delivery tag is treated as
                              "up to and including", so that multiple messages
                              can be acknowledged with a single method. If set
                              to False, the delivery tag refers to a single
                              message. If the multiple field is 1, and the
                              delivery tag is zero, this indicates
                              acknowledgement of all outstanding messages.

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()
        return self._send_method(spec.Basic.Ack(delivery_tag, multiple))

    def basic_cancel(self, callback=None, consumer_tag='', nowait=False):
        """This method cancels a consumer. This does not affect already
        delivered messages, but it does mean the server will not send any more
        messages for that consumer. The client may receive an arbitrary number
        of messages in between sending the cancel method and receiving the
        cancel-ok reply. It may also be sent from the server to the client in
        the event of the consumer being unexpectedly cancelled (i.e. cancelled
        for any reason other than the server receiving the corresponding
        basic.cancel from the client). This allows clients to be notified of
        the loss of consumers due to events such as queue deletion.
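        The "multiple" semantics described for basic_ack above (acknowledge
        everything up to and including a tag, versus exactly one tag) can be
        illustrated with a toy set of outstanding delivery tags. This is a
        conceptual sketch, not broker code:

        ```python
        # Toy illustration of basic_ack's "multiple" flag: with multiple=True,
        # every outstanding delivery tag up to and including the given tag is
        # acknowledged; with multiple=False, only that one tag is.
        def ack(unacked, delivery_tag, multiple=False):
            if multiple:
                return {tag for tag in unacked if tag > delivery_tag}
            return unacked - {delivery_tag}

        unacked = {1, 2, 3, 4, 5}
        assert ack(unacked, 3, multiple=True) == {4, 5}
        assert ack(unacked, 3, multiple=False) == {1, 2, 4, 5}
        ```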
        :param method callback: Method to call for a Basic.CancelOk response
        :param str consumer_tag: Identifier for the consumer
        :param bool nowait: Do not expect a Basic.CancelOk response
        :raises: ValueError

        """
        self._validate_channel_and_callback(callback)
        if consumer_tag not in self.consumer_tags:
            return
        if callback:
            if nowait is True:
                raise ValueError('Can not pass a callback if nowait is True')
            self.callbacks.add(self.channel_number, spec.Basic.CancelOk,
                               callback)
        self._cancelled.add(consumer_tag)
        self._rpc(spec.Basic.Cancel(consumer_tag=consumer_tag, nowait=nowait),
                  self._on_cancelok,
                  [(spec.Basic.CancelOk, {'consumer_tag': consumer_tag})]
                  if nowait is False else [])

    def basic_consume(self, consumer_callback,
                      queue='',
                      no_ack=False,
                      exclusive=False,
                      consumer_tag=None,
                      arguments=None):
        """Sends the AMQP command Basic.Consume to the broker and binds
        messages for the consumer_tag to the consumer callback. If you do not
        pass in a consumer_tag, one will be automatically generated for you.
        Returns the consumer tag.
        For more information on basic_consume, see:
        http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.consume

        :param method consumer_callback: The method to callback when consuming
            with the signature consumer_callback(channel, method, properties,
            body), where
            channel: pika.Channel
            method: pika.spec.Basic.Deliver
            properties: pika.spec.BasicProperties
            body: str, unicode, or bytes (python 3.x)
        :param queue: The queue to consume from
        :type queue: str or unicode
        :param bool no_ack: Tell the broker to not expect a response
        :param bool exclusive: Don't allow other consumers on the queue
        :param consumer_tag: Specify your own consumer tag
        :type consumer_tag: str or unicode
        :param dict arguments: Custom key/value pair arguments for the consume
        :rtype: str

        """
        self._validate_channel_and_callback(consumer_callback)

        # If a consumer tag was not passed, create one
        if not consumer_tag:
            consumer_tag = self._generate_consumer_tag()

        if consumer_tag in self._consumers or consumer_tag in self._cancelled:
            raise exceptions.DuplicateConsumerTag(consumer_tag)

        if no_ack:
            self._consumers_with_noack.add(consumer_tag)

        self._consumers[consumer_tag] = consumer_callback
        self._pending[consumer_tag] = list()
        self._rpc(spec.Basic.Consume(queue=queue,
                                     consumer_tag=consumer_tag,
                                     no_ack=no_ack,
                                     exclusive=exclusive,
                                     arguments=arguments or dict()),
                  self._on_eventok,
                  [(spec.Basic.ConsumeOk, {'consumer_tag': consumer_tag})])

        return consumer_tag

    def _generate_consumer_tag(self):
        """Generate a consumer tag

        NOTE: this protected method may be called by derived classes

        :returns: consumer tag
        :rtype: str

        """
        return 'ctag%i.%s' % (self.channel_number, uuid.uuid4().hex)

    def basic_get(self, callback=None, queue='', no_ack=False):
        """Get a single message from the AMQP broker. If you want to
        be notified of Basic.GetEmpty, use the Channel.add_callback method
        adding your Basic.GetEmpty callback which should expect only one
        parameter, frame.
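        The 'ctag<channel>.<hex>' format produced by _generate_consumer_tag
        above can be checked standalone; the free function below is an
        illustrative stand-in for the method:

        ```python
        import uuid

        # Mirrors the tag format of _generate_consumer_tag: the channel
        # number followed by a 32-character uuid4 hex suffix.
        def generate_consumer_tag(channel_number):
            return 'ctag%i.%s' % (channel_number, uuid.uuid4().hex)

        tag = generate_consumer_tag(1)
        assert tag.startswith('ctag1.')
        assert len(tag.split('.', 1)[1]) == 32
        ```

        Because the suffix is a uuid4, tags are effectively unique per call,
        which is what makes the DuplicateConsumerTag check meaningful only
        for caller-supplied tags.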
        For more information on basic_get and its parameters, see:
        http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.get

        :param method callback: The method to callback with a message that has
            the signature callback(channel, method, properties, body), where:
            channel: pika.Channel
            method: pika.spec.Basic.GetOk
            properties: pika.spec.BasicProperties
            body: str, unicode, or bytes (python 3.x)
        :param queue: The queue to get a message from
        :type queue: str or unicode
        :param bool no_ack: Tell the broker to not expect a reply

        """
        self._validate_channel_and_callback(callback)
        self._on_getok_callback = callback
        self._send_method(spec.Basic.Get(queue=queue, no_ack=no_ack))

    def basic_nack(self, delivery_tag=None, multiple=False, requeue=True):
        """This method allows a client to reject one or more incoming
        messages. It can be used to interrupt and cancel large incoming
        messages, or return untreatable messages to their original queue.

        :param int delivery_tag: The server-assigned delivery tag
        :param bool multiple: If set to True, the delivery tag is treated as
                              "up to and including", so that multiple messages
                              can be acknowledged with a single method. If set
                              to False, the delivery tag refers to a single
                              message. If the multiple field is 1, and the
                              delivery tag is zero, this indicates
                              acknowledgement of all outstanding messages.
        :param bool requeue: If requeue is true, the server will attempt to
                             requeue the message. If requeue is false or the
                             requeue attempt fails the messages are discarded
                             or dead-lettered.

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()
        return self._send_method(spec.Basic.Nack(delivery_tag, multiple,
                                                 requeue))

    def basic_publish(self, exchange, routing_key, body,
                      properties=None,
                      mandatory=False,
                      immediate=False):
        """Publish to the channel with the given exchange, routing key and
        body.
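        A note on message bodies: unicode text bodies are encoded to UTF-8
        bytes before framing, while byte bodies pass through unchanged. That
        normalization in isolation (an illustrative free function, not the
        method itself):

        ```python
        # -*- coding: utf-8 -*-
        # Sketch of the body normalization performed before publishing:
        # unicode text is encoded to UTF-8 bytes; bytes pass through as-is.
        def normalize_body(body):
            if isinstance(body, type(u'')):  # unicode on py2, str on py3
                return body.encode('utf-8')
            return body

        assert normalize_body(u'h\u00e9llo') == b'h\xc3\xa9llo'
        assert normalize_body(b'raw') == b'raw'
        ```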
        For more information on basic_publish and what the parameters do, see:
        http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.publish

        :param exchange: The exchange to publish to
        :type exchange: str or unicode
        :param routing_key: The routing key to bind on
        :type routing_key: str or unicode
        :param body: The message body
        :type body: str or unicode
        :param pika.spec.BasicProperties properties: Basic.properties
        :param bool mandatory: The mandatory flag
        :param bool immediate: The immediate flag

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()
        if immediate:
            LOGGER.warning('The immediate flag is deprecated in RabbitMQ')
        if isinstance(body, unicode_type):
            body = body.encode('utf-8')
        properties = properties or spec.BasicProperties()
        self._send_method(spec.Basic.Publish(exchange=exchange,
                                             routing_key=routing_key,
                                             mandatory=mandatory,
                                             immediate=immediate),
                          (properties, body))

    def basic_qos(self, callback=None,
                  prefetch_size=0,
                  prefetch_count=0,
                  all_channels=False):
        """Specify quality of service. This method requests a specific quality
        of service. The QoS can be specified for the current channel or for
        all channels on the connection. The client can request that messages
        be sent in advance so that when the client finishes processing a
        message, the following message is already held locally, rather than
        needing to be sent down the channel. Prefetching gives a performance
        improvement.

        :param method callback: The method to callback for Basic.QosOk
                                response
        :param int prefetch_size: This field specifies the prefetch window
                                  size. The server will send a message in
                                  advance if it is equal to or smaller in size
                                  than the available prefetch size (and also
                                  falls into other prefetch limits). May be
                                  set to zero, meaning "no specific limit",
                                  although other prefetch limits may still
                                  apply. The prefetch-size is ignored if the
                                  no-ack option is set.
        :param int prefetch_count: Specifies a prefetch window in terms of
                                   whole messages.
                                   This field may be used in combination with
                                   the prefetch-size field; a message will
                                   only be sent in advance if both prefetch
                                   windows (and those at the channel and
                                   connection level) allow it. The
                                   prefetch-count is ignored if the no-ack
                                   option is set.
        :param bool all_channels: Should the QoS apply to all channels

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Basic.Qos(prefetch_size, prefetch_count,
                                        all_channels),
                         callback, [spec.Basic.QosOk])

    def basic_reject(self, delivery_tag, requeue=True):
        """Reject an incoming message. This method allows a client to reject a
        message. It can be used to interrupt and cancel large incoming
        messages, or return untreatable messages to their original queue.

        :param int delivery_tag: The server-assigned delivery tag
        :param bool requeue: If requeue is true, the server will attempt to
                             requeue the message. If requeue is false or the
                             requeue attempt fails the messages are discarded
                             or dead-lettered.
        :raises: TypeError

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()
        if not isinstance(delivery_tag, int):
            raise TypeError('delivery_tag must be an integer')
        return self._send_method(spec.Basic.Reject(delivery_tag, requeue))

    def basic_recover(self, callback=None, requeue=False):
        """This method asks the server to redeliver all unacknowledged
        messages on a specified channel. Zero or more messages may be
        redelivered. This method replaces the asynchronous Recover.

        :param method callback: Method to call when receiving Basic.RecoverOk
        :param bool requeue: If False, the message will be redelivered to the
                             original recipient. If True, the server will
                             attempt to requeue the message, potentially then
                             delivering it to an alternative subscriber.

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Basic.Recover(requeue), callback,
                         [spec.Basic.RecoverOk])

    def close(self, reply_code=0, reply_text="Normal Shutdown"):
        """Will invoke a clean shutdown of the channel with the AMQP Broker.
        :param int reply_code: The reply code to close the channel with
        :param str reply_text: The reply text to close the channel with

        """
        if not self.is_open:
            raise exceptions.ChannelClosed()
        LOGGER.info('Channel.close(%s, %s)', reply_code, reply_text)
        if self._consumers:
            LOGGER.debug('Cancelling %i consumers', len(self._consumers))
            for consumer_tag in dictkeys(self._consumers):
                self.basic_cancel(consumer_tag=consumer_tag)
        self._set_state(self.CLOSING)
        self._rpc(spec.Channel.Close(reply_code, reply_text, 0, 0),
                  self._on_closeok, [spec.Channel.CloseOk])

    def confirm_delivery(self, callback=None, nowait=False):
        """Turn on Confirm mode in the channel. Pass in a callback to be
        notified by the Broker when a message has been confirmed as received
        or rejected (Basic.Ack, Basic.Nack) from the broker to the publisher.

        For more information see:
            http://www.rabbitmq.com/extensions.html#confirms

        :param method callback: The callback for delivery confirmations
        :param bool nowait: Do not send a reply frame (Confirm.SelectOk)

        """
        self._validate_channel_and_callback(callback)
        if (self.connection.publisher_confirms is False or
                self.connection.basic_nack is False):
            raise exceptions.MethodNotImplemented('Not Supported on Server')

        # Add the ack and nack callbacks
        if callback is not None:
            self.callbacks.add(self.channel_number, spec.Basic.Ack, callback,
                               False)
            self.callbacks.add(self.channel_number, spec.Basic.Nack, callback,
                               False)

        # Send the RPC command
        self._rpc(spec.Confirm.Select(nowait), self._on_selectok,
                  [spec.Confirm.SelectOk] if nowait is False else [])

    @property
    def consumer_tags(self):
        """Property method that returns a list of currently active consumers

        :rtype: list

        """
        return dictkeys(self._consumers)

    def exchange_bind(self, callback=None, destination=None, source=None,
                      routing_key='', nowait=False, arguments=None):
        """Bind an exchange to another exchange.
        :param method callback: The method to call on Exchange.BindOk
        :param destination: The destination exchange to bind
        :type destination: str or unicode
        :param source: The source exchange to bind to
        :type source: str or unicode
        :param routing_key: The routing key to bind on
        :type routing_key: str or unicode
        :param bool nowait: Do not wait for an Exchange.BindOk
        :param dict arguments: Custom key/value pair arguments for the binding

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Exchange.Bind(0, destination, source,
                                            routing_key, nowait,
                                            arguments or dict()), callback,
                         [spec.Exchange.BindOk] if nowait is False else [])

    def exchange_declare(self, callback=None, exchange=None,
                         exchange_type='direct',
                         passive=False,
                         durable=False,
                         auto_delete=False,
                         internal=False,
                         nowait=False,
                         arguments=None,
                         type=None):
        """This method creates an exchange if it does not already exist, and
        if the exchange exists, verifies that it is of the correct and
        expected class.

        If passive set, the server will reply with Declare-Ok if the exchange
        already exists with the same name, and raise an error if not and if
        the exchange does not already exist, the server MUST raise a channel
        exception with reply code 404 (not found).

        :param method callback: Call this method on Exchange.DeclareOk
        :param exchange: The exchange name consists of a non-empty sequence of
                         these characters: letters, digits, hyphen,
                         underscore, period, or colon.
        :type exchange: str or unicode
        :param str exchange_type: The exchange type to use
        :param bool passive: Perform a declare or just check to see if it
                             exists
        :param bool durable: Survive a reboot of RabbitMQ
        :param bool auto_delete: Remove when no more queues are bound to it
        :param bool internal: Can only be published to by other exchanges
        :param bool nowait: Do not expect an Exchange.DeclareOk response
        :param dict arguments: Custom key/value pair arguments for the
                               exchange
        :param str type: The deprecated exchange type parameter

        """
        self._validate_channel_and_callback(callback)
        if type is not None:
            warnings.warn('type is deprecated, use exchange_type instead',
                          DeprecationWarning)
            if exchange_type == 'direct' and type != exchange_type:
                exchange_type = type

        return self._rpc(spec.Exchange.Declare(0, exchange, exchange_type,
                                               passive, durable, auto_delete,
                                               internal, nowait,
                                               arguments or dict()), callback,
                         [spec.Exchange.DeclareOk] if nowait is False else [])

    def exchange_delete(self, callback=None, exchange=None, if_unused=False,
                        nowait=False):
        """Delete the exchange.

        :param method callback: The method to call on Exchange.DeleteOk
        :param exchange: The exchange name
        :type exchange: str or unicode
        :param bool if_unused: only delete if the exchange is unused
        :param bool nowait: Do not wait for an Exchange.DeleteOk

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Exchange.Delete(0, exchange, if_unused, nowait),
                         callback,
                         [spec.Exchange.DeleteOk] if nowait is False else [])

    def exchange_unbind(self, callback=None, destination=None, source=None,
                        routing_key='', nowait=False, arguments=None):
        """Unbind an exchange from another exchange.
        :param method callback: The method to call on Exchange.UnbindOk
        :param destination: The destination exchange to unbind
        :type destination: str or unicode
        :param source: The source exchange to unbind from
        :type source: str or unicode
        :param routing_key: The routing key to unbind
        :type routing_key: str or unicode
        :param bool nowait: Do not wait for an Exchange.UnbindOk
        :param dict arguments: Custom key/value pair arguments for the binding

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Exchange.Unbind(0, destination, source,
                                              routing_key, nowait, arguments),
                         callback,
                         [spec.Exchange.UnbindOk] if nowait is False else [])

    def flow(self, callback, active):
        """Turn Channel flow control off and on. Pass a callback to be
        notified of the response from the server. active is a bool. Callback
        should expect a bool in response indicating channel flow state. For
        more information, please reference:

        http://www.rabbitmq.com/amqp-0-9-1-reference.html#channel.flow

        :param method callback: The callback method
        :param bool active: Turn flow on or off

        """
        self._validate_channel_and_callback(callback)
        self._on_flowok_callback = callback
        self._rpc(spec.Channel.Flow(active), self._on_flowok,
                  [spec.Channel.FlowOk])

    @property
    def is_closed(self):
        """Returns True if the channel is closed.

        :rtype: bool

        """
        return self._state == self.CLOSED

    @property
    def is_closing(self):
        """Returns True if the channel is closing.

        :rtype: bool

        """
        return self._state == self.CLOSING

    @property
    def is_open(self):
        """Returns True if the channel is open.
        :rtype: bool

        """
        return self._state == self.OPEN

    def open(self):
        """Open the channel"""
        self._set_state(self.OPENING)
        self._add_callbacks()
        self._rpc(spec.Channel.Open(), self._on_openok, [spec.Channel.OpenOk])

    def queue_bind(self, callback, queue, exchange,
                   routing_key=None,
                   nowait=False,
                   arguments=None):
        """Bind the queue to the specified exchange

        :param method callback: The method to call on Queue.BindOk
        :param queue: The queue to bind to the exchange
        :type queue: str or unicode
        :param exchange: The source exchange to bind to
        :type exchange: str or unicode
        :param routing_key: The routing key to bind on
        :type routing_key: str or unicode
        :param bool nowait: Do not wait for a Queue.BindOk
        :param dict arguments: Custom key/value pair arguments for the binding

        """
        self._validate_channel_and_callback(callback)
        replies = [spec.Queue.BindOk] if nowait is False else []
        if routing_key is None:
            routing_key = queue
        return self._rpc(spec.Queue.Bind(0, queue, exchange, routing_key,
                                         nowait, arguments or dict()),
                         callback, replies)

    def queue_declare(self, callback,
                      queue='',
                      passive=False,
                      durable=False,
                      exclusive=False,
                      auto_delete=False,
                      nowait=False,
                      arguments=None):
        """Declare queue, create if needed. This method creates or checks a
        queue. When creating a new queue the client can specify various
        properties that control the durability of the queue and its contents,
        and the level of sharing for the queue.
        Leave the queue name empty for an auto-named queue in RabbitMQ

        :param method callback: The method to call on Queue.DeclareOk
        :param queue: The queue name
        :type queue: str or unicode
        :param bool passive: Only check to see if the queue exists
        :param bool durable: Survive reboots of the broker
        :param bool exclusive: Only allow access by the current connection
        :param bool auto_delete: Delete after consumer cancels or disconnects
        :param bool nowait: Do not wait for a Queue.DeclareOk
        :param dict arguments: Custom key/value arguments for the queue

        """
        if queue:
            condition = (spec.Queue.DeclareOk, {'queue': queue})
        else:
            condition = spec.Queue.DeclareOk
        replies = [condition] if nowait is False else []
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Queue.Declare(0, queue, passive, durable,
                                            exclusive, auto_delete, nowait,
                                            arguments or dict()),
                         callback, replies)

    def queue_delete(self, callback=None, queue='', if_unused=False,
                     if_empty=False, nowait=False):
        """Delete a queue from the broker.
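        The reply-condition selection used by queue_declare above (match the
        Queue.DeclareOk by queue name when one was given, accept any DeclareOk
        for server-named queues, and expect no reply at all under nowait) can
        be sketched in isolation. DeclareOk below is an illustrative stand-in
        for the spec class:

        ```python
        # Illustrative stand-in for spec.Queue.DeclareOk, showing how the
        # expected reply is chosen: a (method, {'queue': name}) pair when a
        # name was given, the bare method class for server-named queues, and
        # an empty reply list for nowait.
        class DeclareOk(object):
            pass

        def expected_replies(queue, nowait):
            condition = (DeclareOk, {'queue': queue}) if queue else DeclareOk
            return [condition] if nowait is False else []

        assert expected_replies('my-queue', False) == \
            [(DeclareOk, {'queue': 'my-queue'})]
        assert expected_replies('', False) == [DeclareOk]
        assert expected_replies('my-queue', True) == []
        ```

        Matching on the queue name keeps concurrent declares from consuming
        each other's DeclareOk replies.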
        :param method callback: The method to call on Queue.DeleteOk
        :param queue: The queue to delete
        :type queue: str or unicode
        :param bool if_unused: only delete if it's unused
        :param bool if_empty: only delete if the queue is empty
        :param bool nowait: Do not wait for a Queue.DeleteOk

        """
        replies = [spec.Queue.DeleteOk] if nowait is False else []
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Queue.Delete(0, queue, if_unused, if_empty,
                                           nowait),
                         callback, replies)

    def queue_purge(self, callback=None, queue='', nowait=False):
        """Purge all of the messages from the specified queue

        :param method callback: The method to call on Queue.PurgeOk
        :param queue: The queue to purge
        :type queue: str or unicode
        :param bool nowait: Do not expect a Queue.PurgeOk response

        """
        replies = [spec.Queue.PurgeOk] if nowait is False else []
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Queue.Purge(0, queue, nowait), callback,
                         replies)

    def queue_unbind(self, callback=None, queue='', exchange=None,
                     routing_key=None, arguments=None):
        """Unbind a queue from an exchange.

        :param method callback: The method to call on Queue.UnbindOk
        :param queue: The queue to unbind from the exchange
        :type queue: str or unicode
        :param exchange: The source exchange to bind from
        :type exchange: str or unicode
        :param routing_key: The routing key to unbind
        :type routing_key: str or unicode
        :param dict arguments: Custom key/value pair arguments for the binding

        """
        self._validate_channel_and_callback(callback)
        if routing_key is None:
            routing_key = queue
        return self._rpc(spec.Queue.Unbind(0, queue, exchange, routing_key,
                                           arguments or dict()),
                         callback, [spec.Queue.UnbindOk])

    def tx_commit(self, callback=None):
        """Commit a transaction

        :param method callback: The callback for delivery confirmations

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Tx.Commit(), callback, [spec.Tx.CommitOk])

    def tx_rollback(self, callback=None):
        """Rollback a transaction.
        :param method callback: The callback for delivery confirmations

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Tx.Rollback(), callback, [spec.Tx.RollbackOk])

    def tx_select(self, callback=None):
        """Select standard transaction mode. This method sets the channel to
        use standard transactions. The client must use this method at least
        once on a channel before using the Commit or Rollback methods.

        :param method callback: The callback for delivery confirmations

        """
        self._validate_channel_and_callback(callback)
        return self._rpc(spec.Tx.Select(), callback, [spec.Tx.SelectOk])

    # Internal methods

    def _add_callbacks(self):
        """Callbacks that add the required behavior for a channel when
        connecting and connected to a server.

        """
        # Add a callback for Basic.GetEmpty
        self.callbacks.add(self.channel_number, spec.Basic.GetEmpty,
                           self._on_getempty, False)

        # Add a callback for Basic.Cancel
        self.callbacks.add(self.channel_number, spec.Basic.Cancel,
                           self._on_cancel, False)

        # Deprecated in newer versions of RabbitMQ but still register for it
        self.callbacks.add(self.channel_number, spec.Channel.Flow,
                           self._on_flow, False)

        # Add a callback for when the server closes our channel
        self.callbacks.add(self.channel_number, spec.Channel.Close,
                           self._on_close, True)

    def _add_on_cleanup_callback(self, callback):
        """For internal use only (e.g., Connection needs to remove closed
        channels from its channel container). Pass a callback function that
        will be called when the channel is being cleaned up after all
        channel-close callbacks have run.

        :param method callback: The method to call on callback with the
                                signature: callback(channel)

        """
        self.callbacks.add(self.channel_number,
                           self._ON_CHANNEL_CLEANUP_CB_KEY, callback,
                           one_shot=True, only_caller=self)

    def _add_pending_msg(self, consumer_tag, method_frame, header_frame,
                         body):
        """Add the received message to the pending message stack.
:param str consumer_tag: The consumer tag for the message :param pika.frame.Method method_frame: The received method frame :param pika.frame.Header header_frame: The received header frame :param body: The message body :type body: str or unicode """ self._pending[consumer_tag].append((self, method_frame.method, header_frame.properties, body)) def _cleanup(self): """Remove all consumers and any callbacks for the channel.""" self.callbacks.process(self.channel_number, self._ON_CHANNEL_CLEANUP_CB_KEY, self, self) self._consumers = dict() self.callbacks.cleanup(str(self.channel_number)) self._cookie = None def _cleanup_consumer_ref(self, consumer_tag): """Remove any references to the consumer tag in internal structures for consumer state. :param str consumer_tag: The consumer tag to cleanup """ if consumer_tag in self._consumers_with_noack: self._consumers_with_noack.remove(consumer_tag) if consumer_tag in self._consumers: del self._consumers[consumer_tag] if consumer_tag in self._pending: del self._pending[consumer_tag] self._cancelled.discard(consumer_tag) def _get_cookie(self): """Used by the wrapper implementation (e.g., `BlockingChannel`) to retrieve the cookie that it set via `_set_cookie` :returns: opaque cookie value that was set via `_set_cookie` """ return self._cookie def _get_pending_msg(self, consumer_tag): """Get a pending message for the consumer tag from the stack. :param str consumer_tag: The consumer tag to get a message from :rtype: tuple(pika.frame.Header, pika.frame.Method, str|unicode) """ return self._pending[consumer_tag].pop(0) def _handle_content_frame(self, frame_value): """This is invoked by the connection when frames that are not registered with the CallbackManager have been found. This should only be the case when the frames are related to content delivery. The frame_dispatcher will be invoked which will return the fully formed message in three parts when all of the body frames have been received. 
:param pika.amqp_object.Frame frame_value: The frame to deliver """ try: response = self.frame_dispatcher.process(frame_value) except exceptions.UnexpectedFrameError: return self._unexpected_frame(frame_value) if response: if isinstance(response[0].method, spec.Basic.Deliver): self._on_deliver(*response) elif isinstance(response[0].method, spec.Basic.GetOk): self._on_getok(*response) elif isinstance(response[0].method, spec.Basic.Return): self._on_return(*response) def _has_content(self, method_frame): """Return a bool if it's a content method as defined by the spec :param pika.amqp_object.Method method_frame: The method frame received """ return spec.has_content(method_frame.INDEX) def _on_cancel(self, method_frame): """When the broker cancels a consumer, delete it from our internal dictionary. :param pika.frame.Method method_frame: The method frame received """ if method_frame.method.consumer_tag in self._cancelled: # User-initiated cancel is waiting for Cancel-ok return self._cleanup_consumer_ref(method_frame.method.consumer_tag) def _on_cancelok(self, method_frame): """Called in response to a frame from the Broker when the client sends Basic.Cancel :param pika.frame.Method method_frame: The method frame received """ self._cleanup_consumer_ref(method_frame.method.consumer_tag) def _on_close(self, method_frame): """Handle the case where our channel has been closed for us :param pika.frame.Method method_frame: The close frame """ LOGGER.info('%s', method_frame) LOGGER.warning('Received remote Channel.Close (%s): %s', method_frame.method.reply_code, method_frame.method.reply_text) if self.connection.is_open: self._send_method(spec.Channel.CloseOk()) self._set_state(self.CLOSED) self.callbacks.process(self.channel_number, '_on_channel_close', self, self, method_frame.method.reply_code, method_frame.method.reply_text) self._cleanup() def _on_closeok(self, method_frame): """Invoked when RabbitMQ replies to a Channel.Close method :param pika.frame.Method method_frame: 
The CloseOk frame """ self._set_state(self.CLOSED) self.callbacks.process(self.channel_number, '_on_channel_close', self, self, 0, '') self._cleanup() def _on_deliver(self, method_frame, header_frame, body): """Cope with reentrancy. If a particular consumer is still active when another delivery appears for it, queue the deliveries up until it finally exits. :param pika.frame.Method method_frame: The method frame received :param pika.frame.Header header_frame: The header frame received :param body: The body received :type body: str or unicode """ consumer_tag = method_frame.method.consumer_tag if consumer_tag in self._cancelled: if self.is_open and consumer_tag not in self._consumers_with_noack: self.basic_reject(method_frame.method.delivery_tag) return if consumer_tag not in self._consumers: return self._add_pending_msg(consumer_tag, method_frame, header_frame, body) while self._pending[consumer_tag]: self._consumers[consumer_tag](*self._get_pending_msg(consumer_tag)) self._consumers[consumer_tag](self, method_frame.method, header_frame.properties, body) def _on_eventok(self, method_frame): """Generic events that returned ok that may have internal callbacks. We keep a list of what we've yet to implement so that we don't silently drain events that we don't support. :param pika.frame.Method method_frame: The method frame received """ LOGGER.debug('Discarding frame %r', method_frame) def _on_flow(self, method_frame_unused): """Called if the server sends a Channel.Flow frame. 
:param pika.frame.Method method_frame_unused: The Channel.Flow frame """ if self._has_on_flow_callback is False: LOGGER.warning('Channel.Flow received from server') def _on_flowok(self, method_frame): """Called in response to us asking the server to toggle on Channel.Flow :param pika.frame.Method method_frame: The method frame received """ self.flow_active = method_frame.method.active if self._on_flowok_callback: self._on_flowok_callback(method_frame.method.active) self._on_flowok_callback = None else: LOGGER.warning('Channel.FlowOk received with no active callbacks') def _on_getempty(self, method_frame): """When we receive an empty reply do nothing but log it :param pika.frame.Method method_frame: The method frame received """ LOGGER.debug('Received Basic.GetEmpty: %r', method_frame) def _on_getok(self, method_frame, header_frame, body): """Called in reply to a Basic.Get when there is a message. :param pika.frame.Method method_frame: The method frame received :param pika.frame.Header header_frame: The header frame received :param body: The body received :type body: str or unicode """ if self._on_getok_callback is not None: callback = self._on_getok_callback self._on_getok_callback = None callback(self, method_frame.method, header_frame.properties, body) else: LOGGER.error('Basic.GetOk received with no active callback') def _on_openok(self, frame_unused): """Called by our callback handler when we receive a Channel.OpenOk and subsequently calls our _on_openok_callback which was passed into the Channel constructor. The reason we do this is because we want to make sure that the on_open_callback parameter passed into the Channel constructor is not the first callback we make. :param pika.frame.Method frame_unused: Unused Channel.OpenOk frame """ self._set_state(self.OPEN) if self._on_openok_callback is not None: self._on_openok_callback(self) def _on_return(self, method_frame, header_frame, body): """Called if the server sends a Basic.Return frame. 
:param pika.frame.Method method_frame: The Basic.Return frame :param pika.frame.Header header_frame: The content header frame :param body: The message body :type body: str or unicode """ if not self.callbacks.process(self.channel_number, '_on_return', self, self, method_frame.method, header_frame.properties, body): LOGGER.warning('Basic.Return received from server (%r, %r)', method_frame.method, header_frame.properties) def _on_selectok(self, method_frame): """Called when the broker sends a Confirm.SelectOk frame :param pika.frame.Method method_frame: The method frame received """ LOGGER.debug("Confirm.SelectOk Received: %r", method_frame) def _on_synchronous_complete(self, method_frame_unused): """This is called when a synchronous command is completed. It will undo the blocking state and send all the frames that stacked up while we were in the blocking state. :param pika.frame.Method method_frame_unused: The method frame received """ LOGGER.debug('%i blocked frames', len(self._blocked)) self._blocking = None while len(self._blocked) > 0 and self._blocking is None: self._rpc(*self._blocked.popleft()) def _rpc(self, method_frame, callback=None, acceptable_replies=None): """Shortcut wrapper to the Connection's rpc command using its callback stack, passing in our channel number. 
:param pika.amqp_object.Method method_frame: The method frame to call :param method callback: The callback for the RPC response :param list acceptable_replies: The replies this RPC call expects """ # Make sure the channel is open if self.is_closed: raise exceptions.ChannelClosed # If the channel is blocking, add subsequent commands to our stack if self._blocking: return self._blocked.append([method_frame, callback, acceptable_replies]) # Validate we got None or a list of acceptable_replies if acceptable_replies and not isinstance(acceptable_replies, list): raise TypeError("acceptable_replies should be list or None") # Validate the callback is callable if callback and not is_callable(callback): raise TypeError("callback should be None, a function or method.") # Block until a response frame is received for synchronous frames if method_frame.synchronous: self._blocking = method_frame.NAME # If acceptable replies are set, add callbacks if acceptable_replies: for reply in acceptable_replies or list(): if isinstance(reply, tuple): reply, arguments = reply else: arguments = None LOGGER.debug('Adding in on_synchronous_complete callback') self.callbacks.add(self.channel_number, reply, self._on_synchronous_complete, arguments=arguments) if callback: LOGGER.debug('Adding passed in callback') self.callbacks.add(self.channel_number, reply, callback, arguments=arguments) self._send_method(method_frame) def _send_method(self, method_frame, content=None): """Shortcut wrapper to send a method through our connection, passing in the channel number :param pika.object.Method method_frame: The method frame to send :param tuple content: If set, is a content frame, is tuple of properties and body. """ self.connection._send_method(self.channel_number, method_frame, content) def _set_cookie(self, cookie): """Used by wrapper layer (e.g., `BlockingConnection`) to link the channel implementation back to the proxy. See `_get_cookie`. 
:param cookie: an opaque value; typically a proxy channel implementation instance (e.g., `BlockingChannel` instance) """ self._cookie = cookie def _set_state(self, connection_state): """Set the channel connection state to the specified state value. :param int connection_state: The connection_state value """ self._state = connection_state def _unexpected_frame(self, frame_value): """Invoked when a frame is received that is not setup to be processed. :param pika.frame.Frame frame_value: The frame received """ LOGGER.warning('Unexpected frame: %r', frame_value) def _validate_channel_and_callback(self, callback): if not self.is_open: raise exceptions.ChannelClosed() if callback is not None and not is_callable(callback): raise ValueError('callback must be a function or method') class ContentFrameDispatcher(object): """Handle content related frames, building a message and return the message back in three parts upon receipt. """ def __init__(self): """Create a new instance of the Dispatcher passing in the callback manager. """ self._method_frame = None self._header_frame = None self._seen_so_far = 0 self._body_fragments = list() def process(self, frame_value): """Invoked by the Channel object when passed frames that are not setup in the rpc process and that don't have explicit reply types defined. 
        This includes Basic.Publish, Basic.GetOk and Basic.Return

        :param Method|Header|Body frame_value: The frame to process

        """
        if (isinstance(frame_value, frame.Method) and
                spec.has_content(frame_value.method.INDEX)):
            self._method_frame = frame_value
        elif isinstance(frame_value, frame.Header):
            self._header_frame = frame_value
            if frame_value.body_size == 0:
                return self._finish()
        elif isinstance(frame_value, frame.Body):
            return self._handle_body_frame(frame_value)
        else:
            raise exceptions.UnexpectedFrameError(frame_value)

    def _finish(self):
        """Invoked when all of the message has been received

        :rtype: tuple(pika.frame.Method, pika.frame.Header, str)

        """
        content = (self._method_frame, self._header_frame,
                   b''.join(self._body_fragments))
        self._reset()
        return content

    def _handle_body_frame(self, body_frame):
        """Receive body frames and append them to the stack. When the body
        size matches, call the finish method.

        :param Body body_frame: The body frame
        :raises: pika.exceptions.BodyTooLongError
        :rtype: tuple(pika.frame.Method, pika.frame.Header, str)|None

        """
        self._seen_so_far += len(body_frame.fragment)
        self._body_fragments.append(body_frame.fragment)
        if self._seen_so_far == self._header_frame.body_size:
            return self._finish()
        elif self._seen_so_far > self._header_frame.body_size:
            raise exceptions.BodyTooLongError(self._seen_so_far,
                                              self._header_frame.body_size)
        return None

    def _reset(self):
        """Reset the values for processing frames"""
        self._method_frame = None
        self._header_frame = None
        self._seen_so_far = 0
        self._body_fragments = list()

pika-0.10.0/pika/compat.py

import sys as _sys

PY2 = _sys.version_info < (3,)
PY3 = not PY2

if not PY2:
    # these were moved around for Python 3
    from urllib.parse import unquote as url_unquote, urlencode

    # Python 3 does not have basestring anymore; we include
    # *only* the str here as this is used for textual data.
basestring = (str,) # for assertions that the data is either encoded or non-encoded text str_or_bytes = (str, bytes) # xrange is gone, replace it with range xrange = range # the unicode type is str unicode_type = str def dictkeys(dct): """ Returns a list of keys of dictionary dict.keys returns a view that works like .keys in Python 2 *except* any modifications in the dictionary will be visible (and will cause errors if the view is being iterated over while it is modified). """ return list(dct.keys()) def dictvalues(dct): """ Returns a list of values of a dictionary dict.values returns a view that works like .values in Python 2 *except* any modifications in the dictionary will be visible (and will cause errors if the view is being iterated over while it is modified). """ return list(dct.values()) def byte(*args): """ This is the same as Python 2 `chr(n)` for bytes in Python 3 Returns a single byte `bytes` for the given int argument (we optimize it a bit here by passing the positional argument tuple directly to the bytes constructor. """ return bytes(args) class long(int): """ A marker class that signifies that the integer value should be serialized as `l` instead of `I` """ def __repr__(self): return str(self) + 'L' def canonical_str(value): """ Return the canonical str value for the string. In both Python 3 and Python 2 this is str. """ return str(value) else: from urllib import unquote as url_unquote, urlencode basestring = basestring str_or_bytes = basestring xrange = xrange unicode_type = unicode dictkeys = dict.keys dictvalues = dict.values byte = chr long = long def canonical_str(value): """ Returns the canonical string value of the given string. In Python 2 this is the value unchanged if it is an str, otherwise it is the unicode value encoded as UTF-8. 
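    The shims in this branch give Python 3 the list-returning
    ``dictkeys``/``dictvalues`` and the ``chr``-like ``byte`` helper that the
    Python 2 code path gets for free. A standalone restatement of their
    Python 3 semantics (this mirrors, rather than imports, the module's own
    definitions, and assumes Python 3):

```python
# Python 3 semantics provided by the compat shims (standalone restatement)
def byte(*args):
    return bytes(args)          # chr()-like: ints -> a bytes object

def dictkeys(dct):
    return list(dct.keys())     # snapshot list, unlike the live dict view


assert byte(65, 66) == b'AB'
d = {'a': 1}
keys = dictkeys(d)
d['b'] = 2                      # mutating d does not affect the snapshot
assert keys == ['a']
```

    The snapshot behavior is why pika uses these helpers: iterating a live
    ``dict.keys()`` view while the dictionary is mutated raises a
    ``RuntimeError`` in Python 3.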
""" try: return str(value) except UnicodeEncodeError: return str(value.encode('utf-8')) def as_bytes(value): if not isinstance(value, bytes): return value.encode('UTF-8') return value pika-0.10.0/pika/connection.py000066400000000000000000001666041257163076400162010ustar00rootroot00000000000000"""Core connection objects""" import ast import sys import collections import logging import math import platform import threading import urllib import warnings if sys.version_info > (3,): import urllib.parse as urlparse else: import urlparse from pika import __version__ from pika import callback from pika import channel from pika import credentials as pika_credentials from pika import exceptions from pika import frame from pika import heartbeat from pika import utils from pika import spec from pika.compat import basestring, url_unquote, dictkeys BACKPRESSURE_WARNING = ("Pika: Write buffer exceeded warning threshold at " "%i bytes and an estimated %i frames behind") PRODUCT = "Pika Python Client Library" LOGGER = logging.getLogger(__name__) class Parameters(object): """Base connection parameters class definition :param str DEFAULT_HOST: 'localhost' :param int DEFAULT_PORT: 5672 :param str DEFAULT_VIRTUAL_HOST: '/' :param str DEFAULT_USERNAME: 'guest' :param str DEFAULT_PASSWORD: 'guest' :param int DEFAULT_HEARTBEAT_INTERVAL: None :param int DEFAULT_CHANNEL_MAX: 0 :param int DEFAULT_FRAME_MAX: pika.spec.FRAME_MAX_SIZE :param str DEFAULT_LOCALE: 'en_US' :param int DEFAULT_CONNECTION_ATTEMPTS: 1 :param int|float DEFAULT_RETRY_DELAY: 2.0 :param int|float DEFAULT_SOCKET_TIMEOUT: 0.25 :param bool DEFAULT_SSL: False :param dict DEFAULT_SSL_OPTIONS: {} :param int DEFAULT_SSL_PORT: 5671 :param bool DEFAULT_BACKPRESSURE_DETECTION: False """ DEFAULT_BACKPRESSURE_DETECTION = False DEFAULT_CONNECTION_ATTEMPTS = 1 DEFAULT_CHANNEL_MAX = 0 DEFAULT_FRAME_MAX = spec.FRAME_MAX_SIZE DEFAULT_HEARTBEAT_INTERVAL = None # accept server's proposal DEFAULT_HOST = 'localhost' DEFAULT_LOCALE = 'en_US' 
DEFAULT_PASSWORD = 'guest' DEFAULT_PORT = 5672 DEFAULT_RETRY_DELAY = 2.0 DEFAULT_SOCKET_TIMEOUT = 0.25 DEFAULT_SSL = False DEFAULT_SSL_OPTIONS = {} DEFAULT_SSL_PORT = 5671 DEFAULT_USERNAME = 'guest' DEFAULT_VIRTUAL_HOST = '/' def __init__(self): self.virtual_host = self.DEFAULT_VIRTUAL_HOST self.backpressure_detection = self.DEFAULT_BACKPRESSURE_DETECTION self.channel_max = self.DEFAULT_CHANNEL_MAX self.connection_attempts = self.DEFAULT_CONNECTION_ATTEMPTS self.credentials = self._credentials(self.DEFAULT_USERNAME, self.DEFAULT_PASSWORD) self.frame_max = self.DEFAULT_FRAME_MAX self.heartbeat = self.DEFAULT_HEARTBEAT_INTERVAL self.host = self.DEFAULT_HOST self.locale = self.DEFAULT_LOCALE self.port = self.DEFAULT_PORT self.retry_delay = self.DEFAULT_RETRY_DELAY self.ssl = self.DEFAULT_SSL self.ssl_options = self.DEFAULT_SSL_OPTIONS self.socket_timeout = self.DEFAULT_SOCKET_TIMEOUT def __repr__(self): """Represent the info about the instance. :rtype: str """ return ('<%s host=%s port=%s virtual_host=%s ssl=%s>' % (self.__class__.__name__, self.host, self.port, self.virtual_host, self.ssl)) def _credentials(self, username, password): """Return a plain credentials object for the specified username and password. :param str username: The username to use :param str password: The password to use :rtype: pika_credentials.PlainCredentials """ return pika_credentials.PlainCredentials(username, password) def _validate_backpressure(self, backpressure_detection): """Validate that the backpressure detection option is a bool. 
:param bool backpressure_detection: The backpressure detection value :rtype: bool :raises: TypeError """ if not isinstance(backpressure_detection, bool): raise TypeError('backpressure detection must be a bool') return True def _validate_channel_max(self, channel_max): """Validate that the channel_max value is an int :param int channel_max: The value to validate :rtype: bool :raises: TypeError :raises: ValueError """ if not isinstance(channel_max, int): raise TypeError('channel_max must be an int') if channel_max < 1 or channel_max > 65535: raise ValueError('channel_max must be <= 65535 and > 0') return True def _validate_connection_attempts(self, connection_attempts): """Validate that the connection_attempts value is an int :param int connection_attempts: The value to validate :rtype: bool :raises: TypeError :raises: ValueError """ if not isinstance(connection_attempts, int): raise TypeError('connection_attempts must be an int') if connection_attempts < 1: raise ValueError('connection_attempts must be None or > 0') return True def _validate_credentials(self, credentials): """Validate the credentials passed in are using a valid object type. :param pika.credentials.Credentials credentials: Credentials to validate :rtype: bool :raises: TypeError """ for credential_type in pika_credentials.VALID_TYPES: if isinstance(credentials, credential_type): return True raise TypeError('Credentials must be an object of type: %r' % pika_credentials.VALID_TYPES) def _validate_frame_max(self, frame_max): """Validate that the frame_max value is an int and does not exceed the maximum frame size and is not less than the frame min size. 
:param int frame_max: The value to validate :rtype: bool :raises: TypeError :raises: InvalidMinimumFrameSize """ if not isinstance(frame_max, int): raise TypeError('frame_max must be an int') if frame_max < spec.FRAME_MIN_SIZE: raise exceptions.InvalidMinimumFrameSize elif frame_max > spec.FRAME_MAX_SIZE: raise exceptions.InvalidMaximumFrameSize return True def _validate_heartbeat_interval(self, heartbeat_interval): """Validate that the heartbeat_interval value is an int :param int heartbeat_interval: The value to validate :rtype: bool :raises: TypeError :raises: ValueError """ if not isinstance(heartbeat_interval, int): raise TypeError('heartbeat must be an int') if heartbeat_interval < 0: raise ValueError('heartbeat_interval must >= 0') return True def _validate_host(self, host): """Validate that the host value is an str :param str|unicode host: The value to validate :rtype: bool :raises: TypeError """ if not isinstance(host, basestring): raise TypeError('host must be a str or unicode str') return True def _validate_locale(self, locale): """Validate that the locale value is an str :param str locale: The value to validate :rtype: bool :raises: TypeError """ if not isinstance(locale, basestring): raise TypeError('locale must be a str') return True def _validate_port(self, port): """Validate that the port value is an int :param int port: The value to validate :rtype: bool :raises: TypeError """ if not isinstance(port, int): raise TypeError('port must be an int') return True def _validate_retry_delay(self, retry_delay): """Validate that the retry_delay value is an int or float :param int|float retry_delay: The value to validate :rtype: bool :raises: TypeError """ if not any([isinstance(retry_delay, int), isinstance(retry_delay, float)]): raise TypeError('retry_delay must be a float or int') return True def _validate_socket_timeout(self, socket_timeout): """Validate that the socket_timeout value is an int or float :param int|float socket_timeout: The value to validate 
:rtype: bool :raises: TypeError """ if not any([isinstance(socket_timeout, int), isinstance(socket_timeout, float)]): raise TypeError('socket_timeout must be a float or int') if not socket_timeout > 0: raise ValueError('socket_timeout must be > 0') return True def _validate_ssl(self, ssl): """Validate the SSL toggle is a bool :param bool ssl: The SSL enabled/disabled value :rtype: bool :raises: TypeError """ if not isinstance(ssl, bool): raise TypeError('ssl must be a bool') return True def _validate_ssl_options(self, ssl_options): """Validate the SSL options value is a dictionary. :param dict|None ssl_options: SSL Options to validate :rtype: bool :raises: TypeError """ if not isinstance(ssl_options, dict) and ssl_options is not None: raise TypeError('ssl_options must be either None or dict') return True def _validate_virtual_host(self, virtual_host): """Validate that the virtual_host value is an str :param str virtual_host: The value to validate :rtype: bool :raises: TypeError """ if not isinstance(virtual_host, basestring): raise TypeError('virtual_host must be a str') return True class ConnectionParameters(Parameters): """Connection parameters object that is passed into the connection adapter upon construction. 
:param str host: Hostname or IP Address to connect to :param int port: TCP port to connect to :param str virtual_host: RabbitMQ virtual host to use :param pika.credentials.Credentials credentials: auth credentials :param int channel_max: Maximum number of channels to allow :param int frame_max: The maximum byte size for an AMQP frame :param int heartbeat_interval: How often to send heartbeats :param bool ssl: Enable SSL :param dict ssl_options: Arguments passed to ssl.wrap_socket as :param int connection_attempts: Maximum number of retry attempts :param int|float retry_delay: Time to wait in seconds, before the next :param int|float socket_timeout: Use for high latency networks :param str locale: Set the locale value :param bool backpressure_detection: Toggle backpressure detection """ def __init__(self, host=None, port=None, virtual_host=None, credentials=None, channel_max=None, frame_max=None, heartbeat_interval=None, ssl=None, ssl_options=None, connection_attempts=None, retry_delay=None, socket_timeout=None, locale=None, backpressure_detection=None): """Create a new ConnectionParameters instance. :param str host: Hostname or IP Address to connect to :param int port: TCP port to connect to :param str virtual_host: RabbitMQ virtual host to use :param pika.credentials.Credentials credentials: auth credentials :param int channel_max: Maximum number of channels to allow :param int frame_max: The maximum byte size for an AMQP frame :param int heartbeat_interval: How often to send heartbeats. Min between this value and server's proposal will be used. Use 0 to deactivate heartbeats and None to accept server's proposal. 
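        The heartbeat rule described above (take the minimum of the client
        value and the server's proposal, ``0`` disables heartbeats, ``None``
        accepts the server's proposal) can be sketched as a standalone
        helper; ``negotiate_heartbeat`` is a hypothetical name for
        illustration, not a pika function:

```python
# Hypothetical helper illustrating heartbeat negotiation (not pika's code)
def negotiate_heartbeat(client_value, server_proposal):
    if client_value is None:       # accept whatever the server proposes
        return server_proposal
    if client_value == 0:          # client explicitly disables heartbeats
        return 0
    return min(client_value, server_proposal)


assert negotiate_heartbeat(None, 580) == 580
assert negotiate_heartbeat(0, 580) == 0
assert negotiate_heartbeat(30, 580) == 30
```
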
:param bool ssl: Enable SSL :param dict ssl_options: Arguments passed to ssl.wrap_socket :param int connection_attempts: Maximum number of retry attempts :param int|float retry_delay: Time to wait in seconds, before the next :param int|float socket_timeout: Use for high latency networks :param str locale: Set the locale value :param bool backpressure_detection: Toggle backpressure detection """ super(ConnectionParameters, self).__init__() # Create the default credentials object if not credentials: credentials = self._credentials(self.DEFAULT_USERNAME, self.DEFAULT_PASSWORD) # Assign the values if host and self._validate_host(host): self.host = host if port is not None and self._validate_port(port): self.port = port if virtual_host and self._validate_virtual_host(virtual_host): self.virtual_host = virtual_host if credentials and self._validate_credentials(credentials): self.credentials = credentials if channel_max is not None and self._validate_channel_max(channel_max): self.channel_max = channel_max if frame_max is not None and self._validate_frame_max(frame_max): self.frame_max = frame_max if locale and self._validate_locale(locale): self.locale = locale if (heartbeat_interval is not None and self._validate_heartbeat_interval(heartbeat_interval)): self.heartbeat = heartbeat_interval if ssl is not None and self._validate_ssl(ssl): self.ssl = ssl if ssl_options and self._validate_ssl_options(ssl_options): self.ssl_options = ssl_options or dict() if (connection_attempts is not None and self._validate_connection_attempts(connection_attempts)): self.connection_attempts = connection_attempts if retry_delay is not None and self._validate_retry_delay(retry_delay): self.retry_delay = retry_delay if (socket_timeout is not None and self._validate_socket_timeout(socket_timeout)): self.socket_timeout = socket_timeout if (backpressure_detection is not None and self._validate_backpressure(backpressure_detection)): self.backpressure_detection = backpressure_detection class 
URLParameters(Parameters): """Connect to RabbitMQ via an AMQP URL in the format:: amqp://username:password@host:port/[?query-string] Ensure that the virtual host is URI encoded when specified. For example if you are using the default "/" virtual host, the value should be `%2f`. Valid query string values are: - backpressure_detection: Toggle backpressure detection, possible values are `t` or `f` - channel_max: Override the default maximum channel count value - connection_attempts: Specify how many times pika should try and reconnect before it gives up - frame_max: Override the default maximum frame size for communication - heartbeat_interval: Specify the number of seconds between heartbeat frames to ensure that the link between RabbitMQ and your application is up - locale: Override the default `en_US` locale value - ssl: Toggle SSL, possible values are `t`, `f` - ssl_options: Arguments passed to :meth:`ssl.wrap_socket` - retry_delay: The number of seconds to sleep before attempting to connect on connection failure. - socket_timeout: Override low level socket timeout value :param str url: The AMQP URL to connect to """ def __init__(self, url): """Create a new URLParameters instance. :param str url: The URL value """ super(URLParameters, self).__init__() self._process_url(url) def _process_url(self, url): """Take an AMQP URL and break it up into the various parameters. 
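        A minimal standard-library sketch of the parsing this method
        performs: swap the ``amqp`` scheme for ``http`` so ``urlparse`` will
        split the URL, URL-decode the virtual-host path segment, and flatten
        the query string (this approximates, but is not, pika's exact code
        path):

```python
# Sketch of AMQP URL parsing using only the standard library
from urllib.parse import urlparse, parse_qs, unquote

url = 'amqp://guest:guest@localhost:5672/%2f?heartbeat_interval=30'
parts = urlparse('http' + url[4:])   # swap scheme so urlparse handles it

assert parts.hostname == 'localhost'
assert parts.port == 5672
assert unquote(parts.path.split('/')[1]) == '/'   # decoded virtual host

# parse_qs yields lists; take the first value per key, as pika does
query = {k: v[0] for k, v in parse_qs(parts.query).items()}
assert query['heartbeat_interval'] == '30'
```
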
:param str url: The URL to parse """ if url[0:4] == 'amqp': url = 'http' + url[4:] parts = urlparse.urlparse(url) # Handle the Protocol scheme, changing to HTTPS so urlparse doesnt barf if parts.scheme == 'https': self.ssl = True if self._validate_host(parts.hostname): self.host = parts.hostname if not parts.port: if self.ssl: self.port = self.DEFAULT_SSL_PORT if \ self.ssl else self.DEFAULT_PORT elif self._validate_port(parts.port): self.port = parts.port if parts.username is not None: self.credentials = pika_credentials.PlainCredentials(parts.username, parts.password) # Get the Virtual Host if len(parts.path) <= 1: self.virtual_host = self.DEFAULT_VIRTUAL_HOST else: path_parts = parts.path.split('/') virtual_host = url_unquote(path_parts[1]) if self._validate_virtual_host(virtual_host): self.virtual_host = virtual_host # Handle query string values, validating and assigning them values = urlparse.parse_qs(parts.query) # Cast the various numeric values to the appropriate values for key in dictkeys(values): # Always reassign the first list item in query values values[key] = values[key].pop(0) if values[key].isdigit(): values[key] = int(values[key]) else: try: values[key] = float(values[key]) except ValueError: pass if 'backpressure_detection' in values: if values['backpressure_detection'] == 't': self.backpressure_detection = True elif values['backpressure_detection'] == 'f': self.backpressure_detection = False else: raise ValueError('Invalid backpressure_detection value: %s' % values['backpressure_detection']) if ('channel_max' in values and self._validate_channel_max(values['channel_max'])): self.channel_max = values['channel_max'] if ('connection_attempts' in values and self._validate_connection_attempts(values['connection_attempts'])): self.connection_attempts = values['connection_attempts'] if ('frame_max' in values and self._validate_frame_max(values['frame_max'])): self.frame_max = values['frame_max'] if ('heartbeat_interval' in values and 
self._validate_heartbeat_interval(values['heartbeat_interval'])): self.heartbeat = values['heartbeat_interval'] if ('locale' in values and self._validate_locale(values['locale'])): self.locale = values['locale'] if ('retry_delay' in values and self._validate_retry_delay(values['retry_delay'])): self.retry_delay = values['retry_delay'] if ('socket_timeout' in values and self._validate_socket_timeout(values['socket_timeout'])): self.socket_timeout = values['socket_timeout'] if 'ssl_options' in values: options = ast.literal_eval(values['ssl_options']) if self._validate_ssl_options(options): self.ssl_options = options class Connection(object): """This is the core class that implements communication with RabbitMQ. This class should not be invoked directly but rather through the use of an adapter such as SelectConnection or BlockingConnection. :param pika.connection.Parameters parameters: Connection parameters :param method on_open_callback: Called when the connection is opened :param method on_open_error_callback: Called if the connection cant be opened :param method on_close_callback: Called when the connection is closed """ ON_CONNECTION_BACKPRESSURE = '_on_connection_backpressure' ON_CONNECTION_BLOCKED = '_on_connection_blocked' ON_CONNECTION_CLOSED = '_on_connection_closed' ON_CONNECTION_ERROR = '_on_connection_error' ON_CONNECTION_OPEN = '_on_connection_open' ON_CONNECTION_UNBLOCKED = '_on_connection_unblocked' CONNECTION_CLOSED = 0 CONNECTION_INIT = 1 CONNECTION_PROTOCOL = 2 CONNECTION_START = 3 CONNECTION_TUNE = 4 CONNECTION_OPEN = 5 CONNECTION_CLOSING = 6 def __init__(self, parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None): """Connection initialization expects an object that has implemented the Parameters class and a callback function to notify when we have successfully connected to the AMQP Broker. Available Parameters classes are the ConnectionParameters class and URLParameters class. 
:param pika.connection.Parameters parameters: Connection parameters :param method on_open_callback: Called when the connection is opened :param method on_open_error_callback: Called if the connection cant be opened :param method on_close_callback: Called when the connection is closed """ self._write_lock = threading.Lock() # Define our callback dictionary self.callbacks = callback.CallbackManager() # Add the on connection error callback self.callbacks.add(0, self.ON_CONNECTION_ERROR, on_open_error_callback or self._on_connection_error, False) self.heartbeat = None # On connection callback if on_open_callback: self.add_on_open_callback(on_open_callback) # On connection callback if on_close_callback: self.add_on_close_callback(on_close_callback) # Set our configuration options self.params = parameters or ConnectionParameters() # Initialize the connection state and connect self._init_connection_state() self.connect() def add_backpressure_callback(self, callback_method): """Call method "callback" when pika believes backpressure is being applied. :param method callback_method: The method to call """ self.callbacks.add(0, self.ON_CONNECTION_BACKPRESSURE, callback_method, False) def add_on_close_callback(self, callback_method): """Add a callback notification when the connection has closed. The callback will be passed the connection, the reply_code (int) and the reply_text (str), if sent by the remote server. :param method callback_method: Callback to call on close """ self.callbacks.add(0, self.ON_CONNECTION_CLOSED, callback_method, False) def add_on_connection_blocked_callback(self, callback_method): """Add a callback to be notified when RabbitMQ has sent a ``Connection.Blocked`` frame indicating that RabbitMQ is low on resources. Publishers can use this to voluntarily suspend publishing, instead of relying on back pressure throttling. The callback will be passed the ``Connection.Blocked`` method frame. 
:param method callback_method: Callback to call on `Connection.Blocked` """ self.callbacks.add(0, spec.Connection.Blocked, callback_method, False) def add_on_connection_unblocked_callback(self, callback_method): """Add a callback to be notified when RabbitMQ has sent a ``Connection.Unblocked`` frame letting publishers know it's ok to start publishing again. The callback will be passed the ``Connection.Unblocked`` method frame. :param method callback_method: Callback to call on `Connection.Unblocked` """ self.callbacks.add(0, spec.Connection.Unblocked, callback_method, False) def add_on_open_callback(self, callback_method): """Add a callback notification when the connection has opened. :param method callback_method: Callback to call when open """ self.callbacks.add(0, self.ON_CONNECTION_OPEN, callback_method, False) def add_on_open_error_callback(self, callback_method, remove_default=True): """Add a callback notification when the connection can not be opened. The callback method should accept the connection object that could not connect, and an optional error message. :param method callback_method: Callback to call when can't connect :param bool remove_default: Remove default exception raising callback """ if remove_default: self.callbacks.remove(0, self.ON_CONNECTION_ERROR, self._on_connection_error) self.callbacks.add(0, self.ON_CONNECTION_ERROR, callback_method, False) def add_timeout(self, deadline, callback_method): """Adapters should override to call the callback after the specified number of seconds have elapsed, using a timer, or a thread, or similar. :param int deadline: The number of seconds to wait to call callback :param method callback_method: The callback method """ raise NotImplementedError def channel(self, on_open_callback, channel_number=None): """Create a new channel with the next available channel number or pass in a channel number to use. 
Must be non-zero if you would like to specify but it is recommended that you let Pika manage the channel numbers. :param method on_open_callback: The callback when the channel is opened :param int channel_number: The channel number to use, defaults to the next available. :rtype: pika.channel.Channel """ if not channel_number: channel_number = self._next_channel_number() self._channels[channel_number] = self._create_channel(channel_number, on_open_callback) self._add_channel_callbacks(channel_number) self._channels[channel_number].open() return self._channels[channel_number] def close(self, reply_code=200, reply_text='Normal shutdown'): """Disconnect from RabbitMQ. If there are any open channels, it will attempt to close them prior to fully disconnecting. Channels which have active consumers will attempt to send a Basic.Cancel to RabbitMQ to cleanly stop the delivery of messages prior to closing the channel. :param int reply_code: The code number for the close :param str reply_text: The text reason for the close """ if self.is_closing or self.is_closed: return if self._has_open_channels: self._close_channels(reply_code, reply_text) # Set our connection state self._set_connection_state(self.CONNECTION_CLOSING) LOGGER.info("Closing connection (%s): %s", reply_code, reply_text) self.closing = reply_code, reply_text if not self._has_open_channels: # if there are open channels then _on_close_ready will finally be # called in _on_channel_cleanup once all channels have been closed self._on_close_ready() def connect(self): """Invoke if trying to reconnect to a RabbitMQ server. Constructing the Connection object should connect on its own. 
""" self._set_connection_state(self.CONNECTION_INIT) error = self._adapter_connect() if not error: return self._on_connected() self.remaining_connection_attempts -= 1 LOGGER.warning('Could not connect, %i attempts left', self.remaining_connection_attempts) if self.remaining_connection_attempts: LOGGER.info('Retrying in %i seconds', self.params.retry_delay) self.add_timeout(self.params.retry_delay, self.connect) else: self.callbacks.process(0, self.ON_CONNECTION_ERROR, self, self, error) self.remaining_connection_attempts = self.params.connection_attempts self._set_connection_state(self.CONNECTION_CLOSED) def remove_timeout(self, callback_method): """Adapters should override to call the callback after the specified number of seconds have elapsed, using a timer, or a thread, or similar. :param method callback_method: The callback to remove a timeout for """ raise NotImplementedError def set_backpressure_multiplier(self, value=10): """Alter the backpressure multiplier value. We set this to 10 by default. This value is used to raise warnings and trigger the backpressure callback. :param int value: The multiplier value to set """ self._backpressure = value # # Connections state properties # @property def is_closed(self): """ Returns a boolean reporting the current connection state. """ return self.connection_state == self.CONNECTION_CLOSED @property def is_closing(self): """ Returns a boolean reporting the current connection state. """ return self.connection_state == self.CONNECTION_CLOSING @property def is_open(self): """ Returns a boolean reporting the current connection state. """ return self.connection_state == self.CONNECTION_OPEN # # Properties that reflect server capabilities for the current connection # @property def basic_nack(self): """Specifies if the server supports basic.nack on the active connection. 
:rtype: bool """ return self.server_capabilities.get('basic.nack', False) @property def consumer_cancel_notify(self): """Specifies if the server supports consumer cancel notification on the active connection. :rtype: bool """ return self.server_capabilities.get('consumer_cancel_notify', False) @property def exchange_exchange_bindings(self): """Specifies if the active connection supports exchange to exchange bindings. :rtype: bool """ return self.server_capabilities.get('exchange_exchange_bindings', False) @property def publisher_confirms(self): """Specifies if the active connection can use publisher confirmations. :rtype: bool """ return self.server_capabilities.get('publisher_confirms', False) # # Internal methods for managing the communication process # def _adapter_connect(self): """Subclasses should override to set up the outbound socket connection. :raises: NotImplementedError """ raise NotImplementedError def _adapter_disconnect(self): """Subclasses should override this to cause the underlying transport (socket) to close. :raises: NotImplementedError """ raise NotImplementedError def _add_channel_callbacks(self, channel_number): """Add the appropriate callbacks for the specified channel number. :param int channel_number: The channel number for the callbacks """ # This permits us to garbage-collect our reference to the channel # regardless of whether it was closed by client or broker, and do so # after all channel-close callbacks. self._channels[channel_number]._add_on_cleanup_callback( self._on_channel_cleanup) def _add_connection_start_callback(self): """Add a callback for when a Connection.Start frame is received from the broker. """ self.callbacks.add(0, spec.Connection.Start, self._on_connection_start) def _add_connection_tune_callback(self): """Add a callback for when a Connection.Tune frame is received.""" self.callbacks.add(0, spec.Connection.Tune, self._on_connection_tune) def _append_frame_buffer(self, value): """Append the bytes to the frame buffer. 
:param str value: The bytes to append to the frame buffer """ self._frame_buffer += value @property def _buffer_size(self): """Return the suggested buffer size from the connection state/tune or the default if that is None. :rtype: int """ return self.params.frame_max or spec.FRAME_MAX_SIZE def _check_for_protocol_mismatch(self, value): """Invoked when starting a connection to make sure it's a supported protocol. :param pika.frame.Method value: The frame to check :raises: ProtocolVersionMismatch """ if (value.method.version_major, value.method.version_minor) != spec.PROTOCOL_VERSION[0:2]: raise exceptions.ProtocolVersionMismatch(frame.ProtocolHeader(), value) @property def _client_properties(self): """Return the client properties dictionary. :rtype: dict """ return { 'product': PRODUCT, 'platform': 'Python %s' % platform.python_version(), 'capabilities': { 'authentication_failure_close': True, 'basic.nack': True, 'connection.blocked': True, 'consumer_cancel_notify': True, 'publisher_confirms': True }, 'information': 'See http://pika.rtfd.org', 'version': __version__ } def _close_channels(self, reply_code, reply_text): """Close the open channels with the specified reply_code and reply_text. :param int reply_code: The code for why the channels are being closed :param str reply_text: The text reason for why the channels are closing """ if self.is_open: for channel_number in dictkeys(self._channels): if self._channels[channel_number].is_open: self._channels[channel_number].close(reply_code, reply_text) else: del self._channels[channel_number] # Force any lingering callbacks to be removed # moved inside else block since channel's _cleanup removes # callbacks self.callbacks.cleanup(channel_number) else: self._channels = dict() def _combine(self, a, b): """Pass in two values, if a is 0, return b otherwise if b is 0, return a. If neither case matches return the smallest value. 
:param int a: The first value :param int b: The second value :rtype: int """ return min(a, b) or (a or b) def _connect(self): """Attempt to connect to RabbitMQ :rtype: bool """ warnings.warn('This method is deprecated, use Connection.connect', DeprecationWarning) def _create_channel(self, channel_number, on_open_callback): """Create a new channel using the specified channel number and calling back the method specified by on_open_callback :param int channel_number: The channel number to use :param method on_open_callback: The callback when the channel is opened """ LOGGER.debug('Creating channel %s', channel_number) return channel.Channel(self, channel_number, on_open_callback) def _create_heartbeat_checker(self): """Create a heartbeat checker instance if there is a heartbeat interval set. :rtype: pika.heartbeat.Heartbeat """ if self.params.heartbeat is not None and self.params.heartbeat > 0: LOGGER.debug('Creating a HeartbeatChecker: %r', self.params.heartbeat) return heartbeat.HeartbeatChecker(self, self.params.heartbeat) def _remove_heartbeat(self): """Stop the heartbeat checker if it exists """ if self.heartbeat: self.heartbeat.stop() self.heartbeat = None def _deliver_frame_to_channel(self, value): """Deliver the frame to the channel specified in the frame. :param pika.frame.Method value: The frame to deliver """ if not value.channel_number in self._channels: if self._is_basic_deliver_frame(value): self._reject_out_of_band_delivery(value.channel_number, value.method.delivery_tag) else: LOGGER.warning("Received %r for non-existing channel %i", value, value.channel_number) return return self._channels[value.channel_number]._handle_content_frame(value) def _detect_backpressure(self): """Attempt to calculate if TCP backpressure is being applied due to our outbound buffer being larger than the average frame size over a window of frames. 
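The `_combine` negotiation rule described above (a value of 0 means "no limit"; otherwise the stricter, smaller value wins) is compact enough to exercise in isolation:

```python
def combine(a, b):
    """Tune-negotiation helper mirroring Connection._combine: if either
    limit is 0 ('unlimited'), use the other; otherwise take the minimum."""
    return min(a, b) or (a or b)
```

This is how channel-max and frame-max are reconciled between the client's requested values and the broker's `Connection.Tune` values.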
""" avg_frame_size = self.bytes_sent / self.frames_sent buffer_size = sum([len(frame) for frame in self.outbound_buffer]) if buffer_size > (avg_frame_size * self._backpressure): LOGGER.warning(BACKPRESSURE_WARNING, buffer_size, int(buffer_size / avg_frame_size)) self.callbacks.process(0, self.ON_CONNECTION_BACKPRESSURE, self) def _ensure_closed(self): """If the connection is not closed, close it.""" if self.is_open: self.close() def _flush_outbound(self): """Adapters should override to flush the contents of outbound_buffer out along the socket. :raises: NotImplementedError """ raise NotImplementedError def _get_body_frame_max_length(self): """Calculate the maximum amount of bytes that can be in a body frame. :rtype: int """ return ( self.params.frame_max - spec.FRAME_HEADER_SIZE - spec.FRAME_END_SIZE ) def _get_credentials(self, method_frame): """Get credentials for authentication. :param pika.frame.MethodFrame method_frame: The Connection.Start frame :rtype: tuple(str, str) """ (auth_type, response) = self.params.credentials.response_for(method_frame.method) if not auth_type: raise exceptions.AuthenticationError(self.params.credentials.TYPE) self.params.credentials.erase_credentials() return auth_type, response @property def _has_open_channels(self): """Returns true if channels are open. :rtype: bool """ return any([self._channels[num].is_open for num in dictkeys(self._channels)]) def _has_pending_callbacks(self, value): """Return true if there are any callbacks pending for the specified frame. :param pika.frame.Method value: The frame to check :rtype: bool """ return self.callbacks.pending(value.channel_number, value.method) def _init_connection_state(self): """Initialize or reset all of the internal state variables for a given connection. On disconnect or reconnect all of the state needs to be wiped. 
""" # Connection state self._set_connection_state(self.CONNECTION_CLOSED) # Negotiated server properties self.server_properties = None # Outbound buffer for buffering writes until we're able to send them self.outbound_buffer = collections.deque([]) # Inbound buffer for decoding frames self._frame_buffer = bytes() # Dict of open channels self._channels = dict() # Remaining connection attempts self.remaining_connection_attempts = self.params.connection_attempts # Data used for Heartbeat checking and back-pressure detection self.bytes_sent = 0 self.bytes_received = 0 self.frames_sent = 0 self.frames_received = 0 self.heartbeat = None # Default back-pressure multiplier value self._backpressure = 10 # When closing, hold reason why self.closing = 0, 'Not specified' # Our starting point once connected, first frame received self._add_connection_start_callback() def _is_basic_deliver_frame(self, frame_value): """Returns true if the frame is a Basic.Deliver :param pika.frame.Method frame_value: The frame to check :rtype: bool """ return isinstance(frame_value, spec.Basic.Deliver) def _is_connection_close_frame(self, value): """Returns true if the frame is a Connection.Close frame. :param pika.frame.Method value: The frame to check :rtype: bool """ if not value: return False return isinstance(value.method, spec.Connection.Close) def _is_method_frame(self, value): """Returns true if the frame is a method frame. :param pika.frame.Frame value: The frame to evaluate :rtype: bool """ return isinstance(value, frame.Method) def _is_protocol_header_frame(self, value): """Returns True if it's a protocol header frame. :rtype: bool """ return isinstance(value, frame.ProtocolHeader) def _next_channel_number(self): """Return the next available channel number or raise an exception. 
:rtype: int """ limit = self.params.channel_max or channel.MAX_CHANNELS if len(self._channels) == limit: raise exceptions.NoFreeChannels() ckeys = set(self._channels.keys()) if not ckeys: return 1 return [x + 1 for x in sorted(ckeys) if x + 1 not in ckeys][0] def _on_channel_cleanup(self, channel): """Remove the channel from the dict of channels when Channel.CloseOk is sent. If connection is closing and no more channels remain, proceed to `_on_close_ready`. :param pika.channel.Channel channel: channel instance """ try: del self._channels[channel.channel_number] LOGGER.debug('Removed channel %s', channel.channel_number) except KeyError: LOGGER.error('Channel %r not in channels', channel.channel_number) if self.is_closing and not self._has_open_channels: self._on_close_ready() def _on_close_ready(self): """Called when the Connection is in a state that it can close after a close has been requested. This happens, for example, when all of the channels are closed that were open when the close request was made. """ if self.is_closed: LOGGER.warning('Invoked while already closed') return self._send_connection_close(self.closing[0], self.closing[1]) def _on_connected(self): """Invoked when the socket is connected and it's time to start speaking AMQP with the broker. """ self._set_connection_state(self.CONNECTION_PROTOCOL) # Start the communication with the RabbitMQ Broker self._send_frame(frame.ProtocolHeader()) def _on_connection_closed(self, method_frame, from_adapter=False): """Called when the connection is closed remotely. The from_adapter value will be true if the connection adapter has been disconnected from the broker and the method was invoked directly instead of by receiving a Connection.Close frame. 
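The channel-number scan in `_next_channel_number` above can be reproduced standalone (hypothetical helper; it deliberately mirrors the original's scan, which starts from the lowest in-use number rather than always from 1):

```python
def next_channel_number(in_use, limit=65535):
    """Pick the next channel number, as _next_channel_number does:
    1 if none are open, otherwise the first successor of an in-use
    number that is itself free."""
    if len(in_use) == limit:
        raise RuntimeError('no free channels')  # stand-in for NoFreeChannels
    ckeys = set(in_use)
    if not ckeys:
        return 1
    return next(x + 1 for x in sorted(ckeys) if x + 1 not in ckeys)
```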
:param pika.frame.Method: The Connection.Close frame :param bool from_adapter: Called by the connection adapter """ if method_frame and self._is_connection_close_frame(method_frame): self.closing = (method_frame.method.reply_code, method_frame.method.reply_text) # Save the codes because self.closing gets reset by _adapter_disconnect reply_code, reply_text = self.closing # Stop the heartbeat checker if it exists self._remove_heartbeat() # If this did not come from the connection adapter, close the socket if not from_adapter: self._adapter_disconnect() # Invoke a method frame neutral close self._on_disconnect(reply_code, reply_text) def _on_connection_error(self, connection_unused, error_message=None): """Default behavior when the connecting connection can not connect. :raises: exceptions.AMQPConnectionError """ raise exceptions.AMQPConnectionError(error_message or self.params.connection_attempts) def _on_connection_open(self, method_frame): """ This is called once we have tuned the connection with the server and called the Connection.Open on the server and it has replied with Connection.Ok. """ self.known_hosts = method_frame.method.known_hosts # Add a callback handler for the Broker telling us to disconnect self.callbacks.add(0, spec.Connection.Close, self._on_connection_closed) # We're now connected at the AMQP level self._set_connection_state(self.CONNECTION_OPEN) # Call our initial callback that we're open self.callbacks.process(0, self.ON_CONNECTION_OPEN, self, self) def _on_connection_start(self, method_frame): """This is called as a callback once we have received a Connection.Start from the server. 
:param pika.frame.Method method_frame: The frame received :raises: UnexpectedFrameError """ self._set_connection_state(self.CONNECTION_START) if self._is_protocol_header_frame(method_frame): raise exceptions.UnexpectedFrameError self._check_for_protocol_mismatch(method_frame) self._set_server_information(method_frame) self._add_connection_tune_callback() self._send_connection_start_ok(*self._get_credentials(method_frame)) def _on_connection_tune(self, method_frame): """Once the Broker sends back a Connection.Tune, we will set our tuning variables that have been returned to us and kick off the Heartbeat monitor if required, send our TuneOk and then the Connection. Open rpc call on channel 0. :param pika.frame.Method method_frame: The frame received """ self._set_connection_state(self.CONNECTION_TUNE) # Get our max channels, frames and heartbeat interval self.params.channel_max = self._combine(self.params.channel_max, method_frame.method.channel_max) self.params.frame_max = self._combine(self.params.frame_max, method_frame.method.frame_max) if self.params.heartbeat is None: self.params.heartbeat = method_frame.method.heartbeat elif self.params.heartbeat != 0: self.params.heartbeat = self._combine(self.params.heartbeat, method_frame.method.heartbeat) # Calculate the maximum pieces for body frames self._body_max_length = self._get_body_frame_max_length() # Create a new heartbeat checker if needed self.heartbeat = self._create_heartbeat_checker() # Send the TuneOk response with what we've agreed upon self._send_connection_tune_ok() # Send the Connection.Open RPC call for the vhost self._send_connection_open() def _on_data_available(self, data_in): """This is called by our Adapter, passing in the data from the socket. As long as we have buffer try and map out frame data. 
:param str data_in: The data that is available to read """ self._append_frame_buffer(data_in) while self._frame_buffer: consumed_count, frame_value = self._read_frame() if not frame_value: return self._trim_frame_buffer(consumed_count) self._process_frame(frame_value) def _on_disconnect(self, reply_code, reply_text): """Invoke passing in the reply_code and reply_text from internal methods to the adapter. Called from on_connection_closed and Heartbeat timeouts. :param str reply_code: The numeric close code :param str reply_text: The text close reason """ LOGGER.warning('Disconnected from RabbitMQ at %s:%i (%s): %s', self.params.host, self.params.port, reply_code, reply_text) self._set_connection_state(self.CONNECTION_CLOSED) for channel in dictkeys(self._channels): if channel not in self._channels: continue method_frame = frame.Method(channel, spec.Channel.Close(reply_code, reply_text)) self._channels[channel]._on_close(method_frame) self._process_connection_closed_callbacks(reply_code, reply_text) self._remove_connection_callbacks() def _process_callbacks(self, frame_value): """Process the callbacks for the frame if the frame is a method frame and if it has any callbacks pending. :param pika.frame.Method frame_value: The frame to process :rtype: bool """ if (self._is_method_frame(frame_value) and self._has_pending_callbacks(frame_value)): self.callbacks.process(frame_value.channel_number, # Prefix frame_value.method, # Key self, # Caller frame_value) # Args return True return False def _process_connection_closed_callbacks(self, reason_code, reason_text): """Process any callbacks that should be called when the connection is closed. :param str reason_code: The numeric code from RabbitMQ for the close :param str reason_text: The text reason fro closing """ self.callbacks.process(0, self.ON_CONNECTION_CLOSED, self, self, reason_code, reason_text) def _process_frame(self, frame_value): """Process an inbound frame from the socket. 
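The `_on_data_available` loop above implements a common pattern: append incoming bytes, decode complete frames off the front of the buffer, and stop when only a partial frame remains. A sketch with a toy decoder standing in for `pika.frame.decode_frame` (both helpers hypothetical):

```python
def drain_buffer(buffer, decode_one):
    """Repeatedly decode frames off the front of the buffer, as the
    _on_data_available loop does. decode_one returns a tuple of
    (bytes_consumed, frame_or_None); None means 'incomplete frame'."""
    frames = []
    while buffer:
        consumed, value = decode_one(buffer)
        if value is None:
            break  # partial frame: wait for more data
        buffer = buffer[consumed:]
        frames.append(value)
    return frames, buffer


def decode_line(buf):
    """Toy stand-in for frame.decode_frame: one 'frame' per
    newline-terminated chunk."""
    end = buf.find(b'\n')
    if end < 0:
        return 0, None
    return end + 1, buf[:end]
```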
:param frame_value: The frame to process :type frame_value: pika.frame.Frame | pika.frame.Method """ # Will receive a frame type of -1 if protocol version mismatch if frame_value.frame_type < 0: return # Keep track of how many frames have been read self.frames_received += 1 # Process any callbacks, if True, exit method if self._process_callbacks(frame_value): return # If a heartbeat is received, update the checker if isinstance(frame_value, frame.Heartbeat): if self.heartbeat: self.heartbeat.received() else: LOGGER.warning('Received heartbeat frame without a heartbeat ' 'checker') # If the frame has a channel number beyond the base channel, deliver it elif frame_value.channel_number > 0: self._deliver_frame_to_channel(frame_value) def _read_frame(self): """Try and read from the frame buffer and decode a frame. :rtype tuple: (int, pika.frame.Frame) """ return frame.decode_frame(self._frame_buffer) def _reject_out_of_band_delivery(self, channel_number, delivery_tag): """Reject a delivery on the specified channel number and delivery tag because said channel no longer exists. :param int channel_number: The channel number :param int delivery_tag: The delivery tag """ LOGGER.warning('Rejected out-of-band delivery on channel %i (%s)', channel_number, delivery_tag) self._send_method(channel_number, spec.Basic.Reject(delivery_tag)) def _remove_callback(self, channel_number, method_frame): """Remove the specified method_frame callback if it is set for the specified channel number. :param int channel_number: The channel number to remove the callback on :param pika.object.Method: The method frame for the callback """ self.callbacks.remove(str(channel_number), method_frame) def _remove_callbacks(self, channel_number, method_frames): """Remove the callbacks for the specified channel number and list of method frames. 
:param int channel_number: The channel number to remove the callback on :param list method_frames: The method frames for the callback """ for method_frame in method_frames: self._remove_callback(channel_number, method_frame) def _remove_connection_callbacks(self): """Remove all callbacks for the connection""" self._remove_callbacks(0, [spec.Connection.Close, spec.Connection.Start, spec.Connection.Open]) def _rpc(self, channel_number, method_frame, callback_method=None, acceptable_replies=None): """Make an RPC call for the given callback, channel number and method. acceptable_replies lists out what responses we'll process from the server with the specified callback. :param int channel_number: The channel number for the RPC call :param pika.object.Method method_frame: The method frame to call :param method callback_method: The callback for the RPC response :param list acceptable_replies: The replies this RPC call expects """ # Validate that acceptable_replies is a list or None if acceptable_replies and not isinstance(acceptable_replies, list): raise TypeError('acceptable_replies should be list or None') # Validate the callback is callable if callback_method: if not utils.is_callable(callback_method): raise TypeError('callback should be None, function or method.') for reply in acceptable_replies: self.callbacks.add(channel_number, reply, callback_method) # Send the rpc call to RabbitMQ self._send_method(channel_number, method_frame) def _send_connection_close(self, reply_code, reply_text): """Send a Connection.Close method frame. 
:param int reply_code: The reason for the close :param str reply_text: The text reason for the close """ self._rpc(0, spec.Connection.Close(reply_code, reply_text, 0, 0), self._on_connection_closed, [spec.Connection.CloseOk]) def _send_connection_open(self): """Send a Connection.Open frame""" self._rpc(0, spec.Connection.Open(self.params.virtual_host, insist=True), self._on_connection_open, [spec.Connection.OpenOk]) def _send_connection_start_ok(self, authentication_type, response): """Send a Connection.StartOk frame :param str authentication_type: The auth type value :param str response: The encoded value to send """ self._send_method(0, spec.Connection.StartOk(self._client_properties, authentication_type, response, self.params.locale)) def _send_connection_tune_ok(self): """Send a Connection.TuneOk frame""" self._send_method(0, spec.Connection.TuneOk(self.params.channel_max, self.params.frame_max, self.params.heartbeat)) def _send_frame(self, frame_value): """This appends the fully generated frame to send to the broker to the output buffer which will be then sent via the connection adapter. :param frame_value: The frame to write :type frame_value: pika.frame.Frame|pika.frame.ProtocolHeader :raises: exceptions.ConnectionClosed """ if self.is_closed: LOGGER.critical('Attempted to send frame when closed') raise exceptions.ConnectionClosed marshaled_frame = frame_value.marshal() self.bytes_sent += len(marshaled_frame) self.frames_sent += 1 self.outbound_buffer.append(marshaled_frame) self._flush_outbound() if self.params.backpressure_detection: self._detect_backpressure() def _send_method(self, channel_number, method_frame, content=None): """Constructs a RPC method frame and then sends it to the broker. :param int channel_number: The channel number for the frame :param pika.object.Method method_frame: The method frame to send :param tuple content: If set, is a content frame, is tuple of properties and body. 
""" if not content: with self._write_lock: self._send_frame(frame.Method(channel_number, method_frame)) return self._send_message(channel_number, method_frame, content) def _send_message(self, channel_number, method_frame, content=None): """Send the message directly, bypassing the single _send_frame invocation by directly appending to the output buffer and flushing within a lock. :param int channel_number: The channel number for the frame :param pika.object.Method method_frame: The method frame to send :param tuple content: If set, is a content frame, is tuple of properties and body. """ length = len(content[1]) write_buffer = [frame.Method(channel_number, method_frame).marshal(), frame.Header(channel_number, length, content[0]).marshal()] if content[1]: chunks = int(math.ceil(float(length) / self._body_max_length)) for chunk in range(0, chunks): s = chunk * self._body_max_length e = s + self._body_max_length if e > length: e = length write_buffer.append(frame.Body(channel_number, content[1][s:e]).marshal()) with self._write_lock: self.outbound_buffer += write_buffer self.frames_sent += len(write_buffer) self._flush_outbound() if self.params.backpressure_detection: self._detect_backpressure() def _set_connection_state(self, connection_state): """Set the connection state. :param int connection_state: The connection state to set """ self.connection_state = connection_state def _set_server_information(self, method_frame): """Set the server properties and capabilities :param spec.connection.Start method_frame: The Connection.Start frame """ self.server_properties = method_frame.method.server_properties self.server_capabilities = self.server_properties.get('capabilities', dict()) if hasattr(self.server_properties, 'capabilities'): del self.server_properties['capabilities'] def _trim_frame_buffer(self, byte_count): """Trim the leading N bytes off the frame buffer and increment the counter that keeps track of how many bytes have been read/used from the socket. 
        :param int byte_count: The number of bytes consumed

        """
        self._frame_buffer = self._frame_buffer[byte_count:]
        self.bytes_received += byte_count


# pika-0.10.0/pika/credentials.py
"""The credentials classes are used to encapsulate all authentication
information for the :class:`~pika.connection.ConnectionParameters` class.

The :class:`~pika.credentials.PlainCredentials` class returns the properly
formatted username and password to the :class:`~pika.connection.Connection`.

To authenticate with Pika, create a :class:`~pika.credentials.PlainCredentials`
object, passing in the username and password, and pass it as the credentials
argument value to the :class:`~pika.connection.ConnectionParameters` object.

If you are using :class:`~pika.connection.URLParameters` you do not need a
credentials object; one will automatically be created for you.

If you are looking to implement SSL certificate style authentication, you
would extend the :class:`~pika.credentials.ExternalCredentials` class,
implementing the required behavior.

"""
from .compat import as_bytes
import logging

LOGGER = logging.getLogger(__name__)


class PlainCredentials(object):
    """A credentials object for the default authentication methodology with
    RabbitMQ.

    If you do not pass in credentials to the ConnectionParameters object, it
    will create credentials for 'guest' with the password of 'guest'.

    If you pass True to erase_on_connect the credentials will not be stored
    in memory after the Connection attempt has been made.

    :param str username: The username to authenticate with
    :param str password: The password to authenticate with
    :param bool erase_on_connect: erase credentials on connect.

    """
    TYPE = 'PLAIN'

    def __init__(self, username, password, erase_on_connect=False):
        """Create a new instance of PlainCredentials

        :param str username: The username to authenticate with
        :param str password: The password to authenticate with
        :param bool erase_on_connect: erase credentials on connect.

        """
        self.username = username
        self.password = password
        self.erase_on_connect = erase_on_connect

    def response_for(self, start):
        """Validate that this type of authentication is supported

        :param spec.Connection.Start start: Connection.Start method
        :rtype: tuple(str|None, str|None)

        """
        if (as_bytes(PlainCredentials.TYPE) not in
                as_bytes(start.mechanisms).split()):
            return None, None
        return (PlainCredentials.TYPE,
                b'\0' + as_bytes(self.username) +
                b'\0' + as_bytes(self.password))

    def erase_credentials(self):
        """Called by Connection when it no longer needs the credentials"""
        if self.erase_on_connect:
            LOGGER.info("Erasing stored credential values")
            self.username = None
            self.password = None


class ExternalCredentials(object):
    """The ExternalCredentials class allows the connection to use EXTERNAL
    authentication, generally with a client SSL certificate.

    """
    TYPE = 'EXTERNAL'

    def __init__(self):
        """Create a new instance of ExternalCredentials"""
        self.erase_on_connect = False

    def response_for(self, start):
        """Validate that this type of authentication is supported

        :param spec.Connection.Start start: Connection.Start method
        :rtype: tuple(str or None, str or None)

        """
        if (as_bytes(ExternalCredentials.TYPE) not in
                as_bytes(start.mechanisms).split()):
            return None, None
        return ExternalCredentials.TYPE, b''

    def erase_credentials(self):
        """Called by Connection when it no longer needs the credentials"""
        LOGGER.debug('Not supported by this Credentials type')


# Append custom credential types to this list for validation support
VALID_TYPES = [PlainCredentials, ExternalCredentials]


# pika-0.10.0/pika/data.py
"""AMQP Table Encoding/Decoding"""
import struct
import decimal
import calendar
from datetime import datetime

from pika import exceptions
from pika.compat import unicode_type, PY2, long, as_bytes


def encode_short_string(pieces, value):
    """Encode a string value as short string and append it to pieces list
    returning the size of the encoded value.

    :param list pieces: Already encoded values
    :param value: String value to encode
    :type value: str or unicode
    :rtype: int

    """
    encoded_value = as_bytes(value)
    length = len(encoded_value)

    # 4.2.5.3
    # Short strings, stored as an 8-bit unsigned integer length followed by
    # zero or more octets of data. Short strings can carry up to 255 octets
    # of UTF-8 data, but may not contain binary zero octets.
    # ...
    # 4.2.5.5
    # The server SHOULD validate field names and upon receiving an invalid
    # field name, it SHOULD signal a connection exception with reply code
    # 503 (syntax error).
    # -> validate length (avoid truncated utf-8 / corrupted data), but skip
    #    null byte check.
    if length > 255:
        raise exceptions.ShortStringTooLong(encoded_value)

    pieces.append(struct.pack('B', length))
    pieces.append(encoded_value)
    return 1 + length


if PY2:
    def decode_short_string(encoded, offset):
        """Decode a short string value from ``encoded`` data at ``offset``.
        """
        length = struct.unpack_from('B', encoded, offset)[0]
        offset += 1
        # Purely for compatibility with original python2 code. No idea what
        # and why this does.
        value = encoded[offset:offset + length]
        try:
            value = bytes(value)
        except UnicodeEncodeError:
            pass
        offset += length
        return value, offset
else:
    def decode_short_string(encoded, offset):
        """Decode a short string value from ``encoded`` data at ``offset``.
        """
        length = struct.unpack_from('B', encoded, offset)[0]
        offset += 1
        value = encoded[offset:offset + length].decode('utf8')
        offset += length
        return value, offset


def encode_table(pieces, table):
    """Encode a dict as an AMQP table appending the encoded table to the
    pieces list passed in.

    :param list pieces: Already encoded frame pieces
    :param dict table: The dict to encode
    :rtype: int

    """
    table = table or {}
    length_index = len(pieces)
    pieces.append(None)  # placeholder
    tablesize = 0
    for (key, value) in table.items():
        tablesize += encode_short_string(pieces, key)
        tablesize += encode_value(pieces, value)
    pieces[length_index] = struct.pack('>I', tablesize)
    return tablesize + 4


def encode_value(pieces, value):
    """Encode the value passed in and append it to the pieces list returning
    the size of the encoded value.
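The short-string wire format used by `encode_short_string`/`decode_short_string` is just a one-byte length prefix followed by the UTF-8 bytes. A self-contained round-trip sketch (helper names are illustrative, not pika's API):

```python
import struct

def encode_short(value):
    """One-byte length prefix + UTF-8 data, capped at 255 bytes."""
    data = value.encode('utf-8')
    if len(data) > 255:
        raise ValueError('short string limited to 255 bytes')
    return struct.pack('B', len(data)) + data

def decode_short(encoded, offset=0):
    """Inverse of encode_short; returns (value, new_offset)."""
    length = struct.unpack_from('B', encoded, offset)[0]
    offset += 1
    return encoded[offset:offset + length].decode('utf-8'), offset + length
```

Note the length is measured in encoded bytes, not characters, which is why the module validates against the 255-byte cap after UTF-8 encoding.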
:param list pieces: Already encoded values :param any value: The value to encode :rtype: int """ if PY2: if isinstance(value, basestring): if isinstance(value, unicode_type): value = value.encode('utf-8') pieces.append(struct.pack('>cI', b'S', len(value))) pieces.append(value) return 5 + len(value) else: # support only str on Python 3 if isinstance(value, str): value = value.encode('utf-8') pieces.append(struct.pack('>cI', b'S', len(value))) pieces.append(value) return 5 + len(value) if isinstance(value, bool): pieces.append(struct.pack('>cB', b't', int(value))) return 2 if isinstance(value, long): pieces.append(struct.pack('>cq', b'l', value)) return 9 elif isinstance(value, int): pieces.append(struct.pack('>ci', b'I', value)) return 5 elif isinstance(value, decimal.Decimal): value = value.normalize() if value.as_tuple().exponent < 0: decimals = -value.as_tuple().exponent raw = int(value * (decimal.Decimal(10) ** decimals)) pieces.append(struct.pack('>cBi', b'D', decimals, raw)) else: # per spec, the "decimals" octet is unsigned (!) pieces.append(struct.pack('>cBi', b'D', 0, int(value))) return 6 elif isinstance(value, datetime): pieces.append(struct.pack('>cQ', b'T', calendar.timegm(value.utctimetuple()))) return 9 elif isinstance(value, dict): pieces.append(struct.pack('>c', b'F')) return 1 + encode_table(pieces, value) elif isinstance(value, list): p = [] for v in value: encode_value(p, v) piece = b''.join(p) pieces.append(struct.pack('>cI', b'A', len(piece))) pieces.append(piece) return 5 + len(piece) elif value is None: pieces.append(struct.pack('>c', b'V')) return 1 else: raise exceptions.UnsupportedAMQPFieldException(pieces, value) def decode_table(encoded, offset): """Decode the AMQP table passed in from the encoded value returning the decoded result and the number of bytes read plus the offset. 
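`encode_table` relies on a placeholder trick: it appends `None` where the 4-byte table size belongs, encodes all entries while summing their lengths, then patches the placeholder. A minimal sketch of that pattern in isolation (the helper name and its pre-encoded-entries input are illustrative simplifications):

```python
import struct

def encode_with_size_prefix(entries):
    """Mimic encode_table's placeholder trick for the 4-byte size prefix.
    `entries` is a list of already-encoded byte strings."""
    pieces = [None]          # placeholder for the size prefix
    tablesize = 0
    for piece in entries:
        pieces.append(piece)
        tablesize += len(piece)
    pieces[0] = struct.pack('>I', tablesize)  # patch it in afterwards
    return b''.join(pieces)
```

The same one-pass approach avoids encoding the table twice just to learn its length.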
:param str encoded: The binary encoded data to decode :param int offset: The starting byte offset :rtype: tuple """ result = {} tablesize = struct.unpack_from('>I', encoded, offset)[0] offset += 4 limit = offset + tablesize while offset < limit: key, offset = decode_short_string(encoded, offset) value, offset = decode_value(encoded, offset) result[key] = value return result, offset def decode_value(encoded, offset): """Decode the value passed in returning the decoded value and the number of bytes read in addition to the starting offset. :param str encoded: The binary encoded data to decode :param int offset: The starting byte offset :rtype: tuple :raises: pika.exceptions.InvalidFieldTypeException """ # slice to get bytes in Python 3 and str in Python 2 kind = encoded[offset:offset + 1] offset += 1 # Bool if kind == b't': value = struct.unpack_from('>B', encoded, offset)[0] value = bool(value) offset += 1 # Short-Short Int elif kind == b'b': value = struct.unpack_from('>B', encoded, offset)[0] offset += 1 # Short-Short Unsigned Int elif kind == b'B': value = struct.unpack_from('>b', encoded, offset)[0] offset += 1 # Short Int elif kind == b'U': value = struct.unpack_from('>h', encoded, offset)[0] offset += 2 # Short Unsigned Int elif kind == b'u': value = struct.unpack_from('>H', encoded, offset)[0] offset += 2 # Long Int elif kind == b'I': value = struct.unpack_from('>i', encoded, offset)[0] offset += 4 # Long Unsigned Int elif kind == b'i': value = struct.unpack_from('>I', encoded, offset)[0] offset += 4 # Long-Long Int elif kind == b'L': value = long(struct.unpack_from('>q', encoded, offset)[0]) offset += 8 # Long-Long Unsigned Int elif kind == b'l': value = long(struct.unpack_from('>Q', encoded, offset)[0]) offset += 8 # Float elif kind == b'f': value = long(struct.unpack_from('>f', encoded, offset)[0]) offset += 4 # Double elif kind == b'd': value = long(struct.unpack_from('>d', encoded, offset)[0]) offset += 8 # Decimal elif kind == b'D': decimals = 
struct.unpack_from('B', encoded, offset)[0]
        offset += 1
        raw = struct.unpack_from('>i', encoded, offset)[0]
        offset += 4
        value = decimal.Decimal(raw) * (decimal.Decimal(10) ** -decimals)

    # Short String
    elif kind == b's':
        value, offset = decode_short_string(encoded, offset)

    # Long String
    elif kind == b'S':
        length = struct.unpack_from('>I', encoded, offset)[0]
        offset += 4
        value = encoded[offset:offset + length].decode('utf8')
        offset += length

    # Field Array
    elif kind == b'A':
        length = struct.unpack_from('>I', encoded, offset)[0]
        offset += 4
        offset_end = offset + length
        value = []
        while offset < offset_end:
            v, offset = decode_value(encoded, offset)
            value.append(v)

    # Timestamp
    elif kind == b'T':
        value = datetime.utcfromtimestamp(struct.unpack_from('>Q', encoded,
                                                             offset)[0])
        offset += 8

    # Field Table
    elif kind == b'F':
        (value, offset) = decode_table(encoded, offset)

    # Null / Void
    elif kind == b'V':
        value = None
    else:
        raise exceptions.InvalidFieldTypeException(kind)

    return value, offset

pika-0.10.0/pika/exceptions.py

"""Pika specific exceptions"""


class AMQPError(Exception):

    def __repr__(self):
        return 'An unspecified AMQP error has occurred'


class AMQPConnectionError(AMQPError):

    def __repr__(self):
        if len(self.args) == 1:
            if self.args[0] == 1:
                return ('No connection could be opened after 1 '
                        'connection attempt')
            elif isinstance(self.args[0], int):
                return ('No connection could be opened after %s '
                        'connection attempts' % self.args[0])
            else:
                return ('No connection could be opened: %s' % self.args[0])
        elif len(self.args) == 2:
            return '%s: %s' % (self.args[0], self.args[1])


class IncompatibleProtocolError(AMQPConnectionError):

    def __repr__(self):
        return 'The protocol returned by the server is not supported'


class AuthenticationError(AMQPConnectionError):

    def __repr__(self):
        return ('Server and client could not negotiate use of the %s '
                'authentication mechanism' % self.args[0])


class
ProbableAuthenticationError(AMQPConnectionError): def __repr__(self): return ('Client was disconnected at a connection stage indicating a ' 'probable authentication error') class ProbableAccessDeniedError(AMQPConnectionError): def __repr__(self): return ('Client was disconnected at a connection stage indicating a ' 'probable denial of access to the specified virtual host') class NoFreeChannels(AMQPConnectionError): def __repr__(self): return 'The connection has run out of free channels' class ConnectionClosed(AMQPConnectionError): def __repr__(self): if len(self.args) == 2: return 'The AMQP connection was closed (%s) %s' % (self.args[0], self.args[1]) else: return 'The AMQP connection was closed: %s' % (self.args,) class AMQPChannelError(AMQPError): def __repr__(self): return 'An unspecified AMQP channel error has occurred' class ChannelClosed(AMQPChannelError): def __repr__(self): if len(self.args) == 2: return 'The channel was closed (%s) %s' % (self.args[0], self.args[1]) else: return 'The channel was closed: %s' % (self.args,) class DuplicateConsumerTag(AMQPChannelError): def __repr__(self): return ('The consumer tag specified already exists for this ' 'channel: %s' % self.args[0]) class ConsumerCancelled(AMQPChannelError): def __repr__(self): return 'Server cancelled consumer' class UnroutableError(AMQPChannelError): """Exception containing one or more unroutable messages returned by broker via Basic.Return. Used by BlockingChannel. 
In publisher-acknowledgements mode, this is raised upon receipt of Basic.Ack from broker; in the event of Basic.Nack from broker, `NackError` is raised instead """ def __init__(self, messages): """ :param messages: sequence of returned unroutable messages :type messages: sequence of `blocking_connection.ReturnedMessage` objects """ super(UnroutableError, self).__init__( "%s unroutable message(s) returned" % (len(messages))) self.messages = messages def __repr__(self): return '%s: %i unroutable messages returned by broker' % ( self.__class__.__name__, len(self.messages)) class NackError(AMQPChannelError): """This exception is raised when a message published in publisher-acknowledgements mode is Nack'ed by the broker. Used by BlockingChannel. """ def __init__(self, messages): """ :param messages: sequence of returned unroutable messages :type messages: sequence of `blocking_connection.ReturnedMessage` objects """ super(NackError, self).__init__( "%s message(s) NACKed" % (len(messages))) self.messages = messages def __repr__(self): return '%s: %i unroutable messages returned by broker' % ( self.__class__.__name__, len(self.messages)) class InvalidChannelNumber(AMQPError): def __repr__(self): return 'An invalid channel number has been specified: %s' % self.args[0] class ProtocolSyntaxError(AMQPError): def __repr__(self): return 'An unspecified protocol syntax error occurred' class UnexpectedFrameError(ProtocolSyntaxError): def __repr__(self): return 'Received a frame out of sequence: %r' % self.args[0] class ProtocolVersionMismatch(ProtocolSyntaxError): def __repr__(self): return 'Protocol versions did not match: %r vs %r' % (self.args[0], self.args[1]) class BodyTooLongError(ProtocolSyntaxError): def __repr__(self): return ('Received too many bytes for a message delivery: ' 'Received %i, expected %i' % (self.args[0], self.args[1])) class InvalidFrameError(ProtocolSyntaxError): def __repr__(self): return 'Invalid frame received: %r' % self.args[0] class 
InvalidFieldTypeException(ProtocolSyntaxError):

    def __repr__(self):
        return 'Unsupported field kind %s' % self.args[0]


class UnsupportedAMQPFieldException(ProtocolSyntaxError):

    def __repr__(self):
        return 'Unsupported field kind %s' % type(self.args[1])


class UnspportedAMQPFieldException(UnsupportedAMQPFieldException):
    """Deprecated version of UnsupportedAMQPFieldException"""


class MethodNotImplemented(AMQPError):
    pass


class ChannelError(Exception):

    def __repr__(self):
        return 'An unspecified error occurred with the Channel'


class InvalidMinimumFrameSize(ProtocolSyntaxError):

    def __repr__(self):
        return 'AMQP Minimum Frame Size is 4096 Bytes'


class InvalidMaximumFrameSize(ProtocolSyntaxError):

    def __repr__(self):
        return 'AMQP Maximum Frame Size is 131072 Bytes'


class RecursionError(Exception):
    """The requested operation would result in unsupported recursion or
    reentrancy.

    Used by BlockingConnection/BlockingChannel

    """


class ShortStringTooLong(AMQPError):

    def __repr__(self):
        return ('AMQP Short String can contain up to 255 bytes: '
                '%.300s' % self.args[0])

pika-0.10.0/pika/frame.py

"""Frame objects that do the frame demarshaling and marshaling."""
import logging
import struct

from pika import amqp_object
from pika import exceptions
from pika import spec
from pika.compat import byte

LOGGER = logging.getLogger(__name__)


class Frame(amqp_object.AMQPObject):
    """Base Frame object mapping. Defines a behavior for all child classes
    for assignment of core attributes and implementation of a core _marshal
    method which child classes use to create the binary AMQP frame.
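`Frame._marshal` lays every AMQP frame out the same way: a 1-byte frame type, a 2-byte channel number, a 4-byte payload size, the payload, and the 0xCE end marker. A self-contained sketch of that layout (the function name is illustrative; `FRAME_END` is the constant from `pika.spec`):

```python
import struct

FRAME_END = 206  # spec.FRAME_END, the 0xCE frame terminator

def marshal_frame(frame_type, channel_number, payload):
    """Lay out a raw AMQP frame the way Frame._marshal does:
    1-byte type, 2-byte channel, 4-byte payload size, payload, 0xCE."""
    return (struct.pack('>BHI', frame_type, channel_number, len(payload)) +
            payload + bytes((FRAME_END,)))
```

For example, a heartbeat frame (type 8, channel 0, empty payload) marshals to the eight bytes `08 00 00 00 00 00 00 CE`.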
""" NAME = 'Frame' def __init__(self, frame_type, channel_number): """Create a new instance of a frame :param int frame_type: The frame type :param int channel_number: The channel number for the frame """ self.frame_type = frame_type self.channel_number = channel_number def _marshal(self, pieces): """Create the full AMQP wire protocol frame data representation :rtype: bytes """ payload = b''.join(pieces) return struct.pack('>BHI', self.frame_type, self.channel_number, len(payload)) + payload + byte(spec.FRAME_END) def marshal(self): """To be ended by child classes :raises NotImplementedError """ raise NotImplementedError class Method(Frame): """Base Method frame object mapping. AMQP method frames are mapped on top of this class for creating or accessing their data and attributes. """ NAME = 'METHOD' def __init__(self, channel_number, method): """Create a new instance of a frame :param int channel_number: The frame type :param pika.Spec.Class.Method method: The AMQP Class.Method """ Frame.__init__(self, spec.FRAME_METHOD, channel_number) self.method = method def marshal(self): """Return the AMQP binary encoded value of the frame :rtype: str """ pieces = self.method.encode() pieces.insert(0, struct.pack('>I', self.method.INDEX)) return self._marshal(pieces) class Header(Frame): """Header frame object mapping. AMQP content header frames are mapped on top of this class for creating or accessing their data and attributes. 
""" NAME = 'Header' def __init__(self, channel_number, body_size, props): """Create a new instance of a AMQP ContentHeader object :param int channel_number: The channel number for the frame :param int body_size: The number of bytes for the body :param pika.spec.BasicProperties props: Basic.Properties object """ Frame.__init__(self, spec.FRAME_HEADER, channel_number) self.body_size = body_size self.properties = props def marshal(self): """Return the AMQP binary encoded value of the frame :rtype: str """ pieces = self.properties.encode() pieces.insert(0, struct.pack('>HxxQ', self.properties.INDEX, self.body_size)) return self._marshal(pieces) class Body(Frame): """Body frame object mapping class. AMQP content body frames are mapped on to this base class for getting/setting of attributes/data. """ NAME = 'Body' def __init__(self, channel_number, fragment): """ Parameters: - channel_number: int - fragment: unicode or str """ Frame.__init__(self, spec.FRAME_BODY, channel_number) self.fragment = fragment def marshal(self): """Return the AMQP binary encoded value of the frame :rtype: str """ return self._marshal([self.fragment]) class Heartbeat(Frame): """Heartbeat frame object mapping class. AMQP Heartbeat frames are mapped on to this class for a common access structure to the attributes/data values. 
""" NAME = 'Heartbeat' def __init__(self): """Create a new instance of the Heartbeat frame""" Frame.__init__(self, spec.FRAME_HEARTBEAT, 0) def marshal(self): """Return the AMQP binary encoded value of the frame :rtype: str """ return self._marshal(list()) class ProtocolHeader(amqp_object.AMQPObject): """AMQP Protocol header frame class which provides a pythonic interface for creating AMQP Protocol headers """ NAME = 'ProtocolHeader' def __init__(self, major=None, minor=None, revision=None): """Construct a Protocol Header frame object for the specified AMQP version :param int major: Major version number :param int minor: Minor version number :param int revision: Revision """ self.frame_type = -1 self.major = major or spec.PROTOCOL_VERSION[0] self.minor = minor or spec.PROTOCOL_VERSION[1] self.revision = revision or spec.PROTOCOL_VERSION[2] def marshal(self): """Return the full AMQP wire protocol frame data representation of the ProtocolHeader frame :rtype: str """ return b'AMQP' + struct.pack('BBBB', 0, self.major, self.minor, self.revision) def decode_frame(data_in): """Receives raw socket data and attempts to turn it into a frame. 
Returns bytes used to make the frame and the frame :param str data_in: The raw data stream :rtype: tuple(bytes consumed, frame) :raises: pika.exceptions.InvalidFrameError """ # Look to see if it's a protocol header frame try: if data_in[0:4] == b'AMQP': major, minor, revision = struct.unpack_from('BBB', data_in, 5) return 8, ProtocolHeader(major, minor, revision) except (IndexError, struct.error): return 0, None # Get the Frame Type, Channel Number and Frame Size try: (frame_type, channel_number, frame_size) = struct.unpack('>BHL', data_in[0:7]) except struct.error: return 0, None # Get the frame data frame_end = spec.FRAME_HEADER_SIZE + frame_size + spec.FRAME_END_SIZE # We don't have all of the frame yet if frame_end > len(data_in): return 0, None # The Frame termination chr is wrong if data_in[frame_end - 1:frame_end] != byte(spec.FRAME_END): raise exceptions.InvalidFrameError("Invalid FRAME_END marker") # Get the raw frame data frame_data = data_in[spec.FRAME_HEADER_SIZE:frame_end - 1] if frame_type == spec.FRAME_METHOD: # Get the Method ID from the frame data method_id = struct.unpack_from('>I', frame_data)[0] # Get a Method object for this method_id method = spec.methods[method_id]() # Decode the content method.decode(frame_data, 4) # Return the amount of data consumed and the Method object return frame_end, Method(channel_number, method) elif frame_type == spec.FRAME_HEADER: # Return the header class and body size class_id, weight, body_size = struct.unpack_from('>HHQ', frame_data) # Get the Properties type properties = spec.props[class_id]() # Decode the properties out = properties.decode(frame_data[12:]) # Return a Header frame return frame_end, Header(channel_number, body_size, properties) elif frame_type == spec.FRAME_BODY: # Return the amount of data consumed and the Body frame w/ data return frame_end, Body(channel_number, frame_data) elif frame_type == spec.FRAME_HEARTBEAT: # Return the amount of data and a Heartbeat frame return frame_end, 
Heartbeat()
    raise exceptions.InvalidFrameError("Unknown frame type: %i" % frame_type)

pika-0.10.0/pika/heartbeat.py

"""Handle AMQP Heartbeats"""
import logging

from pika import frame

LOGGER = logging.getLogger(__name__)


class HeartbeatChecker(object):
    """Checks to make sure that our heartbeat is received at the expected
    intervals.

    """
    MAX_IDLE_COUNT = 2
    _CONNECTION_FORCED = 320
    _STALE_CONNECTION = "Too Many Missed Heartbeats, No reply in %i seconds"

    def __init__(self, connection, interval, idle_count=MAX_IDLE_COUNT):
        """Create a heartbeat on connection sending a heartbeat frame every
        interval seconds.

        :param pika.connection.Connection: Connection object
        :param int interval: Heartbeat check interval
        :param int idle_count: Number of heartbeat intervals missed until the
                               connection is considered idle and disconnects

        """
        self._connection = connection
        self._interval = interval
        self._max_idle_count = idle_count

        # Initialize counters
        self._bytes_received = 0
        self._bytes_sent = 0
        self._heartbeat_frames_received = 0
        self._heartbeat_frames_sent = 0
        self._idle_byte_intervals = 0

        # The handle for the last timer
        self._timer = None

        # Setup the timer to fire in _interval seconds
        self._setup_timer()

    @property
    def active(self):
        """Return True if the connection's heartbeat attribute is set to this
        instance.

        :rtype True

        """
        return self._connection.heartbeat is self

    @property
    def bytes_received_on_connection(self):
        """Return the number of bytes received by the connection bytes object.

        :rtype int

        """
        return self._connection.bytes_received

    @property
    def connection_is_idle(self):
        """Returns true if the byte count hasn't changed in enough intervals
        to trip the max idle threshold.
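The stale-connection logic in `HeartbeatChecker.send_and_check` boils down to: on every timer tick, reset the idle counter if any bytes arrived since the last tick, otherwise increment it, and declare the connection stale once the counter reaches `_max_idle_count`. A toy model of just that counting (the class name and `check` method are illustrative, not pika's API):

```python
class IdleTracker:
    """Toy model of HeartbeatChecker's idle detection: each check interval
    either resets or increments the stale counter depending on whether any
    bytes arrived since the previous check."""

    def __init__(self, max_idle_count=2):
        self._max_idle_count = max_idle_count
        self._last_bytes = 0
        self._idle_intervals = 0

    def check(self, bytes_received):
        """Call once per heartbeat interval with the connection's total
        received-byte counter; returns True when the peer looks stale."""
        if bytes_received == self._last_bytes:
            self._idle_intervals += 1
        else:
            self._idle_intervals = 0
        self._last_bytes = bytes_received
        return self._idle_intervals >= self._max_idle_count
```

With the default `MAX_IDLE_COUNT` of 2, the connection is only closed after two consecutive intervals with no traffic at all, so a single slow interval does not kill it.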
""" return self._idle_byte_intervals >= self._max_idle_count def received(self): """Called when a heartbeat is received""" LOGGER.debug('Received heartbeat frame') self._heartbeat_frames_received += 1 def send_and_check(self): """Invoked by a timer to send a heartbeat when we need to, check to see if we've missed any heartbeats and disconnect our connection if it's been idle too long. """ LOGGER.debug('Received %i heartbeat frames, sent %i', self._heartbeat_frames_received, self._heartbeat_frames_sent) if self.connection_is_idle: return self._close_connection() # Connection has not received any data, increment the counter if not self._has_received_data: self._idle_byte_intervals += 1 else: self._idle_byte_intervals = 0 # Update the counters of bytes sent/received and the frames received self._update_counters() # Send a heartbeat frame self._send_heartbeat_frame() # Update the timer to fire again self._start_timer() def stop(self): """Stop the heartbeat checker""" if self._timer: LOGGER.debug('Removing timeout for next heartbeat interval') self._connection.remove_timeout(self._timer) self._timer = None def _close_connection(self): """Close the connection with the AMQP Connection-Forced value.""" LOGGER.info('Connection is idle, %i stale byte intervals', self._idle_byte_intervals) duration = self._max_idle_count * self._interval text = HeartbeatChecker._STALE_CONNECTION % duration self._connection.close(HeartbeatChecker._CONNECTION_FORCED, text) self._connection._adapter_disconnect() self._connection._on_disconnect(HeartbeatChecker._CONNECTION_FORCED, text) @property def _has_received_data(self): """Returns True if the connection has received data on the connection. :rtype: bool """ return not self._bytes_received == self.bytes_received_on_connection def _new_heartbeat_frame(self): """Return a new heartbeat frame. :rtype pika.frame.Heartbeat """ return frame.Heartbeat() def _send_heartbeat_frame(self): """Send a heartbeat frame on the connection. 
""" LOGGER.debug('Sending heartbeat frame') self._connection._send_frame(self._new_heartbeat_frame()) self._heartbeat_frames_sent += 1 def _setup_timer(self): """Use the connection objects delayed_call function which is implemented by the Adapter for calling the check_heartbeats function every interval seconds. """ self._timer = self._connection.add_timeout(self._interval, self.send_and_check) def _start_timer(self): """If the connection still has this object set for heartbeats, add a new timer. """ if self.active: self._setup_timer() def _update_counters(self): """Update the internal counters for bytes sent and received and the number of frames received """ self._bytes_sent = self._connection.bytes_sent self._bytes_received = self._connection.bytes_received pika-0.10.0/pika/spec.py000066400000000000000000002314741257163076400147720ustar00rootroot00000000000000# ***** BEGIN LICENSE BLOCK ***** # # For copyright and licensing please refer to COPYING. # # ***** END LICENSE BLOCK ***** # NOTE: Autogenerated code by codegen.py, do not edit import struct from pika import amqp_object from pika import data from pika.compat import str_or_bytes, unicode_type str = bytes PROTOCOL_VERSION = (0, 9, 1) PORT = 5672 ACCESS_REFUSED = 403 CHANNEL_ERROR = 504 COMMAND_INVALID = 503 CONNECTION_FORCED = 320 CONTENT_TOO_LARGE = 311 FRAME_BODY = 3 FRAME_END = 206 FRAME_END_SIZE = 1 FRAME_ERROR = 501 FRAME_HEADER = 2 FRAME_HEADER_SIZE = 7 FRAME_HEARTBEAT = 8 FRAME_MAX_SIZE = 131072 FRAME_METHOD = 1 FRAME_MIN_SIZE = 4096 INTERNAL_ERROR = 541 INVALID_PATH = 402 NOT_ALLOWED = 530 NOT_FOUND = 404 NOT_IMPLEMENTED = 540 NO_CONSUMERS = 313 NO_ROUTE = 312 PRECONDITION_FAILED = 406 REPLY_SUCCESS = 200 RESOURCE_ERROR = 506 RESOURCE_LOCKED = 405 SYNTAX_ERROR = 502 UNEXPECTED_FRAME = 505 class Connection(amqp_object.Class): INDEX = 0x000A # 10 NAME = 'Connection' class Start(amqp_object.Method): INDEX = 0x000A000A # 10, 10; 655370 NAME = 'Connection.Start' def __init__(self, version_major=0, 
version_minor=9, server_properties=None, mechanisms='PLAIN', locales='en_US'): self.version_major = version_major self.version_minor = version_minor self.server_properties = server_properties self.mechanisms = mechanisms self.locales = locales @property def synchronous(self): return True def decode(self, encoded, offset=0): self.version_major = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.version_minor = struct.unpack_from('B', encoded, offset)[0] offset += 1 (self.server_properties, offset) = data.decode_table(encoded, offset) length = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.mechanisms = encoded[offset:offset + length] try: self.mechanisms = str(self.mechanisms) except UnicodeEncodeError: pass offset += length length = struct.unpack_from('>I', encoded, offset)[0] offset += 4 self.locales = encoded[offset:offset + length] try: self.locales = str(self.locales) except UnicodeEncodeError: pass offset += length return self def encode(self): pieces = list() pieces.append(struct.pack('B', self.version_major)) pieces.append(struct.pack('B', self.version_minor)) data.encode_table(pieces, self.server_properties) assert isinstance(self.mechanisms, str_or_bytes),\ 'A non-string value was supplied for self.mechanisms' value = self.mechanisms.encode('utf-8') if isinstance(self.mechanisms, unicode_type) else self.mechanisms pieces.append(struct.pack('>I', len(value))) pieces.append(value) assert isinstance(self.locales, str_or_bytes),\ 'A non-string value was supplied for self.locales' value = self.locales.encode('utf-8') if isinstance(self.locales, unicode_type) else self.locales pieces.append(struct.pack('>I', len(value))) pieces.append(value) return pieces class StartOk(amqp_object.Method): INDEX = 0x000A000B # 10, 11; 655371 NAME = 'Connection.StartOk' def __init__(self, client_properties=None, mechanism='PLAIN', response=None, locale='en_US'): self.client_properties = client_properties self.mechanism = mechanism self.response = response 
            self.locale = locale

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            (self.client_properties, offset) = data.decode_table(encoded, offset)
            self.mechanism, offset = data.decode_short_string(encoded, offset)
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.response = encoded[offset:offset + length]
            try:
                self.response = str(self.response)
            except UnicodeEncodeError:
                pass
            offset += length
            self.locale, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            data.encode_table(pieces, self.client_properties)
            assert isinstance(self.mechanism, str_or_bytes),\
                   'A non-string value was supplied for self.mechanism'
            data.encode_short_string(pieces, self.mechanism)
            assert isinstance(self.response, str_or_bytes),\
                   'A non-string value was supplied for self.response'
            value = self.response.encode('utf-8') if isinstance(self.response, unicode_type) else self.response
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            assert isinstance(self.locale, str_or_bytes),\
                   'A non-string value was supplied for self.locale'
            data.encode_short_string(pieces, self.locale)
            return pieces

    class Secure(amqp_object.Method):

        INDEX = 0x000A0014  # 10, 20; 655380
        NAME = 'Connection.Secure'

        def __init__(self, challenge=None):
            self.challenge = challenge

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.challenge = encoded[offset:offset + length]
            try:
                self.challenge = str(self.challenge)
            except UnicodeEncodeError:
                pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.challenge, str_or_bytes),\
                   'A non-string value was supplied for self.challenge'
            value = self.challenge.encode('utf-8') if isinstance(self.challenge, unicode_type) else self.challenge
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            return pieces

    class SecureOk(amqp_object.Method):

        INDEX = 0x000A0015  # 10, 21; 655381
        NAME = 'Connection.SecureOk'

        def __init__(self, response=None):
            self.response = response

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.response = encoded[offset:offset + length]
            try:
                self.response = str(self.response)
            except UnicodeEncodeError:
                pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.response, str_or_bytes),\
                   'A non-string value was supplied for self.response'
            value = self.response.encode('utf-8') if isinstance(self.response, unicode_type) else self.response
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            return pieces

    class Tune(amqp_object.Method):

        INDEX = 0x000A001E  # 10, 30; 655390
        NAME = 'Connection.Tune'

        def __init__(self, channel_max=0, frame_max=0, heartbeat=0):
            self.channel_max = channel_max
            self.frame_max = frame_max
            self.heartbeat = heartbeat

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.channel_max = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.frame_max = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.heartbeat = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.channel_max))
            pieces.append(struct.pack('>I', self.frame_max))
            pieces.append(struct.pack('>H', self.heartbeat))
            return pieces

    class TuneOk(amqp_object.Method):

        INDEX = 0x000A001F  # 10, 31; 655391
        NAME = 'Connection.TuneOk'

        def __init__(self, channel_max=0, frame_max=0, heartbeat=0):
            self.channel_max = channel_max
            self.frame_max = frame_max
            self.heartbeat = heartbeat

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.channel_max = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.frame_max = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.heartbeat = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.channel_max))
            pieces.append(struct.pack('>I', self.frame_max))
            pieces.append(struct.pack('>H', self.heartbeat))
            return pieces

    class Open(amqp_object.Method):

        INDEX = 0x000A0028  # 10, 40; 655400
        NAME = 'Connection.Open'

        def __init__(self, virtual_host='/', capabilities='', insist=False):
            self.virtual_host = virtual_host
            self.capabilities = capabilities
            self.insist = insist

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.virtual_host, offset = data.decode_short_string(encoded, offset)
            self.capabilities, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.insist = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.virtual_host, str_or_bytes),\
                   'A non-string value was supplied for self.virtual_host'
            data.encode_short_string(pieces, self.virtual_host)
            assert isinstance(self.capabilities, str_or_bytes),\
                   'A non-string value was supplied for self.capabilities'
            data.encode_short_string(pieces, self.capabilities)
            bit_buffer = 0
            if self.insist:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class OpenOk(amqp_object.Method):

        INDEX = 0x000A0029  # 10, 41; 655401
        NAME = 'Connection.OpenOk'

        def __init__(self, known_hosts=''):
            self.known_hosts = known_hosts

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.known_hosts, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.known_hosts, str_or_bytes),\
                   'A non-string value was supplied for self.known_hosts'
            data.encode_short_string(pieces, self.known_hosts)
            return pieces

    class Close(amqp_object.Method):

        INDEX = 0x000A0032  # 10, 50; 655410
        NAME = 'Connection.Close'

        def __init__(self, reply_code=None, reply_text='', class_id=None,
                     method_id=None):
            self.reply_code = reply_code
            self.reply_text = reply_text
            self.class_id = class_id
            self.method_id = method_id

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.reply_code = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.reply_text, offset = data.decode_short_string(encoded, offset)
            self.class_id = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.method_id = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.reply_code))
            assert isinstance(self.reply_text, str_or_bytes),\
                   'A non-string value was supplied for self.reply_text'
            data.encode_short_string(pieces, self.reply_text)
            pieces.append(struct.pack('>H', self.class_id))
            pieces.append(struct.pack('>H', self.method_id))
            return pieces

    class CloseOk(amqp_object.Method):

        INDEX = 0x000A0033  # 10, 51; 655411
        NAME = 'Connection.CloseOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Blocked(amqp_object.Method):

        INDEX = 0x000A003C  # 10, 60; 655420
        NAME = 'Connection.Blocked'

        def __init__(self, reason=''):
            self.reason = reason

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.reason, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.reason, str_or_bytes),\
                   'A non-string value was supplied for self.reason'
            data.encode_short_string(pieces, self.reason)
            return pieces

    class Unblocked(amqp_object.Method):

        INDEX = 0x000A003D  # 10, 61; 655421
        NAME = 'Connection.Unblocked'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces


class Channel(amqp_object.Class):

    INDEX = 0x0014  # 20
    NAME = 'Channel'

    class Open(amqp_object.Method):

        INDEX = 0x0014000A  # 20, 10; 1310730
        NAME = 'Channel.Open'

        def __init__(self, out_of_band=''):
            self.out_of_band = out_of_band

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.out_of_band, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.out_of_band, str_or_bytes),\
                   'A non-string value was supplied for self.out_of_band'
            data.encode_short_string(pieces, self.out_of_band)
            return pieces

    class OpenOk(amqp_object.Method):

        INDEX = 0x0014000B  # 20, 11; 1310731
        NAME = 'Channel.OpenOk'

        def __init__(self, channel_id=''):
            self.channel_id = channel_id

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.channel_id = encoded[offset:offset + length]
            try:
                self.channel_id = str(self.channel_id)
            except UnicodeEncodeError:
                pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.channel_id, str_or_bytes),\
                   'A non-string value was supplied for self.channel_id'
            value = self.channel_id.encode('utf-8') if isinstance(self.channel_id, unicode_type) else self.channel_id
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            return pieces

    class Flow(amqp_object.Method):

        INDEX = 0x00140014  # 20, 20; 1310740
        NAME = 'Channel.Flow'

        def __init__(self, active=None):
            self.active = active

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.active = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            bit_buffer = 0
            if self.active:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class FlowOk(amqp_object.Method):

        INDEX = 0x00140015  # 20, 21; 1310741
        NAME = 'Channel.FlowOk'

        def __init__(self, active=None):
            self.active = active

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.active = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            bit_buffer = 0
            if self.active:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class Close(amqp_object.Method):

        INDEX = 0x00140028  # 20, 40; 1310760
        NAME = 'Channel.Close'

        def __init__(self, reply_code=None, reply_text='', class_id=None,
                     method_id=None):
            self.reply_code = reply_code
            self.reply_text = reply_text
            self.class_id = class_id
            self.method_id = method_id

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.reply_code = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.reply_text, offset = data.decode_short_string(encoded, offset)
            self.class_id = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.method_id = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.reply_code))
            assert isinstance(self.reply_text, str_or_bytes),\
                   'A non-string value was supplied for self.reply_text'
            data.encode_short_string(pieces, self.reply_text)
            pieces.append(struct.pack('>H', self.class_id))
            pieces.append(struct.pack('>H', self.method_id))
            return pieces

    class CloseOk(amqp_object.Method):

        INDEX = 0x00140029  # 20, 41; 1310761
        NAME = 'Channel.CloseOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces


class Access(amqp_object.Class):

    INDEX = 0x001E  # 30
    NAME = 'Access'

    class Request(amqp_object.Method):

        INDEX = 0x001E000A  # 30, 10; 1966090
        NAME = 'Access.Request'

        def __init__(self, realm='/data', exclusive=False, passive=True,
                     active=True, write=True, read=True):
            self.realm = realm
            self.exclusive = exclusive
            self.passive = passive
            self.active = active
            self.write = write
            self.read = read

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.realm, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.exclusive = (bit_buffer & (1 << 0)) != 0
            self.passive = (bit_buffer & (1 << 1)) != 0
            self.active = (bit_buffer & (1 << 2)) != 0
            self.write = (bit_buffer & (1 << 3)) != 0
            self.read = (bit_buffer & (1 << 4)) != 0
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.realm, str_or_bytes),\
                   'A non-string value was supplied for self.realm'
            data.encode_short_string(pieces, self.realm)
            bit_buffer = 0
            if self.exclusive:
                bit_buffer = bit_buffer | (1 << 0)
            if self.passive:
                bit_buffer = bit_buffer | (1 << 1)
            if self.active:
                bit_buffer = bit_buffer | (1 << 2)
            if self.write:
                bit_buffer = bit_buffer | (1 << 3)
            if self.read:
                bit_buffer = bit_buffer | (1 << 4)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class RequestOk(amqp_object.Method):

        INDEX = 0x001E000B  # 30, 11; 1966091
        NAME = 'Access.RequestOk'

        def __init__(self, ticket=1):
            self.ticket = ticket

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            return pieces


class Exchange(amqp_object.Class):

    INDEX = 0x0028  # 40
    NAME = 'Exchange'

    class Declare(amqp_object.Method):

        INDEX = 0x0028000A  # 40, 10; 2621450
        NAME = 'Exchange.Declare'

        def __init__(self, ticket=0, exchange=None, type='direct',
                     passive=False, durable=False, auto_delete=False,
                     internal=False, nowait=False, arguments={}):
            self.ticket = ticket
            self.exchange = exchange
            self.type = type
            self.passive = passive
            self.durable = durable
            self.auto_delete = auto_delete
            self.internal = internal
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.type, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.passive = (bit_buffer & (1 << 0)) != 0
            self.durable = (bit_buffer & (1 << 1)) != 0
            self.auto_delete = (bit_buffer & (1 << 2)) != 0
            self.internal = (bit_buffer & (1 << 3)) != 0
            self.nowait = (bit_buffer & (1 << 4)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.exchange, str_or_bytes),\
                   'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.type, str_or_bytes),\
                   'A non-string value was supplied for self.type'
            data.encode_short_string(pieces, self.type)
            bit_buffer = 0
            if self.passive:
                bit_buffer = bit_buffer | (1 << 0)
            if self.durable:
                bit_buffer = bit_buffer | (1 << 1)
            if self.auto_delete:
                bit_buffer = bit_buffer | (1 << 2)
            if self.internal:
                bit_buffer = bit_buffer | (1 << 3)
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 4)
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class DeclareOk(amqp_object.Method):

        INDEX = 0x0028000B  # 40, 11; 2621451
        NAME = 'Exchange.DeclareOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Delete(amqp_object.Method):

        INDEX = 0x00280014  # 40, 20; 2621460
        NAME = 'Exchange.Delete'

        def __init__(self, ticket=0, exchange=None, if_unused=False,
                     nowait=False):
            self.ticket = ticket
            self.exchange = exchange
            self.if_unused = if_unused
            self.nowait = nowait

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.exchange, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.if_unused = (bit_buffer & (1 << 0)) != 0
            self.nowait = (bit_buffer & (1 << 1)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.exchange, str_or_bytes),\
                   'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            bit_buffer = 0
            if self.if_unused:
                bit_buffer = bit_buffer | (1 << 0)
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 1)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class DeleteOk(amqp_object.Method):

        INDEX = 0x00280015  # 40, 21; 2621461
        NAME = 'Exchange.DeleteOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Bind(amqp_object.Method):

        INDEX = 0x0028001E  # 40, 30; 2621470
        NAME = 'Exchange.Bind'

        def __init__(self, ticket=0, destination=None, source=None,
                     routing_key='', nowait=False, arguments={}):
            self.ticket = ticket
            self.destination = destination
            self.source = source
            self.routing_key = routing_key
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.destination, offset = data.decode_short_string(encoded, offset)
            self.source, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.destination, str_or_bytes),\
                   'A non-string value was supplied for self.destination'
            data.encode_short_string(pieces, self.destination)
            assert isinstance(self.source, str_or_bytes),\
                   'A non-string value was supplied for self.source'
            data.encode_short_string(pieces, self.source)
            assert isinstance(self.routing_key, str_or_bytes),\
                   'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            bit_buffer = 0
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class BindOk(amqp_object.Method):

        INDEX = 0x0028001F  # 40, 31; 2621471
        NAME = 'Exchange.BindOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Unbind(amqp_object.Method):

        INDEX = 0x00280028  # 40, 40; 2621480
        NAME = 'Exchange.Unbind'

        def __init__(self, ticket=0, destination=None, source=None,
                     routing_key='', nowait=False, arguments={}):
            self.ticket = ticket
            self.destination = destination
            self.source = source
            self.routing_key = routing_key
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.destination, offset = data.decode_short_string(encoded, offset)
            self.source, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.destination, str_or_bytes),\
                   'A non-string value was supplied for self.destination'
            data.encode_short_string(pieces, self.destination)
            assert isinstance(self.source, str_or_bytes),\
                   'A non-string value was supplied for self.source'
            data.encode_short_string(pieces, self.source)
            assert isinstance(self.routing_key, str_or_bytes),\
                   'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            bit_buffer = 0
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class UnbindOk(amqp_object.Method):

        INDEX = 0x00280033  # 40, 51; 2621491
        NAME = 'Exchange.UnbindOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces


class Queue(amqp_object.Class):

    INDEX = 0x0032  # 50
    NAME = 'Queue'

    class Declare(amqp_object.Method):

        INDEX = 0x0032000A  # 50, 10; 3276810
        NAME = 'Queue.Declare'

        def __init__(self, ticket=0, queue='', passive=False, durable=False,
                     exclusive=False, auto_delete=False, nowait=False,
                     arguments={}):
            self.ticket = ticket
            self.queue = queue
            self.passive = passive
            self.durable = durable
            self.exclusive = exclusive
            self.auto_delete = auto_delete
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.passive = (bit_buffer & (1 << 0)) != 0
            self.durable = (bit_buffer & (1 << 1)) != 0
            self.exclusive = (bit_buffer & (1 << 2)) != 0
            self.auto_delete = (bit_buffer & (1 << 3)) != 0
            self.nowait = (bit_buffer & (1 << 4)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                   'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            bit_buffer = 0
            if self.passive:
                bit_buffer = bit_buffer | (1 << 0)
            if self.durable:
                bit_buffer = bit_buffer | (1 << 1)
            if self.exclusive:
                bit_buffer = bit_buffer | (1 << 2)
            if self.auto_delete:
                bit_buffer = bit_buffer | (1 << 3)
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 4)
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class DeclareOk(amqp_object.Method):

        INDEX = 0x0032000B  # 50, 11; 3276811
        NAME = 'Queue.DeclareOk'

        def __init__(self, queue=None, message_count=None, consumer_count=None):
            self.queue = queue
            self.message_count = message_count
            self.consumer_count = consumer_count

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.queue, offset = data.decode_short_string(encoded, offset)
            self.message_count = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.consumer_count = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.queue, str_or_bytes),\
                   'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            pieces.append(struct.pack('>I', self.message_count))
            pieces.append(struct.pack('>I', self.consumer_count))
            return pieces

    class Bind(amqp_object.Method):

        INDEX = 0x00320014  # 50, 20; 3276820
        NAME = 'Queue.Bind'

        def __init__(self, ticket=0, queue='', exchange=None, routing_key='',
                     nowait=False, arguments={}):
            self.ticket = ticket
            self.queue = queue
            self.exchange = exchange
            self.routing_key = routing_key
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                   'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            assert isinstance(self.exchange, str_or_bytes),\
                   'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                   'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            bit_buffer = 0
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class BindOk(amqp_object.Method):

        INDEX = 0x00320015  # 50, 21; 3276821
        NAME = 'Queue.BindOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Purge(amqp_object.Method):

        INDEX = 0x0032001E  # 50, 30; 3276830
        NAME = 'Queue.Purge'

        def __init__(self, ticket=0, queue='', nowait=False):
            self.ticket = ticket
            self.queue = queue
            self.nowait = nowait

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                   'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            bit_buffer = 0
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class PurgeOk(amqp_object.Method):

        INDEX = 0x0032001F  # 50, 31; 3276831
        NAME = 'Queue.PurgeOk'

        def __init__(self, message_count=None):
            self.message_count = message_count

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.message_count = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>I', self.message_count))
            return pieces

    class Delete(amqp_object.Method):

        INDEX = 0x00320028  # 50, 40; 3276840
        NAME = 'Queue.Delete'

        def __init__(self, ticket=0, queue='', if_unused=False, if_empty=False,
                     nowait=False):
            self.ticket = ticket
            self.queue = queue
            self.if_unused = if_unused
            self.if_empty = if_empty
            self.nowait = nowait

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.if_unused = (bit_buffer & (1 << 0)) != 0
            self.if_empty = (bit_buffer & (1 << 1)) != 0
            self.nowait = (bit_buffer & (1 << 2)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                   'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            bit_buffer = 0
            if self.if_unused:
                bit_buffer = bit_buffer | (1 << 0)
            if self.if_empty:
                bit_buffer = bit_buffer | (1 << 1)
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 2)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class DeleteOk(amqp_object.Method):

        INDEX = 0x00320029  # 50, 41; 3276841
        NAME = 'Queue.DeleteOk'

        def __init__(self, message_count=None):
            self.message_count = message_count

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.message_count = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>I', self.message_count))
            return pieces

    class Unbind(amqp_object.Method):

        INDEX = 0x00320032  # 50, 50; 3276850
        NAME = 'Queue.Unbind'

        def __init__(self, ticket=0, queue='', exchange=None, routing_key='',
                     arguments={}):
            self.ticket = ticket
            self.queue = queue
            self.exchange = exchange
            self.routing_key = routing_key
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                   'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            assert isinstance(self.exchange, str_or_bytes),\
                   'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                   'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            data.encode_table(pieces, self.arguments)
            return pieces

    class UnbindOk(amqp_object.Method):

        INDEX = 0x00320033  # 50, 51; 3276851
        NAME = 'Queue.UnbindOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces


class Basic(amqp_object.Class):

    INDEX = 0x003C  # 60
    NAME = 'Basic'

    class Qos(amqp_object.Method):

        INDEX = 0x003C000A  # 60, 10; 3932170
        NAME = 'Basic.Qos'

        def __init__(self, prefetch_size=0, prefetch_count=0, global_=False):
            self.prefetch_size = prefetch_size
            self.prefetch_count = prefetch_count
            self.global_ = global_

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.prefetch_size = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.prefetch_count = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.global_ = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>I', self.prefetch_size))
            pieces.append(struct.pack('>H', self.prefetch_count))
            bit_buffer = 0
            if self.global_:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class QosOk(amqp_object.Method):

        INDEX = 0x003C000B  # 60, 11; 3932171
        NAME = 'Basic.QosOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Consume(amqp_object.Method):

        INDEX = 0x003C0014  # 60, 20; 3932180
        NAME = 'Basic.Consume'

        def __init__(self, ticket=0, queue='', consumer_tag='', no_local=False,
                     no_ack=False, exclusive=False, nowait=False, arguments={}):
            self.ticket = ticket
            self.queue = queue
            self.consumer_tag = consumer_tag
            self.no_local = no_local
            self.no_ack = no_ack
            self.exclusive = exclusive
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.no_local = (bit_buffer & (1 << 0)) != 0
            self.no_ack = (bit_buffer & (1 << 1)) != 0
            self.exclusive = (bit_buffer & (1 << 2)) != 0
            self.nowait = (bit_buffer & (1 << 3)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                   'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            assert isinstance(self.consumer_tag, str_or_bytes),\
                   'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            bit_buffer = 0
            if self.no_local:
                bit_buffer = bit_buffer | (1 << 0)
            if self.no_ack:
                bit_buffer = bit_buffer | (1 << 1)
            if self.exclusive:
                bit_buffer = bit_buffer | (1 << 2)
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 3)
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class ConsumeOk(amqp_object.Method):

        INDEX = 0x003C0015  # 60, 21; 3932181
        NAME = 'Basic.ConsumeOk'

        def __init__(self, consumer_tag=None):
            self.consumer_tag = consumer_tag

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.consumer_tag, str_or_bytes),\
                   'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            return pieces

    class Cancel(amqp_object.Method):

        INDEX = 0x003C001E  # 60, 30; 3932190
        NAME = 'Basic.Cancel'

        def __init__(self, consumer_tag=None, nowait=False):
            self.consumer_tag = consumer_tag
            self.nowait = nowait

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.consumer_tag, str_or_bytes),\
                   'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            bit_buffer = 0
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class CancelOk(amqp_object.Method):

        INDEX = 0x003C001F  # 60, 31; 3932191
        NAME = 'Basic.CancelOk'

        def __init__(self, consumer_tag=None):
            self.consumer_tag = consumer_tag

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.consumer_tag, str_or_bytes),\
                   'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            return pieces

    class Publish(amqp_object.Method):

        INDEX = 0x003C0028  # 60, 40; 3932200
        NAME = 'Basic.Publish'

        def __init__(self, ticket=0, exchange='', routing_key='',
                     mandatory=False, immediate=False):
            self.ticket = ticket
            self.exchange = exchange
            self.routing_key = routing_key
            self.mandatory = mandatory
            self.immediate = immediate

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.mandatory = (bit_buffer & (1 << 0)) != 0
            self.immediate = (bit_buffer & (1 << 1)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.exchange, str_or_bytes),\
                   'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                   'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            bit_buffer = 0
            if self.mandatory:
                bit_buffer = bit_buffer | (1 << 0)
            if self.immediate:
                bit_buffer = bit_buffer | (1 << 1)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class Return(amqp_object.Method):

        INDEX = 0x003C0032  # 60, 50; 3932210
        NAME = 'Basic.Return'

        def __init__(self, reply_code=None, reply_text='', exchange=None,
                     routing_key=None):
            self.reply_code = reply_code
            self.reply_text = reply_text
            self.exchange = exchange
            self.routing_key = routing_key

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.reply_code = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.reply_text, offset = data.decode_short_string(encoded, offset)
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.reply_code))
            assert isinstance(self.reply_text, str_or_bytes),\
                   'A non-string value was supplied for self.reply_text'
            data.encode_short_string(pieces, self.reply_text)
            assert isinstance(self.exchange, str_or_bytes),\
                   'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                   'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            return pieces

    class Deliver(amqp_object.Method):

        INDEX = 0x003C003C  # 60, 60; 3932220
        NAME = 'Basic.Deliver'

        def __init__(self, consumer_tag=None, delivery_tag=None,
                     redelivered=False, exchange=None, routing_key=None):
            self.consumer_tag = consumer_tag
            self.delivery_tag = delivery_tag
            self.redelivered = redelivered
            self.exchange = exchange
            self.routing_key = routing_key

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0]
            offset += 8
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.redelivered = (bit_buffer & (1 << 0)) != 0
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.consumer_tag, str_or_bytes),\
                   'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            pieces.append(struct.pack('>Q', self.delivery_tag))
            bit_buffer = 0
            if self.redelivered:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            assert isinstance(self.exchange, str_or_bytes),\
                   'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                   'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            return pieces

    class Get(amqp_object.Method):

        INDEX = 0x003C0046  # 60, 70; 3932230
        NAME = 'Basic.Get'

        def __init__(self, ticket=0, queue='', no_ack=False):
            self.ticket = ticket
            self.queue = queue
            self.no_ack = no_ack

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.no_ack = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                   'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            bit_buffer = 0
            if self.no_ack:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class GetOk(amqp_object.Method):

        INDEX = 0x003C0047  # 60, 71; 3932231
        NAME = 'Basic.GetOk'

        def __init__(self, delivery_tag=None, redelivered=False, exchange=None,
                     routing_key=None, message_count=None):
            self.delivery_tag = delivery_tag
            self.redelivered = redelivered
            self.exchange = exchange
            self.routing_key = routing_key
            self.message_count = message_count

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0]
            offset += 8
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1 self.redelivered = (bit_buffer & (1 << 0)) != 0 self.exchange, offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) self.message_count = struct.unpack_from('>I', encoded, offset)[0] offset += 4 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.redelivered: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied for self.routing_key' data.encode_short_string(pieces, self.routing_key) pieces.append(struct.pack('>I', self.message_count)) return pieces class GetEmpty(amqp_object.Method): INDEX = 0x003C0048 # 60, 72; 3932232 NAME = 'Basic.GetEmpty' def __init__(self, cluster_id=''): self.cluster_id = cluster_id @property def synchronous(self): return False def decode(self, encoded, offset=0): self.cluster_id, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() assert isinstance(self.cluster_id, str_or_bytes),\ 'A non-string value was supplied for self.cluster_id' data.encode_short_string(pieces, self.cluster_id) return pieces class Ack(amqp_object.Method): INDEX = 0x003C0050 # 60, 80; 3932240 NAME = 'Basic.Ack' def __init__(self, delivery_tag=0, multiple=False): self.delivery_tag = delivery_tag self.multiple = multiple @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.multiple = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.multiple: bit_buffer 
= bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class Reject(amqp_object.Method): INDEX = 0x003C005A # 60, 90; 3932250 NAME = 'Basic.Reject' def __init__(self, delivery_tag=None, requeue=True): self.delivery_tag = delivery_tag self.requeue = requeue @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.requeue = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.requeue: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class RecoverAsync(amqp_object.Method): INDEX = 0x003C0064 # 60, 100; 3932260 NAME = 'Basic.RecoverAsync' def __init__(self, requeue=False): self.requeue = requeue @property def synchronous(self): return False def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.requeue = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.requeue: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class Recover(amqp_object.Method): INDEX = 0x003C006E # 60, 110; 3932270 NAME = 'Basic.Recover' def __init__(self, requeue=False): self.requeue = requeue @property def synchronous(self): return True def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.requeue = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.requeue: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class RecoverOk(amqp_object.Method): INDEX = 0x003C006F # 60, 111; 3932271 NAME = 'Basic.RecoverOk' def __init__(self): pass @property def synchronous(self): return 
False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Nack(amqp_object.Method): INDEX = 0x003C0078 # 60, 120; 3932280 NAME = 'Basic.Nack' def __init__(self, delivery_tag=0, multiple=False, requeue=True): self.delivery_tag = delivery_tag self.multiple = multiple self.requeue = requeue @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.multiple = (bit_buffer & (1 << 0)) != 0 self.requeue = (bit_buffer & (1 << 1)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.multiple: bit_buffer = bit_buffer | (1 << 0) if self.requeue: bit_buffer = bit_buffer | (1 << 1) pieces.append(struct.pack('B', bit_buffer)) return pieces class Tx(amqp_object.Class): INDEX = 0x005A # 90 NAME = 'Tx' class Select(amqp_object.Method): INDEX = 0x005A000A # 90, 10; 5898250 NAME = 'Tx.Select' def __init__(self): pass @property def synchronous(self): return True def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class SelectOk(amqp_object.Method): INDEX = 0x005A000B # 90, 11; 5898251 NAME = 'Tx.SelectOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Commit(amqp_object.Method): INDEX = 0x005A0014 # 90, 20; 5898260 NAME = 'Tx.Commit' def __init__(self): pass @property def synchronous(self): return True def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class CommitOk(amqp_object.Method): INDEX = 0x005A0015 # 90, 21; 5898261 NAME = 'Tx.CommitOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def 
encode(self): pieces = list() return pieces class Rollback(amqp_object.Method): INDEX = 0x005A001E # 90, 30; 5898270 NAME = 'Tx.Rollback' def __init__(self): pass @property def synchronous(self): return True def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class RollbackOk(amqp_object.Method): INDEX = 0x005A001F # 90, 31; 5898271 NAME = 'Tx.RollbackOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Confirm(amqp_object.Class): INDEX = 0x0055 # 85 NAME = 'Confirm' class Select(amqp_object.Method): INDEX = 0x0055000A # 85, 10; 5570570 NAME = 'Confirm.Select' def __init__(self, nowait=False): self.nowait = nowait @property def synchronous(self): return True def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.nowait = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.nowait: bit_buffer = bit_buffer | (1 << 0) pieces.append(struct.pack('B', bit_buffer)) return pieces class SelectOk(amqp_object.Method): INDEX = 0x0055000B # 85, 11; 5570571 NAME = 'Confirm.SelectOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class BasicProperties(amqp_object.Properties): CLASS = Basic INDEX = 0x003C # 60 NAME = 'BasicProperties' FLAG_CONTENT_TYPE = (1 << 15) FLAG_CONTENT_ENCODING = (1 << 14) FLAG_HEADERS = (1 << 13) FLAG_DELIVERY_MODE = (1 << 12) FLAG_PRIORITY = (1 << 11) FLAG_CORRELATION_ID = (1 << 10) FLAG_REPLY_TO = (1 << 9) FLAG_EXPIRATION = (1 << 8) FLAG_MESSAGE_ID = (1 << 7) FLAG_TIMESTAMP = (1 << 6) FLAG_TYPE = (1 << 5) FLAG_USER_ID = (1 << 4) FLAG_APP_ID = (1 << 3) FLAG_CLUSTER_ID = (1 << 2) def __init__(self, content_type=None, content_encoding=None, headers=None, 
delivery_mode=None, priority=None, correlation_id=None, reply_to=None, expiration=None, message_id=None, timestamp=None, type=None, user_id=None, app_id=None, cluster_id=None): self.content_type = content_type self.content_encoding = content_encoding self.headers = headers self.delivery_mode = delivery_mode self.priority = priority self.correlation_id = correlation_id self.reply_to = reply_to self.expiration = expiration self.message_id = message_id self.timestamp = timestamp self.type = type self.user_id = user_id self.app_id = app_id self.cluster_id = cluster_id def decode(self, encoded, offset=0): flags = 0 flagword_index = 0 while True: partial_flags = struct.unpack_from('>H', encoded, offset)[0] offset += 2 flags = flags | (partial_flags << (flagword_index * 16)) if not (partial_flags & 1): break flagword_index += 1 if flags & BasicProperties.FLAG_CONTENT_TYPE: self.content_type, offset = data.decode_short_string(encoded, offset) else: self.content_type = None if flags & BasicProperties.FLAG_CONTENT_ENCODING: self.content_encoding, offset = data.decode_short_string(encoded, offset) else: self.content_encoding = None if flags & BasicProperties.FLAG_HEADERS: (self.headers, offset) = data.decode_table(encoded, offset) else: self.headers = None if flags & BasicProperties.FLAG_DELIVERY_MODE: self.delivery_mode = struct.unpack_from('B', encoded, offset)[0] offset += 1 else: self.delivery_mode = None if flags & BasicProperties.FLAG_PRIORITY: self.priority = struct.unpack_from('B', encoded, offset)[0] offset += 1 else: self.priority = None if flags & BasicProperties.FLAG_CORRELATION_ID: self.correlation_id, offset = data.decode_short_string(encoded, offset) else: self.correlation_id = None if flags & BasicProperties.FLAG_REPLY_TO: self.reply_to, offset = data.decode_short_string(encoded, offset) else: self.reply_to = None if flags & BasicProperties.FLAG_EXPIRATION: self.expiration, offset = data.decode_short_string(encoded, offset) else: self.expiration = None if 
flags & BasicProperties.FLAG_MESSAGE_ID: self.message_id, offset = data.decode_short_string(encoded, offset) else: self.message_id = None if flags & BasicProperties.FLAG_TIMESTAMP: self.timestamp = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 else: self.timestamp = None if flags & BasicProperties.FLAG_TYPE: self.type, offset = data.decode_short_string(encoded, offset) else: self.type = None if flags & BasicProperties.FLAG_USER_ID: self.user_id, offset = data.decode_short_string(encoded, offset) else: self.user_id = None if flags & BasicProperties.FLAG_APP_ID: self.app_id, offset = data.decode_short_string(encoded, offset) else: self.app_id = None if flags & BasicProperties.FLAG_CLUSTER_ID: self.cluster_id, offset = data.decode_short_string(encoded, offset) else: self.cluster_id = None return self def encode(self): pieces = list() flags = 0 if self.content_type is not None: flags = flags | BasicProperties.FLAG_CONTENT_TYPE assert isinstance(self.content_type, str_or_bytes),\ 'A non-string value was supplied for self.content_type' data.encode_short_string(pieces, self.content_type) if self.content_encoding is not None: flags = flags | BasicProperties.FLAG_CONTENT_ENCODING assert isinstance(self.content_encoding, str_or_bytes),\ 'A non-string value was supplied for self.content_encoding' data.encode_short_string(pieces, self.content_encoding) if self.headers is not None: flags = flags | BasicProperties.FLAG_HEADERS data.encode_table(pieces, self.headers) if self.delivery_mode is not None: flags = flags | BasicProperties.FLAG_DELIVERY_MODE pieces.append(struct.pack('B', self.delivery_mode)) if self.priority is not None: flags = flags | BasicProperties.FLAG_PRIORITY pieces.append(struct.pack('B', self.priority)) if self.correlation_id is not None: flags = flags | BasicProperties.FLAG_CORRELATION_ID assert isinstance(self.correlation_id, str_or_bytes),\ 'A non-string value was supplied for self.correlation_id' data.encode_short_string(pieces, 
self.correlation_id) if self.reply_to is not None: flags = flags | BasicProperties.FLAG_REPLY_TO assert isinstance(self.reply_to, str_or_bytes),\ 'A non-string value was supplied for self.reply_to' data.encode_short_string(pieces, self.reply_to) if self.expiration is not None: flags = flags | BasicProperties.FLAG_EXPIRATION assert isinstance(self.expiration, str_or_bytes),\ 'A non-string value was supplied for self.expiration' data.encode_short_string(pieces, self.expiration) if self.message_id is not None: flags = flags | BasicProperties.FLAG_MESSAGE_ID assert isinstance(self.message_id, str_or_bytes),\ 'A non-string value was supplied for self.message_id' data.encode_short_string(pieces, self.message_id) if self.timestamp is not None: flags = flags | BasicProperties.FLAG_TIMESTAMP pieces.append(struct.pack('>Q', self.timestamp)) if self.type is not None: flags = flags | BasicProperties.FLAG_TYPE assert isinstance(self.type, str_or_bytes),\ 'A non-string value was supplied for self.type' data.encode_short_string(pieces, self.type) if self.user_id is not None: flags = flags | BasicProperties.FLAG_USER_ID assert isinstance(self.user_id, str_or_bytes),\ 'A non-string value was supplied for self.user_id' data.encode_short_string(pieces, self.user_id) if self.app_id is not None: flags = flags | BasicProperties.FLAG_APP_ID assert isinstance(self.app_id, str_or_bytes),\ 'A non-string value was supplied for self.app_id' data.encode_short_string(pieces, self.app_id) if self.cluster_id is not None: flags = flags | BasicProperties.FLAG_CLUSTER_ID assert isinstance(self.cluster_id, str_or_bytes),\ 'A non-string value was supplied for self.cluster_id' data.encode_short_string(pieces, self.cluster_id) flag_pieces = list() while True: remainder = flags >> 16 partial_flags = flags & 0xFFFE if remainder != 0: partial_flags |= 1 flag_pieces.append(struct.pack('>H', partial_flags)) flags = remainder if not flags: break return flag_pieces + pieces methods = { 0x000A000A: 
Connection.Start,
    0x000A000B: Connection.StartOk,
    0x000A0014: Connection.Secure,
    0x000A0015: Connection.SecureOk,
    0x000A001E: Connection.Tune,
    0x000A001F: Connection.TuneOk,
    0x000A0028: Connection.Open,
    0x000A0029: Connection.OpenOk,
    0x000A0032: Connection.Close,
    0x000A0033: Connection.CloseOk,
    0x000A003C: Connection.Blocked,
    0x000A003D: Connection.Unblocked,
    0x0014000A: Channel.Open,
    0x0014000B: Channel.OpenOk,
    0x00140014: Channel.Flow,
    0x00140015: Channel.FlowOk,
    0x00140028: Channel.Close,
    0x00140029: Channel.CloseOk,
    0x001E000A: Access.Request,
    0x001E000B: Access.RequestOk,
    0x0028000A: Exchange.Declare,
    0x0028000B: Exchange.DeclareOk,
    0x00280014: Exchange.Delete,
    0x00280015: Exchange.DeleteOk,
    0x0028001E: Exchange.Bind,
    0x0028001F: Exchange.BindOk,
    0x00280028: Exchange.Unbind,
    0x00280033: Exchange.UnbindOk,
    0x0032000A: Queue.Declare,
    0x0032000B: Queue.DeclareOk,
    0x00320014: Queue.Bind,
    0x00320015: Queue.BindOk,
    0x0032001E: Queue.Purge,
    0x0032001F: Queue.PurgeOk,
    0x00320028: Queue.Delete,
    0x00320029: Queue.DeleteOk,
    0x00320032: Queue.Unbind,
    0x00320033: Queue.UnbindOk,
    0x003C000A: Basic.Qos,
    0x003C000B: Basic.QosOk,
    0x003C0014: Basic.Consume,
    0x003C0015: Basic.ConsumeOk,
    0x003C001E: Basic.Cancel,
    0x003C001F: Basic.CancelOk,
    0x003C0028: Basic.Publish,
    0x003C0032: Basic.Return,
    0x003C003C: Basic.Deliver,
    0x003C0046: Basic.Get,
    0x003C0047: Basic.GetOk,
    0x003C0048: Basic.GetEmpty,
    0x003C0050: Basic.Ack,
    0x003C005A: Basic.Reject,
    0x003C0064: Basic.RecoverAsync,
    0x003C006E: Basic.Recover,
    0x003C006F: Basic.RecoverOk,
    0x003C0078: Basic.Nack,
    0x005A000A: Tx.Select,
    0x005A000B: Tx.SelectOk,
    0x005A0014: Tx.Commit,
    0x005A0015: Tx.CommitOk,
    0x005A001E: Tx.Rollback,
    0x005A001F: Tx.RollbackOk,
    0x0055000A: Confirm.Select,
    0x0055000B: Confirm.SelectOk
}

props = {
    0x003C: BasicProperties
}


def has_content(methodNumber):
    return methodNumber in (
        Basic.Publish.INDEX,
        Basic.Return.INDEX,
        Basic.Deliver.INDEX,
        Basic.GetOk.INDEX,
    )
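The `methods` dispatch table above keys every frame on a 32-bit method index composed of a 16-bit AMQP class id in the high word and a 16-bit method id in the low word; Basic.Publish is class 60, method 40, hence `0x003C0028`. A minimal sketch of that packing; the `split_index` helper is illustrative and not part of pika:

```python
# Sketch: how pika's 32-bit AMQP method indices relate to the
# (class id, method id) pairs noted in the comments above.
# split_index is a hypothetical helper, not a pika API.

def split_index(index):
    """Split a 32-bit AMQP method index into (class_id, method_id)."""
    return index >> 16, index & 0xFFFF

# Basic.Publish: class 60, method 40
assert split_index(0x003C0028) == (60, 40)
# Rebuilding the index is the inverse shift-and-or:
assert (60 << 16) | 40 == 0x003C0028
```

The same arithmetic explains the decimal values in the generated comments (e.g. `60, 40; 3932200`, since 60 * 65536 + 40 == 3932200).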
pika-0.10.0/pika/utils.py
"""
Non-module-specific functions shared by modules in the pika package

"""
import collections


def is_callable(handle):
    """Return True if the handle passed in is a callable method/function.

    :param any handle: The object to check
    :rtype: bool

    """
    return isinstance(handle, collections.Callable)
pika-0.10.0/setup.cfg
[bdist_wheel]
universal = 1
pika-0.10.0/setup.py
from setuptools import setup
import os

# Conditionally include additional modules for docs
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
requirements = list()
if on_rtd:
    requirements.append('tornado')
    requirements.append('twisted')
    #requirements.append('pyev')

long_description = ('Pika is a pure-Python implementation of the AMQP 0-9-1 '
                    'protocol that tries to stay fairly independent of the '
                    'underlying network support library. Pika was developed '
                    'primarily for use with RabbitMQ, but should also work '
                    'with other AMQP 0-9-1 brokers.')

setup(name='pika',
      version='0.10.0',
      description='Pika Python AMQP Client Library',
      long_description=open('README.rst').read(),
      maintainer='Gavin M. Roy',
      maintainer_email='gavinmroy@gmail.com',
      url='https://pika.readthedocs.org',
      packages=['pika', 'pika.adapters'],
      license='BSD',
      install_requires=requirements,
      package_data={'': ['LICENSE', 'README.rst']},
      extras_require={'tornado': ['tornado'],
                      'twisted': ['twisted'],
                      'libev': ['pyev']},
      classifiers=['Development Status :: 5 - Production/Stable',
                   'Intended Audience :: Developers',
                   'License :: OSI Approved :: BSD License',
                   'Natural Language :: English',
                   'Operating System :: OS Independent',
                   'Programming Language :: Python :: 2.6',
                   'Programming Language :: Python :: 2.7',
                   'Programming Language :: Python :: 3',
                   'Programming Language :: Python :: 3.3',
                   'Programming Language :: Python :: 3.4',
                   'Programming Language :: Python :: Implementation :: CPython',
                   'Programming Language :: Python :: Implementation :: Jython',
                   'Programming Language :: Python :: Implementation :: PyPy',
                   'Topic :: Communications',
                   'Topic :: Internet',
                   'Topic :: Software Development :: Libraries',
                   'Topic :: Software Development :: Libraries :: Python Modules',
                   'Topic :: System :: Networking'],
      zip_safe=True)
pika-0.10.0/test-requirements.txt
coverage
codecov
mock
nose
tornado
twisted
pika-0.10.0/tests/
pika-0.10.0/tests/acceptance/
pika-0.10.0/tests/acceptance/async_adapter_tests.py
import time
import uuid

from pika import spec
from pika.compat import as_bytes

from async_test_base import (AsyncTestCase, BoundQueueTestCase, AsyncAdapters)


class TestA_Connect(AsyncTestCase, AsyncAdapters):

    DESCRIPTION = "Connect, open channel and disconnect"

    def begin(self, channel):
        self.stop()


class TestConfirmSelect(AsyncTestCase, AsyncAdapters):

    DESCRIPTION = "Receive confirmation of Confirm.Select"

    def begin(self, channel):
        channel._on_selectok = self.on_complete
        channel.confirm_delivery()

    def on_complete(self, frame):
        self.assertIsInstance(frame.method, spec.Confirm.SelectOk)
        self.stop()


class TestConsumeCancel(AsyncTestCase, AsyncAdapters):

    DESCRIPTION = "Consume and cancel"

    def begin(self, channel):
        self.queue_name = str(uuid.uuid4())
        channel.queue_declare(self.on_queue_declared, queue=self.queue_name)

    def on_queue_declared(self, frame):
        for i in range(0, 100):
            msg_body = '{0}:{1}:{2}'.format(self.__class__.__name__, i,
                                            time.time())
            self.channel.basic_publish('', self.queue_name, msg_body)
        self.ctag = self.channel.basic_consume(self.on_message,
                                               queue=self.queue_name,
                                               no_ack=True)

    def on_message(self, _channel, _frame, _header, body):
        self.channel.basic_cancel(self.on_cancel, self.ctag)

    def on_cancel(self, _frame):
        self.channel.queue_delete(self.on_deleted, self.queue_name)

    def on_deleted(self, _frame):
        self.stop()


class TestExchangeDeclareAndDelete(AsyncTestCase, AsyncAdapters):

    DESCRIPTION = "Create and delete an exchange"

    X_TYPE = 'direct'

    def begin(self, channel):
        self.name = self.__class__.__name__ + ':' + str(id(self))
        channel.exchange_declare(self.on_exchange_declared, self.name,
                                 exchange_type=self.X_TYPE,
                                 passive=False,
                                 durable=False,
                                 auto_delete=True)

    def on_exchange_declared(self, frame):
        self.assertIsInstance(frame.method, spec.Exchange.DeclareOk)
        self.channel.exchange_delete(self.on_exchange_delete, self.name)

    def on_exchange_delete(self, frame):
        self.assertIsInstance(frame.method, spec.Exchange.DeleteOk)
        self.stop()


class TestExchangeRedeclareWithDifferentValues(AsyncTestCase, AsyncAdapters):

    DESCRIPTION = "should close chan: re-declared exchange w/ diff params"

    X_TYPE1 = 'direct'
    X_TYPE2 = 'topic'

    def begin(self, channel):
        self.name = self.__class__.__name__ + ':' + str(id(self))
        self.channel.add_on_close_callback(self.on_channel_closed)
        channel.exchange_declare(self.on_exchange_declared, self.name,
                                 exchange_type=self.X_TYPE1,
passive=False, durable=False, auto_delete=True) def on_cleanup_channel(self, channel): channel.exchange_delete(None, self.name, nowait=True) self.stop() def on_channel_closed(self, channel, reply_code, reply_text): self.connection.channel(self.on_cleanup_channel) def on_exchange_declared(self, frame): self.channel.exchange_declare(self.on_exchange_declared, self.name, exchange_type=self.X_TYPE2, passive=False, durable=False, auto_delete=True) def on_bad_result(self, frame): self.channel.exchange_delete(None, self.name, nowait=True) raise AssertionError("Should not have received a Queue.DeclareOk") class TestQueueDeclareAndDelete(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Create and delete a queue" def begin(self, channel): channel.queue_declare(self.on_queue_declared, passive=False, durable=False, exclusive=True, auto_delete=False, nowait=False, arguments={'x-expires': self.TIMEOUT * 1000}) def on_queue_declared(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeclareOk) self.channel.queue_delete(self.on_queue_delete, frame.method.queue) def on_queue_delete(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeleteOk) self.stop() class TestQueueNameDeclareAndDelete(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Create and delete a named queue" def begin(self, channel): channel.queue_declare(self.on_queue_declared, str(id(self)), passive=False, durable=False, exclusive=True, auto_delete=True, nowait=False, arguments={'x-expires': self.TIMEOUT * 1000}) def on_queue_declared(self, frame): queue = str(id(self)) self.assertIsInstance(frame.method, spec.Queue.DeclareOk) # Frame's method's queue is encoded (impl detail) self.assertEqual(frame.method.queue, queue) self.channel.queue_delete(self.on_queue_delete, frame.method.queue) def on_queue_delete(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeleteOk) self.stop() class TestQueueRedeclareWithDifferentValues(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Should close chan: re-declared 
queue w/ diff params" def begin(self, channel): self.channel.add_on_close_callback(self.on_channel_closed) channel.queue_declare(self.on_queue_declared, str(id(self)), passive=False, durable=False, exclusive=True, auto_delete=True, nowait=False, arguments={'x-expires': self.TIMEOUT * 1000}) def on_channel_closed(self, channel, reply_code, reply_text): self.stop() def on_queue_declared(self, frame): self.channel.queue_declare(self.on_bad_result, str(id(self)), passive=False, durable=True, exclusive=False, auto_delete=True, nowait=False, arguments={'x-expires': self.TIMEOUT * 1000}) def on_bad_result(self, frame): self.channel.queue_delete(None, str(id(self)), nowait=True) raise AssertionError("Should not have received a Queue.DeclareOk") class TestTX1_Select(AsyncTestCase, AsyncAdapters): DESCRIPTION="Receive confirmation of Tx.Select" def begin(self, channel): channel.tx_select(self.on_complete) def on_complete(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) self.stop() class TestTX2_Commit(AsyncTestCase, AsyncAdapters): DESCRIPTION="Start a transaction, and commit it" def begin(self, channel): channel.tx_select(self.on_selectok) def on_selectok(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) self.channel.tx_commit(self.on_commitok) def on_commitok(self, frame): self.assertIsInstance(frame.method, spec.Tx.CommitOk) self.stop() class TestTX2_CommitFailure(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Close the channel: commit without a TX" def begin(self, channel): self.channel.add_on_close_callback(self.on_channel_closed) self.channel.tx_commit(self.on_commitok) def on_channel_closed(self, channel, reply_code, reply_text): self.stop() def on_selectok(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) def on_commitok(self, frame): raise AssertionError("Should not have received a Tx.CommitOk") class TestTX3_Rollback(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Start a transaction, then rollback" def 
begin(self, channel): channel.tx_select(self.on_selectok) def on_selectok(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) self.channel.tx_rollback(self.on_rollbackok) def on_rollbackok(self, frame): self.assertIsInstance(frame.method, spec.Tx.RollbackOk) self.stop() class TestTX3_RollbackFailure(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Close the channel: rollback without a TX" def begin(self, channel): self.channel.add_on_close_callback(self.on_channel_closed) self.channel.tx_rollback(self.on_commitok) def on_channel_closed(self, channel, reply_code, reply_text): self.stop() def on_commitok(self, frame): raise AssertionError("Should not have received a Tx.RollbackOk") class TestZ_PublishAndConsume(BoundQueueTestCase, AsyncAdapters): DESCRIPTION = "Publish a message and consume it" def on_ready(self, frame): self.ctag = self.channel.basic_consume(self.on_message, self.queue) self.msg_body = "%s: %i" % (self.__class__.__name__, time.time()) self.channel.basic_publish(self.exchange, self.routing_key, self.msg_body) def on_cancelled(self, frame): self.assertIsInstance(frame.method, spec.Basic.CancelOk) self.stop() def on_message(self, channel, method, header, body): self.assertIsInstance(method, spec.Basic.Deliver) self.assertEqual(body, as_bytes(self.msg_body)) self.channel.basic_ack(method.delivery_tag) self.channel.basic_cancel(self.on_cancelled, self.ctag) class TestZ_PublishAndConsumeBig(BoundQueueTestCase, AsyncAdapters): DESCRIPTION = "Publish a big message and consume it" def _get_msg_body(self): return '\n'.join(["%s" % i for i in range(0, 2097152)]) def on_ready(self, frame): self.ctag = self.channel.basic_consume(self.on_message, self.queue) self.msg_body = self._get_msg_body() self.channel.basic_publish(self.exchange, self.routing_key, self.msg_body) def on_cancelled(self, frame): self.assertIsInstance(frame.method, spec.Basic.CancelOk) self.stop() def on_message(self, channel, method, header, body): self.assertIsInstance(method, 
spec.Basic.Deliver) self.assertEqual(body, as_bytes(self.msg_body)) self.channel.basic_ack(method.delivery_tag) self.channel.basic_cancel(self.on_cancelled, self.ctag) class TestZ_PublishAndGet(BoundQueueTestCase, AsyncAdapters): DESCRIPTION = "Publish a message and get it" def on_ready(self, frame): self.msg_body = "%s: %i" % (self.__class__.__name__, time.time()) self.channel.basic_publish(self.exchange, self.routing_key, self.msg_body) self.channel.basic_get(self.on_get, self.queue) def on_get(self, channel, method, header, body): self.assertIsInstance(method, spec.Basic.GetOk) self.assertEqual(body, as_bytes(self.msg_body)) self.channel.basic_ack(method.delivery_tag) self.stop() pika-0.10.0/tests/acceptance/async_test_base.py000066400000000000000000000143171257163076400215050ustar00rootroot00000000000000import select import logging try: import unittest2 as unittest except ImportError: import unittest import platform target = platform.python_implementation() import pika from pika import adapters from pika.adapters import select_connection LOGGER = logging.getLogger(__name__) PARAMETERS = pika.URLParameters('amqp://guest:guest@localhost:5672/%2f') DEFAULT_TIMEOUT = 15 class AsyncTestCase(unittest.TestCase): DESCRIPTION = "" ADAPTER = None TIMEOUT = DEFAULT_TIMEOUT def shortDescription(self): method_desc = super(AsyncTestCase, self).shortDescription() if self.DESCRIPTION: return "%s (%s)" % (self.DESCRIPTION, method_desc) else: return method_desc def begin(self, channel): """Extend to start the actual tests on the channel""" raise AssertionError("AsyncTestCase.begin_test not extended") def start(self, adapter=None): self.adapter = adapter or self.ADAPTER self.connection = self.adapter(PARAMETERS, self.on_open, self.on_open_error, self.on_closed) self.timeout = self.connection.add_timeout(self.TIMEOUT, self.on_timeout) self.connection.ioloop.start() def stop(self): """close the connection and stop the ioloop""" LOGGER.info("Stopping test") 
        self.connection.remove_timeout(self.timeout)
        self.timeout = None
        self.connection.close()

    def _stop(self):
        if hasattr(self, 'timeout') and self.timeout:
            self.connection.remove_timeout(self.timeout)
            self.timeout = None
        if hasattr(self, 'connection') and self.connection:
            self.connection.ioloop.stop()
            self.connection = None

    def tearDown(self):
        self._stop()

    def on_closed(self, connection, reply_code, reply_text):
        """called when the connection has finished closing"""
        LOGGER.debug("Connection Closed")
        self._stop()

    def on_open(self, connection):
        self.channel = connection.channel(self.begin)

    def on_open_error(self, connection):
        connection.ioloop.stop()
        raise AssertionError('Error connecting to RabbitMQ')

    def on_timeout(self):
        """called when stuck waiting for connection to close"""
        # force the ioloop to stop
        self.connection.ioloop.stop()
        raise AssertionError('Test timed out')


class BoundQueueTestCase(AsyncTestCase):

    def tearDown(self):
        """Cleanup auto-declared queue and exchange"""
        self._cconn = self.adapter(PARAMETERS,
                                   self._on_cconn_open,
                                   self._on_cconn_error,
                                   self._on_cconn_closed)

    def start(self, adapter=None):
        # PY3 compat encoding
        self.exchange = 'e' + str(id(self))
        self.queue = 'q' + str(id(self))
        self.routing_key = self.__class__.__name__
        super(BoundQueueTestCase, self).start(adapter)

    def begin(self, channel):
        self.channel.exchange_declare(self.on_exchange_declared,
                                      self.exchange,
                                      exchange_type='direct',
                                      passive=False,
                                      durable=False,
                                      auto_delete=True)

    def on_exchange_declared(self, frame):
        self.channel.queue_declare(self.on_queue_declared,
                                   self.queue,
                                   passive=False,
                                   durable=False,
                                   exclusive=True,
                                   auto_delete=True,
                                   nowait=False,
                                   arguments={'x-expires': self.TIMEOUT * 1000})

    def on_queue_declared(self, frame):
        self.channel.queue_bind(self.on_ready, self.queue, self.exchange,
                                self.routing_key)

    def on_ready(self, frame):
        raise NotImplementedError

    def _on_cconn_closed(self, cconn, *args, **kwargs):
        cconn.ioloop.stop()
        self._cconn = None

    def _on_cconn_error(self, connection):
        connection.ioloop.stop()
        raise AssertionError('Error connecting to RabbitMQ')

    def _on_cconn_open(self, connection):
        connection.channel(self._on_cconn_channel)

    def _on_cconn_channel(self, channel):
        channel.exchange_delete(None, self.exchange, nowait=True)
        channel.queue_delete(None, self.queue, nowait=True)
        self._cconn.close()


#
# In order to write test cases that will be tested using all the async
# adapters, write a class that inherits both from one of the TestCase
# classes above and from the AsyncAdapters class below. This allows you
# to avoid duplicating the test methods for each adapter in each test
# class.
#

class AsyncAdapters(object):

    def select_default_test(self):
        "SelectConnection:DefaultPoller"
        select_connection.POLLER_TYPE = None
        self.start(adapters.SelectConnection)

    def select_select_test(self):
        "SelectConnection:select"
        select_connection.POLLER_TYPE = 'select'
        self.start(adapters.SelectConnection)

    @unittest.skipIf(not hasattr(select, 'poll')
                     or not hasattr(select.poll(), 'modify'),
                     "poll not supported")
    def select_poll_test(self):
        "SelectConnection:poll"
        select_connection.POLLER_TYPE = 'poll'
        self.start(adapters.SelectConnection)

    @unittest.skipIf(not hasattr(select, 'epoll'), "epoll not supported")
    def select_epoll_test(self):
        "SelectConnection:epoll"
        select_connection.POLLER_TYPE = 'epoll'
        self.start(adapters.SelectConnection)

    @unittest.skipIf(not hasattr(select, 'kqueue'), "kqueue not supported")
    def select_kqueue_test(self):
        "SelectConnection:kqueue"
        select_connection.POLLER_TYPE = 'kqueue'
        self.start(adapters.SelectConnection)

    def tornado_test(self):
        "TornadoConnection"
        self.start(adapters.TornadoConnection)

    @unittest.skipIf(target == 'PyPy', 'PyPy is not supported')
    @unittest.skipIf(adapters.LibevConnection is None, 'pyev is not installed')
    def libev_test(self):
        "LibevConnection"
        self.start(adapters.LibevConnection)
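The comment in async_test_base.py describes running one test scenario across every adapter via multiple inheritance. A minimal, self-contained sketch of that mix-in pattern follows; the classes here are simplified stand-ins (not pika's real AsyncTestCase/AsyncAdapters), so no broker or adapter is actually exercised:

```python
# Sketch of the mix-in pattern described above (stand-in classes, not
# pika's real ones): the scenario lives in start(), while the mix-in
# contributes one *_test method per adapter, each of which just calls
# start() with a different adapter.

class ScenarioBase(object):
    """Plays the role of AsyncTestCase: owns the test scenario."""
    ADAPTER = None

    def start(self, adapter=None):
        # Record which adapter the scenario was driven with.
        self.ran_with = adapter or self.ADAPTER


class AdapterMixin(object):
    """Plays the role of AsyncAdapters: one test method per adapter."""

    def select_test(self):
        self.start('SelectConnection')

    def tornado_test(self):
        self.start('TornadoConnection')


class TestPublishScenario(ScenarioBase, AdapterMixin):
    """Inherit from both; each *_test method runs the same scenario."""


t = TestPublishScenario()
t.select_test()
print(t.ran_with)   # SelectConnection
t.tornado_test()
print(t.ran_with)   # TornadoConnection
```

This is why the real test classes (e.g. TestZ_PublishAndGet above) list two bases: the scenario base supplies begin()/start(), and AsyncAdapters supplies the per-adapter entry points that the test runner discovers.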
pika-0.10.0/tests/acceptance/blocking_adapter_test.py

"""blocking adapter test"""
from datetime import datetime
import logging
import socket
import time

try:
    import unittest2 as unittest
except ImportError:
    import unittest

import uuid

from forward_server import ForwardServer

import pika
from pika.adapters import blocking_connection
from pika.compat import as_bytes
import pika.connection
import pika.exceptions

# Disable warning about access to protected member
# pylint: disable=W0212

# Disable warning Attribute defined outside __init__
# pylint: disable=W0201

# Disable warning Missing docstring
# pylint: disable=C0111

# Disable warning Too many public methods
# pylint: disable=R0904

# Disable warning Invalid variable name
# pylint: disable=C0103


LOGGER = logging.getLogger(__name__)

PARAMS_URL_TEMPLATE = (
    'amqp://guest:guest@127.0.0.1:%(port)s/%%2f?socket_timeout=1')
DEFAULT_URL = PARAMS_URL_TEMPLATE % {'port': 5672}
DEFAULT_PARAMS = pika.URLParameters(DEFAULT_URL)
DEFAULT_TIMEOUT = 15


class BlockingTestCaseBase(unittest.TestCase):

    TIMEOUT = DEFAULT_TIMEOUT

    def _connect(self,
                 url=DEFAULT_URL,
                 connection_class=pika.BlockingConnection,
                 impl_class=None):
        parameters = pika.URLParameters(url)
        connection = connection_class(parameters, _impl_class=impl_class)
        self.addCleanup(lambda: connection.close()
                        if connection.is_open else None)

        connection._impl.add_timeout(
            self.TIMEOUT,  # pylint: disable=E1101
            self._on_test_timeout)

        return connection

    def _on_test_timeout(self):
        """Called when test times out"""
        LOGGER.info('%s TIMED OUT (%s)', datetime.utcnow(), self)
        self.fail('Test timed out')


class TestCreateAndCloseConnection(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection: Create and close connection"""
        connection = self._connect()
        self.assertIsInstance(connection, pika.BlockingConnection)
        self.assertTrue(connection.is_open)
        self.assertFalse(connection.is_closed)
        self.assertFalse(connection.is_closing)

        connection.close()
        self.assertTrue(connection.is_closed)
        self.assertFalse(connection.is_open)
        self.assertFalse(connection.is_closing)


class TestConnectionContextManagerClosesConnection(BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: connection context manager closes connection"""
        with self._connect() as connection:
            self.assertIsInstance(connection, pika.BlockingConnection)
            self.assertTrue(connection.is_open)

        self.assertTrue(connection.is_closed)


class TestConnectionContextManagerClosesConnectionAndPassesOriginalException(BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: connection context manager closes connection and passes original exception"""

        class MyException(Exception):
            pass

        with self.assertRaises(MyException):
            with self._connect() as connection:
                self.assertTrue(connection.is_open)

                raise MyException()

        self.assertTrue(connection.is_closed)


class TestConnectionContextManagerClosesConnectionAndPassesSystemException(BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: connection context manager closes connection and passes system exception"""
        with self.assertRaises(SystemExit):
            with self._connect() as connection:
                self.assertTrue(connection.is_open)

                raise SystemExit()

        self.assertTrue(connection.is_closed)


class TestInvalidExchangeTypeRaisesConnectionClosed(BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: ConnectionClosed raised when creating exchange with invalid type"""  # pylint: disable=C0301
        # This test exploits behavior specific to RabbitMQ whereby the broker
        # closes the connection if an attempt is made to declare an exchange
        # with an invalid exchange type
        connection = self._connect()
        ch = connection.channel()

        exg_name = ("TestInvalidExchangeTypeRaisesConnectionClosed_" +
                    uuid.uuid1().hex)

        with self.assertRaises(pika.exceptions.ConnectionClosed) as ex_cm:
            # Attempt to create an exchange with invalid exchange type
            ch.exchange_declare(exg_name, exchange_type='ZZwwInvalid')
        self.assertEqual(ex_cm.exception.args[0], 503)


class TestCreateAndCloseConnectionWithChannelAndConsumer(BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: Create and close connection with channel and consumer"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestCreateAndCloseConnectionWithChannelAndConsumer_q' +
            uuid.uuid1().hex)

        body1 = 'a' * 1024

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Publish the message to the queue by way of default exchange
        ch.publish(exchange='', routing_key=q_name, body=body1)

        # Create a non-ackable consumer
        ch.basic_consume(lambda *x: None, q_name, no_ack=True,
                         exclusive=False, arguments=None)

        connection.close()
        self.assertTrue(connection.is_closed)
        self.assertFalse(connection.is_open)
        self.assertFalse(connection.is_closing)

        self.assertFalse(connection._impl._channels)

        self.assertFalse(ch._consumer_infos)
        self.assertFalse(ch._impl._consumers)


class TestSuddenBrokerDisconnectBeforeChannel(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection resets properly on TCP/IP drop during channel()
        """
        with ForwardServer((DEFAULT_PARAMS.host, DEFAULT_PARAMS.port)) as fwd:
            self.connection = self._connect(
                PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]})

        # Once outside the context, the connection is broken

        # BlockingConnection should raise ConnectionClosed
        with self.assertRaises(pika.exceptions.ConnectionClosed):
            self.connection.channel()

        self.assertTrue(self.connection.is_closed)
        self.assertFalse(self.connection.is_open)
        self.assertIsNone(self.connection._impl.socket)


class TestNoAccessToFileDescriptorAfterConnectionClosed(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection no access file descriptor after ConnectionClosed
        """
        with ForwardServer((DEFAULT_PARAMS.host, DEFAULT_PARAMS.port)) as fwd:
            self.connection = self._connect(
                PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]})

        # Once outside the context, the connection is broken

        # BlockingConnection should raise ConnectionClosed
        with self.assertRaises(pika.exceptions.ConnectionClosed):
            self.connection.channel()

        self.assertTrue(self.connection.is_closed)
        self.assertFalse(self.connection.is_open)
        self.assertIsNone(self.connection._impl.socket)

        # Attempt to operate on the connection once again after ConnectionClosed
        self.assertIsNone(self.connection._impl.socket)

        with self.assertRaises(pika.exceptions.ConnectionClosed):
            self.connection.channel()


class TestConnectWithDownedBroker(BlockingTestCaseBase):

    def test(self):
        """ BlockingConnection to downed broker results in AMQPConnectionError
        """
        # Reserve a port for use in connect
        sock = socket.socket()
        self.addCleanup(sock.close)

        sock.bind(("127.0.0.1", 0))

        port = sock.getsockname()[1]

        sock.close()

        with self.assertRaises(pika.exceptions.AMQPConnectionError):
            self.connection = self._connect(
                PARAMS_URL_TEMPLATE % {"port": port})


class TestDisconnectDuringConnectionStart(BlockingTestCaseBase):

    def test(self):
        """ BlockingConnection TCP/IP connection loss in CONNECTION_START
        """
        fwd = ForwardServer((DEFAULT_PARAMS.host, DEFAULT_PARAMS.port))
        fwd.start()
        self.addCleanup(lambda: fwd.stop() if fwd.running else None)

        class MySelectConnection(pika.SelectConnection):
            assert hasattr(pika.SelectConnection, '_on_connection_start')

            def _on_connection_start(self, *args, **kwargs):
                fwd.stop()
                return super(MySelectConnection, self)._on_connection_start(
                    *args, **kwargs)

        with self.assertRaises(pika.exceptions.ProbableAuthenticationError):
            self._connect(
                PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]},
                impl_class=MySelectConnection)


class TestDisconnectDuringConnectionTune(BlockingTestCaseBase):

    def test(self):
        """ BlockingConnection TCP/IP connection loss in CONNECTION_TUNE
        """
        fwd = ForwardServer((DEFAULT_PARAMS.host, DEFAULT_PARAMS.port))
        fwd.start()
        self.addCleanup(lambda: fwd.stop() if fwd.running else None)

        class MySelectConnection(pika.SelectConnection):
            assert hasattr(pika.SelectConnection, '_on_connection_tune')

            def _on_connection_tune(self, *args, **kwargs):
                fwd.stop()
                return super(MySelectConnection, self)._on_connection_tune(
                    *args, **kwargs)

        with self.assertRaises(pika.exceptions.ProbableAccessDeniedError):
            self._connect(
                PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]},
                impl_class=MySelectConnection)


class TestDisconnectDuringConnectionProtocol(BlockingTestCaseBase):

    def test(self):
        """ BlockingConnection TCP/IP connection loss in CONNECTION_PROTOCOL
        """
        fwd = ForwardServer((DEFAULT_PARAMS.host, DEFAULT_PARAMS.port))
        fwd.start()
        self.addCleanup(lambda: fwd.stop() if fwd.running else None)

        class MySelectConnection(pika.SelectConnection):
            assert hasattr(pika.SelectConnection, '_on_connected')

            def _on_connected(self, *args, **kwargs):
                fwd.stop()
                return super(MySelectConnection, self)._on_connected(
                    *args, **kwargs)

        with self.assertRaises(pika.exceptions.IncompatibleProtocolError):
            self._connect(PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]},
                          impl_class=MySelectConnection)


class TestProcessDataEvents(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.process_data_events"""
        connection = self._connect()

        # Try with time_limit=0
        start_time = time.time()
        connection.process_data_events(time_limit=0)
        elapsed = time.time() - start_time
        self.assertLess(elapsed, 0.25)

        # Try with time_limit=0.005
        start_time = time.time()
        connection.process_data_events(time_limit=0.005)
        elapsed = time.time() - start_time
        self.assertGreaterEqual(elapsed, 0.005)
        self.assertLess(elapsed, 0.25)


class TestConnectionBlockAndUnblock(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection register for Connection.Blocked/Unblocked"""
        connection = self._connect()

        # NOTE: I haven't figured out yet how to coerce RabbitMQ to emit
        # Connection.Block and Connection.Unblock from the test, so we'll
        # just call the registration functions for now, to make sure that
        # registration doesn't crash
        connection.add_on_connection_blocked_callback(lambda frame: None)

        blocked_buffer = []
        evt = blocking_connection._ConnectionBlockedEvt(
            lambda f: blocked_buffer.append("blocked"),
            pika.frame.Method(1, pika.spec.Connection.Blocked('reason')))
        repr(evt)
        evt.dispatch()
        self.assertEqual(blocked_buffer, ["blocked"])

        unblocked_buffer = []
        connection.add_on_connection_unblocked_callback(lambda frame: None)
        evt = blocking_connection._ConnectionUnblockedEvt(
            lambda f: unblocked_buffer.append("unblocked"),
            pika.frame.Method(1, pika.spec.Connection.Unblocked()))
        repr(evt)
        evt.dispatch()
        self.assertEqual(unblocked_buffer, ["unblocked"])


class TestAddTimeoutRemoveTimeout(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.add_timeout and remove_timeout"""
        connection = self._connect()

        # Test timer completion
        start_time = time.time()
        rx_callback = []
        timer_id = connection.add_timeout(
            0.005,
            lambda: rx_callback.append(time.time()))
        while not rx_callback:
            connection.process_data_events(time_limit=None)

        self.assertEqual(len(rx_callback), 1)
        elapsed = time.time() - start_time
        self.assertLess(elapsed, 0.25)

        # Test removing triggered timeout
        connection.remove_timeout(timer_id)

        # Test aborted timer
        rx_callback = []
        timer_id = connection.add_timeout(
            0.001,
            lambda: rx_callback.append(time.time()))
        connection.remove_timeout(timer_id)

        connection.process_data_events(time_limit=0.1)
        self.assertFalse(rx_callback)

        # Make sure _TimerEvt repr doesn't crash
        evt = blocking_connection._TimerEvt(lambda: None)
        repr(evt)


class TestRemoveTimeoutFromTimeoutCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.remove_timeout from timeout callback"""
        connection = self._connect()

        # Test timer completion
        timer_id1 = connection.add_timeout(5, lambda: 0/0)

        rx_timer2 = []
        def on_timer2():
            connection.remove_timeout(timer_id1)
            connection.remove_timeout(timer_id2)
            rx_timer2.append(1)

        timer_id2 = connection.add_timeout(0, on_timer2)

        while not rx_timer2:
            connection.process_data_events(time_limit=None)

        self.assertNotIn(timer_id1, connection._impl.ioloop._timeouts)
        self.assertFalse(connection._ready_events)


class TestSleep(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.sleep"""
        connection = self._connect()

        # Try with duration=0
        start_time = time.time()
        connection.sleep(duration=0)
        elapsed = time.time() - start_time
        self.assertLess(elapsed, 0.25)

        # Try with duration=0.005
        start_time = time.time()
        connection.sleep(duration=0.005)
        elapsed = time.time() - start_time
        self.assertGreaterEqual(elapsed, 0.005)
        self.assertLess(elapsed, 0.25)


class TestConnectionProperties(BlockingTestCaseBase):

    def test(self):
        """Test BlockingConnection properties"""
        connection = self._connect()

        self.assertTrue(connection.is_open)
        self.assertFalse(connection.is_closing)
        self.assertFalse(connection.is_closed)

        self.assertTrue(connection.basic_nack_supported)
        self.assertTrue(connection.consumer_cancel_notify_supported)
        self.assertTrue(connection.exchange_exchange_bindings_supported)
        self.assertTrue(connection.publisher_confirms_supported)

        connection.close()
        self.assertFalse(connection.is_open)
        self.assertFalse(connection.is_closing)
        self.assertTrue(connection.is_closed)


class TestCreateAndCloseChannel(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Create and close channel"""
        connection = self._connect()

        ch = connection.channel()
        self.assertIsInstance(ch, blocking_connection.BlockingChannel)
        self.assertTrue(ch.is_open)
        self.assertFalse(ch.is_closed)
        self.assertFalse(ch.is_closing)
        self.assertIs(ch.connection, connection)

        ch.close()
        self.assertTrue(ch.is_closed)
        self.assertFalse(ch.is_open)
        self.assertFalse(ch.is_closing)


class TestExchangeDeclareAndDelete(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Test exchange_declare and exchange_delete"""
        connection = self._connect()

        ch = connection.channel()

        name = "TestExchangeDeclareAndDelete_" + uuid.uuid1().hex

        # Declare a new exchange
        frame = ch.exchange_declare(name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, name)

        self.assertIsInstance(frame.method, pika.spec.Exchange.DeclareOk)

        # Check if it exists by declaring it passively
        frame = ch.exchange_declare(name, passive=True)
        self.assertIsInstance(frame.method, pika.spec.Exchange.DeclareOk)

        # Delete the exchange
        frame = ch.exchange_delete(name)
        self.assertIsInstance(frame.method, pika.spec.Exchange.DeleteOk)

        # Verify that it's been deleted
        with self.assertRaises(pika.exceptions.ChannelClosed) as cm:
            ch.exchange_declare(name, passive=True)

        self.assertEqual(cm.exception.args[0], 404)


class TestExchangeBindAndUnbind(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Test exchange_bind and exchange_unbind"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestExchangeBindAndUnbind_q' + uuid.uuid1().hex
        src_exg_name = 'TestExchangeBindAndUnbind_src_exg_' + uuid.uuid1().hex
        dest_exg_name = 'TestExchangeBindAndUnbind_dest_exg_' + uuid.uuid1().hex
        routing_key = 'TestExchangeBindAndUnbind'

        # Place channel in publisher-acknowledgments mode so that we may test
        # whether the queue is reachable by publishing with mandatory=True
        res = ch.confirm_delivery()
        self.assertIsNone(res)

        # Declare both exchanges
        ch.exchange_declare(src_exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, src_exg_name)
        ch.exchange_declare(dest_exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, dest_exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Bind the queue to the destination exchange
        ch.queue_bind(q_name, exchange=dest_exg_name, routing_key=routing_key)

        # Verify that the queue is unreachable without exchange-exchange binding
        with self.assertRaises(pika.exceptions.UnroutableError):
            ch.publish(src_exg_name, routing_key, body='', mandatory=True)

        # Bind the exchanges
        frame = ch.exchange_bind(destination=dest_exg_name,
                                 source=src_exg_name,
                                 routing_key=routing_key)
        self.assertIsInstance(frame.method, pika.spec.Exchange.BindOk)

        # Publish a message via the source exchange
        ch.publish(src_exg_name, routing_key,
                   body='TestExchangeBindAndUnbind',
                   mandatory=True)

        # Check that the queue now has one message
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 1)

        # Unbind the exchanges
        frame = ch.exchange_unbind(destination=dest_exg_name,
                                   source=src_exg_name,
                                   routing_key=routing_key)
        self.assertIsInstance(frame.method, pika.spec.Exchange.UnbindOk)

        # Verify that the queue is now unreachable via the source exchange
        with self.assertRaises(pika.exceptions.UnroutableError):
            ch.publish(src_exg_name, routing_key, body='', mandatory=True)


class TestQueueDeclareAndDelete(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Test queue_declare and queue_delete"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestQueueDeclareAndDelete_' + uuid.uuid1().hex

        # Declare a new queue
        frame = ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        self.assertIsInstance(frame.method, pika.spec.Queue.DeclareOk)

        # Check if it exists by declaring it passively
        frame = ch.queue_declare(q_name, passive=True)
        self.assertIsInstance(frame.method, pika.spec.Queue.DeclareOk)

        # Delete the queue
        frame = ch.queue_delete(q_name)
        self.assertIsInstance(frame.method, pika.spec.Queue.DeleteOk)

        # Verify that it's been deleted
        with self.assertRaises(pika.exceptions.ChannelClosed) as cm:
            ch.queue_declare(q_name, passive=True)

        self.assertEqual(cm.exception.args[0], 404)


class TestPassiveQueueDeclareOfUnknownQueueRaisesChannelClosed(
        BlockingTestCaseBase):
    def test(self):
        """BlockingChannel: ChannelClosed raised when passive-declaring unknown queue"""  # pylint: disable=C0301
        connection = self._connect()
        ch = connection.channel()

        q_name = ("TestPassiveQueueDeclareOfUnknownQueueRaisesChannelClosed_q_"
                  + uuid.uuid1().hex)

        with self.assertRaises(pika.exceptions.ChannelClosed) as ex_cm:
            ch.queue_declare(q_name, passive=True)

        self.assertEqual(ex_cm.exception.args[0], 404)


class TestQueueBindAndUnbindAndPurge(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Test queue_bind and queue_unbind"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestQueueBindAndUnbindAndPurge_q' + uuid.uuid1().hex
        exg_name = 'TestQueueBindAndUnbindAndPurge_exg_' + uuid.uuid1().hex
        routing_key = 'TestQueueBindAndUnbindAndPurge'

        # Place channel in publisher-acknowledgments mode so that we may test
        # whether the queue is reachable by publishing with mandatory=True
        res = ch.confirm_delivery()
        self.assertIsNone(res)

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Bind the queue to the exchange using routing key
        frame = ch.queue_bind(q_name, exchange=exg_name,
                              routing_key=routing_key)
        self.assertIsInstance(frame.method, pika.spec.Queue.BindOk)

        # Check that the queue is empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Deposit a message in the queue
        ch.publish(exg_name, routing_key,
                   body='TestQueueBindAndUnbindAndPurge',
                   mandatory=True)

        # Check that the queue now has one message
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 1)

        # Unbind the queue
        frame = ch.queue_unbind(queue=q_name, exchange=exg_name,
                                routing_key=routing_key)
        self.assertIsInstance(frame.method, pika.spec.Queue.UnbindOk)

        # Verify that the queue is now unreachable via that binding
        with self.assertRaises(pika.exceptions.UnroutableError):
            ch.publish(exg_name, routing_key,
                       body='TestQueueBindAndUnbindAndPurge-2',
                       mandatory=True)

        # Purge the queue and verify that 1 message was purged
        frame = ch.queue_purge(q_name)
        self.assertIsInstance(frame.method, pika.spec.Queue.PurgeOk)
        self.assertEqual(frame.method.message_count, 1)

        # Verify that the queue is now empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicGet(BlockingTestCaseBase):

    def tearDown(self):
        LOGGER.info('%s TEARING DOWN (%s)', datetime.utcnow(), self)

    def test(self):
        """BlockingChannel.basic_get"""
        LOGGER.info('%s STARTED (%s)', datetime.utcnow(), self)

        connection = self._connect()
        LOGGER.info('%s CONNECTED (%s)', datetime.utcnow(), self)

        ch = connection.channel()
        LOGGER.info('%s CREATED CHANNEL (%s)', datetime.utcnow(), self)

        q_name = 'TestBasicGet_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()
        LOGGER.info('%s ENABLED PUB-ACKS (%s)', datetime.utcnow(), self)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)
        LOGGER.info('%s DECLARED QUEUE (%s)', datetime.utcnow(), self)

        # Verify result of getting a message from an empty queue
        msg = ch.basic_get(q_name, no_ack=False)
        self.assertTupleEqual(msg, (None, None, None))
        LOGGER.info('%s GOT FROM EMPTY QUEUE (%s)', datetime.utcnow(), self)

        body = 'TestBasicGet'
        # Deposit a message in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body=body,
                   mandatory=True)
        LOGGER.info('%s PUBLISHED (%s)', datetime.utcnow(), self)

        # Get the message
        (method, properties, body) = ch.basic_get(q_name, no_ack=False)
        LOGGER.info('%s GOT FROM NON-EMPTY QUEUE (%s)',
                    datetime.utcnow(), self)
        self.assertIsInstance(method, pika.spec.Basic.GetOk)
        self.assertEqual(method.delivery_tag, 1)
        self.assertFalse(method.redelivered)
        self.assertEqual(method.exchange, '')
        self.assertEqual(method.routing_key, q_name)
        self.assertEqual(method.message_count, 0)

        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertIsNone(properties.headers)
        self.assertEqual(body, as_bytes(body))

        # Ack it
        ch.basic_ack(delivery_tag=method.delivery_tag)
        LOGGER.info('%s ACKED (%s)', datetime.utcnow(), self)

        # Verify that the queue is now empty
        frame = ch.queue_declare(q_name, passive=True)
        LOGGER.info('%s DECLARE PASSIVE QUEUE DONE (%s)',
                    datetime.utcnow(), self)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicReject(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_reject"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicReject_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicReject1',
                   mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicReject2',
                   mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicReject1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicReject2'))

        # Nack the second message
        ch.basic_reject(rx_method.delivery_tag, requeue=True)

        # Verify that exactly one message is present in the queue, namely the
        # second one
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 1)
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicReject2'))


class TestBasicRejectNoRequeue(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_reject with requeue=False"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicRejectNoRequeue_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicRejectNoRequeue1',
                   mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicRejectNoRequeue2',
                   mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicRejectNoRequeue1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicRejectNoRequeue2'))

        # Nack the second message
        ch.basic_reject(rx_method.delivery_tag, requeue=False)

        # Verify that no messages are present in the queue
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicNack(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_nack single message"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicNack_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNack1',
                   mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNack2',
                   mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNack1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNack2'))

        # Nack the second message
        ch.basic_nack(rx_method.delivery_tag, multiple=False, requeue=True)

        # Verify that exactly one message is present in the queue, namely the
        # second one
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 1)
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNack2'))


class TestBasicNackNoRequeue(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_nack with requeue=False"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicNackNoRequeue_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNackNoRequeue1',
                   mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNackNoRequeue2',
                   mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackNoRequeue1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackNoRequeue2'))

        # Nack the second message
        ch.basic_nack(rx_method.delivery_tag, requeue=False)

        # Verify that no messages are present in the queue
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicNackMultiple(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_nack multiple messages"""
        connection = self._connect()

        ch = connection.channel()
        q_name = 'TestBasicNackMultiple_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNackMultiple1',
                   mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicNackMultiple2',
                   mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple2'))

        # Nack both messages via the "multiple" option
        ch.basic_nack(rx_method.delivery_tag, multiple=True, requeue=True)

        # Verify that both messages are present in the queue
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 2)
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple1'))
        (rx_method, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple2'))


class TestBasicRecoverWithRequeue(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_recover with requeue=True.
        NOTE: the requeue=False option is not supported by RabbitMQ broker as
        of this writing (using RabbitMQ 3.5.1)
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestBasicRecoverWithRequeue_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicRecoverWithRequeue1',
                   mandatory=True)
        ch.publish(exchange='', routing_key=q_name,
                   body='TestBasicRecoverWithRequeue2',
                   mandatory=True)

        rx_messages = []
        num_messages = 0
        for msg in ch.consume(q_name, no_ack=False):
            num_messages += 1

            if num_messages == 2:
                ch.basic_recover(requeue=True)

            if num_messages > 2:
                rx_messages.append(msg)

            if num_messages == 4:
                break
        else:
            self.fail('consumer aborted prematurely')

        # Get the messages
        (_, _, rx_body) = rx_messages[0]
        self.assertEqual(rx_body, as_bytes('TestBasicRecoverWithRequeue1'))

        (_, _, rx_body) = rx_messages[1]
        self.assertEqual(rx_body, as_bytes('TestBasicRecoverWithRequeue2'))


class TestTxCommit(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.tx_commit"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestTxCommit_q' + uuid.uuid1().hex

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Select standard transaction mode
        frame = ch.tx_select()
        self.assertIsInstance(frame.method, pika.spec.Tx.SelectOk)

        # Deposit a message in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestTxCommit1', mandatory=True)

        # Verify that queue is still empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Commit the transaction
        ch.tx_commit()

        # Verify that the queue has the expected message
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 1)

        (_, _, rx_body) = ch.basic_get(q_name, no_ack=False)
        self.assertEqual(rx_body, as_bytes('TestTxCommit1'))


class TestTxRollback(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.tx_rollback"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestTxRollback_q' + uuid.uuid1().hex

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Select standard transaction mode
        frame = ch.tx_select()
        self.assertIsInstance(frame.method, pika.spec.Tx.SelectOk)

        # Deposit a message in the queue via default exchange
        ch.publish(exchange='', routing_key=q_name,
                   body='TestTxRollback1', mandatory=True)

        # Verify that queue is still empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Roll back the transaction
        ch.tx_rollback()

        # Verify that the queue continues to be empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicConsumeFromUnknownQueueRaisesChannelClosed(BlockingTestCaseBase):

    def test(self):
        """ChannelClosed raised when consuming from unknown queue"""
        connection = self._connect()
        ch = connection.channel()

        q_name = ("TestBasicConsumeFromUnknownQueueRaisesChannelClosed_q_" +
                  uuid.uuid1().hex)

        with self.assertRaises(pika.exceptions.ChannelClosed) as ex_cm:
            ch.basic_consume(lambda *args: None, q_name)

        self.assertEqual(ex_cm.exception.args[0], 404)


class TestPublishAndBasicPublishWithPubacksUnroutable(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel.publish and basic_publish unroutable message with pubacks"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        exg_name = ('TestPublishAndBasicPublishUnroutable_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestPublishAndBasicPublishUnroutable'

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        res = ch.confirm_delivery()
        self.assertIsNone(res)

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Verify unroutable message handling using basic_publish
        res = ch.basic_publish(exg_name, routing_key=routing_key, body='',
                               mandatory=True)
        self.assertEqual(res, False)

        # Verify unroutable message handling using publish
        msg2_headers = dict(
            test_name='TestPublishAndBasicPublishWithPubacksUnroutable')
        msg2_properties = pika.spec.BasicProperties(headers=msg2_headers)
        with self.assertRaises(pika.exceptions.UnroutableError) as cm:
            ch.publish(exg_name, routing_key=routing_key, body='',
                       properties=msg2_properties, mandatory=True)
        (msg,) = cm.exception.messages
        self.assertIsInstance(msg, blocking_connection.ReturnedMessage)
        self.assertIsInstance(msg.method, pika.spec.Basic.Return)
        self.assertEqual(msg.method.reply_code, 312)
        self.assertEqual(msg.method.exchange, exg_name)
        self.assertEqual(msg.method.routing_key, routing_key)
        self.assertIsInstance(msg.properties, pika.BasicProperties)
        self.assertEqual(msg.properties.headers, msg2_headers)
        self.assertEqual(msg.body, as_bytes(''))


class TestConfirmDeliveryAfterUnroutableMessage(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel.confirm_delivery following unroutable message"""
        connection = self._connect()

        ch = connection.channel()

        exg_name = ('TestConfirmDeliveryAfterUnroutableMessage_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestConfirmDeliveryAfterUnroutableMessage'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Register on-return callback
        returned_messages = []
        ch.add_on_return_callback(lambda *args: returned_messages.append(args))

        # Emit unroutable message without pubacks
        res = ch.basic_publish(exg_name, routing_key=routing_key, body='',
                               mandatory=True)
        self.assertEqual(res, True)

        # Select delivery confirmations
        ch.confirm_delivery()

        # Verify that unroutable message is in pending events
        self.assertEqual(len(ch._pending_events), 1)
        self.assertIsInstance(ch._pending_events[0],
                              blocking_connection._ReturnedMessageEvt)
        # Verify that repr of _ReturnedMessageEvt instance does not crash
        repr(ch._pending_events[0])

        # Dispatch events
        connection.process_data_events()

        self.assertEqual(len(ch._pending_events), 0)

        # Verify that unroutable message was dispatched
        ((channel, method, properties, body,),) = returned_messages
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes(''))


class TestUnroutableMessagesReturnedInNonPubackMode(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel: unroutable messages are returned in non-puback mode"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        exg_name = (
            'TestUnroutableMessageReturnedInNonPubackMode_exg_' +
            uuid.uuid1().hex)
        routing_key = 'TestUnroutableMessageReturnedInNonPubackMode'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Register on-return callback
        returned_messages = []
        ch.add_on_return_callback(
            lambda *args: returned_messages.append(args))

        # Emit unroutable messages without pubacks
        ch.publish(exg_name, routing_key=routing_key, body='msg1',
                   mandatory=True)

        ch.publish(exg_name, routing_key=routing_key, body='msg2',
                   mandatory=True)

        # Process I/O until Basic.Return are dispatched
        while len(returned_messages) < 2:
            connection.process_data_events()

        self.assertEqual(len(returned_messages), 2)

        self.assertEqual(len(ch._pending_events), 0)

        # Verify returned messages
        (channel, method, properties, body,) = returned_messages[0]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg1'))

        (channel, method, properties, body,) = returned_messages[1]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg2'))


class TestUnroutableMessageReturnedInPubackMode(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel: unroutable message is returned in puback mode"""
        connection = self._connect()

        ch = connection.channel()

        exg_name = (
            'TestUnroutableMessageReturnedInPubackMode_exg_' +
            uuid.uuid1().hex)
        routing_key = 'TestUnroutableMessageReturnedInPubackMode'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Select delivery confirmations
        ch.confirm_delivery()

        # Register on-return callback
        returned_messages = []
        ch.add_on_return_callback(
            lambda *args: returned_messages.append(args))

        # Emit unroutable messages with pubacks
        res = ch.basic_publish(exg_name, routing_key=routing_key, body='msg1',
                               mandatory=True)
        self.assertEqual(res, False)

        res = ch.basic_publish(exg_name, routing_key=routing_key, body='msg2',
                               mandatory=True)
        self.assertEqual(res, False)

        # Verify that unroutable messages are already in pending events
        self.assertEqual(len(ch._pending_events), 2)
        self.assertIsInstance(ch._pending_events[0],
                              blocking_connection._ReturnedMessageEvt)
        self.assertIsInstance(ch._pending_events[1],
                              blocking_connection._ReturnedMessageEvt)
        # Verify that repr of _ReturnedMessageEvt instance does not crash
        repr(ch._pending_events[0])
        repr(ch._pending_events[1])

        # Dispatch events
        connection.process_data_events()

        self.assertEqual(len(ch._pending_events), 0)

        # Verify returned messages
        (channel, method, properties, body,) = returned_messages[0]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg1'))

        (channel, method, properties, body,) = returned_messages[1]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg2'))


class TestBasicPublishDeliveredWhenPendingUnroutable(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel.basic_publish msg delivered despite pending unroutable message"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestBasicPublishDeliveredWhenPendingUnroutable_q' +
                  uuid.uuid1().hex)
        exg_name = ('TestBasicPublishDeliveredWhenPendingUnroutable_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestBasicPublishDeliveredWhenPendingUnroutable'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Bind the queue to the exchange using routing key
        frame = ch.queue_bind(q_name, exchange=exg_name,
                              routing_key=routing_key)

        # Attempt to send an unroutable message in the queue via basic_publish
        res = ch.basic_publish(exg_name, routing_key='',
                               body='unroutable-message',
                               mandatory=True)
        self.assertEqual(res, True)

        # Flush channel to force Basic.Return
        connection.channel().close()

        # Deposit a routable message in the queue
        res = ch.basic_publish(exg_name, routing_key=routing_key,
                               body='routable-message',
                               mandatory=True)
        self.assertEqual(res, True)

        # Wait for the queue to get the routable message
        while ch.queue_declare(q_name, passive=True).method.message_count < 1:
            pass

        self.assertEqual(
            ch.queue_declare(q_name, passive=True).method.message_count, 1)

        msg = ch.basic_get(q_name)

        # Check the first message
        self.assertIsInstance(msg, tuple)
        rx_method, rx_properties, rx_body = msg
        self.assertIsInstance(rx_method, pika.spec.Basic.GetOk)
        self.assertEqual(rx_method.delivery_tag, 1)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_body, as_bytes('routable-message'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        # Ack the message
        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Verify that the queue is now empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestPublishAndConsumeWithPubacksAndQosOfOne(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914,R0915
        """BlockingChannel.basic_publish, publish, basic_consume, QoS, \
        Basic.Cancel from broker
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestPublishAndConsumeAndQos_q' + uuid.uuid1().hex
        exg_name = 'TestPublishAndConsumeAndQos_exg_' + uuid.uuid1().hex
        routing_key = 'TestPublishAndConsumeAndQos'

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        res = ch.confirm_delivery()
        self.assertIsNone(res)

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Bind the queue to the exchange using routing key
        frame = ch.queue_bind(q_name, exchange=exg_name,
                              routing_key=routing_key)

        # Deposit a message in the queue via basic_publish
        msg1_headers = dict(
            test_name='TestPublishAndConsumeWithPubacksAndQosOfOne')
        msg1_properties = pika.spec.BasicProperties(headers=msg1_headers)
        res = ch.basic_publish(exg_name, routing_key=routing_key,
                               body='via-basic_publish',
                               properties=msg1_properties,
                               mandatory=True)
        self.assertEqual(res, True)

        # Deposit another message in the queue via publish
        ch.publish(exg_name, routing_key, body='via-publish',
                   mandatory=True)

        # Check that the queue now has two messages
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 2)

        # Configure QoS for one message
        ch.basic_qos(prefetch_size=0, prefetch_count=1, all_channels=False)

        # Create a consumer
        rx_messages = []
        consumer_tag = ch.basic_consume(
            lambda *args: rx_messages.append(args),
            q_name,
            no_ack=False,
            exclusive=False,
            arguments=None)

        # Wait for first message to arrive
        while not rx_messages:
            connection.process_data_events(time_limit=None)

        self.assertEqual(len(rx_messages), 1)

        # Check the first message
        msg = rx_messages[0]
        self.assertIsInstance(msg, tuple)
        rx_ch, rx_method, rx_properties, rx_body = msg
        self.assertIs(rx_ch, ch)
        self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
        self.assertEqual(rx_method.consumer_tag, consumer_tag)
        self.assertEqual(rx_method.delivery_tag, 1)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_properties.headers, msg1_headers)
        self.assertEqual(rx_body, as_bytes('via-basic_publish'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        # Ack the message so that the next one can arrive (we configured QoS
        # with prefetch_count=1)
        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Get the second message
        while len(rx_messages) < 2:
            connection.process_data_events(time_limit=None)

        self.assertEqual(len(rx_messages), 2)

        msg = rx_messages[1]
        self.assertIsInstance(msg, tuple)
        rx_ch, rx_method, rx_properties, rx_body = msg
        self.assertIs(rx_ch, ch)
        self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
        self.assertEqual(rx_method.consumer_tag, consumer_tag)
        self.assertEqual(rx_method.delivery_tag, 2)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_body, as_bytes('via-publish'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Verify that the queue is now empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Attempt to consume again with a short timeout
        connection.process_data_events(time_limit=0.005)
        self.assertEqual(len(rx_messages), 2)

        # Delete the queue and wait for consumer cancellation
        rx_cancellations = []
        ch.add_on_cancel_callback(rx_cancellations.append)
        ch.queue_delete(q_name)
        ch.start_consuming()

        self.assertEqual(len(rx_cancellations), 1)
        frame, = rx_cancellations
        self.assertEqual(frame.method.consumer_tag, consumer_tag)


class TestBasicCancelPurgesPendingConsumerCancellationEvt(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_cancel purges pending _ConsumerCancellationEvt"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestBasicCancelPurgesPendingConsumerCancellationEvt_q' +
                  uuid.uuid1().hex)

        ch.queue_declare(q_name)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        ch.publish('', routing_key=q_name, body='via-publish',
                   mandatory=True)

        # Create a consumer
        rx_messages = []
        consumer_tag = ch.basic_consume(
            lambda *args: rx_messages.append(args),
            q_name,
            no_ack=False,
            exclusive=False,
            arguments=None)

        # Wait for the published message to arrive, but don't consume it
        while not ch._pending_events:
            # Issue synchronous command that forces processing of incoming I/O
            connection.channel().close()

        self.assertEqual(len(ch._pending_events), 1)
        self.assertIsInstance(ch._pending_events[0],
                              blocking_connection._ConsumerDeliveryEvt)

        # Delete the queue and wait for broker-initiated consumer cancellation
        ch.queue_delete(q_name)
        while len(ch._pending_events) < 2:
            # Issue synchronous command that forces processing of incoming I/O
            connection.channel().close()

        self.assertEqual(len(ch._pending_events), 2)
        self.assertIsInstance(ch._pending_events[1],
                              blocking_connection._ConsumerCancellationEvt)

        # Issue consumer cancellation and verify that the pending
        # _ConsumerCancellationEvt instance was removed
        messages = ch.basic_cancel(consumer_tag)
        self.assertEqual(messages, [])

        self.assertEqual(len(ch._pending_events), 0)


class TestBasicPublishWithoutPubacks(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914,R0915
        """BlockingChannel.basic_publish without pubacks"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicPublishWithoutPubacks_q' + uuid.uuid1().hex
        exg_name = 'TestBasicPublishWithoutPubacks_exg_' + uuid.uuid1().hex
        routing_key = 'TestBasicPublishWithoutPubacks'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type='direct')
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Bind the queue to the exchange using routing key
        frame = ch.queue_bind(q_name, exchange=exg_name,
                              routing_key=routing_key)

        # Deposit a message in the queue via basic_publish and mandatory=True
        msg1_headers = dict(
            test_name='TestBasicPublishWithoutPubacks')
        msg1_properties = pika.spec.BasicProperties(headers=msg1_headers)
        res = ch.basic_publish(exg_name, routing_key=routing_key,
                               body='via-basic_publish_mandatory=True',
                               properties=msg1_properties,
                               mandatory=True)
        self.assertEqual(res, True)

        # Deposit a message in the queue via basic_publish and mandatory=False
        res = ch.basic_publish(exg_name, routing_key=routing_key,
                               body='via-basic_publish_mandatory=False',
                               mandatory=False)
        self.assertEqual(res, True)

        # Wait for the messages to arrive in queue
        while ch.queue_declare(q_name, passive=True).method.message_count != 2:
            pass

        # Create a consumer
        rx_messages = []
        consumer_tag = ch.basic_consume(
            lambda *args: rx_messages.append(args),
            q_name,
            no_ack=False,
            exclusive=False,
            arguments=None)

        # Wait for first message to arrive
        while not rx_messages:
            connection.process_data_events(time_limit=None)

        self.assertGreaterEqual(len(rx_messages), 1)

        # Check the first message
        msg = rx_messages[0]
        self.assertIsInstance(msg, tuple)
        rx_ch, rx_method, rx_properties, rx_body = msg
        self.assertIs(rx_ch, ch)
        self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
        self.assertEqual(rx_method.consumer_tag, consumer_tag)
        self.assertEqual(rx_method.delivery_tag, 1)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_properties.headers, msg1_headers)
        self.assertEqual(rx_body, as_bytes('via-basic_publish_mandatory=True'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        # Ack the message so that the next one can arrive
        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Get the second message
        while len(rx_messages) < 2:
            connection.process_data_events(time_limit=None)

        self.assertEqual(len(rx_messages), 2)

        msg = rx_messages[1]
        self.assertIsInstance(msg, tuple)
        rx_ch, rx_method, rx_properties, rx_body = msg
        self.assertIs(rx_ch, ch)
        self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
        self.assertEqual(rx_method.consumer_tag, consumer_tag)
        self.assertEqual(rx_method.delivery_tag, 2)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, exg_name)
        self.assertEqual(rx_method.routing_key, routing_key)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertEqual(rx_body, as_bytes('via-basic_publish_mandatory=False'))

        # There shouldn't be any more events now
        self.assertFalse(ch._pending_events)

        ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

        # Verify that the queue is now empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Attempt to consume again with a short timeout
        connection.process_data_events(time_limit=0.005)
        self.assertEqual(len(rx_messages), 2)


class TestPublishFromBasicConsumeCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_publish from basic_consume callback
        """
        connection = self._connect()

        ch = connection.channel()

        src_q_name = (
            'TestPublishFromBasicConsumeCallback_src_q' + uuid.uuid1().hex)
        dest_q_name = (
            'TestPublishFromBasicConsumeCallback_dest_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        ch.confirm_delivery()

        # Declare source and destination queues
        ch.queue_declare(src_q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, src_q_name)
        ch.queue_declare(dest_q_name, auto_delete=True)
        self.addCleanup(self._connect().channel().queue_delete, dest_q_name)

        # Deposit a message in the source queue
        ch.publish('',
                   routing_key=src_q_name,
                   body='via-publish',
                   mandatory=True)

        # Create a consumer
        def on_consume(channel, method, props, body):
            channel.publish(
                '', routing_key=dest_q_name, body=body,
                properties=props, mandatory=True)
            channel.basic_ack(method.delivery_tag)

        ch.basic_consume(on_consume,
                         src_q_name,
                         no_ack=False,
                         exclusive=False,
                         arguments=None)

        # Consume from destination queue
        for _, _, rx_body in ch.consume(dest_q_name, no_ack=True):
            self.assertEqual(rx_body, as_bytes('via-publish'))
            break
        else:
            self.fail('failed to consume a message from destination q')


class TestStopConsumingFromBasicConsumeCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.stop_consuming from basic_consume callback
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestStopConsumingFromBasicConsumeCallback_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        ch.confirm_delivery()

        # Declare the queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(connection.channel().queue_delete, q_name)

        # Deposit two messages in the queue
        ch.publish('',
                   routing_key=q_name,
                   body='via-publish1',
                   mandatory=True)

        ch.publish('',
                   routing_key=q_name,
                   body='via-publish2',
                   mandatory=True)

        # Create a consumer
        def on_consume(channel, method, props, body):  # pylint: disable=W0613
            channel.stop_consuming()
            channel.basic_ack(method.delivery_tag)

        ch.basic_consume(on_consume,
                         q_name,
                         no_ack=False,
                         exclusive=False,
                         arguments=None)

        ch.start_consuming()

        ch.close()

        ch = connection.channel()

        # Verify that only the second message is present in the queue
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish2'))

        msg = ch.basic_get(q_name)
        self.assertTupleEqual(msg, (None, None, None))


class TestCloseChannelFromBasicConsumeCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.close from basic_consume callback
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestCloseChannelFromBasicConsumeCallback_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        ch.confirm_delivery()

        # Declare the queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(connection.channel().queue_delete, q_name)

        # Deposit two messages in the queue
        ch.publish('',
                   routing_key=q_name,
                   body='via-publish1',
                   mandatory=True)

        ch.publish('',
                   routing_key=q_name,
                   body='via-publish2',
                   mandatory=True)

        # Create a consumer
        def on_consume(channel, method, props, body):  # pylint: disable=W0613
            channel.close()

        ch.basic_consume(on_consume,
                         q_name,
                         no_ack=False,
                         exclusive=False,
                         arguments=None)

        ch.start_consuming()

        self.assertTrue(ch.is_closed)

        # Verify that both messages are present in the queue
        ch = connection.channel()
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish1'))
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish2'))


class TestCloseConnectionFromBasicConsumeCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.close from basic_consume callback
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestCloseConnectionFromBasicConsumeCallback_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        ch.confirm_delivery()

        # Declare the queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Deposit two messages in the queue
        ch.publish('',
                   routing_key=q_name,
                   body='via-publish1',
                   mandatory=True)

        ch.publish('',
                   routing_key=q_name,
                   body='via-publish2',
                   mandatory=True)

        # Create a consumer
        def on_consume(channel, method, props, body):  # pylint: disable=W0613
            connection.close()

        ch.basic_consume(on_consume,
                         q_name,
                         no_ack=False,
                         exclusive=False,
                         arguments=None)

        ch.start_consuming()

        self.assertTrue(ch.is_closed)
        self.assertTrue(connection.is_closed)

        # Verify that both messages are present in the queue
        ch = self._connect().channel()
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish1'))
        _, _, rx_body = ch.basic_get(q_name)
        self.assertEqual(rx_body, as_bytes('via-publish2'))


class TestNonPubAckPublishAndConsumeHugeMessage(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.publish/consume huge message"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestPublishAndConsumeHugeMessage_q' + uuid.uuid1().hex
        body = 'a' * 1000000

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Publish a message to the queue by way of default exchange
        ch.publish(exchange='', routing_key=q_name, body=body)
        LOGGER.info('Published message body size=%s', len(body))

        # Consume the message
        for rx_method, rx_props, rx_body in ch.consume(q_name, no_ack=False,
                                                       exclusive=False,
                                                       arguments=None):
            self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
            self.assertEqual(rx_method.delivery_tag, 1)
            self.assertFalse(rx_method.redelivered)
            self.assertEqual(rx_method.exchange, '')
            self.assertEqual(rx_method.routing_key, q_name)

            self.assertIsInstance(rx_props, pika.BasicProperties)
            self.assertEqual(rx_body, as_bytes(body))

            # Ack the message
            ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

            break

        # There shouldn't be any more events now
        self.assertFalse(ch._queue_consumer_generator.pending_events)

        # Verify that the queue is now empty
        ch.close()
        ch = connection.channel()
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestNonPubackPublishAndConsumeManyMessages(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel non-pub-ack publish/consume many messages"""
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestNonPubackPublishAndConsumeManyMessages_q' +
                  uuid.uuid1().hex)
        body = 'b' * 1024

        num_messages_to_publish = 500

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        for _ in pika.compat.xrange(num_messages_to_publish):
            # Publish a message to the queue by way of default exchange
            ch.publish(exchange='', routing_key=q_name, body=body)

        # Consume the messages
        num_consumed = 0
        for rx_method, rx_props, rx_body in ch.consume(q_name,
                                                       no_ack=False,
                                                       exclusive=False,
                                                       arguments=None):
            num_consumed += 1
            self.assertIsInstance(rx_method, pika.spec.Basic.Deliver)
            self.assertEqual(rx_method.delivery_tag, num_consumed)
            self.assertFalse(rx_method.redelivered)
            self.assertEqual(rx_method.exchange, '')
            self.assertEqual(rx_method.routing_key, q_name)

            self.assertIsInstance(rx_props, pika.BasicProperties)
            self.assertEqual(rx_body, as_bytes(body))

            # Ack the message
            ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False)

            if num_consumed >= num_messages_to_publish:
                break

        # There shouldn't be any more events now
        self.assertFalse(ch._queue_consumer_generator.pending_events)

        ch.close()

        self.assertIsNone(ch._queue_consumer_generator)

        # Verify that the queue is now empty
        ch = connection.channel()
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicCancelWithNonAckableConsumer(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel user cancels non-ackable consumer via basic_cancel"""
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestBasicCancelWithNonAckableConsumer_q' + uuid.uuid1().hex)

        body1 = 'a' * 1024
        body2 = 'b' * 2048

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Publish two messages to the queue by way of default exchange
        ch.publish(exchange='', routing_key=q_name, body=body1)
        ch.publish(exchange='', routing_key=q_name, body=body2)

        # Wait for queue to contain both messages
        while ch.queue_declare(q_name, passive=True).method.message_count != 2:
            pass

        # Create a non-ackable consumer
        consumer_tag = ch.basic_consume(lambda *x: None,
                                        q_name,
                                        no_ack=True,
                                        exclusive=False,
                                        arguments=None)

        # Wait for all messages to be sent by broker to client
        while ch.queue_declare(q_name, passive=True).method.message_count > 0:
            pass

        # Cancel the consumer
        messages = ch.basic_cancel(consumer_tag)

        # Both messages should have been on their way when we cancelled
        self.assertEqual(len(messages), 2)

        _, _, rx_body1 = messages[0]
        self.assertEqual(rx_body1, as_bytes(body1))

        _, _, rx_body2 = messages[1]
        self.assertEqual(rx_body2, as_bytes(body2))

        ch.close()

        ch = connection.channel()

        # Verify that the queue is now empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicCancelWithAckableConsumer(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel user cancels ackable consumer via basic_cancel"""
        connection = self._connect()

        ch = connection.channel()

        q_name = (
            'TestBasicCancelWithAckableConsumer_q' + uuid.uuid1().hex)

        body1 = 'a' * 1024
        body2 = 'b' * 2048

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Publish two messages to the queue by way of default exchange
        ch.publish(exchange='', routing_key=q_name, body=body1)
        ch.publish(exchange='', routing_key=q_name, body=body2)

        # Wait for queue to contain both messages
        while ch.queue_declare(q_name, passive=True).method.message_count != 2:
            pass

        # Create an ackable consumer
        consumer_tag = ch.basic_consume(lambda *x: None,
                                        q_name,
                                        no_ack=False,
                                        exclusive=False,
                                        arguments=None)

        # Wait for all messages to be sent by broker to client
        while ch.queue_declare(q_name, passive=True).method.message_count > 0:
            pass

        # Cancel the consumer; the unacked messages are not returned to the
        # caller
        messages = ch.basic_cancel(consumer_tag)
        self.assertEqual(len(messages), 0)

        ch.close()

        ch = connection.channel()

        # Verify that canceling the ackable consumer restored both messages
        # back to the queue
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 2)


class TestUnackedMessageAutoRestoredToQueueOnChannelClose(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel unacked message restored to q on channel close
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestUnackedMessageAutoRestoredToQueueOnChannelClose_q' +
                  uuid.uuid1().hex)

        body1 = 'a' * 1024
        body2 = 'b' * 2048

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Publish two messages to the queue by way of default exchange
        ch.publish(exchange='', routing_key=q_name, body=body1)
        ch.publish(exchange='', routing_key=q_name, body=body2)

        # Consume the events, but don't ack
        rx_messages = []
        ch.basic_consume(lambda *args: rx_messages.append(args),
                         q_name,
                         no_ack=False,
                         exclusive=False,
                         arguments=None)
        while len(rx_messages) != 2:
            connection.process_data_events(time_limit=None)

        self.assertEqual(rx_messages[0][1].delivery_tag, 1)
        self.assertEqual(rx_messages[1][1].delivery_tag, 2)

        # Verify no more ready messages in queue
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Closing channel should restore messages back to queue
        ch.close()

        # Verify that there are two messages in q now
        ch = connection.channel()

        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 2)


class TestNoAckMessageNotRestoredToQueueOnChannelClose(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel no-ack message not restored to q on channel close
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestNoAckMessageNotRestoredToQueueOnChannelClose_q' +
                  uuid.uuid1().hex)

        body1 = 'a' * 1024
        body2 = 'b' * 2048

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(self._connect().channel().queue_delete, q_name)

        # Publish two messages to the queue by way of default exchange
ch.publish(exchange='', routing_key=q_name, body=body1) ch.publish(exchange='', routing_key=q_name, body=body2) # Consume, but don't ack num_messages = 0 for rx_method, _, _ in ch.consume(q_name, no_ack=True, exclusive=False): num_messages += 1 self.assertEqual(rx_method.delivery_tag, num_messages) if num_messages == 2: break else: self.fail('expected 2 messages, but consumed %i' % (num_messages,)) # Verify no more ready messages in queue frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 0) # Closing channel should not restore no-ack messages back to queue ch.close() # Verify that there are no messages in q now ch = connection.channel() frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 0) class TestChannelFlow(BlockingTestCaseBase): def test(self): """BlockingChannel Channel.Flow activate and deactivate """ connection = self._connect() ch = connection.channel() q_name = ('TestChannelFlow_q' + uuid.uuid1().hex) # Declare a new queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(self._connect().channel().queue_delete, q_name) # Verify zero active consumers on the queue frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.consumer_count, 0) # Create consumer ch.basic_consume(lambda *args: None, q_name) # Verify one active consumer on the queue now frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.consumer_count, 1) # Activate flow from default state (active by default) active = ch.flow(True) self.assertEqual(active, True) # Verify still one active consumer on the queue now frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.consumer_count, 1) # active=False is not supported by RabbitMQ per # https://www.rabbitmq.com/specification.html: # "active=false is not supported by the server. 
Limiting prefetch with
        #  basic.qos provides much better control"

        ## # Deactivate flow
        ## active = ch.flow(False)
        ## self.assertEqual(active, False)
        ##
        ## # Verify zero active consumers on the queue now
        ## frame = ch.queue_declare(q_name, passive=True)
        ## self.assertEqual(frame.method.consumer_count, 0)
        ##
        ## # Re-activate flow
        ## active = ch.flow(True)
        ## self.assertEqual(active, True)
        ##
        ## # Verify one active consumer on the queue once again
        ## frame = ch.queue_declare(q_name, passive=True)
        ## self.assertEqual(frame.method.consumer_count, 1)


if __name__ == '__main__':
    unittest.main()


# pika-0.10.0/tests/acceptance/forward_server.py
"""TCP/IP forwarding/echo service for testing."""

from __future__ import print_function

import array
from datetime import datetime
import errno
import logging
import multiprocessing
import os
import socket
import sys
import threading
import traceback

from pika.compat import PY3

if PY3:
    def buffer(object, offset, size):  # pylint: disable=W0622
        """array etc. have the buffer protocol"""
        return object[offset:offset + size]

try:
    import SocketServer
except ImportError:
    import socketserver as SocketServer  # pylint: disable=F0401


def _trace(fmt, *args):
    """Format and output the text to stderr"""
    print((fmt % args) + "\n", end="", file=sys.stderr)


class ForwardServer(object):
    """Implement a TCP/IP forwarding/echo service for testing. Listens for
    an incoming TCP/IP connection, accepts it, then connects to the given
    remote address and forwards data back and forth between the two
    endpoints.
This is similar to the subset of `netcat` functionality, but without dependency on any specific flavor of netcat Connection forwarding example; forward local connection to default rabbitmq addr, connect to rabbit via forwarder, then disconnect forwarder, then attempt another pika operation to see what happens with ForwardServer(("localhost", 5672)) as fwd: params = pika.ConnectionParameters( host="localhost", port=fwd.server_address[1]) conn = pika.BlockingConnection(params) # Once outside the context, the forwarder is disconnected # Let's see what happens in pika with a disconnected server channel = conn.channel() Echo server example def talk_to_echo_server(port): pass with ForwardServer(None) as fwd: worker = threading.Thread(target=talk_to_echo_server, args=[fwd.server_address[1]]) worker.start() time.sleep(5) worker.join() """ # Amount of time, in seconds, we're willing to wait for the subprocess _SUBPROC_TIMEOUT = 10 def __init__(self, remote_addr, remote_addr_family=socket.AF_INET, remote_socket_type=socket.SOCK_STREAM, server_addr=("127.0.0.1", 0), server_addr_family=socket.AF_INET, server_socket_type=socket.SOCK_STREAM): """ :param tuple remote_addr: remote server's IP address, whose structure depends on remote_addr_family; pair (host-or-ip-addr, port-number). Pass None to have ForwardServer behave as echo server. :param remote_addr_family: socket.AF_INET (the default), socket.AF_INET6 or socket.AF_UNIX. 
:param remote_socket_type: only socket.SOCK_STREAM is supported at this time :param server_addr: optional address for binding this server's listening socket; the format depends on server_addr_family; defaults to ("127.0.0.1", 0) :param server_addr_family: Address family for this server's listening socket; socket.AF_INET (the default), socket.AF_INET6 or socket.AF_UNIX; defaults to socket.AF_INET :param server_socket_type: only socket.SOCK_STREAM is supported at this time """ self._logger = logging.getLogger(__name__) self._remote_addr = remote_addr self._remote_addr_family = remote_addr_family assert remote_socket_type == socket.SOCK_STREAM, remote_socket_type self._remote_socket_type = remote_socket_type assert server_addr is not None self._server_addr = server_addr assert server_addr_family is not None self._server_addr_family = server_addr_family assert server_socket_type == socket.SOCK_STREAM, server_socket_type self._server_socket_type = server_socket_type self._subproc = None @property def running(self): """Property: True if ForwardServer is active""" return self._subproc is not None @property def server_address_family(self): """Property: Get listening socket's address family NOTE: undefined before server starts and after it shuts down """ assert self._server_addr_family is not None, "Not in context" return self._server_addr_family @property def server_address(self): """ Property: Get listening socket's address; the returned value depends on the listening socket's address family NOTE: undefined before server starts and after it shuts down """ assert self._server_addr is not None, "Not in context" return self._server_addr def __enter__(self): """ Context manager entry. Starts the forwarding server :returns: self """ return self.start() def __exit__(self, *args): """ Context manager exit; stops the forwarding server """ self.stop() def start(self): """ Start the server NOTE: The context manager is the recommended way to use ForwardServer. 
start()/stop() are alternatives to the context manager use case and are mutually exclusive with it. :returns: self """ q = multiprocessing.Queue() self._subproc = multiprocessing.Process( target=_run_server, kwargs=dict( local_addr=self._server_addr, local_addr_family=self._server_addr_family, local_socket_type=self._server_socket_type, remote_addr=self._remote_addr, remote_addr_family=self._remote_addr_family, remote_socket_type=self._remote_socket_type, q=q)) self._subproc.daemon = True self._subproc.start() try: # Get server socket info from subprocess self._server_addr_family, self._server_addr = q.get( block=True, timeout=self._SUBPROC_TIMEOUT) except Exception: # pylint: disable=W0703 try: self._logger.exception( "Failed while waiting for local socket info") # Preserve primary exception and traceback raise finally: # Clean up try: self.stop() except Exception: # pylint: disable=W0703 # Suppress secondary exception in favor of the primary self._logger.exception( "Emergency subprocess shutdown failed") return self def stop(self): """Stop the server NOTE: The context manager is the recommended way to use ForwardServer. start()/stop() are alternatives to the context manager use case and are mutually exclusive with it. 
""" _trace("ForwardServer STOPPING") self._logger.info("ForwardServer STOPPING") try: self._subproc.terminate() self._subproc.join(timeout=self._SUBPROC_TIMEOUT) if self._subproc.is_alive(): self._logger.error( "ForwardServer failed to terminate, killing it") os.kill(self._subproc.pid) self._subproc.join(timeout=self._SUBPROC_TIMEOUT) assert not self._subproc.is_alive(), self._subproc # Log subprocess's exit code; NOTE: negative signal.SIGTERM (usually # -15) is normal on POSIX systems - it corresponds to SIGTERM exit_code = self._subproc.exitcode self._logger.info("ForwardServer terminated with exitcode=%s", exit_code) finally: self._subproc = None def _run_server(local_addr, local_addr_family, local_socket_type, remote_addr, remote_addr_family, remote_socket_type, q): """ Run the server; executed in the subprocess :param local_addr: listening address :param local_addr_family: listening address family; one of socket.AF_* :param local_socket_type: listening socket type; typically socket.SOCK_STREAM :param remote_addr: address of the target server :param remote_addr_family: address family for connecting to target server; one of socket.AF_* :param remote_socket_type: socket type for connecting to target server; typically socket.SOCK_STREAM :param multiprocessing.Queue q: queue for depositing the forwarding server's actual listening socket address family and bound address. The parent process waits for this. 
""" # NOTE: We define _ThreadedTCPServer class as a closure in order to # override some of its class members dynamically class _ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer, object): """Threaded streaming server for forwarding""" # Override TCPServer's class members address_family = local_addr_family socket_type = local_socket_type allow_reuse_address = True def __init__(self, remote_addr, remote_addr_family, remote_socket_type): self.remote_addr = remote_addr self.remote_addr_family = remote_addr_family self.remote_socket_type = remote_socket_type super(_ThreadedTCPServer, self).__init__( local_addr, _TCPHandler, bind_and_activate=True) server = _ThreadedTCPServer(remote_addr, remote_addr_family, remote_socket_type) # Send server socket info back to parent process q.put([server.socket.family, server.server_address]) q.close() ## # Validate server's socket fileno ## _trace("Checking server fd=%s after q.put", server.socket.fileno()) ## fcntl.fcntl(server.socket.fileno(), fcntl.F_GETFL) ## _trace("Server fd is OK after q.put") server.serve_forever() class _TCPHandler(SocketServer.StreamRequestHandler): """TCP/IP session handler instantiated by TCPServer upon incoming connection. Implements forwarding/echo of the incoming connection. 
""" _SOCK_RX_BUF_SIZE = 16 * 1024 def handle(self): try: local_sock = self.connection if self.server.remote_addr is not None: # Forwarding set-up remote_dest_sock = remote_src_sock = socket.socket( family=self.server.remote_addr_family, type=self.server.remote_socket_type, proto=socket.IPPROTO_IP) remote_dest_sock.connect(self.server.remote_addr) _trace("%s _TCPHandler connected to remote %s", datetime.utcnow(), remote_dest_sock.getpeername()) else: # Echo set-up remote_dest_sock, remote_src_sock = socket_pair() try: local_forwarder = threading.Thread( target=self._forward, args=(local_sock, remote_dest_sock,)) local_forwarder.setDaemon(True) local_forwarder.start() try: self._forward(remote_src_sock, local_sock) finally: # Wait for local forwarder thread to exit local_forwarder.join() finally: try: try: _safe_shutdown_socket(remote_dest_sock, socket.SHUT_RDWR) finally: if remote_src_sock is not remote_dest_sock: _safe_shutdown_socket(remote_src_sock, socket.SHUT_RDWR) finally: remote_dest_sock.close() if remote_src_sock is not remote_dest_sock: remote_src_sock.close() except: _trace("handle failed:\n%s", "".join(traceback.format_exc())) raise def _forward(self, src_sock, dest_sock): """Forward from src_sock to dest_sock""" src_peername = src_sock.getpeername() _trace("%s forwarding from %s to %s", datetime.utcnow(), src_peername, dest_sock.getpeername()) try: # NOTE: python 2.6 doesn't support bytearray with recv_into, so # we use array.array instead; this is only okay as long as the # array instance isn't shared across threads. 
See # http://bugs.python.org/issue7827 and # groups.google.com/forum/#!topic/comp.lang.python/M6Pqr-KUjQw rx_buf = array.array("B", [0] * self._SOCK_RX_BUF_SIZE) while True: try: nbytes = src_sock.recv_into(rx_buf) except socket.error as e: if e.errno == errno.EINTR: continue elif e.errno == errno.ECONNRESET: # Source peer forcibly closed connection _trace("%s errno.ECONNRESET from %s", datetime.utcnow(), src_peername) break else: _trace("%s Unexpected errno=%s from %s\n%s", datetime.utcnow(), e.errno, src_peername, "".join(traceback.format_stack())) raise if not nbytes: # Source input EOF _trace("%s EOF on %s", datetime.utcnow(), src_peername) break try: dest_sock.sendall(buffer(rx_buf, 0, nbytes)) except socket.error as e: if e.errno == errno.EPIPE: # Destination peer closed its end of the connection _trace("%s Destination peer %s closed its end of " "the connection: errno.EPIPE", datetime.utcnow(), dest_sock.getpeername()) break elif e.errno == errno.ECONNRESET: # Destination peer forcibly closed connection _trace("%s Destination peer %s forcibly closed " "connection: errno.ECONNRESET", datetime.utcnow(), dest_sock.getpeername()) break else: _trace( "%s Unexpected errno=%s in sendall to %s\n%s", datetime.utcnow(), e.errno, dest_sock.getpeername(), "".join(traceback.format_stack())) raise except: _trace("forward failed\n%s", "".join(traceback.format_exc())) raise finally: _trace("%s done forwarding from %s", datetime.utcnow(), src_peername) try: # Let source peer know we're done receiving _safe_shutdown_socket(src_sock, socket.SHUT_RD) finally: # Let destination peer know we're done sending _safe_shutdown_socket(dest_sock, socket.SHUT_WR) def echo(port=0): """ This function implements a simple echo server for testing the Forwarder class. :param int port: port number on which to listen We run this function and it prints out the listening socket binding. Then, we run Forwarder and point it at this echo "server". 
Then, we run telnet and point it at forwarder and see if whatever we type gets echoed back to us. This function exits when the remote end connects, then closes connection """ lsock = socket.socket() lsock.bind(("", port)) lsock.listen(1) _trace("Listening on sockname=%s", lsock.getsockname()) sock, remote_addr = lsock.accept() try: _trace("Connection from peer=%s", remote_addr) while True: try: data = sock.recv(4 * 1024) # pylint: disable=E1101 except socket.error as e: if e.errno == errno.EINTR: continue else: raise if not data: break sock.sendall(data) # pylint: disable=E1101 finally: try: _safe_shutdown_socket(sock, socket.SHUT_RDWR) finally: sock.close() def _safe_shutdown_socket(sock, how=socket.SHUT_RDWR): """ Shutdown a socket, suppressing ENOTCONN """ try: sock.shutdown(how) except socket.error as e: if e.errno != errno.ENOTCONN: raise def socket_pair(family=None, sock_type=socket.SOCK_STREAM, proto=socket.IPPROTO_IP): """ socket.socketpair abstraction with support for Windows :param family: address family; e.g., socket.AF_UNIX, socket.AF_INET, etc.; defaults to socket.AF_UNIX if available, with fallback to socket.AF_INET. 
    :param sock_type: socket type; defaults to socket.SOCK_STREAM
    :param proto: protocol; defaults to socket.IPPROTO_IP
    """
    if family is None:
        if hasattr(socket, "AF_UNIX"):
            family = socket.AF_UNIX
        else:
            family = socket.AF_INET

    if hasattr(socket, "socketpair"):
        socket1, socket2 = socket.socketpair(family, sock_type, proto)
    else:
        # Probably running on Windows where socket.socketpair isn't supported
        # Work around lack of socket.socketpair()
        socket1 = socket2 = None
        listener = socket.socket(family, sock_type, proto)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("localhost", 0))
        listener.listen(1)
        listener_port = listener.getsockname()[1]

        socket1 = socket.socket(family, sock_type, proto)

        # Use thread to connect in background, while foreground issues the
        # blocking accept()
        conn_thread = threading.Thread(
            target=socket1.connect,
            args=(('localhost', listener_port),))
        conn_thread.setDaemon(1)
        conn_thread.start()

        try:
            socket2 = listener.accept()[0]
        finally:
            listener.close()

        # Join/reap background thread
        conn_thread.join(timeout=10)
        assert not conn_thread.isAlive()

    return (socket1, socket2)


# pika-0.10.0/tests/unit/amqp_object_tests.py
"""
Tests for pika.amqp_object
"""
try:
    import mock
except ImportError:
    from unittest import mock
try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika import amqp_object


class AMQPObjectTests(unittest.TestCase):
    def test_base_name(self):
        self.assertEqual(amqp_object.AMQPObject().NAME, 'AMQPObject')

    def test_repr_no_items(self):
        obj = amqp_object.AMQPObject()
        # NOTE: expected literal reconstructed (the angle-bracketed string
        # was stripped during extraction); AMQPObject.__repr__ renders
        # '<NAME>' when the object has no attributes
        self.assertEqual(repr(obj), '<AMQPObject>')

    def test_repr_items(self):
        obj = amqp_object.AMQPObject()
        setattr(obj, 'foo', 'bar')
        setattr(obj, 'baz', 'qux')
        # NOTE: expected literal reconstructed (stripped during extraction);
        # __repr__ renders '<NAME(sorted attr=value items)>'
        self.assertEqual(repr(obj), "<AMQPObject(['baz=qux', 'foo=bar'])>")


class ClassTests(unittest.TestCase):
    def test_base_name(self):
        self.assertEqual(amqp_object.Class().NAME, 'Unextended Class')


class MethodTests(unittest.TestCase):
    def test_base_name(self):
        self.assertEqual(amqp_object.Method().NAME, 'Unextended Method')

    def test_set_content_body(self):
        properties = amqp_object.Properties()
        body = 'This is a test'
        obj = amqp_object.Method()
        obj._set_content(properties, body)
        self.assertEqual(obj._body, body)

    def test_set_content_properties(self):
        properties = amqp_object.Properties()
        body = 'This is a test'
        obj = amqp_object.Method()
        obj._set_content(properties, body)
        self.assertEqual(obj._properties, properties)

    def test_get_body(self):
        properties = amqp_object.Properties()
        body = 'This is a test'
        obj = amqp_object.Method()
        obj._set_content(properties, body)
        self.assertEqual(obj.get_body(), body)

    def test_get_properties(self):
        properties = amqp_object.Properties()
        body = 'This is a test'
        obj = amqp_object.Method()
        obj._set_content(properties, body)
        self.assertEqual(obj.get_properties(), properties)


class PropertiesTests(unittest.TestCase):
    def test_base_name(self):
        self.assertEqual(amqp_object.Properties().NAME,
                         'Unextended Properties')


# pika-0.10.0/tests/unit/base_connection_tests.py
"""
Tests for pika.base_connection.BaseConnection
"""
try:
    import mock
except ImportError:
    from unittest import mock
try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika.adapters import base_connection


class BaseConnectionTests(unittest.TestCase):
    def test_should_raise_value_exception_with_no_params_func_instead(self):
        def foo():
            return True
        self.assertRaises(ValueError, base_connection.BaseConnection, foo)


# pika-0.10.0/tests/unit/blocking_channel_tests.py
# -*- coding: utf8 -*-
"""
Tests for pika.adapters.blocking_connection.BlockingChannel
"""
from collections import deque
import logging
try:
    import mock
except ImportError:
    from unittest import mock
try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika.adapters import blocking_connection
from pika import callback
from pika import channel
from pika import frame
from pika import spec

BLOCKING_CHANNEL = 'pika.adapters.blocking_connection.BlockingChannel'
BLOCKING_CONNECTION = 'pika.adapters.blocking_connection.BlockingConnection'


class ChannelTemplate(channel.Channel):
    channel_number = 1


class BlockingChannelTests(unittest.TestCase):

    @mock.patch(BLOCKING_CONNECTION)
    def _create_connection(self, connection=None):
        return connection

    def setUp(self):
        self.connection = self._create_connection()
        channelImplMock = mock.Mock(spec=ChannelTemplate,
                                    is_closing=False,
                                    is_closed=False,
                                    is_open=True)
        self.obj = blocking_connection.BlockingChannel(channelImplMock,
                                                       self.connection)

    def tearDown(self):
        del self.connection
        del self.obj

    def test_init_initial_value_confirmation(self):
        self.assertFalse(self.obj._delivery_confirmation)

    def test_init_initial_value_pending_events(self):
        self.assertEqual(self.obj._pending_events, deque())

    def test_init_initial_value_buback_return(self):
        self.assertIsNone(self.obj._puback_return)

    def test_basic_consume(self):
        with mock.patch.object(self.obj._impl, '_generate_consumer_tag'):
            self.obj._impl._generate_consumer_tag.return_value = 'ctag0'
            self.obj._impl.basic_consume.return_value = 'ctag0'
            self.obj.basic_consume(mock.Mock(), "queue")
            self.assertEqual(
                self.obj._consumer_infos['ctag0'].state,
                blocking_connection._ConsumerInfo.ACTIVE)


# pika-0.10.0/tests/unit/blocking_connection_tests.py
# -*- coding: utf8 -*-
"""
Tests for pika.adapters.blocking_connection.BlockingConnection
"""
import socket
try:
    from unittest import mock
    patch = mock.patch
except ImportError:
    import mock
    from mock import patch
try:
    import unittest2 as unittest
except ImportError:
    import unittest

import pika
from pika.adapters import blocking_connection


class
BlockingConnectionMockTemplate(blocking_connection.BlockingConnection): pass class SelectConnectionTemplate(blocking_connection.SelectConnection): is_closed = False is_closing = False is_open = True outbound_buffer = [] _channels = dict() class BlockingConnectionTests(unittest.TestCase): """TODO: test properties""" @patch.object(blocking_connection, 'SelectConnection', spec_set=SelectConnectionTemplate) def test_constructor(self, select_connection_class_mock): with mock.patch.object(blocking_connection.BlockingConnection, '_process_io_for_connection_setup'): connection = blocking_connection.BlockingConnection('params') select_connection_class_mock.assert_called_once_with( parameters='params', on_open_callback=mock.ANY, on_open_error_callback=mock.ANY, on_close_callback=mock.ANY, stop_ioloop_on_close=mock.ANY) @patch.object(blocking_connection, 'SelectConnection', spec_set=SelectConnectionTemplate) def test_process_io_for_connection_setup(self, select_connection_class_mock): with mock.patch.object(blocking_connection.BlockingConnection, '_process_io_for_connection_setup'): connection = blocking_connection.BlockingConnection('params') connection._opened_result.set_value_once( select_connection_class_mock.return_value) with mock.patch.object( blocking_connection.BlockingConnection, '_flush_output', spec_set=blocking_connection.BlockingConnection._flush_output): connection._process_io_for_connection_setup() @patch.object(blocking_connection, 'SelectConnection', spec_set=SelectConnectionTemplate) def test_process_io_for_connection_setup_fails_with_open_error( self, select_connection_class_mock): with mock.patch.object(blocking_connection.BlockingConnection, '_process_io_for_connection_setup'): connection = blocking_connection.BlockingConnection('params') connection._open_error_result.set_value_once( select_connection_class_mock.return_value, 'failed') with mock.patch.object( blocking_connection.BlockingConnection, '_flush_output', 
spec_set=blocking_connection.BlockingConnection._flush_output): with self.assertRaises(pika.exceptions.AMQPConnectionError) as cm: connection._process_io_for_connection_setup() self.assertEqual(cm.exception.args[0], 'failed') @patch.object(blocking_connection, 'SelectConnection', spec_set=SelectConnectionTemplate, is_closed=False, outbound_buffer=[]) def test_flush_output(self, select_connection_class_mock): with mock.patch.object(blocking_connection.BlockingConnection, '_process_io_for_connection_setup'): connection = blocking_connection.BlockingConnection('params') connection._opened_result.set_value_once( select_connection_class_mock.return_value) connection._flush_output(lambda: False, lambda: True) @patch.object(blocking_connection, 'SelectConnection', spec_set=SelectConnectionTemplate, is_closed=False, outbound_buffer=[]) def test_flush_output_user_initiated_close(self, select_connection_class_mock): with mock.patch.object(blocking_connection.BlockingConnection, '_process_io_for_connection_setup'): connection = blocking_connection.BlockingConnection('params') connection._user_initiated_close = True connection._closed_result.set_value_once( select_connection_class_mock.return_value, 200, 'success') connection._flush_output(lambda: False, lambda: True) @patch.object(blocking_connection, 'SelectConnection', spec_set=SelectConnectionTemplate, is_closed=False, outbound_buffer=[]) def test_flush_output_server_initiated_error_close( self, select_connection_class_mock): with mock.patch.object(blocking_connection.BlockingConnection, '_process_io_for_connection_setup'): connection = blocking_connection.BlockingConnection('params') connection._user_initiated_close = False connection._closed_result.set_value_once( select_connection_class_mock.return_value, 404, 'not found') with self.assertRaises(pika.exceptions.ConnectionClosed) as cm: connection._flush_output(lambda: False, lambda: True) self.assertSequenceEqual(cm.exception.args, (404, 'not found')) 
@patch.object(blocking_connection, 'SelectConnection', spec_set=SelectConnectionTemplate, is_closed=False, outbound_buffer=[]) def test_flush_output_server_initiated_no_error_close(self, select_connection_class_mock): with mock.patch.object(blocking_connection.BlockingConnection, '_process_io_for_connection_setup'): connection = blocking_connection.BlockingConnection('params') connection._user_initiated_close = False connection._closed_result.set_value_once( select_connection_class_mock.return_value, 200, 'ok') with self.assertRaises(pika.exceptions.ConnectionClosed) as cm: connection._flush_output(lambda: False, lambda: True) self.assertSequenceEqual( cm.exception.args, ()) @patch.object(blocking_connection, 'SelectConnection', spec_set=SelectConnectionTemplate) def test_close(self, select_connection_class_mock): with mock.patch.object(blocking_connection.BlockingConnection, '_process_io_for_connection_setup'): connection = blocking_connection.BlockingConnection('params') connection._impl._channels = {1: mock.Mock()} with mock.patch.object( blocking_connection.BlockingConnection, '_flush_output', spec_set=blocking_connection.BlockingConnection._flush_output): connection._closed_result.signal_once() connection.close(200, 'text') select_connection_class_mock.return_value.close.assert_called_once_with( 200, 'text') @patch.object(blocking_connection, 'SelectConnection', spec_set=SelectConnectionTemplate) @patch.object(blocking_connection, 'BlockingChannel', spec_set=blocking_connection.BlockingChannel) def test_channel(self, blocking_channel_class_mock, select_connection_class_mock): with mock.patch.object(blocking_connection.BlockingConnection, '_process_io_for_connection_setup'): connection = blocking_connection.BlockingConnection('params') with mock.patch.object( blocking_connection.BlockingConnection, '_flush_output', spec_set=blocking_connection.BlockingConnection._flush_output): channel = connection.channel() @patch.object(blocking_connection, 
                  'SelectConnection',
                  spec_set=SelectConnectionTemplate)
    def test_sleep(self, select_connection_class_mock):
        with mock.patch.object(blocking_connection.BlockingConnection,
                               '_process_io_for_connection_setup'):
            connection = blocking_connection.BlockingConnection('params')

        with mock.patch.object(
                blocking_connection.BlockingConnection,
                '_flush_output',
                spec_set=blocking_connection.BlockingConnection._flush_output):
            connection.sleep(0.00001)


# pika-0.10.0/tests/unit/callback_tests.py
# -*- coding: utf8 -*-
"""
Tests for pika.callback
"""
import logging
try:
    import mock
except ImportError:
    from unittest import mock
try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika import amqp_object
from pika import callback
from pika import frame
from pika import spec


class CallbackTests(unittest.TestCase):

    KEY = 'Test Key'

    ARGUMENTS = callback.CallbackManager.ARGUMENTS
    CALLS = callback.CallbackManager.CALLS
    CALLBACK = callback.CallbackManager.CALLBACK
    ONE_SHOT = callback.CallbackManager.ONE_SHOT
    ONLY_CALLER = callback.CallbackManager.ONLY_CALLER

    PREFIX_CLASS = spec.Basic.Consume
    PREFIX = 'Basic.Consume'

    ARGUMENTS_VALUE = {'foo': 'bar'}

    @property
    def _callback_dict(self):
        return {
            self.CALLBACK: self.callback_mock,
            self.ONE_SHOT: True,
            self.ONLY_CALLER: self.mock_caller,
            self.ARGUMENTS: self.ARGUMENTS_VALUE,
            self.CALLS: 1
        }

    def setUp(self):
        self.obj = callback.CallbackManager()
        self.callback_mock = mock.Mock()
        self.mock_caller = mock.Mock()

    def tearDown(self):
        del self.obj
        del self.callback_mock
        del self.mock_caller

    def test_initialization(self):
        obj = callback.CallbackManager()
        self.assertDictEqual(obj._stack, {})

    def test_name_or_value_method_object(self):
        value = spec.Basic.Consume()
        self.assertEqual(callback.name_or_value(value), self.PREFIX)

    def test_name_or_value_basic_consume_object(self):
        self.assertEqual(callback.name_or_value(spec.Basic.Consume()),
                         self.PREFIX)

    def
test_name_or_value_amqpobject_class(self):
        self.assertEqual(callback.name_or_value(self.PREFIX_CLASS),
                         self.PREFIX)

    def test_name_or_value_protocol_header(self):
        self.assertEqual(callback.name_or_value(frame.ProtocolHeader()),
                         'ProtocolHeader')

    def test_name_or_value_method_frame(self):
        value = frame.Method(1, self.PREFIX_CLASS())
        self.assertEqual(callback.name_or_value(value), self.PREFIX)

    def test_name_or_value_str(self):
        value = 'Test String Value'
        expectation = value
        self.assertEqual(callback.name_or_value(value), expectation)

    def test_name_or_value_unicode(self):
        value = u'Это тест значения'
        expectation = 'Это тест значения'
        self.assertEqual(callback.name_or_value(value), expectation)

    def test_empty_callbacks_on_init(self):
        self.assertFalse(self.obj._stack)

    def test_sanitize_decorator_with_args_only(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, None)
        self.assertIn(self.PREFIX, self.obj._stack.keys())

    def test_sanitize_decorator_with_kwargs(self):
        self.obj.add(prefix=self.PREFIX_CLASS, key=self.KEY, callback=None)
        self.assertIn(self.PREFIX, self.obj._stack.keys())

    def test_sanitize_decorator_with_mixed_args_and_kwargs(self):
        self.obj.add(self.PREFIX_CLASS, key=self.KEY, callback=None)
        self.assertIn(self.PREFIX, self.obj._stack.keys())

    def test_add_first_time_prefix_added(self):
        self.obj.add(self.PREFIX, self.KEY, None)
        self.assertIn(self.PREFIX, self.obj._stack)

    def test_add_first_time_key_added(self):
        self.obj.add(self.PREFIX, self.KEY, None)
        self.assertIn(self.KEY, self.obj._stack[self.PREFIX])

    def test_add_first_time_callback_added(self):
        self.obj.add(self.PREFIX, self.KEY, self.callback_mock)
        self.assertEqual(
            self.callback_mock,
            self.obj._stack[self.PREFIX][self.KEY][0][self.CALLBACK])

    def test_add_oneshot_default_is_true(self):
        self.obj.add(self.PREFIX, self.KEY, None)
        self.assertTrue(
            self.obj._stack[self.PREFIX][self.KEY][0][self.ONE_SHOT])

    def test_add_oneshot_is_false(self):
        self.obj.add(self.PREFIX, self.KEY, None, False)
        self.assertFalse(
            self.obj._stack[self.PREFIX][self.KEY][0][self.ONE_SHOT])

    def test_add_only_caller_default_is_false(self):
        self.obj.add(self.PREFIX, self.KEY, None)
        self.assertFalse(
            self.obj._stack[self.PREFIX][self.KEY][0][self.ONLY_CALLER])

    def test_add_only_caller_true(self):
        self.obj.add(self.PREFIX, self.KEY, None, only_caller=True)
        self.assertTrue(
            self.obj._stack[self.PREFIX][self.KEY][0][self.ONLY_CALLER])

    def test_add_returns_prefix_value_and_key(self):
        self.assertEqual(self.obj.add(self.PREFIX, self.KEY, None),
                         (self.PREFIX, self.KEY))

    def test_add_duplicate_callback(self):
        mock_callback = mock.Mock()

        def add_callback():
            self.obj.add(self.PREFIX, self.KEY, mock_callback, False)

        with mock.patch('pika.callback.LOGGER',
                        spec=logging.Logger) as logger:
            logger.warning = mock.Mock()
            add_callback()
            add_callback()
            DUPLICATE_WARNING = callback.CallbackManager.DUPLICATE_WARNING
            logger.warning.assert_called_once_with(DUPLICATE_WARNING,
                                                   self.PREFIX, self.KEY)

    def test_add_duplicate_callback_returns_prefix_value_and_key(self):
        self.obj.add(self.PREFIX, self.KEY, None)
        self.assertEqual(self.obj.add(self.PREFIX, self.KEY, None),
                         (self.PREFIX, self.KEY))

    def test_clear(self):
        self.obj.add(self.PREFIX, self.KEY, None)
        self.obj.clear()
        self.assertDictEqual(self.obj._stack, dict())

    def test_cleanup_removes_prefix(self):
        OTHER_PREFIX = 'Foo'
        self.obj.add(self.PREFIX, self.KEY, None)
        self.obj.add(OTHER_PREFIX, 'Bar', None)
        self.obj.cleanup(self.PREFIX)
        self.assertNotIn(self.PREFIX, self.obj._stack)

    def test_cleanup_keeps_other_prefix(self):
        OTHER_PREFIX = 'Foo'
        self.obj.add(self.PREFIX, self.KEY, None)
        self.obj.add(OTHER_PREFIX, 'Bar', None)
        self.obj.cleanup(self.PREFIX)
        self.assertIn(OTHER_PREFIX, self.obj._stack)

    def test_cleanup_returns_true(self):
        self.obj.add(self.PREFIX, self.KEY, None)
        self.assertTrue(self.obj.cleanup(self.PREFIX))

    def test_missing_prefix(self):
        self.assertFalse(self.obj.cleanup(self.PREFIX))

    def test_pending_none(self):
        self.assertIsNone(self.obj.pending(self.PREFIX_CLASS, self.KEY))

    def test_pending_one(self):
        self.obj.add(self.PREFIX, self.KEY, None)
        self.assertEqual(self.obj.pending(self.PREFIX_CLASS, self.KEY), 1)

    def test_pending_two(self):
        self.obj.add(self.PREFIX, self.KEY, None)
        self.obj.add(self.PREFIX, self.KEY, lambda x: True)
        self.assertEqual(self.obj.pending(self.PREFIX_CLASS, self.KEY), 2)

    def test_process_callback_false(self):
        self.obj._stack = dict()
        self.assertFalse(self.obj.process('FAIL', 'False', 'Empty',
                                          self.mock_caller, []))

    def test_process_false(self):
        self.assertFalse(self.obj.process(self.PREFIX_CLASS, self.KEY, self))

    def test_process_true(self):
        self.obj.add(self.PREFIX, self.KEY, self.callback_mock)
        self.assertTrue(self.obj.process(self.PREFIX_CLASS, self.KEY, self))

    def test_process_mock_called(self):
        args = (1, None, 'Hi')
        self.obj.add(self.PREFIX, self.KEY, self.callback_mock)
        self.obj.process(self.PREFIX, self.KEY, self, args)
        self.callback_mock.assert_called_once_with(args)

    def test_process_one_shot_removed(self):
        args = (1, None, 'Hi')
        self.obj.add(self.PREFIX, self.KEY, self.callback_mock)
        self.obj.process(self.PREFIX, self.KEY, self, args)
        self.assertNotIn(self.PREFIX, self.obj._stack)

    def test_process_non_one_shot_prefix_not_removed(self):
        self.obj.add(self.PREFIX, self.KEY, self.callback_mock,
                     one_shot=False)
        self.obj.process(self.PREFIX, self.KEY, self)
        self.assertIn(self.PREFIX, self.obj._stack)

    def test_process_non_one_shot_key_not_removed(self):
        self.obj.add(self.PREFIX, self.KEY, self.callback_mock,
                     one_shot=False)
        self.obj.process(self.PREFIX, self.KEY, self)
        self.assertIn(self.KEY, self.obj._stack[self.PREFIX])

    def test_process_non_one_shot_callback_not_removed(self):
        self.obj.add(self.PREFIX, self.KEY, self.callback_mock,
                     one_shot=False)
        self.obj.process(self.PREFIX, self.KEY, self)
        self.assertEqual(
            self.obj._stack[self.PREFIX][self.KEY][0][self.CALLBACK],
            self.callback_mock)

    def test_process_only_caller_fails(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock,
                     only_caller=self.mock_caller)
        self.obj.process(self.PREFIX_CLASS, self.KEY, self)
        self.assertFalse(self.callback_mock.called)

    def test_process_only_caller_fails_no_removal(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock,
                     only_caller=self.mock_caller)
        self.obj.process(self.PREFIX_CLASS, self.KEY, self)
        self.assertEqual(
            self.obj._stack[self.PREFIX][self.KEY][0][self.CALLBACK],
            self.callback_mock)

    def test_remove_with_no_callbacks_pending(self):
        self.obj = callback.CallbackManager()
        self.assertFalse(self.obj.remove(self.PREFIX, self.KEY,
                                         self.callback_mock))

    def test_remove_with_callback_true(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        self.assertTrue(self.obj.remove(self.PREFIX, self.KEY,
                                        self.callback_mock))

    def test_remove_with_callback_false(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, None)
        self.assertTrue(self.obj.remove(self.PREFIX, self.KEY,
                                        self.callback_mock))

    def test_remove_with_callback_true_empty_stack(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        self.obj.remove(prefix=self.PREFIX, key=self.KEY,
                        callback_value=self.callback_mock)
        self.assertDictEqual(self.obj._stack, dict())

    def test_remove_with_callback_true_non_empty_stack(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.mock_caller)
        self.obj.remove(self.PREFIX, self.KEY, self.callback_mock)
        self.assertEqual(
            self.mock_caller,
            self.obj._stack[self.PREFIX][self.KEY][0][self.CALLBACK])

    def test_remove_prefix_key_with_other_key_prefix_remains(self):
        OTHER_KEY = 'Other Key'
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        self.obj.add(self.PREFIX_CLASS, OTHER_KEY, self.mock_caller)
        self.obj.remove(self.PREFIX, self.KEY, self.callback_mock)
        self.assertIn(self.PREFIX, self.obj._stack)

    def test_remove_prefix_key_with_other_key_remains(self):
        OTHER_KEY = 'Other Key'
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        self.obj.add(prefix=self.PREFIX_CLASS, key=OTHER_KEY,
                     callback=self.mock_caller)
        self.obj.remove(self.PREFIX, self.KEY)
        self.assertIn(OTHER_KEY, self.obj._stack[self.PREFIX])

    def test_remove_prefix_key_with_other_key_callback_remains(self):
        OTHER_KEY = 'Other Key'
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        self.obj.add(self.PREFIX_CLASS, OTHER_KEY, self.mock_caller)
        self.obj.remove(self.PREFIX, self.KEY)
        self.assertEqual(
            self.mock_caller,
            self.obj._stack[self.PREFIX][OTHER_KEY][0][self.CALLBACK])

    def test_remove_all(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        self.obj.remove_all(self.PREFIX, self.KEY)
        self.assertNotIn(self.PREFIX, self.obj._stack)

    def test_should_process_callback_true(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        value = self.obj._callback_dict(self.callback_mock, False, None, None)
        self.assertTrue(
            self.obj._should_process_callback(value, self.mock_caller, []))

    def test_should_process_callback_false_argument_fail(self):
        self.obj.clear()
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock,
                     arguments={'foo': 'baz'})
        self.assertFalse(
            self.obj._should_process_callback(self._callback_dict,
                                              self.mock_caller,
                                              [{'foo': 'baz'}]))

    def test_should_process_callback_false_only_caller_failure(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        value = self.obj._callback_dict(self.callback_mock, False, self, None)
        self.assertTrue(
            self.obj._should_process_callback(value, self.mock_caller, []))

    def test_should_process_callback_false_only_caller_failure(self):
        self.obj.add(self.PREFIX_CLASS, self.KEY, self.callback_mock)
        value = self.obj._callback_dict(self.callback_mock, False,
                                        self.mock_caller, None)
        self.assertTrue(
            self.obj._should_process_callback(value, self.mock_caller, []))

    def test_dict(self):
        self.assertDictEqual(
            self.obj._callback_dict(self.callback_mock, True,
                                    self.mock_caller, self.ARGUMENTS_VALUE),
            self._callback_dict)

    def test_arguments_match_no_arguments(self):
        self.assertFalse(self.obj._arguments_match(self._callback_dict, []))

    def test_arguments_match_dict_argument(self):
        self.assertTrue(self.obj._arguments_match(self._callback_dict,
                                                  [self.ARGUMENTS_VALUE]))

    def test_arguments_match_dict_argument_no_attribute(self):
        self.assertFalse(self.obj._arguments_match(self._callback_dict,
                                                   [{}]))

    def test_arguments_match_dict_argument_no_match(self):
        self.assertFalse(self.obj._arguments_match(self._callback_dict,
                                                   [{'foo': 'baz'}]))

    def test_arguments_match_obj_argument(self):
        class TestObj(object):
            foo = 'bar'
        test_instance = TestObj()
        self.assertTrue(self.obj._arguments_match(self._callback_dict,
                                                  [test_instance]))

    def test_arguments_match_obj_no_attribute(self):
        class TestObj(object):
            qux = 'bar'
        test_instance = TestObj()
        self.assertFalse(self.obj._arguments_match(self._callback_dict,
                                                   [test_instance]))

    def test_arguments_match_obj_argument_no_match(self):
        class TestObj(object):
            foo = 'baz'
        test_instance = TestObj()
        self.assertFalse(self.obj._arguments_match(self._callback_dict,
                                                   [test_instance]))

    def test_arguments_match_obj_argument_with_method(self):
        class TestFrame(object):
            method = None

        class MethodObj(object):
            foo = 'bar'
        test_instance = TestFrame()
        test_instance.method = MethodObj()
        self.assertTrue(self.obj._arguments_match(self._callback_dict,
                                                  [test_instance]))

    def test_arguments_match_obj_argument_with_method_no_match(self):
        class TestFrame(object):
            method = None

        class MethodObj(object):
            foo = 'baz'
        test_instance = TestFrame()
        test_instance.method = MethodObj()
        self.assertFalse(self.obj._arguments_match(self._callback_dict,
                                                   [test_instance]))

pika-0.10.0/tests/unit/channel_tests.py

"""
Tests for pika.channel.Channel

"""
import collections
import logging
try:
    import mock
except ImportError:
    from unittest import mock
try:
    import unittest2 as unittest
except ImportError:
    import unittest
import warnings

from pika import channel
from pika import exceptions
from pika import frame
from pika import spec


class ChannelTests(unittest.TestCase):

    @mock.patch('pika.connection.Connection')
    def _create_connection(self, connection=None):
        return connection

    def setUp(self):
        self.connection = self._create_connection()
        self._on_openok_callback = mock.Mock()
        self.obj = channel.Channel(self.connection, 1,
                                   self._on_openok_callback)
        warnings.resetwarnings()

    def tearDown(self):
        del self.connection
        del self._on_openok_callback
        del self.obj
        warnings.resetwarnings()

    def test_init_invalid_channel_number(self):
        self.assertRaises(exceptions.InvalidChannelNumber,
                          channel.Channel, 'Foo', self.connection)

    def test_init_channel_number(self):
        self.assertEqual(self.obj.channel_number, 1)

    def test_init_callbacks(self):
        self.assertEqual(self.obj.callbacks, self.connection.callbacks)

    def test_init_connection(self):
        self.assertEqual(self.obj.connection, self.connection)

    def test_init_frame_dispatcher(self):
        self.assertIsInstance(self.obj.frame_dispatcher,
                              channel.ContentFrameDispatcher)

    def test_init_blocked(self):
        self.assertIsInstance(self.obj._blocked, collections.deque)

    def test_init_blocking(self):
        self.assertEqual(self.obj._blocking, None)

    def test_init_on_flowok_callback(self):
        self.assertEqual(self.obj._on_flowok_callback, None)

    def test_init_has_on_flow_callback(self):
        self.assertEqual(self.obj._has_on_flow_callback, False)

    def test_init_on_openok_callback(self):
        self.assertEqual(self.obj._on_openok_callback,
                         self._on_openok_callback)

    def test_init_state(self):
        self.assertEqual(self.obj._state, channel.Channel.CLOSED)

    def test_init_cancelled(self):
        self.assertIsInstance(self.obj._cancelled, set)

    def test_init_consumers(self):
        self.assertEqual(self.obj._consumers, dict())

    def test_init_pending(self):
        self.assertEqual(self.obj._pending, dict())

    def test_init_on_getok_callback(self):
        self.assertEqual(self.obj._on_getok_callback, None)

    def test_add_callback(self):
        mock_callback = mock.Mock()
        self.obj.add_callback(mock_callback, [spec.Basic.Qos])
        self.connection.callbacks.add.assert_called_once_with(
            self.obj.channel_number, spec.Basic.Qos, mock_callback, True)

    def test_add_callback_multiple_replies(self):
        mock_callback = mock.Mock()
        self.obj.add_callback(mock_callback,
                              [spec.Basic.Qos, spec.Basic.QosOk])
        calls = [mock.call(self.obj.channel_number, spec.Basic.Qos,
                           mock_callback, True),
                 mock.call(self.obj.channel_number, spec.Basic.QosOk,
                           mock_callback, True)]
        self.connection.callbacks.add.assert_has_calls(calls)

    def test_add_on_cancel_callback(self):
        mock_callback = mock.Mock()
        self.obj.add_on_cancel_callback(mock_callback)
        self.connection.callbacks.add.assert_called_once_with(
            self.obj.channel_number, spec.Basic.Cancel, mock_callback, False)

    def test_add_on_close_callback(self):
        mock_callback = mock.Mock()
        self.obj.add_on_close_callback(mock_callback)
        self.connection.callbacks.add.assert_called_once_with(
            self.obj.channel_number, '_on_channel_close', mock_callback,
            False, self.obj)

    def test_add_on_flow_callback(self):
        mock_callback = mock.Mock()
        self.obj.add_on_flow_callback(mock_callback)
        self.connection.callbacks.add.assert_called_once_with(
            self.obj.channel_number, spec.Channel.Flow, mock_callback, False)

    def test_add_on_return_callback(self):
        mock_callback = mock.Mock()
        self.obj.add_on_return_callback(mock_callback)
        self.connection.callbacks.add.assert_called_once_with(
            self.obj.channel_number, '_on_return', mock_callback, False)

    def test_basic_ack_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed, self.obj.basic_ack)

    @mock.patch('pika.channel.Channel._validate_channel_and_callback')
    def test_basic_cancel_calls_validate(self, validate):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag0'
        callback_mock = mock.Mock()
        self.obj._consumers[consumer_tag] = callback_mock
        self.obj.basic_cancel(callback_mock, consumer_tag)
        validate.assert_called_once_with(callback_mock)

    @mock.patch('pika.spec.Basic.Ack')
    @mock.patch('pika.channel.Channel._send_method')
    def test_basic_send_method_calls_rpc(self, send_method, unused):
        self.obj._set_state(self.obj.OPEN)
        self.obj.basic_ack(1, False)
        send_method.assert_called_once_with(spec.Basic.Ack(1, False))

    @mock.patch('pika.channel.Channel._rpc')
    def test_basic_cancel_no_consumer_tag(self, rpc):
        self.obj._set_state(self.obj.OPEN)
        callback_mock = mock.Mock()
        consumer_tag = 'ctag0'
        self.obj.basic_cancel(callback_mock, consumer_tag)
        self.assertFalse(rpc.called)

    @mock.patch('pika.channel.Channel._rpc')
    def test_basic_cancel_channel_cancelled_appended(self, unused):
        self.obj._set_state(self.obj.OPEN)
        callback_mock = mock.Mock()
        consumer_tag = 'ctag0'
        self.obj._consumers[consumer_tag] = mock.Mock()
        self.obj.basic_cancel(callback_mock, consumer_tag)
        self.assertListEqual(list(self.obj._cancelled), [consumer_tag])

    def test_basic_cancel_callback_appended(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag0'
        callback_mock = mock.Mock()
        self.obj._consumers[consumer_tag] = callback_mock
        self.obj.basic_cancel(callback_mock, consumer_tag)
        expectation = [self.obj.channel_number, spec.Basic.CancelOk,
                       callback_mock]
        self.obj.callbacks.add.assert_any_call(*expectation)

    def test_basic_cancel_raises_value_error(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag0'
        callback_mock = mock.Mock()
        self.obj._consumers[consumer_tag] = callback_mock
        self.assertRaises(ValueError, self.obj.basic_cancel,
                          callback_mock, consumer_tag, nowait=True)

    def test_basic_cancel_then_close(self):
        self.obj._set_state(self.obj.OPEN)
        callback_mock = mock.Mock()
        consumer_tag = 'ctag0'
        self.obj._consumers[consumer_tag] = mock.Mock()
        self.obj.basic_cancel(callback_mock, consumer_tag)
        try:
            self.obj.close()
        except exceptions.ChannelClosed:
            self.fail('unable to cancel consumers as channel is closing')
        self.assertTrue(self.obj.is_closing)

    def test_basic_cancel_on_cancel_appended(self):
        self.obj._set_state(self.obj.OPEN)
        self.obj._consumers['ctag0'] = logging.debug
        self.obj.basic_cancel(consumer_tag='ctag0')
        expectation = [self.obj.channel_number, spec.Basic.CancelOk,
                       self.obj._on_cancelok]
        self.obj.callbacks.add.assert_any_call(
            *expectation, arguments={'consumer_tag': 'ctag0'})

    def test_basic_consume_channel_closed(self):
        mock_callback = mock.Mock()
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.basic_consume, mock_callback,
                          'test-queue')

    @mock.patch('pika.channel.Channel._validate_channel_and_callback')
    def test_basic_consume_calls_validate(self, validate):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.basic_consume(mock_callback, 'test-queue')
        validate.assert_called_once_with(mock_callback)

    def test_basic_consume_consumer_tag(self):
        self.obj._set_state(self.obj.OPEN)
        expectation = 'ctag1.'
        mock_callback = mock.Mock()
        self.assertEqual(
            self.obj.basic_consume(mock_callback, 'test-queue')[:6],
            expectation)

    def test_basic_consume_consumer_tag_cancelled_full(self):
        self.obj._set_state(self.obj.OPEN)
        expectation = 'ctag1.'
        mock_callback = mock.Mock()
        for ctag in ['ctag1.%i' % ii for ii in range(11)]:
            self.obj._cancelled.add(ctag)
        self.assertEqual(
            self.obj.basic_consume(mock_callback, 'test-queue')[:6],
            expectation)

    def test_basic_consume_consumer_tag_in_consumers(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag1.0'
        mock_callback = mock.Mock()
        self.obj.basic_consume(mock_callback, 'test-queue',
                               consumer_tag=consumer_tag)
        self.assertIn(consumer_tag, self.obj._consumers)

    def test_basic_consume_duplicate_consumer_tag_raises(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag1.0'
        mock_callback = mock.Mock()
        self.obj._consumers[consumer_tag] = logging.debug
        self.assertRaises(exceptions.DuplicateConsumerTag,
                          self.obj.basic_consume, mock_callback,
                          'test-queue', False, False, consumer_tag)

    def test_basic_consume_consumers_callback_value(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag1.0'
        mock_callback = mock.Mock()
        self.obj.basic_consume(mock_callback, 'test-queue',
                               consumer_tag=consumer_tag)
        self.assertEqual(self.obj._consumers[consumer_tag], mock_callback)

    def test_basic_consume_has_pending_list(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag1.0'
        mock_callback = mock.Mock()
        self.obj.basic_consume(mock_callback, 'test-queue',
                               consumer_tag=consumer_tag)
        self.assertIn(consumer_tag, self.obj._pending)

    def test_basic_consume_consumers_pending_list_is_empty(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag1.0'
        mock_callback = mock.Mock()
        self.obj.basic_consume(mock_callback, 'test-queue',
                               consumer_tag=consumer_tag)
        self.assertEqual(self.obj._pending[consumer_tag], list())

    @mock.patch('pika.spec.Basic.Consume')
    @mock.patch('pika.channel.Channel._rpc')
    def test_basic_consume_consumers_rpc_called(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag1.0'
        mock_callback = mock.Mock()
        self.obj.basic_consume(mock_callback, 'test-queue',
                               consumer_tag=consumer_tag)
        expectation = spec.Basic.Consume(queue='test-queue',
                                         consumer_tag=consumer_tag,
                                         no_ack=False,
                                         exclusive=False)
        rpc.assert_called_once_with(expectation, self.obj._on_eventok,
                                    [(spec.Basic.ConsumeOk,
                                      {'consumer_tag': consumer_tag})])

    @mock.patch('pika.channel.Channel._validate_channel_and_callback')
    def test_basic_get_calls_validate(self, validate):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.basic_get(mock_callback, 'test-queue')
        validate.assert_called_once_with(mock_callback)

    @mock.patch('pika.channel.Channel._send_method')
    def test_basic_get_callback(self, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.basic_get(mock_callback, 'test-queue')
        self.assertEqual(self.obj._on_getok_callback, mock_callback)

    @mock.patch('pika.spec.Basic.Get')
    @mock.patch('pika.channel.Channel._send_method')
    def test_basic_get_send_method_called(self, send_method, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.basic_get(mock_callback, 'test-queue', False)
        send_method.assert_called_once_with(
            spec.Basic.Get(queue='test-queue', no_ack=False))

    def test_basic_nack_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.basic_nack, 0, False, True)

    @mock.patch('pika.spec.Basic.Nack')
    @mock.patch('pika.channel.Channel._send_method')
    def test_basic_nack_send_method_request(self, send_method, unused):
        self.obj._set_state(self.obj.OPEN)
        self.obj.basic_nack(1, False, True)
        send_method.assert_called_once_with(spec.Basic.Nack(1, False, True))

    def test_basic_publish_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.basic_publish, 'foo', 'bar', 'baz')

    @mock.patch('pika.channel.LOGGER')
    @mock.patch('pika.spec.Basic.Publish')
    @mock.patch('pika.channel.Channel._send_method')
    def test_immediate_called_logger_warning(self, send_method, unused,
                                             logger):
        self.obj._set_state(self.obj.OPEN)
        exchange = 'basic_publish_test'
        routing_key = 'routing-key-fun'
        body = b'This is my body'
        properties = spec.BasicProperties(content_type='text/plain')
        mandatory = False
        immediate = True
        self.obj.basic_publish(exchange, routing_key, body, properties,
                               mandatory, immediate)
        logger.warning.assert_called_once_with('The immediate flag is '
                                               'deprecated in RabbitMQ')

    @mock.patch('pika.spec.Basic.Publish')
    @mock.patch('pika.channel.Channel._send_method')
    def test_basic_publish_send_method_request(self, send_method, unused):
        self.obj._set_state(self.obj.OPEN)
        exchange = 'basic_publish_test'
        routing_key = 'routing-key-fun'
        body = b'This is my body'
        properties = spec.BasicProperties(content_type='text/plain')
        mandatory = False
        immediate = False
        self.obj.basic_publish(exchange, routing_key, body, properties,
                               mandatory, immediate)
        send_method.assert_called_once_with(
            spec.Basic.Publish(exchange=exchange,
                               routing_key=routing_key,
                               mandatory=mandatory,
                               immediate=immediate),
            (properties, body))

    def test_basic_qos_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.basic_qos, 0, False, True)

    @mock.patch('pika.spec.Basic.Qos')
    @mock.patch('pika.channel.Channel._rpc')
    def test_basic_qos_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.basic_qos(mock_callback, 10, 20, False)
        rpc.assert_called_once_with(
            spec.Basic.Qos(mock_callback, 10, 20, False),
            mock_callback, [spec.Basic.QosOk])

    def test_basic_reject_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.basic_reject, 1, False)

    @mock.patch('pika.spec.Basic.Reject')
    @mock.patch('pika.channel.Channel._send_method')
    def test_basic_reject_send_method_request(self, send_method, unused):
        self.obj._set_state(self.obj.OPEN)
        self.obj.basic_reject(1, True)
        send_method.assert_called_once_with(spec.Basic.Reject(1, True))

    def test_basic_recover_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.basic_qos, 0, False, True)

    @mock.patch('pika.spec.Basic.Recover')
    @mock.patch('pika.channel.Channel._rpc')
    def test_basic_recover_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.basic_recover(mock_callback, True)
        rpc.assert_called_once_with(spec.Basic.Recover(mock_callback, True),
                                    mock_callback, [spec.Basic.RecoverOk])

    def test_close_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed, self.obj.close)

    def test_close_state(self):
        self.obj._set_state(self.obj.OPEN)
        self.obj.close()
        self.assertEqual(self.obj._state, channel.Channel.CLOSING)

    def test_close_basic_cancel_called(self):
        self.obj._set_state(self.obj.OPEN)
        self.obj._consumers['abc'] = None
        with mock.patch.object(self.obj, 'basic_cancel') as basic_cancel:
            self.obj.close()
            basic_cancel.assert_called_once_with(consumer_tag='abc')

    def test_confirm_delivery_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.confirm_delivery)

    def test_confirm_delivery_raises_method_not_implemented_for_confirms(
            self):
        self.obj._set_state(self.obj.OPEN)
        # Since connection is a mock.Mock, overwrite the method def with False
        self.obj.connection.publisher_confirms = False
        self.assertRaises(exceptions.MethodNotImplemented,
                          self.obj.confirm_delivery, logging.debug)

    def test_confirm_delivery_raises_method_not_implemented_for_nack(self):
        self.obj._set_state(self.obj.OPEN)
        # Since connection is a mock.Mock, overwrite the method def with False
        self.obj.connection.basic_nack = False
        self.assertRaises(exceptions.MethodNotImplemented,
                          self.obj.confirm_delivery, logging.debug)

    def test_confirm_delivery_callback_without_nowait_selectok(self):
        self.obj._set_state(self.obj.OPEN)
        expectation = [self.obj.channel_number, spec.Confirm.SelectOk,
                       self.obj._on_selectok]
        self.obj.confirm_delivery(logging.debug)
        self.obj.callbacks.add.assert_called_with(*expectation,
                                                  arguments=None)

    def test_confirm_delivery_callback_with_nowait(self):
        self.obj._set_state(self.obj.OPEN)
        expectation = [self.obj.channel_number, spec.Confirm.SelectOk,
                       self.obj._on_selectok]
        self.obj.confirm_delivery(logging.debug, True)
        self.assertNotIn(mock.call(*expectation, arguments=None),
                         self.obj.callbacks.add.call_args_list)

    def test_confirm_delivery_callback_basic_ack(self):
        self.obj._set_state(self.obj.OPEN)
        expectation = (self.obj.channel_number, spec.Basic.Ack,
                       logging.debug, False)
        self.obj.confirm_delivery(logging.debug)
        self.obj.callbacks.add.assert_any_call(*expectation)

    def test_confirm_delivery_callback_basic_nack(self):
        self.obj._set_state(self.obj.OPEN)
        expectation = (self.obj.channel_number, spec.Basic.Nack,
                       logging.debug, False)
        self.obj.confirm_delivery(logging.debug)
        self.obj.callbacks.add.assert_any_call(*expectation)

    def test_confirm_delivery_no_callback_callback_call_count(self):
        self.obj._set_state(self.obj.OPEN)
        self.obj.confirm_delivery()
        expectation = [
            mock.call(*[self.obj.channel_number, spec.Confirm.SelectOk,
                        self.obj._on_synchronous_complete],
                      arguments=None),
            mock.call(*[self.obj.channel_number, spec.Confirm.SelectOk,
                        self.obj._on_selectok],
                      arguments=None)]
        self.assertEqual(self.obj.callbacks.add.call_args_list, expectation)

    def test_confirm_delivery_no_callback_no_basic_ack_callback(self):
        self.obj._set_state(self.obj.OPEN)
        expectation = [self.obj.channel_number, spec.Basic.Ack, None, False]
        self.obj.confirm_delivery()
        self.assertNotIn(mock.call(*expectation),
                         self.obj.callbacks.add.call_args_list)

    def test_confirm_delivery_no_callback_no_basic_nack_callback(self):
        self.obj._set_state(self.obj.OPEN)
        expectation = [self.obj.channel_number, spec.Basic.Nack, None, False]
        self.obj.confirm_delivery()
        self.assertNotIn(mock.call(*expectation),
                         self.obj.callbacks.add.call_args_list)

    def test_consumer_tags(self):
        self.assertListEqual(self.obj.consumer_tags,
                             list(self.obj._consumers.keys()))

    def test_exchange_bind_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.exchange_bind, None, 'foo', 'bar', 'baz')

    def test_exchange_bind_raises_value_error_on_invalid_callback(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(ValueError, self.obj.exchange_bind,
                          'callback', 'foo', 'bar', 'baz')

    @mock.patch('pika.spec.Exchange.Bind')
    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_bind_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.exchange_bind(mock_callback, 'foo', 'bar', 'baz')
        rpc.assert_called_once_with(
            spec.Exchange.Bind(0, 'foo', 'bar', 'baz'),
            mock_callback, [spec.Exchange.BindOk])

    @mock.patch('pika.spec.Exchange.Bind')
    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_bind_rpc_request_nowait(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.exchange_bind(mock_callback, 'foo', 'bar', 'baz',
                               nowait=True)
        rpc.assert_called_once_with(
            spec.Exchange.Bind(0, 'foo', 'bar', 'baz'),
            mock_callback, [])

    def test_exchange_declare_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.exchange_declare, exchange='foo')

    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_declare_with_type_arg_raises_deprecation_warning(
            self, _rpc):
        self.obj._set_state(self.obj.OPEN)
        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter('always')
            self.obj.exchange_declare(None, 'foo', type='direct')
            self.assertEqual(len(w), 1)
            self.assertIs(w[-1].category, DeprecationWarning)

    @mock.patch('pika.spec.Exchange.Declare')
    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_declare_with_type_arg_assigns_to_exchange_type(
            self, rpc, unused):
        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter('always')
            self.obj._set_state(self.obj.OPEN)
            mock_callback = mock.Mock()
            self.obj.exchange_declare(mock_callback, exchange='foo',
                                      type='topic')
            rpc.assert_called_once_with(
                spec.Exchange.Declare(0, 'foo', 'topic'),
                mock_callback, [spec.Exchange.DeclareOk])

    def test_exchange_declare_raises_value_error_on_invalid_callback(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(ValueError,
                          self.obj.exchange_declare, 'callback', 'foo')

    @mock.patch('pika.spec.Exchange.Declare')
    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_declare_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.exchange_declare(mock_callback, 'foo')
        rpc.assert_called_once_with(spec.Exchange.Declare(0, 'foo'),
                                    mock_callback,
                                    [spec.Exchange.DeclareOk])

    @mock.patch('pika.spec.Exchange.Declare')
    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_declare_rpc_request_nowait(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.exchange_declare(mock_callback, 'foo', nowait=True)
        rpc.assert_called_once_with(spec.Exchange.Declare(0, 'foo'),
                                    mock_callback, [])

    def test_exchange_delete_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.exchange_delete, exchange='foo')

    def test_exchange_delete_raises_value_error_on_invalid_callback(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(ValueError,
                          self.obj.exchange_delete, 'callback', 'foo')

    @mock.patch('pika.spec.Exchange.Delete')
    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_delete_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.exchange_delete(mock_callback, 'foo')
        rpc.assert_called_once_with(spec.Exchange.Delete(0, 'foo'),
                                    mock_callback,
                                    [spec.Exchange.DeleteOk])

    @mock.patch('pika.spec.Exchange.Delete')
    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_delete_rpc_request_nowait(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.exchange_delete(mock_callback, 'foo', nowait=True)
        rpc.assert_called_once_with(spec.Exchange.Delete(0, 'foo'),
                                    mock_callback, [])

    def test_exchange_unbind_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.exchange_unbind, None, 'foo', 'bar',
                          'baz')

    def test_exchange_unbind_raises_value_error_on_invalid_callback(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(ValueError, self.obj.exchange_unbind,
                          'callback', 'foo', 'bar', 'baz')

    @mock.patch('pika.spec.Exchange.Unbind')
    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_unbind_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.exchange_unbind(mock_callback, 'foo', 'bar', 'baz')
        rpc.assert_called_once_with(
            spec.Exchange.Unbind(0, 'foo', 'bar', 'baz'),
            mock_callback, [spec.Exchange.UnbindOk])

    @mock.patch('pika.spec.Exchange.Unbind')
    @mock.patch('pika.channel.Channel._rpc')
    def test_exchange_unbind_rpc_request_nowait(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.exchange_unbind(mock_callback, 'foo', 'bar', 'baz',
                                 nowait=True)
        rpc.assert_called_once_with(
            spec.Exchange.Unbind(0, 'foo', 'bar', 'baz'),
            mock_callback, [])

    def test_flow_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj.flow, 'foo', True)

    def test_flow_raises_invalid_callback(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(ValueError, self.obj.flow, 'foo', True)

    @mock.patch('pika.spec.Channel.Flow')
    @mock.patch('pika.channel.Channel._rpc')
    def test_flow_on_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.flow(mock_callback, True)
        rpc.assert_called_once_with(spec.Channel.Flow(True),
                                    self.obj._on_flowok,
                                    [spec.Channel.FlowOk])

    @mock.patch('pika.spec.Channel.Flow')
    @mock.patch('pika.channel.Channel._rpc')
    def test_flow_off_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.flow(mock_callback, False)
        rpc.assert_called_once_with(spec.Channel.Flow(False),
                                    self.obj._on_flowok,
                                    [spec.Channel.FlowOk])

    @mock.patch('pika.channel.Channel._rpc')
    def test_flow_on_flowok_callback(self, rpc):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.flow(mock_callback, True)
self.assertEqual(self.obj._on_flowok_callback, mock_callback) def test_is_closed_true(self): self.obj._set_state(self.obj.CLOSED) self.assertTrue(self.obj.is_closed) def test_is_closed_false(self): self.obj._set_state(self.obj.OPEN) self.assertFalse(self.obj.is_closed) def test_is_closing_true(self): self.obj._set_state(self.obj.CLOSING) self.assertTrue(self.obj.is_closing) def test_is_closing_false(self): self.obj._set_state(self.obj.OPEN) self.assertFalse(self.obj.is_closing) @mock.patch('pika.channel.Channel._rpc') def test_channel_open_add_callbacks_called(self, rpc): with mock.patch.object(self.obj, '_add_callbacks') as _add_callbacks: self.obj.open() _add_callbacks.assert_called_once_with() def test_queue_bind_raises_channel_closed(self): self.assertRaises(exceptions.ChannelClosed, self.obj.queue_bind, None, 'foo', 'bar', 'baz') def test_queue_bind_raises_value_error_on_invalid_callback(self): self.obj._set_state(self.obj.OPEN) self.assertRaises(ValueError, self.obj.queue_bind, 'callback', 'foo', 'bar', 'baz') @mock.patch('pika.spec.Queue.Bind') @mock.patch('pika.channel.Channel._rpc') def test_queue_bind_rpc_request(self, rpc, unused): self.obj._set_state(self.obj.OPEN) mock_callback = mock.Mock() self.obj.queue_bind(mock_callback, 'foo', 'bar', 'baz') rpc.assert_called_once_with(spec.Queue.Bind(0, 'foo', 'bar', 'baz'), mock_callback, [spec.Queue.BindOk]) @mock.patch('pika.spec.Queue.Bind') @mock.patch('pika.channel.Channel._rpc') def test_queue_bind_rpc_request_nowait(self, rpc, unused): self.obj._set_state(self.obj.OPEN) mock_callback = mock.Mock() self.obj.queue_bind(mock_callback, 'foo', 'bar', 'baz', nowait=True) rpc.assert_called_once_with(spec.Queue.Bind(0, 'foo', 'bar', 'baz'), mock_callback, []) def test_queue_declare_raises_channel_closed(self): self.assertRaises(exceptions.ChannelClosed, self.obj.queue_declare, None, queue='foo') def test_queue_declare_raises_value_error_on_invalid_callback(self): self.obj._set_state(self.obj.OPEN) 
        self.assertRaises(ValueError, self.obj.queue_declare, 'callback',
                          'foo')

    @mock.patch('pika.spec.Queue.Declare')
    @mock.patch('pika.channel.Channel._rpc')
    def test_queue_declare_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.queue_declare(mock_callback, 'foo')
        rpc.assert_called_once_with(spec.Queue.Declare(0, 'foo'),
                                    mock_callback,
                                    [(spec.Queue.DeclareOk,
                                      {'queue': 'foo'})])

    @mock.patch('pika.spec.Queue.Declare')
    @mock.patch('pika.channel.Channel._rpc')
    def test_queue_declare_rpc_request_nowait(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.queue_declare(mock_callback, 'foo', nowait=True)
        rpc.assert_called_once_with(spec.Queue.Declare(0, 'foo'),
                                    mock_callback, [])

    def test_queue_delete_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed, self.obj.queue_delete,
                          queue='foo')

    def test_queue_delete_raises_value_error_on_invalid_callback(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(ValueError, self.obj.queue_delete, 'callback',
                          'foo')

    @mock.patch('pika.spec.Queue.Delete')
    @mock.patch('pika.channel.Channel._rpc')
    def test_queue_delete_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.queue_delete(mock_callback, 'foo')
        rpc.assert_called_once_with(spec.Queue.Delete(0, 'foo'),
                                    mock_callback, [spec.Queue.DeleteOk])

    @mock.patch('pika.spec.Queue.Delete')
    @mock.patch('pika.channel.Channel._rpc')
    def test_queue_delete_rpc_request_nowait(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.queue_delete(mock_callback, 'foo', nowait=True)
        rpc.assert_called_once_with(spec.Queue.Delete(0, 'foo'),
                                    mock_callback, [])

    def test_queue_purge_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed, self.obj.queue_purge,
                          queue='foo')

    def test_queue_purge_raises_value_error_on_invalid_callback(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(ValueError, self.obj.queue_purge, 'callback',
                          'foo')

    @mock.patch('pika.spec.Queue.Purge')
    @mock.patch('pika.channel.Channel._rpc')
    def test_queue_purge_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.queue_purge(mock_callback, 'foo')
        rpc.assert_called_once_with(spec.Queue.Purge(0, 'foo'),
                                    mock_callback, [spec.Queue.PurgeOk])

    @mock.patch('pika.spec.Queue.Purge')
    @mock.patch('pika.channel.Channel._rpc')
    def test_queue_purge_rpc_request_nowait(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.queue_purge(mock_callback, 'foo', nowait=True)
        rpc.assert_called_once_with(spec.Queue.Purge(0, 'foo'),
                                    mock_callback, [])

    def test_queue_unbind_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed, self.obj.queue_unbind,
                          None, 'foo', 'bar', 'baz')

    def test_queue_unbind_raises_value_error_on_invalid_callback(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(ValueError, self.obj.queue_unbind, 'callback',
                          'foo', 'bar', 'baz')

    @mock.patch('pika.spec.Queue.Unbind')
    @mock.patch('pika.channel.Channel._rpc')
    def test_queue_unbind_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.queue_unbind(mock_callback, 'foo', 'bar', 'baz')
        rpc.assert_called_once_with(spec.Queue.Unbind(0, 'foo', 'bar', 'baz'),
                                    mock_callback, [spec.Queue.UnbindOk])

    def test_tx_commit_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed, self.obj.tx_commit, None)

    @mock.patch('pika.spec.Tx.Commit')
    @mock.patch('pika.channel.Channel._rpc')
    def test_tx_commit_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.tx_commit(mock_callback)
        rpc.assert_called_once_with(spec.Tx.Commit(mock_callback),
                                    mock_callback, [spec.Tx.CommitOk])

    @mock.patch('pika.spec.Tx.Rollback')
    @mock.patch('pika.channel.Channel._rpc')
    def test_tx_rollback_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.tx_rollback(mock_callback)
        rpc.assert_called_once_with(spec.Tx.Rollback(mock_callback),
                                    mock_callback, [spec.Tx.RollbackOk])

    @mock.patch('pika.spec.Tx.Select')
    @mock.patch('pika.channel.Channel._rpc')
    def test_tx_select_rpc_request(self, rpc, unused):
        self.obj._set_state(self.obj.OPEN)
        mock_callback = mock.Mock()
        self.obj.tx_select(mock_callback)
        rpc.assert_called_once_with(spec.Tx.Select(mock_callback),
                                    mock_callback, [spec.Tx.SelectOk])

    # Test internal methods

    def test_add_callbacks_basic_cancel_empty_added(self):
        self.obj._add_callbacks()
        self.obj.callbacks.add.assert_any_calls(self.obj.channel_number,
                                                spec.Basic.Cancel,
                                                self.obj._on_getempty, False)

    def test_add_callbacks_basic_get_empty_added(self):
        self.obj._add_callbacks()
        self.obj.callbacks.add.assert_any_calls(self.obj.channel_number,
                                                spec.Basic.GetEmpty,
                                                self.obj._on_getempty, False)

    def test_add_callbacks_channel_close_added(self):
        self.obj._add_callbacks()
        self.obj.callbacks.add.assert_any_calls(self.obj.channel_number,
                                                spec.Channel.Close,
                                                self.obj._on_getempty, False)

    def test_add_callbacks_channel_flow_added(self):
        self.obj._add_callbacks()
        self.obj.callbacks.add.assert_any_calls(self.obj.channel_number,
                                                spec.Channel.Flow,
                                                self.obj._on_getempty, False)

    def test_cleanup(self):
        self.obj._cleanup()
        self.obj.callbacks.cleanup.assert_called_once_with(
            str(self.obj.channel_number))

    def test_get_pending_message(self):
        key = 'foo'
        expectation = 'abc1234'
        self.obj._pending = {key: [expectation]}
        self.assertEqual(self.obj._get_pending_msg(key), expectation)

    def test_get_pending_message_item_popped(self):
        key = 'foo'
        expectation = 'abc1234'
        self.obj._pending = {key: [expectation]}
        self.obj._get_pending_msg(key)
        self.assertEqual(len(self.obj._pending[key]), 0)

    def test_handle_content_frame_method_returns_none(self):
        frame_value = frame.Method(1, spec.Basic.Deliver('ctag0', 1))
        self.assertEqual(self.obj._handle_content_frame(frame_value), None)

    def test_handle_content_frame_sets_method_frame(self):
        frame_value = frame.Method(1, spec.Basic.Deliver('ctag0', 1))
        self.obj._handle_content_frame(frame_value)
        self.assertEqual(self.obj.frame_dispatcher._method_frame, frame_value)

    def test_handle_content_frame_sets_header_frame(self):
        frame_value = frame.Header(1, 10, spec.BasicProperties())
        self.obj._handle_content_frame(frame_value)
        self.assertEqual(self.obj.frame_dispatcher._header_frame, frame_value)

    def test_handle_content_frame_basic_deliver_called(self):
        method_value = frame.Method(1, spec.Basic.Deliver('ctag0', 1))
        self.obj._handle_content_frame(method_value)
        header_value = frame.Header(1, 10, spec.BasicProperties())
        self.obj._handle_content_frame(header_value)
        body_value = frame.Body(1, b'0123456789')
        with mock.patch.object(self.obj, '_on_deliver') as deliver:
            self.obj._handle_content_frame(body_value)
            deliver.assert_called_once_with(method_value, header_value,
                                            b'0123456789')

    def test_handle_content_frame_basic_get_called(self):
        method_value = frame.Method(1, spec.Basic.GetOk('ctag0', 1))
        self.obj._handle_content_frame(method_value)
        header_value = frame.Header(1, 10, spec.BasicProperties())
        self.obj._handle_content_frame(header_value)
        body_value = frame.Body(1, b'0123456789')
        with mock.patch.object(self.obj, '_on_getok') as getok:
            self.obj._handle_content_frame(body_value)
            getok.assert_called_once_with(method_value, header_value,
                                          b'0123456789')

    def test_handle_content_frame_basic_return_called(self):
        method_value = frame.Method(1, spec.Basic.Return(999, 'Reply Text',
                                                         'exchange_value',
                                                         'routing.key'))
        self.obj._handle_content_frame(method_value)
        header_value = frame.Header(1, 10, spec.BasicProperties())
        self.obj._handle_content_frame(header_value)
        body_value = frame.Body(1, b'0123456789')
        with mock.patch.object(self.obj, '_on_return') as basic_return:
            self.obj._handle_content_frame(body_value)
            basic_return.assert_called_once_with(method_value, header_value,
                                                 b'0123456789')

    def test_has_content_true(self):
        self.assertTrue(self.obj._has_content(spec.Basic.GetOk))

    def test_has_content_false(self):
        self.assertFalse(self.obj._has_content(spec.Basic.Ack))

    def test_on_cancel_not_appended_cancelled(self):
        consumer_tag = 'ctag0'
        frame_value = frame.Method(1, spec.Basic.Cancel(consumer_tag))
        self.obj._on_cancel(frame_value)
        self.assertNotIn(consumer_tag, self.obj._cancelled)

    def test_on_cancel_removed_consumer(self):
        consumer_tag = 'ctag0'
        self.obj._consumers[consumer_tag] = logging.debug
        frame_value = frame.Method(1, spec.Basic.Cancel(consumer_tag))
        self.obj._on_cancel(frame_value)
        self.assertNotIn(consumer_tag, self.obj._consumers)

    def test_on_cancelok_removed_consumer(self):
        consumer_tag = 'ctag0'
        self.obj._consumers[consumer_tag] = logging.debug
        frame_value = frame.Method(1, spec.Basic.CancelOk(consumer_tag))
        self.obj._on_cancelok(frame_value)
        self.assertNotIn(consumer_tag, self.obj._consumers)

    def test_on_cancelok_removed_pending(self):
        consumer_tag = 'ctag0'
        self.obj._pending[consumer_tag] = logging.debug
        frame_value = frame.Method(1, spec.Basic.CancelOk(consumer_tag))
        self.obj._on_cancelok(frame_value)
        self.assertNotIn(consumer_tag, self.obj._pending)

    def test_on_deliver_pending_called(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag0'
        mock_callback = mock.Mock()
        self.obj._pending[consumer_tag] = mock_callback
        method_value = frame.Method(1, spec.Basic.Deliver(consumer_tag, 1))
        header_value = frame.Header(1, 10, spec.BasicProperties())
        body_value = b'0123456789'
        with mock.patch.object(self.obj, '_add_pending_msg') as add_pending:
            self.obj._on_deliver(method_value, header_value, body_value)
            add_pending.assert_called_with(consumer_tag, method_value,
                                           header_value, body_value)

    def test_on_deliver_callback_called(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag0'
        mock_callback = mock.Mock()
        self.obj._pending[consumer_tag] = list()
        self.obj._consumers[consumer_tag] = mock_callback
        method_value = frame.Method(1, spec.Basic.Deliver(consumer_tag, 1))
        header_value = frame.Header(1, 10, spec.BasicProperties())
        body_value = b'0123456789'
        self.obj._on_deliver(method_value, header_value, body_value)
        mock_callback.assert_called_with(self.obj, method_value.method,
                                         header_value.properties, body_value)

    def test_on_deliver_pending_callbacks_called(self):
        self.obj._set_state(self.obj.OPEN)
        consumer_tag = 'ctag0'
        mock_callback = mock.Mock()
        self.obj._pending[consumer_tag] = list()
        method_value = frame.Method(1, spec.Basic.Deliver(consumer_tag, 1))
        header_value = frame.Header(1, 10, spec.BasicProperties())
        body_value = b'0123456789'
        expectation = [mock.call(self.obj, method_value.method,
                                 header_value.properties, body_value)]
        self.obj._on_deliver(method_value, header_value, body_value)
        self.obj._consumers[consumer_tag] = mock_callback
        method_value = frame.Method(1, spec.Basic.Deliver(consumer_tag, 2))
        header_value = frame.Header(1, 10, spec.BasicProperties())
        body_value = b'0123456789'
        self.obj._on_deliver(method_value, header_value, body_value)
        expectation.append(mock.call(self.obj, method_value.method,
                                     header_value.properties, body_value))
        self.assertListEqual(mock_callback.call_args_list, expectation)

    @mock.patch('logging.Logger.debug')
    def test_on_getempty(self, debug):
        method_frame = frame.Method(self.obj.channel_number,
                                    spec.Basic.GetEmpty)
        self.obj._on_getempty(method_frame)
        debug.assert_called_with('Received Basic.GetEmpty: %r', method_frame)

    @mock.patch('logging.Logger.error')
    def test_on_getok_no_callback(self, error):
        method_value = frame.Method(1, spec.Basic.GetOk('ctag0', 1))
        header_value = frame.Header(1, 10, spec.BasicProperties())
        body_value = b'0123456789'
        self.obj._on_getok(method_value, header_value, body_value)
        error.assert_called_with(
            'Basic.GetOk received with no active callback')

    def test_on_getok_callback_called(self):
        mock_callback = mock.Mock()
        self.obj._on_getok_callback = mock_callback
        method_value = frame.Method(1, spec.Basic.GetOk('ctag0', 1))
        header_value = frame.Header(1, 10, spec.BasicProperties())
        body_value = b'0123456789'
        self.obj._on_getok(method_value, header_value, body_value)
        mock_callback.assert_called_once_with(self.obj, method_value.method,
                                              header_value.properties,
                                              body_value)

    def test_on_getok_callback_reset(self):
        mock_callback = mock.Mock()
        self.obj._on_getok_callback = mock_callback
        method_value = frame.Method(1, spec.Basic.GetOk('ctag0', 1))
        header_value = frame.Header(1, 10, spec.BasicProperties())
        body_value = b'0123456789'
        self.obj._on_getok(method_value, header_value, body_value)
        self.assertIsNone(self.obj._on_getok_callback)

    @mock.patch('logging.Logger.debug')
    def test_on_confirm_selectok(self, debug):
        method_frame = frame.Method(self.obj.channel_number,
                                    spec.Confirm.SelectOk())
        self.obj._on_selectok(method_frame)
        debug.assert_called_with('Confirm.SelectOk Received: %r',
                                 method_frame)

    @mock.patch('logging.Logger.debug')
    def test_on_eventok(self, debug):
        method_frame = frame.Method(self.obj.channel_number,
                                    spec.Basic.GetEmpty())
        self.obj._on_eventok(method_frame)
        debug.assert_called_with('Discarding frame %r', method_frame)

    @mock.patch('logging.Logger.warning')
    def test_on_flow(self, warning):
        self.obj._has_on_flow_callback = False
        method_frame = frame.Method(self.obj.channel_number,
                                    spec.Channel.Flow())
        self.obj._on_flow(method_frame)
        warning.assert_called_with('Channel.Flow received from server')

    @mock.patch('logging.Logger.warning')
    def test_on_flow_with_callback(self, warning):
        method_frame = frame.Method(self.obj.channel_number,
                                    spec.Channel.Flow())
        self.obj._on_flowok_callback = logging.debug
        self.obj._on_flow(method_frame)
        self.assertEqual(len(warning.call_args_list), 1)

    @mock.patch('logging.Logger.warning')
    def test_on_flowok(self, warning):
        method_frame = frame.Method(self.obj.channel_number,
                                    spec.Channel.FlowOk())
        self.obj._on_flowok(method_frame)
        warning.assert_called_with('Channel.FlowOk received with no active '
                                   'callbacks')

    def test_on_flowok_calls_callback(self):
        method_frame = frame.Method(self.obj.channel_number,
                                    spec.Channel.FlowOk())
        mock_callback = mock.Mock()
        self.obj._on_flowok_callback = mock_callback
        self.obj._on_flowok(method_frame)
        mock_callback.assert_called_once_with(method_frame.method.active)

    def test_on_flowok_callback_reset(self):
        method_frame = frame.Method(self.obj.channel_number,
                                    spec.Channel.FlowOk())
        mock_callback = mock.Mock()
        self.obj._on_flowok_callback = mock_callback
        self.obj._on_flowok(method_frame)
        self.assertIsNone(self.obj._on_flowok_callback)

    def test_on_openok_no_callback(self):
        mock_callback = mock.Mock()
        self.obj._on_openok_callback = None
        method_value = frame.Method(1, spec.Channel.OpenOk())
        self.obj._on_openok(method_value)
        self.assertEqual(self.obj._state, self.obj.OPEN)

    def test_on_openok_callback_called(self):
        mock_callback = mock.Mock()
        self.obj._on_openok_callback = mock_callback
        method_value = frame.Method(1, spec.Channel.OpenOk())
        self.obj._on_openok(method_value)
        mock_callback.assert_called_once_with(self.obj)

    def test_onreturn(self):
        method_value = frame.Method(1, spec.Basic.Return(999, 'Reply Text',
                                                         'exchange_value',
                                                         'routing.key'))
        header_value = frame.Header(1, 10, spec.BasicProperties())
        body_value = frame.Body(1, b'0123456789')
        self.obj._on_return(method_value, header_value, body_value)
        self.obj.callbacks.process.assert_called_with(
            self.obj.channel_number, '_on_return', self.obj, self.obj,
            method_value.method, header_value.properties, body_value)

    @mock.patch('logging.Logger.warning')
    def test_onreturn_warning(self, warning):
        method_value = frame.Method(1, spec.Basic.Return(999, 'Reply Text',
                                                         'exchange_value',
                                                         'routing.key'))
        header_value = frame.Header(1, 10, spec.BasicProperties())
        body_value = frame.Body(1, b'0123456789')
        self.obj.callbacks.process.return_value = False
        self.obj._on_return(method_value, header_value, body_value)
        warning.assert_called_with(
            'Basic.Return received from server (%r, %r)',
            method_value.method, header_value.properties)

    @mock.patch('pika.channel.Channel._rpc')
    def test_on_synchronous_complete(self, rpc):
        mock_callback = mock.Mock()
        expectation = [spec.Queue.Unbind(0, 'foo', 'bar', 'baz'),
                       mock_callback, [spec.Queue.UnbindOk]]
        self.obj._blocked = collections.deque([expectation])
        self.obj._on_synchronous_complete(
            frame.Method(self.obj.channel_number, spec.Basic.Ack(1)))
        rpc.assert_called_once_with(*expectation)

    def test_rpc_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed, self.obj._rpc,
                          frame.Method(self.obj.channel_number,
                                       spec.Basic.Ack(1)))

    def test_rpc_while_blocking_appends_blocked_collection(self):
        self.obj._set_state(self.obj.OPEN)
        self.obj._blocking = spec.Confirm.Select()
        expectation = [frame.Method(self.obj.channel_number,
                                    spec.Basic.Ack(1)),
                       'Foo', None]
        self.obj._rpc(*expectation)
        self.assertIn(expectation, self.obj._blocked)

    def test_rpc_throws_value_error_with_unacceptable_replies(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(TypeError, self.obj._rpc, spec.Basic.Ack(1),
                          logging.debug, 'Foo')

    def test_rpc_throws_type_error_with_invalid_callback(self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(TypeError, self.obj._rpc, spec.Channel.Open(1),
                          ['foo'], [spec.Channel.OpenOk])

    def test_rpc_adds_on_synchronous_complete(self):
        self.obj._set_state(self.obj.OPEN)
        method_frame = spec.Channel.Open()
        self.obj._rpc(method_frame, None, [spec.Channel.OpenOk])
        self.obj.callbacks.add.assert_called_with(
            self.obj.channel_number, spec.Channel.OpenOk,
            self.obj._on_synchronous_complete, arguments=None)

    def test_rpc_adds_callback(self):
        self.obj._set_state(self.obj.OPEN)
        method_frame = spec.Channel.Open()
        mock_callback = mock.Mock()
        self.obj._rpc(method_frame, mock_callback, [spec.Channel.OpenOk])
        self.obj.callbacks.add.assert_called_with(self.obj.channel_number,
                                                  spec.Channel.OpenOk,
                                                  mock_callback,
                                                  arguments=None)

    def test_send_method(self):
        expectation = [2, 3]
        with mock.patch.object(self.obj.connection,
                               '_send_method') as send_method:
            self.obj._send_method(*expectation)
            send_method.assert_called_once_with(
                *[self.obj.channel_number] + expectation)

    def test_set_state(self):
        self.obj._state = channel.Channel.CLOSED
        self.obj._set_state(channel.Channel.OPENING)
        self.assertEqual(self.obj._state, channel.Channel.OPENING)

    def test_validate_channel_and_callback_raises_channel_closed(self):
        self.assertRaises(exceptions.ChannelClosed,
                          self.obj._validate_channel_and_callback, None)

    def test_validate_channel_and_callback_raises_value_error_not_callable(
            self):
        self.obj._set_state(self.obj.OPEN)
        self.assertRaises(ValueError,
                          self.obj._validate_channel_and_callback, 'foo')

    @mock.patch('logging.Logger.warning')
    def test_on_close_warning(self, warning):
        method_frame = frame.Method(self.obj.channel_number,
                                    spec.Channel.Close(999, 'Test_Value'))
        self.obj._on_close(method_frame)
        warning.assert_called_with('Received remote Channel.Close (%s): %s',
                                   method_frame.method.reply_code,
                                   method_frame.method.reply_text)


# pika-0.10.0/tests/unit/connection_tests.py

"""
Tests for pika.connection.Connection

"""
try:
    import mock
except ImportError:
    from unittest import mock

import random
import urllib
import copy

try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika import connection
from pika import channel
from pika import credentials
from pika import frame
from pika import spec
from pika.compat import xrange, urlencode


def callback_method():
    """Callback method to use in tests"""
    pass


class ConnectionTests(unittest.TestCase):

    @mock.patch('pika.connection.Connection.connect')
    def setUp(self, connect):
        self.connection = connection.Connection()
        self.channel = mock.Mock(spec=channel.Channel)
        self.channel.is_open = True
        self.connection._channels[1] = self.channel
        self.connection._set_connection_state(
            connection.Connection.CONNECTION_OPEN)

    def tearDown(self):
        del self.connection
        del self.channel

    @mock.patch('pika.connection.Connection._send_connection_close')
    def test_close_closes_open_channels(self, send_connection_close):
        self.connection.close()
        self.channel.close.assert_called_once_with(200, 'Normal shutdown')

    @mock.patch('pika.connection.Connection._send_connection_close')
    def test_close_ignores_closed_channels(self, send_connection_close):
        for closed_state in (self.connection.CONNECTION_CLOSED,
                             self.connection.CONNECTION_CLOSING):
            self.connection.connection_state = closed_state
            self.connection.close()
            self.assertFalse(self.channel.close.called)

    @mock.patch('pika.connection.Connection._on_close_ready')
    def test_on_close_ready_open_channels(self, on_close_ready):
        """if open channels _on_close_ready shouldn't be called"""
        self.connection.close()
        self.assertFalse(on_close_ready.called,
                         '_on_close_ready should not have been called')

    @mock.patch('pika.connection.Connection._on_close_ready')
    def test_on_close_ready_no_open_channels(self, on_close_ready):
        self.connection._channels = dict()
        self.connection.close()
        self.assertTrue(on_close_ready.called,
                        '_on_close_ready should have been called')

    @mock.patch('pika.connection.Connection._on_close_ready')
    def test_on_channel_cleanup_no_open_channels(self, on_close_ready):
        """Should call _on_close_ready if connection is closing and there
        are no open channels

        """
        self.connection._channels = dict()
        self.connection.close()
        self.assertTrue(on_close_ready.called,
                        '_on_close_ready should have been called')

    @mock.patch('pika.connection.Connection._on_close_ready')
    def test_on_channel_cleanup_open_channels(self, on_close_ready):
        """if connection is closing but channels remain open do not call
        _on_close_ready

        """
        self.connection.close()
        self.assertFalse(on_close_ready.called,
                         '_on_close_ready should not have been called')

    @mock.patch('pika.connection.Connection._on_close_ready')
    def test_on_channel_cleanup_non_closing_state(self, on_close_ready):
        """if connection isn't closing _on_close_ready should not be called"""
        self.connection._on_channel_cleanup(mock.Mock())
        self.assertFalse(on_close_ready.called,
                         '_on_close_ready should not have been called')

    def test_on_disconnect(self):
        """_on_disconnect should pass a close frame to open channels"""
        self.connection._on_disconnect(0, 'Undefined')
        self.assertTrue(self.channel._on_close.called,
                        'channel._on_close should have been called')
        method_frame = self.channel._on_close.call_args[0][0]
        self.assertEqual(method_frame.method.reply_code, 0)
        self.assertEqual(method_frame.method.reply_text, 'Undefined')

    @mock.patch('pika.connection.Connection.connect')
    def test_new_conn_should_use_first_channel(self, connect):
        """_next_channel_number in new conn should always be 1"""
        conn = connection.Connection()
        self.assertEqual(1, conn._next_channel_number())

    def test_next_channel_number_returns_lowest_unused(self):
        """_next_channel_number must return lowest available channel number"""
        for channel_num in xrange(1, 50):
            self.connection._channels[channel_num] = True
        expectation = random.randint(5, 49)
        del self.connection._channels[expectation]
        self.assertEqual(self.connection._next_channel_number(),
                         expectation)

    def test_add_callbacks(self):
        """make sure the callback adding works"""
        self.connection.callbacks = mock.Mock(spec=self.connection.callbacks)
        for test_method, expected_key in (
                (self.connection.add_backpressure_callback,
                 self.connection.ON_CONNECTION_BACKPRESSURE),
                (self.connection.add_on_open_callback,
                 self.connection.ON_CONNECTION_OPEN),
                (self.connection.add_on_close_callback,
                 self.connection.ON_CONNECTION_CLOSED)):
            self.connection.callbacks.reset_mock()
            test_method(callback_method)
            self.connection.callbacks.add.assert_called_once_with(
                0, expected_key, callback_method, False)

    def test_add_on_close_callback(self):
        """make sure the add on close callback is added"""
        self.connection.callbacks = mock.Mock(spec=self.connection.callbacks)
        self.connection.add_on_open_callback(callback_method)
        self.connection.callbacks.add.assert_called_once_with(
            0, self.connection.ON_CONNECTION_OPEN, callback_method, False)

    def test_add_on_open_error_callback(self):
        """make sure the add on open error callback is added"""
        self.connection.callbacks = mock.Mock(spec=self.connection.callbacks)
        # Test with remove default first (also checks default is True)
        self.connection.add_on_open_error_callback(callback_method)
        self.connection.callbacks.remove.assert_called_once_with(
            0, self.connection.ON_CONNECTION_ERROR,
            self.connection._on_connection_error)
        self.connection.callbacks.add.assert_called_once_with(
            0, self.connection.ON_CONNECTION_ERROR, callback_method, False)

    def test_channel(self):
        """test the channel method"""
        self.connection._next_channel_number = mock.Mock(return_value=42)
        test_channel = mock.Mock(spec=channel.Channel)
        self.connection._create_channel = mock.Mock(return_value=test_channel)
        self.connection._add_channel_callbacks = mock.Mock()
        ret_channel = self.connection.channel(callback_method)
        self.assertEqual(test_channel, ret_channel)
        self.connection._create_channel.assert_called_once_with(
            42, callback_method)
        self.connection._add_channel_callbacks.assert_called_once_with(42)
        test_channel.open.assert_called_once_with()

    def test_process_url(self):
        """test for the different query strings checked by process url"""
        url_params = {
            'backpressure_detection': None,
            'channel_max': 1,
            'connection_attempts': 2,
            'frame_max': 30000,
            'heartbeat_interval': 4,
            'locale': 'en',
            'retry_delay': 5,
            'socket_timeout': 6,
            'ssl_options': {'ssl': 'dict'}
        }
        for backpressure in ('t', 'f'):
            test_params = copy.deepcopy(url_params)
            test_params['backpressure_detection'] = backpressure
            query_string = urlencode(test_params)
            test_url = 'https://www.test.com?%s' % query_string
            params = connection.URLParameters(test_url)
            # check each value
            for t_param in ('channel_max', 'connection_attempts',
                            'frame_max', 'locale', 'retry_delay',
                            'socket_timeout', 'ssl_options'):
                self.assertEqual(test_params[t_param],
                                 getattr(params, t_param), t_param)
            self.assertEqual(params.backpressure_detection,
                             backpressure == 't')
            self.assertEqual(test_params['heartbeat_interval'],
                             params.heartbeat)

    def test_good_connection_parameters(self):
        """make sure connection kwargs get set correctly"""
        kwargs = {
            'host': 'https://www.test.com',
            'port': 5678,
            'virtual_host': u'vvhost',
            'channel_max': 3,
            'frame_max': 40000,
            'credentials': credentials.PlainCredentials('very', 'secure'),
            'heartbeat_interval': 7,
            'backpressure_detection': False,
            'retry_delay': 3,
            'ssl': True,
            'connection_attempts': 2,
            'locale': 'en',
            'ssl_options': {'ssl': 'options'}
        }
        conn = connection.ConnectionParameters(**kwargs)
        # check values
        for t_param in ('host', 'port', 'virtual_host', 'channel_max',
                        'frame_max', 'backpressure_detection', 'ssl',
                        'credentials', 'retry_delay', 'connection_attempts',
                        'locale'):
            self.assertEqual(kwargs[t_param], getattr(conn, t_param),
                             t_param)
        self.assertEqual(kwargs['heartbeat_interval'], conn.heartbeat)

    def test_bad_type_connection_parameters(self):
        """test connection kwargs type checks throw errors for bad input"""
        kwargs = {
            'host': 'https://www.test.com',
            'port': 5678,
            'virtual_host': 'vvhost',
            'channel_max': 3,
            'frame_max': 40000,
            'heartbeat_interval': 7,
            'backpressure_detection': False,
            'ssl': True
        }
        # Test Type Errors
        for bad_field, bad_value in (
                ('host', 15672),
                ('port', '5672'),
                ('virtual_host', True),
                ('channel_max', '4'),
                ('frame_max', '5'),
                ('credentials', 'bad'),
                ('locale', 1),
                ('heartbeat_interval', '6'),
                ('socket_timeout', '42'),
                ('retry_delay', 'two'),
                ('backpressure_detection', 'true'),
                ('ssl', {'ssl': 'dict'}),
                ('ssl_options', True),
                ('connection_attempts', 'hello')):
            bkwargs = copy.deepcopy(kwargs)
            bkwargs[bad_field] = bad_value
            self.assertRaises(TypeError, connection.ConnectionParameters,
                              **bkwargs)

    @mock.patch('pika.frame.ProtocolHeader')
    def test_connect(self, frame_protocol_header):
        """make sure the connect method sets the state and sends a frame"""
        self.connection._adapter_connect = mock.Mock(return_value=None)
        self.connection._send_frame = mock.Mock()
        frame_protocol_header.spec = frame.ProtocolHeader
        frame_protocol_header.return_value = 'frame object'
        self.connection.connect()
        self.assertEqual(self.connection.CONNECTION_PROTOCOL,
                         self.connection.connection_state)
        self.connection._send_frame.assert_called_once_with('frame object')

    def test_connect_reconnect(self):
        """try the different reconnect logic, check state & other class vars"""
        self.connection._adapter_connect = mock.Mock(return_value='error')
        self.connection.callbacks = mock.Mock(spec=self.connection.callbacks)
        self.connection.remaining_connection_attempts = 2
        self.connection.params.retry_delay = 555
        self.connection.params.connection_attempts = 99
        self.connection.add_timeout = mock.Mock()
        # first failure
        self.connection.connect()
        self.connection.add_timeout.assert_called_once_with(
            555, self.connection.connect)
        self.assertEqual(1, self.connection.remaining_connection_attempts)
        self.assertFalse(self.connection.callbacks.process.called)
        self.assertEqual(self.connection.CONNECTION_INIT,
                         self.connection.connection_state)
        # fail with no attempts remaining
        self.connection.add_timeout.reset_mock()
        self.connection.connect()
        self.assertFalse(self.connection.add_timeout.called)
        self.assertEqual(99, self.connection.remaining_connection_attempts)
        self.connection.callbacks.process.assert_called_once_with(
            0, self.connection.ON_CONNECTION_ERROR, self.connection,
            self.connection, 'error')
        self.assertEqual(self.connection.CONNECTION_CLOSED,
                         self.connection.connection_state)

    def test_client_properties(self):
        """make sure client properties has some important keys"""
        client_props = self.connection._client_properties
        self.assertTrue(isinstance(client_props, dict))
        for required_key in ('product', 'platform', 'capabilities',
                             'information', 'version'):
            self.assertTrue(required_key in client_props,
                            '%s missing' % required_key)

    def test_set_backpressure_multiplier(self):
        """test setting the backpressure multiplier"""
        self.connection._backpressure = None
        self.connection.set_backpressure_multiplier(value=5)
        self.assertEqual(5, self.connection._backpressure)

    def test_close_channels(self):
        """test closing all channels"""
        self.connection.connection_state = self.connection.CONNECTION_OPEN
        self.connection.callbacks = mock.Mock(spec=self.connection.callbacks)
        open_channel = mock.Mock(is_open=True)
        closed_channel = mock.Mock(is_open=False)
        self.connection._channels = {'oc': open_channel,
                                     'cc': closed_channel}
        self.connection._close_channels('reply code', 'reply text')
        open_channel.close.assert_called_once_with('reply code', 'reply text')
        self.assertTrue('oc' in self.connection._channels)
        self.assertTrue('cc' not in self.connection._channels)
        self.connection.callbacks.cleanup.assert_called_once_with('cc')
        # Test on closed channel
        self.connection.connection_state = self.connection.CONNECTION_CLOSED
        self.connection._close_channels('reply code', 'reply text')
        self.assertEqual({}, self.connection._channels)

    def test_on_connection_start(self):
        """make sure starting a connection sets the correct class vars"""
        method_frame = mock.Mock()
        method_frame.method = mock.Mock()
        method_frame.method.mechanisms = str(credentials.PlainCredentials.TYPE)
        method_frame.method.version_major = 0
        method_frame.method.version_minor = 9
        # This may be incorrectly mocked, or the code is wrong
        # TODO: Code does hasattr check, should this be a has_key/in check?
        method_frame.method.server_properties = {
            'capabilities': {
                'basic.nack': True,
                'consumer_cancel_notify': False,
                'exchange_exchange_bindings': False
            }
        }
        # This will be called, but should not be implemented here,
        # just mock it
        self.connection._flush_outbound = mock.Mock()
        self.connection._on_connection_start(method_frame)
        self.assertEqual(True, self.connection.basic_nack)
        self.assertEqual(False, self.connection.consumer_cancel_notify)
        self.assertEqual(False, self.connection.exchange_exchange_bindings)
        self.assertEqual(False, self.connection.publisher_confirms)

    @mock.patch('pika.heartbeat.HeartbeatChecker')
    @mock.patch('pika.frame.Method')
    def test_on_connection_tune(self, method, heartbeat_checker):
        """make sure on connection tune tunes the connection params"""
        heartbeat_checker.return_value = 'heartbeat obj'
        self.connection._flush_outbound = mock.Mock()
        marshal = mock.Mock(return_value='ab')
        method.return_value = mock.Mock(marshal=marshal)
        # may be good to test this here, but i don't want to test too much
        self.connection._rpc = mock.Mock()
        method_frame = mock.Mock()
        method_frame.method = mock.Mock()
        method_frame.method.channel_max = 40
        method_frame.method.frame_max = 10
        method_frame.method.heartbeat = 0
        self.connection.params.channel_max = 20
        self.connection.params.frame_max = 20
        self.connection.params.heartbeat = 20
        # Test
        self.connection._on_connection_tune(method_frame)
        # verify
        self.assertEqual(self.connection.CONNECTION_TUNE,
                         self.connection.connection_state)
        self.assertEqual(20, self.connection.params.channel_max)
        self.assertEqual(10, self.connection.params.frame_max)
        self.assertEqual(20, self.connection.params.heartbeat)
        self.assertEqual(2, self.connection._body_max_length)
        heartbeat_checker.assert_called_once_with(self.connection, 20)
        self.assertEqual(['ab'], list(self.connection.outbound_buffer))
        self.assertEqual('heartbeat obj', self.connection.heartbeat)

    def test_on_connection_closed(self):
        """make sure connection close sends correct frames"""
        method_frame = mock.Mock()
        method_frame.method = mock.Mock(spec=spec.Connection.Close)
        method_frame.method.reply_code = 1
        method_frame.method.reply_text = 'hello'
        heartbeat = mock.Mock()
        self.connection.heartbeat = heartbeat
        self.connection._adapter_disconnect = mock.Mock()
        self.connection._on_connection_closed(method_frame,
                                              from_adapter=False)
        # Check
        self.assertTupleEqual((1, 'hello'), self.connection.closing)
        heartbeat.stop.assert_called_once_with()
        self.connection._adapter_disconnect.assert_called_once_with()

    @mock.patch('pika.frame.decode_frame')
    def test_on_data_available(self, decode_frame):
        """test on data available and process frame"""
        data_in = ['data']
        self.connection._frame_buffer = ['old_data']
        for frame_type in (frame.Method, spec.Basic.Deliver,
                           frame.Heartbeat):
            frame_value = mock.Mock(spec=frame_type)
            frame_value.frame_type = 2
            frame_value.method = 2
            frame_value.channel_number = 1
            self.connection.bytes_received = 0
            self.connection.heartbeat = mock.Mock()
            self.connection.frames_received = 0
            decode_frame.return_value = (2, frame_value)
            self.connection._on_data_available(data_in)
            # test value
            self.assertListEqual([], self.connection._frame_buffer)
            self.assertEqual(2, self.connection.bytes_received)
            self.assertEqual(1, self.connection.frames_received)
            if frame_type == frame.Heartbeat:
                self.assertTrue(self.connection.heartbeat.received.called)


# pika-0.10.0/tests/unit/connection_timeout_tests.py

# -*- coding: utf8 -*-
"""
Tests for connection parameters.

"""
import socket

try:
    from mock import patch
except ImportError:
    from unittest.mock import patch

try:
    import unittest2 as unittest
except ImportError:
    import unittest

import pika
from pika.adapters import base_connection
from pika.adapters import blocking_connection
from pika.adapters import select_connection

try:
    from pika.adapters import tornado_connection
except ImportError:
    tornado_connection = None

try:
    from pika.adapters import twisted_connection
except ImportError:
    twisted_connection = None

try:
    from pika.adapters import libev_connection
except ImportError:
    libev_connection = None

from pika import exceptions


def mock_timeout(*args, **kwargs):
    raise socket.timeout


class ConnectionTests(unittest.TestCase):

    def test_parameters(self):
        params = pika.ConnectionParameters(socket_timeout=0.5,
                                           retry_delay=0.1,
                                           connection_attempts=3)
        self.assertEqual(params.socket_timeout, 0.5)
        self.assertEqual(params.retry_delay, 0.1)
        self.assertEqual(params.connection_attempts, 3)

    @patch.object(socket.socket, 'settimeout')
    @patch.object(socket.socket, 'connect')
    def test_connection_timeout(self, connect, settimeout):
        connect.side_effect = mock_timeout
        with self.assertRaises(exceptions.AMQPConnectionError):
            params = pika.ConnectionParameters(socket_timeout=2.0)
            base_connection.BaseConnection(params)
        settimeout.assert_called_with(2.0)

    @patch.object(socket.socket, 'settimeout')
    @patch.object(socket.socket, 'connect')
    def test_blocking_connection_timeout(self, connect, settimeout):
        connect.side_effect = mock_timeout
        with self.assertRaises(exceptions.AMQPConnectionError):
            params = pika.ConnectionParameters(socket_timeout=2.0)
            blocking_connection.BlockingConnection(params)
        settimeout.assert_called_with(2.0)

    @patch.object(socket.socket, 'settimeout')
    @patch.object(socket.socket, 'connect')
    def test_select_connection_timeout(self, connect, settimeout):
        connect.side_effect = mock_timeout
        with self.assertRaises(exceptions.AMQPConnectionError):
            params =
pika.ConnectionParameters(socket_timeout=2.0) select_connection.SelectConnection(params) settimeout.assert_called_with(2.0) @unittest.skipUnless(tornado_connection is not None, 'tornado is not installed') @patch.object(socket.socket, 'settimeout') @patch.object(socket.socket, 'connect') def test_tornado_connection_timeout(self, connect, settimeout): connect.side_effect = mock_timeout with self.assertRaises(exceptions.AMQPConnectionError): params = pika.ConnectionParameters(socket_timeout=2.0) tornado_connection.TornadoConnection(params) settimeout.assert_called_with(2.0) @unittest.skipUnless(twisted_connection is not None, 'twisted is not installed') @patch.object(socket.socket, 'settimeout') @patch.object(socket.socket, 'connect') def test_twisted_connection_timeout(self, connect, settimeout): connect.side_effect = mock_timeout with self.assertRaises(exceptions.AMQPConnectionError): params = pika.ConnectionParameters(socket_timeout=2.0) twisted_connection.TwistedConnection(params) settimeout.assert_called_with(2.0) @unittest.skipUnless(libev_connection is not None, 'pyev is not installed') @patch.object(socket.socket, 'settimeout') @patch.object(socket.socket, 'connect') def test_libev_connection_timeout(self, connect, settimeout): connect.side_effect = mock_timeout with self.assertRaises(exceptions.AMQPConnectionError): params = pika.ConnectionParameters(socket_timeout=2.0) libev_connection.LibevConnection(params) settimeout.assert_called_with(2.0) pika-0.10.0/tests/unit/content_frame_dispatcher_tests.py000066400000000000000000000141411257163076400234770ustar00rootroot00000000000000# -*- encoding: utf-8 -*- """ Tests for pika.channel.ContentFrameDispatcher """ import marshal try: import unittest2 as unittest except ImportError: import unittest from pika import channel from pika import exceptions from pika import frame from pika import spec class ContentFrameDispatcherTests(unittest.TestCase): def setUp(self): self.obj = channel.ContentFrameDispatcher() def 
test_init_method_frame(self): self.assertEqual(self.obj._method_frame, None) def test_init_header_frame(self): self.assertEqual(self.obj._header_frame, None) def test_init_seen_so_far(self): self.assertEqual(self.obj._seen_so_far, 0) def test_init_body_fragments(self): self.assertEqual(self.obj._body_fragments, list()) def test_process_with_basic_deliver(self): value = frame.Method(1, spec.Basic.Deliver()) self.obj.process(value) self.assertEqual(self.obj._method_frame, value) def test_process_with_content_header(self): value = frame.Header(1, 100, spec.BasicProperties) self.obj.process(value) self.assertEqual(self.obj._header_frame, value) def test_process_with_body_frame_partial(self): value = frame.Header(1, 100, spec.BasicProperties) self.obj.process(value) value = frame.Method(1, spec.Basic.Deliver()) self.obj.process(value) value = frame.Body(1, b'abc123') self.obj.process(value) self.assertEqual(self.obj._body_fragments, [value.fragment]) def test_process_with_full_message(self): method_frame = frame.Method(1, spec.Basic.Deliver()) self.obj.process(method_frame) header_frame = frame.Header(1, 6, spec.BasicProperties) self.obj.process(header_frame) body_frame = frame.Body(1, b'abc123') response = self.obj.process(body_frame) self.assertEqual(response, (method_frame, header_frame, b'abc123')) def test_process_with_body_frame_six_bytes(self): method_frame = frame.Method(1, spec.Basic.Deliver()) self.obj.process(method_frame) header_frame = frame.Header(1, 10, spec.BasicProperties) self.obj.process(header_frame) body_frame = frame.Body(1, b'abc123') self.obj.process(body_frame) self.assertEqual(self.obj._seen_so_far, 6) def test_process_with_body_frame_too_big(self): method_frame = frame.Method(1, spec.Basic.Deliver()) self.obj.process(method_frame) header_frame = frame.Header(1, 6, spec.BasicProperties) self.obj.process(header_frame) body_frame = frame.Body(1, b'abcd1234') self.assertRaises(exceptions.BodyTooLongError, self.obj.process, body_frame) def 
test_process_with_unexpected_frame_type(self): value = frame.Method(1, spec.Basic.Qos()) self.assertRaises(exceptions.UnexpectedFrameError, self.obj.process, value) def test_reset_method_frame(self): method_frame = frame.Method(1, spec.Basic.Deliver()) self.obj.process(method_frame) header_frame = frame.Header(1, 10, spec.BasicProperties) self.obj.process(header_frame) body_frame = frame.Body(1, b'abc123') self.obj.process(body_frame) self.obj._reset() self.assertEqual(self.obj._method_frame, None) def test_reset_header_frame(self): method_frame = frame.Method(1, spec.Basic.Deliver()) self.obj.process(method_frame) header_frame = frame.Header(1, 10, spec.BasicProperties) self.obj.process(header_frame) body_frame = frame.Body(1, b'abc123') self.obj.process(body_frame) self.obj._reset() self.assertEqual(self.obj._header_frame, None) def test_reset_seen_so_far(self): method_frame = frame.Method(1, spec.Basic.Deliver()) self.obj.process(method_frame) header_frame = frame.Header(1, 10, spec.BasicProperties) self.obj.process(header_frame) body_frame = frame.Body(1, b'abc123') self.obj.process(body_frame) self.obj._reset() self.assertEqual(self.obj._seen_so_far, 0) def test_reset_body_fragments(self): method_frame = frame.Method(1, spec.Basic.Deliver()) self.obj.process(method_frame) header_frame = frame.Header(1, 10, spec.BasicProperties) self.obj.process(header_frame) body_frame = frame.Body(1, b'abc123') self.obj.process(body_frame) self.obj._reset() self.assertEqual(self.obj._body_fragments, list()) def test_ascii_bytes_body_instance(self): method_frame = frame.Method(1, spec.Basic.Deliver()) self.obj.process(method_frame) header_frame = frame.Header(1, 11, spec.BasicProperties) self.obj.process(header_frame) body_frame = frame.Body(1, b'foo-bar-baz') method_frame, header_frame, body_value = self.obj.process(body_frame) self.assertIsInstance(body_value, bytes) def test_ascii_body_value(self): expectation = b'foo-bar-baz' self.obj = channel.ContentFrameDispatcher() 
        method_frame = frame.Method(1, spec.Basic.Deliver())
        self.obj.process(method_frame)
        header_frame = frame.Header(1, 11, spec.BasicProperties)
        self.obj.process(header_frame)
        body_frame = frame.Body(1, b'foo-bar-baz')
        method_frame, header_frame, body_value = self.obj.process(body_frame)
        self.assertEqual(body_value, expectation)
        self.assertIsInstance(body_value, bytes)

    def test_binary_non_unicode_value(self):
        expectation = ('a', 0.8)
        self.obj = channel.ContentFrameDispatcher()
        method_frame = frame.Method(1, spec.Basic.Deliver())
        self.obj.process(method_frame)
        marshalled_body = marshal.dumps(expectation)
        header_frame = frame.Header(1, len(marshalled_body),
                                    spec.BasicProperties)
        self.obj.process(header_frame)
        body_frame = frame.Body(1, marshalled_body)
        method_frame, header_frame, body_value = self.obj.process(body_frame)
        self.assertEqual(marshal.loads(body_value), expectation)


# ==== pika-0.10.0/tests/unit/credentials_tests.py ====
"""
Tests for pika.credentials

"""
try:
    import mock
except ImportError:
    from unittest import mock

try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika import credentials
from pika import spec


class PlainCredentialsTests(unittest.TestCase):

    CREDENTIALS = 'guest', 'guest'

    def test_response_for(self):
        obj = credentials.PlainCredentials(*self.CREDENTIALS)
        start = spec.Connection.Start()
        self.assertEqual(obj.response_for(start),
                         ('PLAIN', b'\x00guest\x00guest'))

    def test_erase_response_for_no_mechanism_match(self):
        obj = credentials.PlainCredentials(*self.CREDENTIALS)
        start = spec.Connection.Start()
        start.mechanisms = 'FOO BAR BAZ'
        self.assertEqual(obj.response_for(start), (None, None))

    def test_erase_credentials_false(self):
        obj = credentials.PlainCredentials(*self.CREDENTIALS)
        obj.erase_credentials()
        self.assertEqual((obj.username, obj.password), self.CREDENTIALS)

    def test_erase_credentials_true(self):
        obj = credentials.PlainCredentials(self.CREDENTIALS[0],
                                           self.CREDENTIALS[1], True)
        obj.erase_credentials()
        self.assertEqual((obj.username, obj.password), (None, None))


class ExternalCredentialsTest(unittest.TestCase):

    def test_response_for(self):
        obj = credentials.ExternalCredentials()
        start = spec.Connection.Start()
        start.mechanisms = 'PLAIN EXTERNAL'
        self.assertEqual(obj.response_for(start), ('EXTERNAL', b''))

    def test_erase_response_for_no_mechanism_match(self):
        obj = credentials.ExternalCredentials()
        start = spec.Connection.Start()
        start.mechanisms = 'FOO BAR BAZ'
        self.assertEqual(obj.response_for(start), (None, None))

    def test_erase_credentials(self):
        with mock.patch('pika.credentials.LOGGER', autospec=True) as logger:
            obj = credentials.ExternalCredentials()
            obj.erase_credentials()
            logger.debug.assert_called_once_with('Not supported by this '
                                                 'Credentials type')


# ==== pika-0.10.0/tests/unit/data_tests.py ====
# -*- encoding: utf-8 -*-
"""
pika.data tests

"""
import datetime
import decimal
import platform

try:
    import unittest2 as unittest
except ImportError:
    import unittest

try:
    from collections import OrderedDict
except ImportError:
    from ordereddict import OrderedDict

from pika import data
from pika import exceptions
from pika.compat import long


class DataTests(unittest.TestCase):

    FIELD_TBL_ENCODED = (
        b'\x00\x00\x00\xbb'
        b'\x05arrayA\x00\x00\x00\x0fI\x00\x00\x00\x01I\x00\x00\x00\x02I\x00\x00\x00\x03'
        b'\x07boolvalt\x01'
        b'\x07decimalD\x02\x00\x00\x01:'
        b'\x0bdecimal_tooD\x00\x00\x00\x00d'
        b'\x07dictvalF\x00\x00\x00\x0c\x03fooS\x00\x00\x00\x03bar'
        b'\x06intvalI\x00\x00\x00\x01'
        b'\x07longvall\x00\x00\x00\x006e&U'
        b'\x04nullV'
        b'\x06strvalS\x00\x00\x00\x04Test'
        b'\x0ctimestampvalT\x00\x00\x00\x00Ec)\x92'
        b'\x07unicodeS\x00\x00\x00\x08utf8=\xe2\x9c\x93'
    )

    FIELD_TBL_VALUE = OrderedDict([
        ('array', [1, 2, 3]),
        ('boolval', True),
        ('decimal', decimal.Decimal('3.14')),
        ('decimal_too', decimal.Decimal('100')),
        ('dictval', {'foo': 'bar'}),
        ('intval', 1),
        ('longval', long(912598613)),
        ('null', None),
        ('strval', 'Test'),
        ('timestampval', datetime.datetime(2006, 11, 21, 16, 30, 10)),
        ('unicode', u'utf8=✓')
    ])

    def test_encode_table(self):
        result = []
        data.encode_table(result, self.FIELD_TBL_VALUE)
        self.assertEqual(b''.join(result), self.FIELD_TBL_ENCODED)

    def test_encode_table_bytes(self):
        result = []
        byte_count = data.encode_table(result, self.FIELD_TBL_VALUE)
        self.assertEqual(byte_count, 191)

    def test_decode_table(self):
        value, byte_count = data.decode_table(self.FIELD_TBL_ENCODED, 0)
        self.assertDictEqual(value, self.FIELD_TBL_VALUE)

    def test_decode_table_bytes(self):
        value, byte_count = data.decode_table(self.FIELD_TBL_ENCODED, 0)
        self.assertEqual(byte_count, 191)

    def test_encode_raises(self):
        self.assertRaises(exceptions.UnsupportedAMQPFieldException,
                          data.encode_table, [], {'foo': set([1, 2, 3])})

    def test_decode_raises(self):
        self.assertRaises(exceptions.InvalidFieldTypeException,
                          data.decode_table,
                          b'\x00\x00\x00\t\x03fooZ\x00\x00\x04\xd2', 0)


# ==== pika-0.10.0/tests/unit/exceptions_test.py ====
"""
Tests for pika.exceptions

"""
try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika import exceptions


class ExceptionTests(unittest.TestCase):

    def test_amqp_connection_error_one_param_repr(self):
        self.assertEqual(
            repr(exceptions.AMQPConnectionError(10)),
            "No connection could be opened after 10 connection attempts")

    def test_amqp_connection_error_two_params_repr(self):
        self.assertEqual(repr(exceptions.AMQPConnectionError(1, 'Test')),
                         "1: Test")

    def test_authentication_error_repr(self):
        self.assertEqual(repr(exceptions.AuthenticationError('PLAIN')),
                         'Server and client could not negotiate use of the '
                         'PLAIN authentication mechanism')

    def test_body_too_long_error_repr(self):
        self.assertEqual(repr(exceptions.BodyTooLongError(100, 50)),
                         'Received too many bytes for a message delivery: '
                         'Received 100, expected 50')

    def test_invalid_minimum_frame_size_repr(self):
        self.assertEqual(repr(exceptions.InvalidMinimumFrameSize()),
                         'AMQP Minimum Frame Size is 4096 Bytes')

    def test_invalid_maximum_frame_size_repr(self):
        self.assertEqual(repr(exceptions.InvalidMaximumFrameSize()),
                         'AMQP Maximum Frame Size is 131072 Bytes')


# ==== pika-0.10.0/tests/unit/frame_tests.py ====
"""
Tests for pika.frame

"""
try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika import exceptions
from pika import frame
from pika import spec


class FrameTests(unittest.TestCase):

    BASIC_ACK = (b'\x01\x00\x01\x00\x00\x00\r\x00<\x00P\x00\x00\x00\x00\x00\x00'
                 b'\x00d\x00\xce')
    BODY_FRAME = b'\x03\x00\x01\x00\x00\x00\x14I like it that sound\xce'
    BODY_FRAME_VALUE = b'I like it that sound'
    CONTENT_HEADER = (b'\x02\x00\x01\x00\x00\x00\x0f\x00<\x00\x00\x00'
                      b'\x00\x00\x00\x00\x00\x00d\x10\x00\x02\xce')
    HEARTBEAT = b'\x08\x00\x00\x00\x00\x00\x00\xce'
    PROTOCOL_HEADER = b'AMQP\x00\x00\t\x01'

    def frame_marshal_not_implemented_test(self):
        frame_obj = frame.Frame(0x000A000B, 1)
        self.assertRaises(NotImplementedError, frame_obj.marshal)

    def frame_underscore_marshal_test(self):
        basic_ack = frame.Method(1, spec.Basic.Ack(100))
        self.assertEqual(basic_ack.marshal(), self.BASIC_ACK)

    def headers_marshal_test(self):
        header = frame.Header(1, 100, spec.BasicProperties(delivery_mode=2))
        self.assertEqual(header.marshal(), self.CONTENT_HEADER)

    def body_marshal_test(self):
        body = frame.Body(1, b'I like it that sound')
        self.assertEqual(body.marshal(), self.BODY_FRAME)

    def heartbeat_marshal_test(self):
        heartbeat = frame.Heartbeat()
        self.assertEqual(heartbeat.marshal(), self.HEARTBEAT)

    def protocol_header_marshal_test(self):
        protocol_header = frame.ProtocolHeader()
        self.assertEqual(protocol_header.marshal(), self.PROTOCOL_HEADER)

    def decode_protocol_header_instance_test(self):
        self.assertIsInstance(frame.decode_frame(self.PROTOCOL_HEADER)[1],
                              frame.ProtocolHeader)

    def decode_protocol_header_bytes_test(self):
        self.assertEqual(frame.decode_frame(self.PROTOCOL_HEADER)[0], 8)

    def decode_method_frame_instance_test(self):
        self.assertIsInstance(frame.decode_frame(self.BASIC_ACK)[1],
                              frame.Method)

    def decode_protocol_header_failure_test(self):
        self.assertEqual(frame.decode_frame(b'AMQPa'), (0, None))

    def decode_method_frame_bytes_test(self):
        self.assertEqual(frame.decode_frame(self.BASIC_ACK)[0], 21)

    def decode_method_frame_method_test(self):
        self.assertIsInstance(frame.decode_frame(self.BASIC_ACK)[1].method,
                              spec.Basic.Ack)

    def decode_header_frame_instance_test(self):
        self.assertIsInstance(frame.decode_frame(self.CONTENT_HEADER)[1],
                              frame.Header)

    def decode_header_frame_bytes_test(self):
        self.assertEqual(frame.decode_frame(self.CONTENT_HEADER)[0], 23)

    def decode_header_frame_properties_test(self):
        frame_value = frame.decode_frame(self.CONTENT_HEADER)[1]
        self.assertIsInstance(frame_value.properties, spec.BasicProperties)

    def decode_frame_decoding_failure_test(self):
        self.assertEqual(frame.decode_frame(b'\x01\x00\x01\x00\x00\xce'),
                         (0, None))

    def decode_frame_decoding_no_end_byte_test(self):
        self.assertEqual(frame.decode_frame(self.BASIC_ACK[:-1]), (0, None))

    def decode_frame_decoding_wrong_end_byte_test(self):
        self.assertRaises(exceptions.InvalidFrameError, frame.decode_frame,
                          self.BASIC_ACK[:-1] + b'A')

    def decode_body_frame_instance_test(self):
        self.assertIsInstance(frame.decode_frame(self.BODY_FRAME)[1],
                              frame.Body)

    def decode_body_frame_fragment_test(self):
        self.assertEqual(frame.decode_frame(self.BODY_FRAME)[1].fragment,
                         self.BODY_FRAME_VALUE)

    def decode_body_frame_fragment_consumed_bytes_test(self):
        self.assertEqual(frame.decode_frame(self.BODY_FRAME)[0], 28)

    def decode_heartbeat_frame_test(self):
        self.assertIsInstance(frame.decode_frame(self.HEARTBEAT)[1],
                              frame.Heartbeat)

    def decode_heartbeat_frame_bytes_consumed_test(self):
        self.assertEqual(frame.decode_frame(self.HEARTBEAT)[0], 8)

    def decode_frame_invalid_frame_type_test(self):
        self.assertRaises(exceptions.InvalidFrameError, frame.decode_frame,
                          b'\x09\x00\x00\x00\x00\x00\x00\xce')


# ==== pika-0.10.0/tests/unit/heartbeat_tests.py ====
"""
Tests for pika.heartbeat

"""
try:
    import mock
except ImportError:
    from unittest import mock

try:
    import unittest2 as unittest
except ImportError:
    import unittest

from pika import connection
from pika import frame
from pika import heartbeat


class HeartbeatTests(unittest.TestCase):

    INTERVAL = 5

    def setUp(self):
        self.mock_conn = mock.Mock(spec=connection.Connection)
        self.mock_conn.bytes_received = 100
        self.mock_conn.bytes_sent = 100
        self.mock_conn.heartbeat = mock.Mock(spec=heartbeat.HeartbeatChecker)
        self.obj = heartbeat.HeartbeatChecker(self.mock_conn, self.INTERVAL)

    def tearDown(self):
        del self.obj
        del self.mock_conn

    def test_default_initialization_max_idle_count(self):
        self.assertEqual(self.obj._max_idle_count, self.obj.MAX_IDLE_COUNT)

    def test_constructor_assignment_connection(self):
        self.assertEqual(self.obj._connection, self.mock_conn)

    def test_constructor_assignment_heartbeat_interval(self):
        self.assertEqual(self.obj._interval, self.INTERVAL)

    def test_constructor_initial_bytes_received(self):
        self.assertEqual(self.obj._bytes_received, 0)

    def test_constructor_initial_bytes_sent(self):
        self.assertEqual(self.obj._bytes_sent, 0)

    def test_constructor_initial_heartbeat_frames_received(self):
        self.assertEqual(self.obj._heartbeat_frames_received, 0)

    def test_constructor_initial_heartbeat_frames_sent(self):
        self.assertEqual(self.obj._heartbeat_frames_sent, 0)

    def test_constructor_initial_idle_byte_intervals(self):
        self.assertEqual(self.obj._idle_byte_intervals, 0)

    @mock.patch('pika.heartbeat.HeartbeatChecker._setup_timer')
    def test_constructor_called_setup_timer(self, timer):
        obj = heartbeat.HeartbeatChecker(self.mock_conn, self.INTERVAL)
        timer.assert_called_once_with()

    def test_active_true(self):
        self.mock_conn.heartbeat = self.obj
        self.assertTrue(self.obj.active)

    def test_active_false(self):
        self.mock_conn.heartbeat = mock.Mock()
        self.assertFalse(self.obj.active)

    def test_bytes_received_on_connection(self):
        self.mock_conn.bytes_received = 128
        self.assertEqual(self.obj.bytes_received_on_connection, 128)

    def test_connection_is_idle_false(self):
        self.assertFalse(self.obj.connection_is_idle)

    def test_connection_is_idle_true(self):
        self.obj._idle_byte_intervals = self.INTERVAL
        self.assertTrue(self.obj.connection_is_idle)

    def test_received(self):
        self.obj.received()
        self.assertEqual(self.obj._heartbeat_frames_received, 1)

    @mock.patch('pika.heartbeat.HeartbeatChecker._close_connection')
    def test_send_and_check_not_closed(self, close_connection):
        obj = heartbeat.HeartbeatChecker(self.mock_conn, self.INTERVAL)
        obj.send_and_check()
        close_connection.assert_not_called()

    @mock.patch('pika.heartbeat.HeartbeatChecker._close_connection')
    def test_send_and_check_missed_bytes(self, close_connection):
        obj = heartbeat.HeartbeatChecker(self.mock_conn, self.INTERVAL)
        obj._idle_byte_intervals = self.INTERVAL
        obj.send_and_check()
        close_connection.assert_called_once_with()

    def test_send_and_check_increment_no_bytes(self):
        self.mock_conn.bytes_received = 100
        self.obj._bytes_received = 100
        self.obj.send_and_check()
        self.assertEqual(self.obj._idle_byte_intervals, 1)

    def test_send_and_check_increment_bytes(self):
        self.mock_conn.bytes_received = 100
        self.obj._bytes_received = 128
        self.obj.send_and_check()
        self.assertEqual(self.obj._idle_byte_intervals, 0)

    @mock.patch('pika.heartbeat.HeartbeatChecker._update_counters')
    def test_send_and_check_update_counters(self, update_counters):
        obj = heartbeat.HeartbeatChecker(self.mock_conn, self.INTERVAL)
        obj.send_and_check()
        update_counters.assert_called_once_with()

    @mock.patch('pika.heartbeat.HeartbeatChecker._send_heartbeat_frame')
    def test_send_and_check_send_heartbeat_frame(self, send_heartbeat_frame):
        obj = heartbeat.HeartbeatChecker(self.mock_conn, self.INTERVAL)
        obj.send_and_check()
        send_heartbeat_frame.assert_called_once_with()

    @mock.patch('pika.heartbeat.HeartbeatChecker._start_timer')
    def test_send_and_check_start_timer(self, start_timer):
        obj = heartbeat.HeartbeatChecker(self.mock_conn, self.INTERVAL)
        obj.send_and_check()
        start_timer.assert_called_once_with()

    def test_connection_close(self):
        self.obj._idle_byte_intervals = 3
        self.obj._idle_heartbeat_intervals = 4
        self.obj._close_connection()
        reason = self.obj._STALE_CONNECTION % (
            self.obj._max_idle_count * self.obj._interval)
        self.mock_conn.close.assert_called_once_with(
            self.obj._CONNECTION_FORCED, reason)
        self.mock_conn._on_disconnect.assert_called_once_with(
            self.obj._CONNECTION_FORCED, reason)
        self.mock_conn._adapter_disconnect.assert_called_once_with()

    def test_has_received_data_false(self):
        self.obj._bytes_received = 100
        self.assertFalse(self.obj._has_received_data)

    def test_has_received_data_true(self):
        self.mock_conn.bytes_received = 128
        self.obj._bytes_received = 100
        self.assertTrue(self.obj._has_received_data)

    def test_new_heartbeat_frame(self):
        self.assertIsInstance(self.obj._new_heartbeat_frame(), frame.Heartbeat)

    def test_send_heartbeat_send_frame_called(self):
        self.obj._send_heartbeat_frame()
        self.mock_conn._send_frame.assert_called_once()

    def test_send_heartbeat_counter_incremented(self):
        self.obj._send_heartbeat_frame()
        self.assertEqual(self.obj._heartbeat_frames_sent, 1)

    def test_setup_timer_called(self):
        self.obj._setup_timer()
        self.mock_conn.add_timeout.called_once_with(self.INTERVAL,
                                                    self.obj.send_and_check)

    @mock.patch('pika.heartbeat.HeartbeatChecker._setup_timer')
    def test_start_timer_not_active(self, setup_timer):
        self.obj._start_timer()
        setup_timer.assert_not_called()

    @mock.patch('pika.heartbeat.HeartbeatChecker._setup_timer')
    def test_start_timer_active(self, setup_timer):
        self.mock_conn.heartbeat = self.obj
        self.obj._start_timer()
        self.assertTrue(setup_timer.called)

    def test_update_counters_bytes_received(self):
        self.mock_conn.bytes_received = 256
        self.obj._update_counters()
        self.assertEqual(self.obj._bytes_received, 256)

    def test_update_counters_bytes_sent(self):
        self.mock_conn.bytes_sent = 256
        self.obj._update_counters()
        self.assertEqual(self.obj._bytes_sent, 256)


# ==== pika-0.10.0/tests/unit/parameter_tests.py ====
import unittest

import pika


class ParameterTests(unittest.TestCase):

    def test_parameters_accepts_plain_string_virtualhost(self):
        parameters = pika.ConnectionParameters(virtual_host="prtfqpeo")
        self.assertEqual(parameters.virtual_host, "prtfqpeo")

    def test_parameters_accepts_unicode_virtualhost(self):
        parameters = pika.ConnectionParameters(virtual_host=u"prtfqpeo")
        self.assertEqual(parameters.virtual_host, "prtfqpeo")

    def test_parameters_accept_plain_string_locale(self):
        parameters = pika.ConnectionParameters(locale="en_US")
        self.assertEqual(parameters.locale, "en_US")

    def test_parameters_accept_unicode_locale(self):
        parameters = pika.ConnectionParameters(locale=u"en_US")
        self.assertEqual(parameters.locale, "en_US")

    def test_urlparameters_accepts_plain_string(self):
        parameters = pika.URLParameters(
            "amqp://prtfqpeo:oihdglkhcp0@myserver.mycompany.com:5672/prtfqpeo?locale=en_US")
        self.assertEqual(parameters.port, 5672)
        self.assertEqual(parameters.virtual_host, "prtfqpeo")
        self.assertEqual(parameters.credentials.password, "oihdglkhcp0")
        self.assertEqual(parameters.credentials.username, "prtfqpeo")
        self.assertEqual(parameters.locale, "en_US")

    def test_urlparameters_accepts_unicode_string(self):
        parameters = pika.URLParameters(
            u"amqp://prtfqpeo:oihdglkhcp0@myserver.mycompany.com:5672/prtfqpeo?locale=en_US")
        self.assertEqual(parameters.port, 5672)
        self.assertEqual(parameters.virtual_host, "prtfqpeo")
        self.assertEqual(parameters.credentials.password, "oihdglkhcp0")
        self.assertEqual(parameters.credentials.username, "prtfqpeo")
        self.assertEqual(parameters.locale, "en_US")

    def test_urlparameters_uses_default_port_if_not_specified(self):
        parameters = pika.URLParameters("amqp://myserver.mycompany.com")
        self.assertEqual(parameters.port, pika.URLParameters.DEFAULT_PORT)

    def test_urlparameters_uses_default_virtual_host_if_not_specified(self):
        parameters = pika.URLParameters("amqp://myserver.mycompany.com")
        self.assertEqual(parameters.virtual_host,
                         pika.URLParameters.DEFAULT_VIRTUAL_HOST)

    def test_urlparameters_uses_default_virtual_host_if_only_slash_is_specified(
            self):
        parameters = pika.URLParameters("amqp://myserver.mycompany.com/")
        self.assertEqual(parameters.virtual_host,
                         pika.URLParameters.DEFAULT_VIRTUAL_HOST)

    def test_urlparameters_uses_default_username_and_password_if_not_specified(
            self):
        parameters = pika.URLParameters("amqp://myserver.mycompany.com")
        self.assertEqual(parameters.credentials.username,
                         pika.URLParameters.DEFAULT_USERNAME)
        self.assertEqual(parameters.credentials.password,
                         pika.URLParameters.DEFAULT_PASSWORD)

    def test_urlparameters_accepts_blank_username_and_password(self):
        parameters = pika.URLParameters("amqp://:@myserver.mycompany.com")
        self.assertEqual(parameters.credentials.username, "")
        self.assertEqual(parameters.credentials.password, "")


# ==== pika-0.10.0/tests/unit/select_connection_ioloop_tests.py ====
# -*- coding: utf-8 -*-
"""
Tests for SelectConnection IOLoops

"""
# Disable warnings about initialization of members outside of __init__
# pylint: disable=W0201
# Disable warnings about too many public methods as they are in base classes
# pylint: disable=R0904
import logging

try:
    import unittest2 as unittest
except ImportError:
    import unittest

import os
import socket
import errno
import time
import threading

import pika
from pika.adapters import select_connection
from pika.adapters.select_connection import READ, WRITE, ERROR
from functools import partial


class IOLoopBaseTest(unittest.TestCase):
    SELECT_POLLER = None
    TIMEOUT = 1.0

    def setUp(self):
        select_connection.SELECT_TYPE = self.SELECT_POLLER
        self.ioloop = select_connection.IOLoop()

    def tearDown(self):
        self.ioloop.remove_timeout(self.fail_timer)
        self.ioloop = None

    def start(self):
        self.fail_timer = self.ioloop.add_timeout(self.TIMEOUT,
                                                  self.on_timeout)
        self.ioloop.start()

    def on_timeout(self):
        """called when stuck waiting for connection to close"""
        # force the ioloop to stop
        self.ioloop.stop()
        raise AssertionError('Test timed out')


class IOLoopThreadStopTestSelect(IOLoopBaseTest):
    SELECT_POLLER = 'select'

    def start_test(self):
        t = threading.Timer(0.1, self.ioloop.stop)
        t.start()
        self.start()


class IOLoopThreadStopTestPoll(IOLoopThreadStopTestSelect):
    SELECT_POLLER = 'poll'


class IOLoopThreadStopTestEPoll(IOLoopThreadStopTestSelect):
    SELECT_POLLER = 'epoll'


class IOLoopThreadStopTestKqueue(IOLoopThreadStopTestSelect):
    SELECT_POLLER = 'kqueue'


class IOLoopTimerTestSelect(IOLoopBaseTest):
    """Set a bunch of very short timers to fire in reverse order and check
    that they fire in order of time, not insertion order."""
    NUM_TIMERS = 5
    TIMER_INTERVAL = 0.02
    SELECT_POLLER = 'select'

    def set_timers(self):
        """Set timers that fire in succession with the specified interval."""
        self.timer_stack = list()
        for i in range(self.NUM_TIMERS, 0, -1):
            deadline = i * self.TIMER_INTERVAL
            self.ioloop.add_timeout(deadline, partial(self.on_timer, i))
            self.timer_stack.append(i)

    def start_test(self):
        """Set timers and start ioloop."""
        self.set_timers()
        self.start()

    def on_timer(self, val):
        """A timeout handler that verifies that the given parameter matches
        what is expected."""
        self.assertEqual(val, self.timer_stack.pop())
        if not self.timer_stack:
            self.ioloop.stop()

    def test_normal(self):
        """Setup 5 timeout handlers and observe them get invoked one by one."""
        self.start_test()

    def test_timer_for_deleting_itself(self):
        """Verifies that an attempt to delete a timeout within the
        corresponding handler generates no exceptions."""
        self.timer_stack = list()
        handle_holder = []
        self.timer_got_called = False
        self.handle = self.ioloop.add_timeout(
            0.1, partial(self._on_timer_delete_itself, handle_holder))
        handle_holder.append(self.handle)
        self.start()
        self.assertTrue(self.timer_got_called)

    def _on_timer_delete_itself(self, handle_holder):
        """A timeout handler that tries to remove itself."""
        self.assertEqual(self.handle, handle_holder.pop())
        # This removal here should not raise exception by itself nor
        # in the caller SelectPoller.process_timeouts().
        self.timer_got_called = True
        self.ioloop.remove_timeout(self.handle)
        self.ioloop.stop()

    def test_timer_delete_another(self):
        """Verifies that an attempt by a timeout handler to delete another,
        that is ready to run, cancels the execution of the latter without
        generating an exception. This should pose no issues."""
        holder_for_target_timer = []
        self.ioloop.add_timeout(
            0.01,
            partial(self._on_timer_delete_another, holder_for_target_timer))
        timer_2 = self.ioloop.add_timeout(0.02, self._on_timer_no_call)
        holder_for_target_timer.append(timer_2)
        time.sleep(0.03)  # so that timer_1 and timer_2 fire at the same time
        self.start()
        self.assertTrue(self.deleted_another_timer)
        self.assertTrue(self.concluded)

    def _on_timer_delete_another(self, holder):
        """A timeout handler that tries to remove another timeout handler
        that is ready to run. This should pose no issues."""
        target_timer = holder[0]
        self.ioloop.remove_timeout(target_timer)
        self.deleted_another_timer = True

        def _on_timer_conclude():
            """A timeout handler that is called to verify the outcome of
            calling or not calling the previously set handlers."""
            self.concluded = True
            self.assertTrue(self.deleted_another_timer)
            self.assertNotIn(target_timer, getattr(self.ioloop, '_timeouts'))
            self.ioloop.stop()

        self.ioloop.add_timeout(0.01, _on_timer_conclude)

    def _on_timer_no_call(self):
        """A timeout handler that is used when it's assumed not to be
        called."""
        self.fail('deleted timer callback was called.')


class IOLoopTimerTestPoll(IOLoopTimerTestSelect):
    SELECT_POLLER = 'poll'


class IOLoopTimerTestEPoll(IOLoopTimerTestSelect):
    SELECT_POLLER = 'epoll'


class IOLoopTimerTestKqueue(IOLoopTimerTestSelect):
    SELECT_POLLER = 'kqueue'


class IOLoopSleepTimerTestSelect(IOLoopTimerTestSelect):
    """Sleep until all the timers should have passed and check they still
    fire in deadline order"""

    def start_test(self):
        self.set_timers()
        time.sleep(self.NUM_TIMERS * self.TIMER_INTERVAL)
        self.start()


class IOLoopSleepTimerTestPoll(IOLoopSleepTimerTestSelect):
    SELECT_POLLER = 'poll'


class IOLoopSleepTimerTestEPoll(IOLoopSleepTimerTestSelect):
    SELECT_POLLER = 'epoll'


class IOLoopSleepTimerTestKqueue(IOLoopSleepTimerTestSelect):
    SELECT_POLLER = 'kqueue'


class IOLoopSocketBaseSelect(IOLoopBaseTest):
    SELECT_POLLER = 'select'
    READ_SIZE = 1024

    def save_sock(self, sock):
        fd = sock.fileno()
        self.sock_map[fd] = sock
        return fd

    def setUp(self):
        super(IOLoopSocketBaseSelect, self).setUp()
        self.sock_map = dict()
        self.create_accept_socket()

    def tearDown(self):
        for fd in self.sock_map:
            self.ioloop.remove_handler(fd)
            self.sock_map[fd].close()
        super(IOLoopSocketBaseSelect, self).tearDown()

    def create_accept_socket(self):
        listen_sock = socket.socket()
        listen_sock.setblocking(0)
        listen_sock.bind(('localhost', 0))
        listen_sock.listen(1)
        fd = self.save_sock(listen_sock)
        self.listen_addr = listen_sock.getsockname()
        self.ioloop.add_handler(fd, self.do_accept, READ)

    def create_write_socket(self, on_connected):
        write_sock = socket.socket()
        write_sock.setblocking(0)
        err = write_sock.connect_ex(self.listen_addr)
        self.assertEqual(err, errno.EINPROGRESS)
        fd = self.save_sock(write_sock)
        self.ioloop.add_handler(fd, on_connected, WRITE)
        return write_sock

    def do_accept(self, fd, events, write_only):
        self.assertEqual(events, READ)
        listen_sock = self.sock_map[fd]
        read_sock, _ = listen_sock.accept()
        fd = self.save_sock(read_sock)
        self.ioloop.add_handler(fd, self.do_read, READ)

    def connected(self, fd, events, write_only):
        raise AssertionError("IOLoopSocketBase.connected not extended")

    def do_read(self, fd, events, write_only):
        self.assertEqual(events, READ)
        self.verify_message(os.read(fd, self.READ_SIZE))

    def verify_message(self, msg):
        raise AssertionError("IOLoopSocketBase.verify_message not extended")

    def on_timeout(self):
        """called when stuck waiting for connection to close"""
        # force the ioloop to stop
        self.ioloop.stop()
        raise AssertionError('Test timed out')


class IOLoopSocketBasePoll(IOLoopSocketBaseSelect):
    SELECT_POLLER = 'poll'


class IOLoopSocketBaseEPoll(IOLoopSocketBaseSelect):
    SELECT_POLLER = 'epoll'


class IOLoopSocketBaseKqueue(IOLoopSocketBaseSelect):
    SELECT_POLLER = 'kqueue'


class IOLoopSimpleMessageTestCaseSelect(IOLoopSocketBaseSelect):

    def start(self):
        self.create_write_socket(self.connected)
        super(IOLoopSimpleMessageTestCaseSelect, self).start()

    def connected(self, fd, events, write_only):
        self.assertEqual(events, WRITE)
        logging.debug("Writing to %d message: %s", fd, 'X')
        os.write(fd, b'X')
        self.ioloop.update_handler(fd, 0)

    def verify_message(self, msg):
        self.assertEqual(msg, b'X')
        self.ioloop.stop()

    def start_test(self):
        self.start()


class IOLoopSimpleMessageTestCasetPoll(IOLoopSimpleMessageTestCaseSelect):
    SELECT_POLLER = 'poll'


class IOLoopSimpleMessageTestCasetEPoll(IOLoopSimpleMessageTestCaseSelect):
    SELECT_POLLER = 'epoll'


class
IOLoopSimpleMessageTestCasetKqueue(IOLoopSimpleMessageTestCaseSelect): SELECT_POLLER = 'kqueue' pika-0.10.0/tests/unit/tornado_tests.py000066400000000000000000000013351257163076400201140ustar00rootroot00000000000000""" Tests for pika.adapters.tornado_connection """ try: from tornado import ioloop except ImportError: ioloop = None try: import mock except ImportError: from unittest import mock try: import unittest2 as unittest except ImportError: import unittest try: from pika.adapters import tornado_connection except ImportError: tornado_connection = None class TornadoConnectionTests(unittest.TestCase): @unittest.skipIf(ioloop is None, 'requires Tornado') @mock.patch('pika.adapters.base_connection.BaseConnection.__init__') def test_tornado_connection_call_parent(self, mock_init): obj = tornado_connection.TornadoConnection() mock_init.called_once_with(None, None, False) pika-0.10.0/tests/unit/utils_tests.py000066400000000000000000000005071257163076400176060ustar00rootroot00000000000000try: import unittest2 as unittest except ImportError: import unittest from pika import utils class UtilsTests(unittest.TestCase): def test_is_callable_true(self): self.assertTrue(utils.is_callable(utils.is_callable)) def test_is_callable_false(self): self.assertFalse(utils.is_callable(1)) pika-0.10.0/utils/000077500000000000000000000000001257163076400136675ustar00rootroot00000000000000pika-0.10.0/utils/codegen.py000066400000000000000000000350121257163076400156460ustar00rootroot00000000000000# ***** BEGIN LICENSE BLOCK ***** # # For copyright and licensing please refer to COPYING. 
#
# ***** END LICENSE BLOCK *****
from __future__ import nested_scopes

import os
import sys

RABBITMQ_PUBLIC_UMBRELLA = '../../rabbitmq-public-umbrella'
RABBITMQ_CODEGEN = 'rabbitmq-codegen'

PIKA_SPEC = '../pika/spec.py'

CODEGEN_PATH = os.path.realpath('%s/%s' % (RABBITMQ_PUBLIC_UMBRELLA,
                                           RABBITMQ_CODEGEN))
print('codegen-path: %s' % CODEGEN_PATH)
sys.path.append(CODEGEN_PATH)

import amqp_codegen
import re

DRIVER_METHODS = {
    "Exchange.Bind": ["Exchange.BindOk"],
    "Exchange.Unbind": ["Exchange.UnbindOk"],
    "Exchange.Declare": ["Exchange.DeclareOk"],
    "Exchange.Delete": ["Exchange.DeleteOk"],
    "Queue.Declare": ["Queue.DeclareOk"],
    "Queue.Bind": ["Queue.BindOk"],
    "Queue.Purge": ["Queue.PurgeOk"],
    "Queue.Delete": ["Queue.DeleteOk"],
    "Queue.Unbind": ["Queue.UnbindOk"],
    "Basic.Qos": ["Basic.QosOk"],
    "Basic.Get": ["Basic.GetOk", "Basic.GetEmpty"],
    "Basic.Ack": [],
    "Basic.Reject": [],
    "Basic.Recover": ["Basic.RecoverOk"],
    "Basic.RecoverAsync": [],
    "Tx.Select": ["Tx.SelectOk"],
    "Tx.Commit": ["Tx.CommitOk"],
    "Tx.Rollback": ["Tx.RollbackOk"]
}


def fieldvalue(v):
    if isinstance(v, unicode):
        return repr(v.encode('ascii'))
    else:
        return repr(v)


def normalize_separators(s):
    s = s.replace('-', '_')
    s = s.replace(' ', '_')
    return s


def pyize(s):
    s = normalize_separators(s)
    if s in ('global', 'class'):
        s += '_'
    return s


def camel(s):
    return normalize_separators(s).title().replace('_', '')


amqp_codegen.AmqpMethod.structName = lambda m: camel(
    m.klass.name) + '.' + camel(m.name)
amqp_codegen.AmqpClass.structName = lambda c: camel(c.name) + "Properties"


def constantName(s):
    return '_'.join(re.split('[- ]', s.upper()))


def flagName(c, f):
    if c:
        return c.structName() + '.' + constantName('flag_' + f.name)
    else:
        return constantName('flag_' + f.name)


def generate(specPath):
    spec = amqp_codegen.AmqpSpec(specPath)

    def genSingleDecode(prefix, cLvalue, unresolved_domain):
        type = spec.resolveDomain(unresolved_domain)
        if type == 'shortstr':
            print(prefix + "%s, offset = data.decode_short_string(encoded, offset)" %
                  cLvalue)
        elif type == 'longstr':
            print(prefix + "length = struct.unpack_from('>I', encoded, offset)[0]")
            print(prefix + "offset += 4")
            print(prefix + "%s = encoded[offset:offset + length]" % cLvalue)
            print(prefix + "try:")
            print(prefix + "    %s = str(%s)" % (cLvalue, cLvalue))
            print(prefix + "except UnicodeEncodeError:")
            print(prefix + "    pass")
            print(prefix + "offset += length")
        elif type == 'octet':
            print(prefix + "%s = struct.unpack_from('B', encoded, offset)[0]" %
                  cLvalue)
            print(prefix + "offset += 1")
        elif type == 'short':
            print(prefix + "%s = struct.unpack_from('>H', encoded, offset)[0]" %
                  cLvalue)
            print(prefix + "offset += 2")
        elif type == 'long':
            print(prefix + "%s = struct.unpack_from('>I', encoded, offset)[0]" %
                  cLvalue)
            print(prefix + "offset += 4")
        elif type == 'longlong':
            print(prefix + "%s = struct.unpack_from('>Q', encoded, offset)[0]" %
                  cLvalue)
            print(prefix + "offset += 8")
        elif type == 'timestamp':
            print(prefix + "%s = struct.unpack_from('>Q', encoded, offset)[0]" %
                  cLvalue)
            print(prefix + "offset += 8")
        elif type == 'bit':
            raise Exception("Can't decode bit in genSingleDecode")
        elif type == 'table':
            # Emit the table-decode line; do not wrap it in Exception()
            print(prefix + "(%s, offset) = data.decode_table(encoded, offset)" %
                  cLvalue)
        else:
            raise Exception("Illegal domain in genSingleDecode", type)

    def genSingleEncode(prefix, cValue, unresolved_domain):
        type = spec.resolveDomain(unresolved_domain)
        if type == 'shortstr':
            print(prefix +
                  "assert isinstance(%s, str_or_bytes),\\\n%s 'A non-string value was supplied for %s'"
                  % (cValue, prefix, cValue))
            print(prefix + "data.encode_short_string(pieces, %s)" % cValue)
        elif type == 'longstr':
            print(prefix +
                  "assert isinstance(%s, str_or_bytes),\\\n%s 'A non-string value was supplied for %s'"
                  % (cValue, prefix, cValue))
            print(prefix +
                  "value = %s.encode('utf-8') if isinstance(%s, unicode_type) else %s"
                  % (cValue, cValue, cValue))
            print(prefix + "pieces.append(struct.pack('>I', len(value)))")
            print(prefix + "pieces.append(value)")
        elif type == 'octet':
            print(prefix + "pieces.append(struct.pack('B', %s))" % cValue)
        elif type == 'short':
            print(prefix + "pieces.append(struct.pack('>H', %s))" % cValue)
        elif type == 'long':
            print(prefix + "pieces.append(struct.pack('>I', %s))" % cValue)
        elif type == 'longlong':
            print(prefix + "pieces.append(struct.pack('>Q', %s))" % cValue)
        elif type == 'timestamp':
            print(prefix + "pieces.append(struct.pack('>Q', %s))" % cValue)
        elif type == 'bit':
            raise Exception("Can't encode bit in genSingleEncode")
        elif type == 'table':
            # Emit the table-encode line; do not wrap it in Exception()
            print(prefix + "data.encode_table(pieces, %s)" % cValue)
        else:
            raise Exception("Illegal domain in genSingleEncode", type)

    def genDecodeMethodFields(m):
        print("        def decode(self, encoded, offset=0):")
        bitindex = None
        for f in m.arguments:
            if spec.resolveDomain(f.domain) == 'bit':
                if bitindex is None:
                    bitindex = 0
                if bitindex >= 8:
                    bitindex = 0
                if not bitindex:
                    print("            bit_buffer = struct.unpack_from('B', encoded, offset)[0]")
                    print("            offset += 1")
                print("            self.%s = (bit_buffer & (1 << %d)) != 0" %
                      (pyize(f.name), bitindex))
                bitindex += 1
            else:
                bitindex = None
                genSingleDecode("            ", "self.%s" % (pyize(f.name),),
                                f.domain)
        print("            return self")
        print('')

    def genDecodeProperties(c):
        print("    def decode(self, encoded, offset=0):")
        print("        flags = 0")
        print("        flagword_index = 0")
        print("        while True:")
        print("            partial_flags = struct.unpack_from('>H', encoded, offset)[0]")
        print("            offset += 2")
        print("            flags = flags | (partial_flags << (flagword_index * 16))")
        print("            if not (partial_flags & 1):")
        print("                break")
        print("            flagword_index += 1")
        for f in c.fields:
            if spec.resolveDomain(f.domain) == 'bit':
                print("        self.%s = (flags & %s) != 0" %
                      (pyize(f.name), flagName(c, f)))
            else:
                print("        if flags & %s:" % (flagName(c, f),))
                genSingleDecode("            ", "self.%s" % (pyize(f.name),),
                                f.domain)
                print("        else:")
                print("            self.%s = None" % (pyize(f.name),))
        print("        return self")
        print('')

    def genEncodeMethodFields(m):
        print("        def encode(self):")
        print("            pieces = list()")
        bitindex = None

        def finishBits():
            if bitindex is not None:
                print("            pieces.append(struct.pack('B', bit_buffer))")

        for f in m.arguments:
            if spec.resolveDomain(f.domain) == 'bit':
                if bitindex is None:
                    bitindex = 0
                    print("            bit_buffer = 0")
                if bitindex >= 8:
                    finishBits()
                    print("            bit_buffer = 0")
                    bitindex = 0
                print("            if self.%s:" % pyize(f.name))
                print("                bit_buffer = bit_buffer | (1 << %d)" %
                      bitindex)
                bitindex += 1
            else:
                finishBits()
                bitindex = None
                genSingleEncode("            ", "self.%s" % (pyize(f.name),),
                                f.domain)
        finishBits()
        print("            return pieces")
        print('')

    def genEncodeProperties(c):
        print("    def encode(self):")
        print("        pieces = list()")
        print("        flags = 0")
        for f in c.fields:
            if spec.resolveDomain(f.domain) == 'bit':
                print("        if self.%s: flags = flags | %s" %
                      (pyize(f.name), flagName(c, f)))
            else:
                print("        if self.%s is not None:" % (pyize(f.name),))
                print("            flags = flags | %s" % (flagName(c, f),))
                genSingleEncode("            ", "self.%s" % (pyize(f.name),),
                                f.domain)
        print("        flag_pieces = list()")
        print("        while True:")
        print("            remainder = flags >> 16")
        print("            partial_flags = flags & 0xFFFE")
        print("            if remainder != 0:")
        print("                partial_flags |= 1")
        print("            flag_pieces.append(struct.pack('>H', partial_flags))")
        print("            flags = remainder")
        print("            if not flags:")
        print("                break")
        print("        return flag_pieces + pieces")
        print('')

    def fieldDeclList(fields):
        return ''.join([", %s=%s" % (pyize(f.name), fieldvalue(f.defaultvalue))
                        for f in fields])

    def fieldInitList(prefix, fields):
        if fields:
            return ''.join(["%sself.%s = %s\n" % (prefix, pyize(f.name),
                                                  pyize(f.name))
                            for f in fields])
        else:
            return '%spass\n' % (prefix,)

    print("""# ***** BEGIN LICENSE BLOCK *****
#
# For copyright and licensing please refer to COPYING.
#
# ***** END LICENSE BLOCK *****

# NOTE: Autogenerated code by codegen.py, do not edit

import struct
from pika import amqp_object
from pika import data
from pika.compat import str_or_bytes, unicode_type

str = bytes
""")
    print("PROTOCOL_VERSION = (%d, %d, %d)" % (spec.major, spec.minor,
                                               spec.revision))
    print("PORT = %d" % spec.port)
    print('')

    # Append some constants that aren't in the spec json file
    spec.constants.append(('FRAME_MAX_SIZE', 131072, ''))
    spec.constants.append(('FRAME_HEADER_SIZE', 7, ''))
    spec.constants.append(('FRAME_END_SIZE', 1, ''))

    constants = {}
    for c, v, cls in spec.constants:
        constants[constantName(c)] = v

    for key in sorted(constants.keys()):
        print("%s = %s" % (key, constants[key]))
    print('')

    for c in spec.allClasses():
        print('')
        print('class %s(amqp_object.Class):' % (camel(c.name),))
        print('')
        print("    INDEX = 0x%.04X  # %d" % (c.index, c.index))
        print("    NAME = %s" % (fieldvalue(camel(c.name)),))
        print('')

        for m in c.allMethods():
            print('    class %s(amqp_object.Method):' % (camel(m.name),))
            print('')
            methodid = m.klass.index << 16 | m.index
            print("        INDEX = 0x%.08X  # %d, %d; %d" %
                  (methodid, m.klass.index, m.index, methodid))
            print("        NAME = %s" % (fieldvalue(m.structName()),))
            print('')
            print("        def __init__(self%s):" % (fieldDeclList(m.arguments),))
            print(fieldInitList('            ', m.arguments))
            print("        @property")
            print("        def synchronous(self):")
            print("            return %s" % m.isSynchronous)
            print('')
            genDecodeMethodFields(m)
            genEncodeMethodFields(m)

    for c in spec.allClasses():
        if c.fields:
            print('')
            print('class %s(amqp_object.Properties):' % (c.structName(),))
            print('')
            print("    CLASS = %s" % (camel(c.name),))
            print("    INDEX = 0x%.04X  # %d" % (c.index, c.index))
            print("    NAME = %s" % (fieldvalue(c.structName()),))
            print('')

            index = 0
            if c.fields:
                for f in c.fields:
                    if index % 16 == 15:
                        index += 1
                    shortnum = index / 16
                    partialindex = 15 - (index % 16)
                    bitindex = shortnum * 16 + partialindex
                    print('    %s = (1 << %d)' % (flagName(None, f), bitindex))
                    index += 1
                print('')

            print("    def __init__(self%s):" % (fieldDeclList(c.fields),))
            print(fieldInitList('        ', c.fields))
            genDecodeProperties(c)
            genEncodeProperties(c)

    print("methods = {")
    print(',\n'.join(["    0x%08X: %s" % (m.klass.index << 16 | m.index,
                                          m.structName())
                      for m in spec.allMethods()]))
    print("}")
    print('')

    print("props = {")
    print(',\n'.join(["    0x%04X: %s" % (c.index, c.structName())
                      for c in spec.allClasses()
                      if c.fields]))
    print("}")
    print('')
    print('')

    print("def has_content(methodNumber):")
    print('    return methodNumber in (')
    for m in spec.allMethods():
        if m.hasContent:
            print('        %s.INDEX,' % m.structName())
    print('    )')


if __name__ == "__main__":
    with open(PIKA_SPEC, 'w') as handle:
        sys.stdout = handle
        generate(['%s/amqp-rabbitmq-0.9.1.json' % CODEGEN_PATH])
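The `genEncodeMethodFields`/`genDecodeMethodFields` functions in codegen.py above emit code that packs consecutive AMQP `bit` arguments into shared octets, up to eight per byte, least-significant bit first. The scheme they generate can be sketched as a standalone round trip (the helper names `pack_bits`/`unpack_bits` are illustrative, not part of pika):

```python
import struct


def pack_bits(values):
    """Pack a sequence of booleans into octets, eight per byte, LSB first,
    mirroring the code genEncodeMethodFields emits."""
    pieces = []
    bit_buffer = 0
    bitindex = 0
    for value in values:
        if bitindex >= 8:
            # Current octet is full; flush it and start a new one.
            pieces.append(struct.pack('B', bit_buffer))
            bit_buffer = 0
            bitindex = 0
        if value:
            bit_buffer |= 1 << bitindex
        bitindex += 1
    pieces.append(struct.pack('B', bit_buffer))
    return b''.join(pieces)


def unpack_bits(encoded, count, offset=0):
    """Inverse of pack_bits, mirroring genDecodeMethodFields."""
    values = []
    bit_buffer = 0
    bitindex = 8  # force a read of the first octet
    for _ in range(count):
        if bitindex >= 8:
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            bitindex = 0
        values.append((bit_buffer & (1 << bitindex)) != 0)
        bitindex += 1
    return values, offset
```

For example, three bits `[True, False, True]` pack into the single octet `0b101`, and a ninth bit spills into a second octet, which is why the generated encoder flushes `bit_buffer` whenever `bitindex >= 8`.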
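Similarly, `genEncodeProperties`/`genDecodeProperties` emit AMQP's property-flags scheme: 15 usable flag bits per 16-bit word, with bit 0 of each word reserved as a continuation marker that signals another flags word follows (which is also why the flag-constant loop skips bit positions that are 0 mod 16). A minimal standalone sketch of that wire format, with illustrative helper names:

```python
import struct


def encode_flags(flags):
    """Split an integer flags value into big-endian 16-bit words; bit 0 of
    each word is set when another word follows (the continuation marker)."""
    pieces = []
    while True:
        remainder = flags >> 16
        partial_flags = flags & 0xFFFE  # mask off the continuation bit
        if remainder != 0:
            partial_flags |= 1
        pieces.append(struct.pack('>H', partial_flags))
        flags = remainder
        if not flags:
            break
    return b''.join(pieces)


def decode_flags(encoded, offset=0):
    """Reassemble the integer flags value from continuation-chained words."""
    flags = 0
    flagword_index = 0
    while True:
        partial_flags = struct.unpack_from('>H', encoded, offset)[0]
        offset += 2
        flags |= partial_flags << (flagword_index * 16)
        if not (partial_flags & 1):
            break
        flagword_index += 1
    return flags, offset
```

Because real flag constants never occupy bit 0 of a word, a decoded value masked against the original flags always reproduces them, even when the encoding spans multiple words.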
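The IOLoop timer tests earlier in this chunk verify one property: timeouts fire in deadline order, not in the order they were registered, and that holds even when several deadlines have already passed. That ordering is conventionally obtained with a min-heap keyed on deadline; the sketch below illustrates the property the tests assert, not pika's actual IOLoop implementation:

```python
import heapq


class TimerQueue(object):
    """Minimal deadline-ordered timer queue (illustrative only)."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so heapq never compares callbacks

    def add_timeout(self, deadline, callback):
        self._counter += 1
        heapq.heappush(self._heap, (deadline, self._counter, callback))

    def process_timeouts(self, now):
        """Run every callback whose deadline has passed, in deadline order,
        and return their results."""
        fired = []
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            fired.append(callback())
        return fired
```

Registering five timers in reverse-deadline order and then processing them, as IOLoopTimerTestSelect and IOLoopSleepTimerTestSelect do, yields the callbacks in ascending deadline order regardless of insertion order.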