Changes
=======

py-amqp is a fork of amqplib used by Kombu containing additional
features and improvements. The previous amqplib changelog is here:
http://code.google.com/p/py-amqplib/source/browse/CHANGES

.. _version-5.3.1:

5.3.1
=====
:release-date: 2024-11-12
:release-by: Tomer Nosrati

- Fixed readthedocs (#443)
- Prepare for release: 5.3.1 (#444)

.. _version-5.3.0:

5.3.0
=====
:release-date: 2024-11-12
:release-by: Tomer Nosrati

- Hard-code requests version because of the bug in docker-py (#432)
- Fix AbstractTransport repr socket error (#361) (#431)
- blacksmith.sh: Migrate workflows to Blacksmith (#436)
- Added Python 3.13 to CI (#440)
- Prepare for release: 5.3.0 (#441)

.. _version-5.2.0:

5.2.0
=====
:release-date: 2023-11-06 10:55 A.M. UTC+6:00
:release-by: Asif Saif Uddin

- Added Python 3.12 and dropped Python 3.7 (#423).
- Test vine 5.1.0 (#424).
- Set an explicit timeout on the SSL handshake to prevent hangs.
- Add MessageNacked to recoverable errors.
- Send heartbeat frames more often.

.. _version-5.1.1:

5.1.1
=====
:release-date: 2022-04-17 12:45 P.M. UTC+6:00
:release-by: Asif Saif Uddin

- Use AF_UNSPEC for name resolution (#389).

.. _version-5.1.0:

5.1.0
=====
:release-date: 2022-03-06 10:05 A.M. UTC+6:00
:release-by: Asif Saif Uddin

- Improve performance of _get_free_channel_id, fix channel max bug (#385).
- Document memoryview usage, minor frame_writer.write_frame refactor (#384).
- Start dropping Python 3.6 (#387).
- Added experimental __slots__ to some classes (#368).
- Relaxed vine version for upcoming release.
- Upgraded to pytest 7 (#388).
.. _version-5.0.9:

5.0.9
=====
:release-date: 2021-12-20 11:00 A.M. UTC+6:00
:release-by: Asif Saif Uddin

- Append to _used_channel_ids in _used_channel_ids

.. _version-5.0.8:

5.0.8
=====
:release-date: 2021-12-19 11:15 A.M. UTC+6:00
:release-by: Asif Saif Uddin

- Reduce memory usage of Connection (#377)
- Add additional error handling around code where an OSError may be
  raised on failed connections. Fixes (#378)

.. _version-5.0.7:

5.0.7
=====
:release-date: 2021-12-13 15:45 UTC+6:00
:release-by: Asif Saif Uddin

- Remove dependency on case
- Bugfix: socket was not closed after server disconnect

.. _version-5.0.6:

5.0.6
=====
:release-date: 2021-04-01 10:45 A.M. UTC+6:00
:release-by: Asif Saif Uddin

- Change the order in which context.check_hostname and context.verify_mode
  get set in SSLTransport._wrap_socket_sni. Fixes a bug introduced in 5.0.3
  where setting context.verify_mode = ssl.CERT_NONE would raise
  "ValueError: Cannot set verify_mode to CERT_NONE when check_hostname is
  enabled." Setting context.check_hostname prior to setting
  context.verify_mode resolves the issue.
- Remove TCP_USER_TIMEOUT option for Solaris (#355)
- Pass long_description to setup() (#353)
- Fix for tox-docker 2.0
- Moved to GitHub Actions CI (#359)

.. _version-5.0.5:

5.0.5
=====
:release-date: 2021-01-28 4:30 P.M. UTC+6:00
:release-by: Asif Saif Uddin

- Removed mistakenly introduced code which was causing import errors

.. _version-5.0.4:

5.0.4
=====
:release-date: 2021-01-28 2:30 P.M. UTC+6:00
:release-by: Asif Saif Uddin

- Add missing load_default_certs() call to fix a regression in the v5.0.3
  release. (#350)

.. _version-5.0.3:

5.0.3
=====
:release-date: 2021-01-19 9:00 P.M. UTC+6:00
:release-by: Asif Saif Uddin

- Change the default value of ssl_version to None. When not set, the proper
  value between ssl.PROTOCOL_TLS_CLIENT and ssl.PROTOCOL_TLS_SERVER will be
  selected based on the server_side param in order to create a TLS Context
  object with better defaults that fit the desired connection side.
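The check_hostname/verify_mode ordering constraint described above can be reproduced with the stdlib ``ssl`` module alone; a minimal sketch (illustration only, not py-amqp code):

```python
import ssl

# PROTOCOL_TLS_CLIENT enables check_hostname and CERT_REQUIRED by default.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

try:
    # Wrong order: verify_mode cannot be relaxed to CERT_NONE while
    # check_hostname is still enabled; the stdlib raises ValueError.
    ctx.verify_mode = ssl.CERT_NONE
except ValueError as exc:
    print("refused:", exc)

# Correct order: disable hostname checking first, then relax verify_mode.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```

This is why setting ``context.check_hostname`` before ``context.verify_mode`` resolves the 5.0.3 regression.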
- Change the default value of cert_reqs to None. The default value of
  ctx.verify_mode is ssl.CERT_NONE, but when ssl.PROTOCOL_TLS_CLIENT is
  used, ctx.verify_mode defaults to ssl.CERT_REQUIRED.
- Fix context.check_hostname logic. Checking the hostname depends on having
  support for the SNI TLS extension and being provided with a
  server_hostname value. Another important thing to mention is that enabling
  hostname checking automatically sets verify_mode from ssl.CERT_NONE to
  ssl.CERT_REQUIRED in the stdlib ssl module, and it cannot be set back to
  ssl.CERT_NONE as long as hostname checking is enabled.
- Refactor the SNI tests to test one thing at a time and remove some tests
  that were repeated over and over.

.. _version-5.0.2:

5.0.2
=====
:release-date: 2020-11-08 6:50 P.M. UTC+3:00
:release-by: Omer Katz

- Wheels are no longer universal.

  Contributed by **Omer Katz**

- Added debug representation to Connection and *Transport classes

  Contributed by **Matus Valo**

- Reintroduce the ca_certs and ciphers parameters of
  SSLTransport._wrap_socket_sni().

  This fixes an issue introduced in commit: 53d6777

  Contributed by **Matus Valo**

- Fix infinite wait when using confirm_publish

  Contributed by **Omer Katz** & **RezaSi**

.. _version-5.0.1:

5.0.1
=====
:release-date: 2020-09-06 6:10 P.M. UTC+3:00
:release-by: Omer Katz

- Require vine 5.0.0.

  Contributed by **Omer Katz**

.. _version-5.0.0:

5.0.0
=====
:release-date: 2020-09-03 3:20 P.M. UTC+3:00
:release-by: Omer Katz

- Stop using the deprecated method ssl.wrap_socket.

  Contributed by **Hervé Beraud**

.. _version-5.0.0b1:

5.0.0b1
=======
:release-date: 2020-09-01 6:20 P.M. UTC+3:00
:release-by: Omer Katz

- Dropped Python 3.5 support.

  Contributed by **Omer Katz**

- Removed additional compatibility code.

  Contributed by **Omer Katz**

.. _version-5.0.0a1:

5.0.0a1
=======
:release-date: 2019-04-01 4:30 P.M. UTC+3:00
:release-by: Omer Katz

- Dropped Python 2.x support.

  Contributed by **Omer Katz**

- Dropped Python 3.4 support.
  Contributed by **Omer Katz**

- Depend on :pypi:`vine` 5.0.0a1.

  Contributed by **Omer Katz**

Code Cleanups & Improvements:

- **Omer Katz**

.. _version-2.6.1:

2.6.1
=====
:release-date: 2020-07-31 10.30 P.M. UTC+6:00
:release-by: Asif Saif Uddin

- Fix buffer overflow in frame_writer after frame_max is increased.
  `frame_writer` allocates a `bytearray` on initialization with a length
  based on the `connection.frame_max` value. If `connection.frame_max` is
  changed to a larger value, this caused an error like
  `pack_into requires a buffer of at least 408736 bytes`.

.. _version-2.6.0:

2.6.0
=====
:release-date: 2020-06-01 12.00 P.M. UTC+6:00
:release-by: Asif Saif Uddin

- Implement speedups in Cython (#311)
- Updated some tests & code improvements
- Separate logger for the Connection.heartbeat_tick method
- Cython generic content (#315)
- Improve documentation of the a_global parameter of the basic_qos() method.
- Fix saving of the partial read buffer on Windows during socket timeout.
  (#321)
- Fix deserialization of long string field values that are not utf-8.
- Added simple cythonization of abstract_channel.py
- Speedups of serialization.py are more restrictive

.. _version-2.5.2:

2.5.2
=====
:release-date: 2019-09-30 19.00 UTC+6:00
:release-by: Asif Saif Uddin

- Fixed a channel issue against a connection already closed
- Updated some tests & code improvements

.. _version-2.5.1:

2.5.1
=====
:release-date: 2019-08-14 22.00 UTC+6:00
:release-by: Asif Saif Uddin

- Ignore all methods except Close and Close-OK when channel/connection is
  closing
- Fix faulty ssl SNI initiation parameters (#283)
- Undeprecate the auto_delete flag for exchanges. (#287)
- Improved tests and testing environments

.. _version-2.5.0:

2.5.0
=====
:release-date: 2019-05-30 17.30 UTC+6:00
:release-by: Asif Saif Uddin

- Drop Python 3.4
- Add new platform
- Numerous bug fixes
.. _version-2.4.2:

2.4.2
=====
:release-date: 2019-03-03 10:45 P.M. UTC+2:00
:release-by: Omer Katz

- Added support for the Cygwin platform

  Contributed by **Matus Valo**

- Correct offset incrementation when parsing bitmaps.

  Contributed by **Allan Simon** & **Omer Katz**

- Consequent bitmaps are now parsed correctly.

  Previously the bit counter was reset with every bit. We now reset it
  once per 8 bits, when we consume the next byte.

  Contributed by **Omer Katz**

Code Cleanups & Improvements:

- **Patrick Cloke**
- **Matus Valo**
- **Jeremiah Cooper**
- **Omer Katz**

Test Coverage & CI Improvements:

- **Matus Valo**
- **Omer Katz**
- **Jeremiah Cooper**

.. _version-2.4.1:

2.4.1
=====
:release-date: 2018-04-02 9:00 A.M. UTC+2
:release-by: Omer Katz

- To avoid breaking the API, basic_consume() now returns the consumer tag
  instead of a tuple when nowait is True.

  Fix contributed by **Matus Valo**

- Fix crash in basic_publish when the broker does not support the
  connection.blocked capability.

  Fix contributed by **Matus Valo**

- read_frame() is now Python 3 compatible for large payloads.

  Fix contributed by **Antonio Ojea**

- Support float read_timeout/write_timeout.

  Fix contributed by **:github_user:`cadl`**

- Always treat SSLError timeouts as socket timeouts.

  Fix contributed by **Dirk Mueller** and **Antonio Ojea**

- Treat EWOULDBLOCK as timeout.

  This fixes a regression on Windows from 2.4.0.

  Fix contributed by **Lucian Petrut**

Test Coverage & CI Improvements:

- **Matus Valo**
- **Antonio Ojea**

.. _version-2.4.0:

2.4.0
=====
:release-date: 2018-13-01 1:00 P.M. UTC+2
:release-by: Omer Katz

- Fix inconsistent frame_handler return value.

  The function returned by frame_handler is meant to return True once the
  complete message is received and the callback is called, False otherwise.
  This fixes the return value for messages with a body split across
  multiple frames, and for heartbeat frames.
  Fix contributed by **:github_user:`evanunderscore`**

- Don't default content_encoding to utf-8 for bytes.

  This is not an acceptable default as the content may not be valid utf-8,
  and even if it is, the producer likely does not expect the message to be
  decoded by the consumer.

  Fix contributed by **:github_user:`evanunderscore`**

- Fix encoding of messages with multibyte characters.

  Body length was previously calculated using string length, which may be
  less than the length of the encoded body when it contains multibyte
  sequences. This caused the body of the frame to be truncated.

  Fix contributed by **:github_user:`evanunderscore`**

- Respect content_encoding when encoding messages.

  Previously the content_encoding was ignored and messages were always
  encoded as utf-8. This caused messages to be incorrectly decoded when
  content_encoding was properly respected when decoding.

  Fix contributed by **:github_user:`evanunderscore`**

- Fix the AMQP protocol header for AMQP 0-9-1.

  Previously it was set to a different value for unknown reasons.

  Fix contributed by **Carl Hörberg**

- Add support for Python 3.7.

  Change direct SSLSocket instantiation to wrap_socket.
  Added Python 3.7 to CI.

  Fix contributed by **Omer Katz** and **:github_user:`avborhanian`**

- Add support for field type "x" (byte array).

  Fix contributed by **Davis Kirkendall**

- If there is an exception raised on Connection.connect or
  Connection.close, ensure that the underlying transport socket is closed.
  Adjust the exception message on connection errors as well.

  Fix contributed by **:github_user:`tomc797`**

- TCP_USER_TIMEOUT has to be excluded from KNOWN_TCP_OPTS on BSD
  platforms.

  Fix contributed by **George Tantiras**

- Handle negative acknowledgments.

  Fix contributed by **Matus Valo**

- Added integration tests.

  Fix contributed by **Matus Valo**

- Fix basic_consume() with no consumer_tag provided.

  Fix contributed by **Matus Valo**

- Improved empty AMQPError string representation.
  Fix contributed by **Matus Valo**

- Drain events before publish.

  This is needed to capture out of memory messages for clients that only
  publish; otherwise on_blocked is never called.

  Fix contributed by **Jelte Fennema** and **Matus Valo**

- Don't revive the channel when the connection is closing.

  When the connection is closing, don't raise an error when a
  Channel.Close method is received.

  Fix contributed by **Matus Valo**

.. _version-2.3.2:

2.3.2
=====
:release-date: 2018-05-29 15:30 UTC+3
:release-by: Omer Katz

- Fix a regression that occurs when running amqp on OSX.

  TCP_USER_TIMEOUT is not available when running on OSX. We now remove it
  from the set of known TCP options.

  Fix contributed by **Ofer Horowitz**

.. _version-2.3.1:

2.3.1
=====
:release-date: 2018-05-28 16:30 UTC+3
:release-by: Omer Katz

- Fix a regression that occurs when running amqp under Python 2.7.

  #182 mistakenly replaced a type check of unicode with string_t, which is
  str in Python 2.7. text_t should have been used instead. This is now
  fixed and the tests have been adjusted to ensure this never regresses
  again.

  Fix contributed by **Omer Katz**

.. _version-2.3.0:

2.3.0
=====
:release-date: 2018-05-27 16:30 UTC+3
:release-by: Omer Katz

- Cleanup TCP configurations across platforms and unify defaults.

  Fix contributed by **Dan Chowdhury**

- Fix for TypeError when setting socket options.

  Fix contributed by **Matthias Erll**

- Ensure that all call sites for decoding bytes to str allow surrogates,
  as the encoding mechanism now supports them.

  Fix contributed by **Stephen Hatch**

- Don't send an AAAA DNS request when the domain resolved to an IPv4
  address.

  Fix contributed by **Ihar Hrachyshka & Omer Katz**

- Support for EXTERNAL authentication and a specific login_method.

  Fix contributed by **Matthias Erll**

- If the old python-gssapi library is installed, the gssapi module will be
  available. We now ensure that we only use the new gssapi library.
  Fix contributed by **Jacopo Notarstefano**

Code Cleanups & Test Coverage:

- :github_user:`eric-eric-eric`
- **Omer Katz**
- **Jon Dufresne**
- **Matthias Urlichs**

.. _version-2.2.2:

2.2.2
=====
:release-date: 2017-09-14 09:00 A.M. UTC+2
:release-by: Omer Katz

- Sending empty messages no longer hangs. Instead, an empty message is
  sent correctly. (addresses #151)

  Fix contributed by **Christian Blades**

- Fixed compatibility issues in UTF-8 encoding behavior between Py2/Py3
  (#164)

  Fix contributed by **Tyler James Harden**

.. _version-2.2.1:

2.2.1
=====
:release-date: 2017-07-14 09:00 A.M. UTC+2
:release-by: Omer Katz

- Fix implicit conversion from bytes to string on the connection object.
  (Issue #155)

  This issue caused Celery to crash on connection to RabbitMQ.

  Fix contributed by **Omer Katz**

.. _version-2.2.0:

2.2.0
=====
:release-date: 2017-07-12 10:00 A.M. UTC+2
:release-by: Ask Solem

- Fix random delays in task execution.

  This is a bug that caused performance issues due to polling timeouts
  that occur when receiving incomplete AMQP frames.
  (Issues #3978 #3737 #3814)

  Fix contributed by **Robert Kopaczewski**

- Calling ``conn.collect()`` multiple times will no longer raise an
  ``AttributeError`` when no channels exist.

  Fix contributed by **Gord Chung**

- Fix compatibility code for Python 2.7.6.

  Fix contributed by **Jonathan Schuff**

- When running on Windows, py-amqp will no longer use the unsupported TCP
  option TCP_MAXSEG.

  Fix contributed by **Tony Breeds**

- Added support for setting the SNI hostname header.

  The SSL protocol version is now set to SSLv23.

  Contributed by **Dhananjay Sathe**

- Authentication mechanisms were refactored to be more modular. GSSAPI
  authentication is now supported.

  Contributed by **Alexander Dutton**

- Do not reconnect on collect.

  Fix contributed by **Gord Chung**

.. _version-2.1.4:

2.1.4
=====
:release-date: 2016-12-14 03:40 P.M. PST
:release-by: Ask Solem

- Removes byte string comparison warnings when running under ``python -b``.
  Fix contributed by **Jon Dufresne**.

- Linux version parsing broke when the version included a '+' character
  (Issue #119).

- Now sets default TCP settings for platforms that support them
  (e.g. Linux):

  +----------------------+---------------+
  | Constant             | Value         |
  +======================+===============+
  | ``TCP_KEEPIDLE``     | ``60``        |
  +----------------------+---------------+
  | ``TCP_KEEPINTVL``    | ``10``        |
  +----------------------+---------------+
  | ``TCP_KEEPCNT``      | ``9``         |
  +----------------------+---------------+
  | ``TCP_USER_TIMEOUT`` | ``1000`` (1s) |
  +----------------------+---------------+

  This helps detect a closed socket earlier, which is very important in
  failover and load balancing scenarios.

.. _version-2.1.3:

2.1.3
=====
:release-date: 2016-12-07 06:00 P.M. PST
:release-by: Ask Solem

- Fixes compatibility with Python 2.7.5 and below (Issue #107).

.. _version-2.1.2:

2.1.2
=====
:release-date: 2016-12-07 02:00 P.M. PST

- Linux: Now sets the :data:`~socket.TCP_USER_TIMEOUT` flag if available
  for better detection of failed connections.

  Contributed by **Jelte Fennema**.

  The timeout is set to the ``connect_timeout`` value by default, but can
  also be specified by using the ``socket_settings`` argument to
  :class:`~amqp.Connection`:

  .. code-block:: python

      from amqp import Connection
      from amqp.platform import TCP_USER_TIMEOUT

      conn = Connection(socket_settings={
          TCP_USER_TIMEOUT: int(60 * 1000),  # one minute in ms.
      })

  When using :pypi:`Kombu` this can be specified as part of the
  ``transport_options``:

  .. code-block:: python

      from amqp.platform import TCP_USER_TIMEOUT
      from kombu import Connection

      conn = Connection(transport_options={
          'socket_settings': {
              TCP_USER_TIMEOUT: int(60 * 1000),  # one minute in ms.
          },
      })

  Or when using :pypi:`Celery` it can be specified using the
  ``broker_transport_options`` setting:
  .. code-block:: python

      from amqp.platform import TCP_USER_TIMEOUT
      from celery import Celery

      app = Celery()
      app.conf.update(
          broker_transport_options={
              TCP_USER_TIMEOUT: int(60 * 1000),  # one minute in ms.
          }
      )

- Python compatibility: Fixed compatibility when using the python ``-b``
  flag.

  Fix contributed by Jon Dufresne.

.. _version-2.1.1:

2.1.1
=====
:release-date: 2016-10-13 06:36 P.M. PDT
:release-by: Ask Solem

- **Requirements**

  - Now depends on :ref:`Vine 1.1.3 `.

- Frame writer: Account for overhead when calculating frame size.

  The client would crash if the message was within a certain size.

- Fixed struct unicode problems (#108)

  * Standardize pack invocations on bytestrings.
  * Leave some literals as strings to enable interpolation.
  * Fix flake8 fail.

  Fix contributed by **Brendan Smithyman**.

.. _version-2.1.0:

2.1.0
=====
:release-date: 2016-09-07 04:23 P.M. PDT
:release-by: Ask Solem

- **Requirements**

  - Now depends on :ref:`Vine 1.1.2 `.

- Now licensed under the BSD license!

  Thanks to Barry Pederson for approving the license change, which
  unifies the license used across all projects in the Celery
  organization.

- Datetimes in method frame arguments are now handled properly.

- Fixed compatibility with Python <= 2.7.6.

- Frame_writer is no longer a generator, which should solve a rare
  "generator already executing" error (Issue #103).

.. _version-2.0.3:

2.0.3
=====
:release-date: 2016-07-11 08:00 P.M. PDT
:release-by: Ask Solem

- SSLTransport: Fixed crash "no attribute sslopts" when ``ssl=True``
  (Issue #100).

- Fixed incompatible argument spec for ``Connection.Close`` (Issue #45).

  This caused the RabbitMQ server to raise an exception (INTERNAL ERROR).

- Transport: No longer implements `__del__` to make sure gc can collect
  connections.

  It's the responsibility of the caller to close connections; this was
  simply a relic from the amqplib library.
.. _version-2.0.2:

2.0.2
=====
:release-date: 2016-06-10 5:40 P.M. PDT
:release-by: Ask Solem

- Python 3: Installation requirements ended up being a generator and
  crashed setup.py.

  Fix contributed by ChangBo Guo (gcb).

- Python <= 2.7.7: struct.pack arguments cannot be unicode.

  Fix contributed by Alan Justino and Xin Li.

- Python 3.4: Fixed use of `bytes % int`.

  Fix contributed by Alan Justino.

- Connection/Transport: Fixed handling of default port.

  Fix contributed by Quentin Pradet.

.. _version-2.0.1:

2.0.1
=====
:release-date: 2016-05-31 6:20 P.M. PDT
:release-by: Ask Solem

- Adds backward compatibility layer for the 1.4 API.

  Using the connection without calling ``.connect()`` first will now work,
  but a warning is emitted and the behavior is deprecated and will be
  removed in version 2.2.

- Fixes kombu 3.0/celery 3.1 compatibility (Issue #88).

  Fix contributed by Bas ten Berge.

- Fixed compatibility with Python 2.7.3 (Issue #85).

  Fix contributed by Bas ten Berge.

- Fixed bug where calling drain_events() with a timeout of 0 would
  actually block until a frame is received.

- Documentation moved to http://amqp.readthedocs.io (Issue #89).

  See https://blog.readthedocs.com/securing-subdomains/ for the reasoning
  behind this change.

  Fix contributed by Adam Chainz.

.. _version-2.0.0:

2.0.0
=====
:release-date: 2016-05-26 1:44 P.M. PDT
:release-by: Ask Solem

- No longer supports Python 2.6.

- You must now call Connection.connect() to establish the connection.

  The Connection constructor no longer has side effects, so you have to
  explicitly call connect first.

- Library rewritten to anticipate async changes.

- Connection now exposes underlying socket options.

  This change allows arbitrary TCP socket options to be set during the
  creation of the transport. Those values can be set by passing a
  dictionary where the key is the name of the parameter we want to set.
  The names of the keys are the ones reported above.

  Contributed by Andrea Rosa, Dallas Marlow and Rongze Zhu.
- Additional logging for heartbeats.

  Contributed by Davanum Srinivas and Dmitry Mescheryakov.

- SSL: Fixes issue with remote connection hanging.

  Fix contributed by Adrien Guinet.

- SSL: The ``ssl`` dict argument now supports the ``check_hostname`` key
  (Issue #63).

  Contributed by Vic Kumar.

- Contributions by:

  Adrien Guinet
  Andrea Rosa
  Artyom Koval
  Corey Farwell
  Craig Jellick
  Dallas Marlow
  Davanum Srinivas
  Federico Ficarelli
  Jared Lewis
  Rémy Greinhofer
  Rongze Zhu
  Yury Selivanov
  Vic Kumar
  Vladimir Bolshakov
  :github_user:`lezeroq`

.. _version-1.4.9:

1.4.9
=====
:release-date: 2016-01-08 5:50 P.M. PST
:release-by: Ask Solem

- Fixes compatibility with Linux/macOS instances where the ``ctypes``
  module does not exist.

  Fix contributed by Jared Lewis.

.. _version-1.4.8:

1.4.8
=====
:release-date: 2015-12-07 12:25 A.M.
:release-by: Ask Solem

- ``abstract_channel.wait`` now accepts a float `timeout` parameter
  expressed in seconds.

  Contributed by Goir.

.. _version-1.4.7:

1.4.7
=====
:release-date: 2015-10-02 05:30 P.M. PDT
:release-by: Ask Solem

- Fixed libSystem error on macOS 10.11 (El Capitan).

  Fix contributed by Eric Wang.

- ``channel.basic_publish`` now raises :exc:`amqp.exceptions.NotConfirmed`
  on ``basic.nack``.

- AMQP timestamps received are now converted from GMT instead of local
  time (Issue #67).

- Wheel package installation now supported by both Python 2 and Python 3.

  Fix contributed by Rémy Greinhofer.

.. _version-1.4.6:

1.4.6
=====
:release-date: 2014-08-11 06:00 P.M. UTC
:release-by: Ask Solem

- Now keeps the buffer when the socket times out.

  Fix contributed by Artyom Koval.

- Adds a ``Connection.Transport`` attribute that can be used to specify a
  different transport implementation.

  Contributed by Yury Selivanov.

.. _version-1.4.5:

1.4.5
=====
:release-date: 2014-04-15 09:00 P.M. UTC
:release-by: Ask Solem

- Can now deserialize more AMQP types.
  Now handles the types ``short string``, ``short short int``,
  ``short short unsigned int``, ``short int``, ``short unsigned int``,
  ``long unsigned int``, ``long long int``, ``long long unsigned int``
  and ``float``, which for some reason were missing, even in the original
  amqplib module.

- SSL: Workaround for Python SSL bug.

  A bug in the Python socket library causes ``ssl.read/write()`` on a
  closed socket to raise :exc:`AttributeError` instead of :exc:`IOError`.

  Fix contributed by Craig Jellick.

- ``Transport.__del__`` now handles errors occurring at late interpreter
  shutdown (Issue #36).

.. _version-1.4.4:

1.4.4
=====
:release-date: 2014-03-03 04:00 P.M. UTC
:release-by: Ask Solem

- The SSL transport was accidentally disconnected after a read timeout.

  Fix contributed by Craig Jellick.

.. _version-1.4.3:

1.4.3
=====
:release-date: 2014-02-09 03:00 P.M. UTC
:release-by: Ask Solem

- Fixed bug where more data was requested from the socket than was
  actually needed.

  Contributed by Ionel Cristian Mărieș.

.. _version-1.4.2:

1.4.2
=====
:release-date: 2014-01-23 05:00 P.M. UTC

- Heartbeat negotiation would use the heartbeat value from the server
  even if heartbeats were disabled (Issue #31).

.. _version-1.4.1:

1.4.1
=====
:release-date: 2014-01-14 09:30 P.M. UTC
:release-by: Ask Solem

- Fixed error occurring when heartbeats are disabled.

.. _version-1.4.0:

1.4.0
=====
:release-date: 2014-01-13 03:00 P.M. UTC
:release-by: Ask Solem

- Heartbeat implementation improved (Issue #6).

  The new heartbeat behavior is the same approach as taken by the RabbitMQ
  Java library. This also means that clients should preferably call the
  ``heartbeat_tick`` method more frequently (like every second) instead of
  using the old ``rate`` argument (which is now ignored).

  - The heartbeat interval is negotiated with the server.
  - Some delay is allowed if the heartbeat is late.
  - Monotonic time is used to keep track of the heartbeat instead of
    relying on the caller to call the checking function at the right
    time.

  Contributed by Dustin J. Mitchell.
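The monotonic-clock bookkeeping described in the 1.4.0 entry can be sketched as follows. This is a hypothetical illustration of the idea, not py-amqp's actual implementation; the ``HeartbeatMonitor`` class and its names are invented for the example:

```python
import time


class HeartbeatMonitor:
    """Tracks when the next heartbeat is due using a monotonic clock,
    so wall-clock adjustments cannot skew the schedule."""

    def __init__(self, interval):
        self.interval = interval
        self._last_sent = time.monotonic()

    def tick(self):
        """Call frequently (e.g. once per second); returns True when a
        heartbeat frame should be sent now."""
        now = time.monotonic()
        if now - self._last_sent >= self.interval:
            self._last_sent = now
            return True
        return False


monitor = HeartbeatMonitor(interval=0.05)
due_immediately = monitor.tick()   # just created: not due yet
time.sleep(0.06)
due_after_wait = monitor.tick()    # interval elapsed: due now
```

Because the caller merely polls ``tick()``, the schedule stays correct even if the calls are slightly irregular, which is the point of the 1.4.0 change.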
- NoneType is now supported in tables and arrays.

  Contributed by Dominik Fässler.

- SSLTransport: Now handles ``ENOENT``.

  Fix contributed by Adrien Guinet.

.. _version-1.3.3:

1.3.3
=====
:release-date: 2013-11-11 03:30 P.M. UTC
:release-by: Ask Solem

- SSLTransport: Now keeps the read buffer if an exception is raised
  (Issue #26).

  Fix contributed by Tommie Gannert.

.. _version-1.3.2:

1.3.2
=====
:release-date: 2013-10-29 02:00 P.M. UTC
:release-by: Ask Solem

- Message.channel is now a channel object (not the channel id).

- A bug in the previous version caused the socket to be flagged as
  disconnected at EAGAIN/EINTR.

.. _version-1.3.1:

1.3.1
=====
:release-date: 2013-10-24 04:00 P.M. UTC
:release-by: Ask Solem

- Now implements Connection.connected (Issue #22).

- Fixed bug where ``str(AMQPError)`` did not return a string.

.. _version-1.3.0:

1.3.0
=====
:release-date: 2013-09-04 02:39 P.M. UTC
:release-by: Ask Solem

- Now sets ``Message.channel`` on delivery (Issue #12)

  amqplib used to make the channel object available as
  ``Message.delivery_info['channel']``, but this was removed in py-amqp.
  librabbitmq sets ``Message.channel``, which is a more reasonable
  solution in our opinion as it keeps the delivery info intact.

- New option to wait for publish confirmations (Issue #3)

  There is now a new Connection ``confirm_publish`` option that will force
  any ``basic_publish`` call to wait for confirmation. Enabling publisher
  confirms like this degrades performance considerably, but it can be
  suitable for some applications and is now possible by configuration.

- ``queue_declare`` now returns a named tuple of type
  :class:`~amqp.protocol.basic_declare_ok_t`.

  Supported fields are: ``queue``, ``message_count``, and
  ``consumer_count``.

- Contents of ``Channel.returned_messages`` are now named tuples.

  Supported fields are: ``reply_code``, ``reply_text``, ``exchange``,
  ``routing_key``, and ``message``.

- Sockets are now set to close on exec using the ``FD_CLOEXEC`` flag.
  Currently this is only supported on platforms supporting this flag,
  which does not include Windows.

  Contributed by Tommie Gannert.

.. _version-1.2.1:

1.2.1
=====
:release-date: 2013-08-16 05:30 P.M. UTC
:release-by: Ask Solem

- Adds promise type: :meth:`amqp.utils.promise`

- Merges fixes from 1.0.x

.. _version-1.2.0:

1.2.0
=====
:release-date: 2012-11-12 04:00 P.M. UTC
:release-by: Ask Solem

- New exception hierarchy:

  - :class:`~amqp.AMQPError`

    - :class:`~amqp.ConnectionError`

      - :class:`~amqp.RecoverableConnectionError`

        - :class:`~amqp.ConsumerCancelled`
        - :class:`~amqp.ConnectionForced`
        - :class:`~amqp.ResourceError`

      - :class:`~IrrecoverableConnectionError`

        - :class:`~amqp.ChannelNotOpen`
        - :class:`~amqp.FrameError`
        - :class:`~amqp.FrameSyntaxError`
        - :class:`~amqp.InvalidCommand`
        - :class:`~amqp.InvalidPath`
        - :class:`~amqp.NotAllowed`
        - :class:`~amqp.UnexpectedFrame`
        - :class:`~amqp.AMQPNotImplementedError`
        - :class:`~amqp.InternalError`

    - :class:`~amqp.ChannelError`

      - :class:`~RecoverableChannelError`

        - :class:`~amqp.ContentTooLarge`
        - :class:`~amqp.NoConsumers`
        - :class:`~amqp.ResourceLocked`

      - :class:`~IrrecoverableChannelError`

        - :class:`~amqp.AccessRefused`
        - :class:`~amqp.NotFound`
        - :class:`~amqp.PreconditionFailed`

.. _version-1.1.0:

1.1.0
=====
:release-date: 2013-11-08 10:36 P.M. UTC
:release-by: Ask Solem

- No longer supports Python 2.5

- Fixed receiving of float table values.

- Now supports Python 3 and Python 2.6+ in the same source code.

- Python 3 related fixes.

.. _version-1.0.13:

1.0.13
======
:release-date: 2013-07-31 04:00 P.M. BST
:release-by: Ask Solem

- Fixed problems with the SSL transport (Issue #15).

  Fix contributed by Adrien Guinet.

- Small optimizations

.. _version-1.0.12:

1.0.12
======
:release-date: 2013-06-25 02:00 P.M. BST
:release-by: Ask Solem

- Fixed another Python 3 compatibility problem.
.. _version-1.0.11:

1.0.11
======
:release-date: 2013-04-11 06:00 P.M. BST
:release-by: Ask Solem

- Fixed Python 3 incompatibility in ``amqp/transport.py``.

.. _version-1.0.10:

1.0.10
======
:release-date: 2013-03-21 03:30 P.M. UTC
:release-by: Ask Solem

- Fixed Python 3 incompatibility in ``amqp/serialization.py``
  (Issue #11).

.. _version-1.0.9:

1.0.9
=====
:release-date: 2013-03-08 10:40 A.M. UTC
:release-by: Ask Solem

- Publisher ack callbacks should now work after typo fix (Issue #9).

- ``channel(explicit_id)`` will now claim that id from the array of
  unused channel ids.

- Fixes Jython compatibility.

.. _version-1.0.8:

1.0.8
=====
:release-date: 2013-02-08 01:00 P.M. UTC
:release-by: Ask Solem

- Fixed SyntaxError on Python 2.5

.. _version-1.0.7:

1.0.7
=====
:release-date: 2013-02-08 01:00 P.M. UTC
:release-by: Ask Solem

- Workaround for bug on some Python 2.5 installations where (2**32) is 0.

- Can now serialize the ARRAY type.

  Contributed by Adam Wentz.

- Fixed tuple format bug in exception (Issue #4).

.. _version-1.0.6:

1.0.6
=====
:release-date: 2012-11-29 01:14 P.M. UTC
:release-by: Ask Solem

- ``Channel.close`` is now ignored if the connection attribute is None.

.. _version-1.0.5:

1.0.5
=====
:release-date: 2012-11-21 04:00 P.M. UTC
:release-by: Ask Solem

- ``Channel.basic_cancel`` is now ignored if the channel was already
  closed.

- ``Channel.events`` is now a dict of sets::

    >>> channel.events['basic_return'].add(on_basic_return)
    >>> channel.events['basic_return'].discard(on_basic_return)

.. _version-1.0.4:

1.0.4
=====
:release-date: 2012-11-13 04:00 P.M. UTC
:release-by: Ask Solem

- Fixes Python 2.5 support

.. _version-1.0.3:

1.0.3
=====
:release-date: 2012-11-12 04:00 P.M. UTC
:release-by: Ask Solem

- Can now also handle float in headers/tables when receiving messages.

- Now uses :class:`array.array` to keep track of unused channel ids.

- The :data:`~amqp.exceptions.METHOD_NAME_MAP` has been updated for
  amqp/0.9.1 and Rabbit extensions.
- Removed a bunch of accidentally included images.

.. _version-1.0.2:

1.0.2
=====
:release-date: 2012-11-06 05:00 P.M. UTC
:release-by: Ask Solem

- Now supports float values in headers/tables.

.. _version-1.0.1:

1.0.1
=====
:release-date: 2012-11-05 01:00 P.M. UTC
:release-by: Ask Solem

- Connection errors no longer include :exc:`AttributeError`.

- Fixed problem with using the SSL transport in a non-blocking context.

  Fix contributed by Mher Movsisyan.

.. _version-1.0.0:

1.0.0
=====
:release-date: 2012-11-05 01:00 P.M. UTC
:release-by: Ask Solem

- Channels are now restored on channel error, so that the connection does
  not have to be closed.

.. _version-0.9.4:

Version 0.9.4
=============

- Adds support for ``exchange_bind`` and ``exchange_unbind``.

  Contributed by Rumyana Neykova

- Fixed bugs in funtests and demo scripts.

  Contributed by Rumyana Neykova

.. _version-0.9.3:

Version 0.9.3
=============

- Fixed bug that could cause the consumer to crash when reading large
  message payloads asynchronously.

- Serialization error messages now include the invalid value.

.. _version-0.9.2:

Version 0.9.2
=============

- Consumer cancel notification support was broken (Issue #1)

  Fix contributed by Andrew Grangaard

.. _version-0.9.1:

Version 0.9.1
=============

- Supports draining events from multiple channels
  (``Connection.drain_events``)

- Support for timeouts

- Support for heartbeats

  - ``Connection.heartbeat_tick(rate=2)`` must be called at regular
    intervals (half of the heartbeat value if rate is 2).

  - Or some other scheme by using ``Connection.send_heartbeat``.

- Supports RabbitMQ extensions:

  - Consumer Cancel Notifications

    - by default a cancel results in ``ChannelError`` being raised
    - but not if an ``on_cancel`` callback is passed to ``basic_consume``.

  - Publisher confirms

    - ``Channel.confirm_select()`` enables publisher confirms.
    - ``Channel.events['basic_ack'].append(my_callback)`` adds a callback
      to be called when a message is confirmed.
This callback is then called with the signature ``(delivery_tag, multiple)``. - Support for ``basic_return`` - Uses AMQP 0-9-1 instead of 0-8. - ``Channel.access_request`` and ``ticket`` arguments to methods **removed**. - Supports the ``arguments`` argument to ``basic_consume``. - ``internal`` argument to ``exchange_declare`` removed. - ``auto_delete`` argument to ``exchange_declare`` deprecated - ``insist`` argument to ``Connection`` removed. - ``Channel.alerts`` has been removed. - Support for ``Channel.basic_recover_async``. - ``Channel.basic_recover`` deprecated. - Exceptions renamed to have idiomatic names: - ``AMQPException`` -> ``AMQPError`` - ``AMQPConnectionException`` -> ``ConnectionError`` - ``AMQPChannelException`` -> ``ChannelError`` - ``Connection.known_hosts`` removed. - ``Connection`` no longer supports redirects. - ``exchange`` argument to ``queue_bind`` can now be empty to use the "default exchange". - Adds ``Connection.is_alive`` that tries to detect whether the connection can still be used. - Adds ``Connection.connection_errors`` and ``.channel_errors``, a list of recoverable errors. - Exposes the underlying socket as ``Connection.sock``. - Adds ``Channel.no_ack_consumers`` to keep track of consumer tags that set the no_ack flag. - Slightly better at error recovery amqp-5.3.1/LICENSE Copyright (c) 2015-2016 Ask Solem & contributors. All rights reserved. Copyright (c) 2012-2014 GoPivotal, Inc. All rights reserved. Copyright (c) 2009, 2010, 2011, 2012 Ask Solem, and individual contributors. All rights reserved. Copyright (C) 2007-2008 Barry Pederson . All rights reserved. py-amqp is licensed under The BSD License (3 Clause, also known as the new BSD license). The license is an OSI approved Open Source license and is GPL-compatible(1).
The license text can also be found here: http://www.opensource.org/licenses/BSD-3-Clause License ======= Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Ask Solem, nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL Ask Solem OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Footnotes ========= (1) A GPL-compatible license makes it possible to combine Celery with other software that is released under the GPL, it does not mean that we're distributing Celery under the GPL license. The BSD license, unlike the GPL, let you distribute a modified version without making your changes open source. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/MANIFEST.in0000644000076500000240000000046514345343305013503 0ustar00nusnusstaffinclude README.rst Changelog LICENSE recursive-include docs * recursive-include demo *.py recursive-include extra README *.py recursive-include requirements *.txt recursive-include t *.py recursive-exclude docs/_build * recursive-exclude * __pycache__ recursive-exclude * *.py[co] recursive-exclude * .*.sw* ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731441334.1669405 amqp-5.3.1/PKG-INFO0000644000076500000240000002126714714731266013054 0ustar00nusnusstaffMetadata-Version: 2.1 Name: amqp Version: 5.3.1 Summary: Low-level AMQP client for Python (fork of amqplib). Home-page: http://github.com/celery/py-amqp Author: Barry Pederson Author-email: auvipy@gmail.com Maintainer: Asif Saif Uddin, Matus Valo License: BSD Keywords: amqp rabbitmq cloudamqp messaging Platform: any Classifier: Development Status :: 5 - Production/Stable Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Classifier: License :: OSI Approved :: BSD License Classifier: Intended Audience :: Developers Classifier: Operating System :: OS Independent Requires-Python: >=3.6 Description-Content-Type: text/x-rst License-File: LICENSE Requires-Dist: vine<6.0.0,>=5.0.0 ===================================================================== Python AMQP 0.9.1 client library ===================================================================== |build-status| 
|coverage| |license| |wheel| |pyversion| |pyimp| :Version: 5.3.1 :Web: https://amqp.readthedocs.io/ :Download: https://pypi.org/project/amqp/ :Source: http://github.com/celery/py-amqp/ :Keywords: amqp, rabbitmq About ===== This is a fork of amqplib_ which was originally written by Barry Pederson. It is maintained by the Celery_ project, and used by `kombu`_ as a pure python alternative when `librabbitmq`_ is not available. This library should be API compatible with `librabbitmq`_. .. _amqplib: https://pypi.org/project/amqplib/ .. _Celery: http://celeryproject.org/ .. _kombu: https://kombu.readthedocs.io/ .. _librabbitmq: https://pypi.org/project/librabbitmq/ Differences from `amqplib`_ =========================== - Supports draining events from multiple channels (``Connection.drain_events``) - Support for timeouts - Channels are restored after channel error, instead of having to close the connection. - Support for heartbeats - ``Connection.heartbeat_tick(rate=2)`` must be called at regular intervals (half of the heartbeat value if rate is 2). - Or some other scheme by using ``Connection.send_heartbeat``. - Supports RabbitMQ extensions: - Consumer Cancel Notifications - by default a cancel results in ``ChannelError`` being raised - but not if an ``on_cancel`` callback is passed to ``basic_consume``. - Publisher confirms - ``Channel.confirm_select()`` enables publisher confirms. - ``Channel.events['basic_ack'].append(my_callback)`` adds a callback to be called when a message is confirmed. This callback is then called with the signature ``(delivery_tag, multiple)``. - Exchange-to-exchange bindings: ``exchange_bind`` / ``exchange_unbind``.
- Authentication Failure Notifications Instead of just closing the connection abruptly on invalid credentials, py-amqp will raise an ``AccessRefused`` error when connected to rabbitmq-server 3.2.0 or greater. - Support for ``basic_return`` - Uses AMQP 0-9-1 instead of 0-8. - ``Channel.access_request`` and ``ticket`` arguments to methods **removed**. - Supports the ``arguments`` argument to ``basic_consume``. - ``internal`` argument to ``exchange_declare`` removed. - ``auto_delete`` argument to ``exchange_declare`` deprecated - ``insist`` argument to ``Connection`` removed. - ``Channel.alerts`` has been removed. - Support for ``Channel.basic_recover_async``. - ``Channel.basic_recover`` deprecated. - Exceptions renamed to have idiomatic names: - ``AMQPException`` -> ``AMQPError`` - ``AMQPConnectionException`` -> ``ConnectionError`` - ``AMQPChannelException`` -> ``ChannelError`` - ``Connection.known_hosts`` removed. - ``Connection`` no longer supports redirects. - ``exchange`` argument to ``queue_bind`` can now be empty to use the "default exchange". - Adds ``Connection.is_alive`` that tries to detect whether the connection can still be used. - Adds ``Connection.connection_errors`` and ``.channel_errors``, a list of recoverable errors. - Exposes the underlying socket as ``Connection.sock``. - Adds ``Channel.no_ack_consumers`` to keep track of consumer tags that set the no_ack flag. - Slightly better at error recovery

Quick overview
==============

Simple producer publishing messages to ``test`` queue using the default exchange:

.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()
        ch.basic_publish(amqp.Message('Hello World'), routing_key='test')

Producer publishing to ``test_exchange`` exchange with publisher confirms enabled and using virtual_host ``test_vhost``:

.. code:: python

    import amqp

    with amqp.Connection(
        'broker.example.com', exchange='test_exchange',
        confirm_publish=True, virtual_host='test_vhost'
    ) as c:
        ch = c.channel()
        ch.basic_publish(amqp.Message('Hello World'), routing_key='test')

Consumer with acknowledgments enabled:

.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()

        def on_message(message):
            print('Received message (delivery tag: {}): {}'.format(
                message.delivery_tag, message.body))
            ch.basic_ack(message.delivery_tag)

        ch.basic_consume(queue='test', callback=on_message)
        while True:
            c.drain_events()

Consumer with acknowledgments disabled:

.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()

        def on_message(message):
            print('Received message (delivery tag: {}): {}'.format(
                message.delivery_tag, message.body))

        ch.basic_consume(queue='test', callback=on_message, no_ack=True)
        while True:
            c.drain_events()

Speedups
========

This library has **experimental** support for speedups. Speedups are implemented using Cython. To enable speedups, the ``CELERY_ENABLE_SPEEDUPS`` environment variable must be set during building/installation. Currently speedups can be installed:

1. using the source package (using the ``--no-binary`` switch):

.. code:: shell

    CELERY_ENABLE_SPEEDUPS=true pip install --no-binary :all: amqp

2. building directly from source code:

.. code:: shell

    CELERY_ENABLE_SPEEDUPS=true python setup.py install

Further
=======

- Differences between AMQP 0.8 and 0.9.1: http://www.rabbitmq.com/amqp-0-8-to-0-9-1.html
- AMQP 0.9.1 Quick Reference: http://www.rabbitmq.com/amqp-0-9-1-quickref.html
- RabbitMQ Extensions: http://www.rabbitmq.com/extensions.html
- For more information about AMQP, visit http://www.amqp.org
- For other Python client libraries see: http://www.rabbitmq.com/devtools.html#python-dev

..
|build-status| image:: https://github.com/celery/py-amqp/actions/workflows/ci.yaml/badge.svg :alt: Build status :target: https://github.com/celery/py-amqp/actions/workflows/ci.yaml .. |coverage| image:: https://codecov.io/github/celery/py-amqp/coverage.svg?branch=main :target: https://codecov.io/github/celery/py-amqp?branch=main .. |license| image:: https://img.shields.io/pypi/l/amqp.svg :alt: BSD License :target: https://opensource.org/licenses/BSD-3-Clause .. |wheel| image:: https://img.shields.io/pypi/wheel/amqp.svg :alt: Python AMQP can be installed via wheel :target: https://pypi.org/project/amqp/ .. |pyversion| image:: https://img.shields.io/pypi/pyversions/amqp.svg :alt: Supported Python versions. :target: https://pypi.org/project/amqp/ .. |pyimp| image:: https://img.shields.io/pypi/implementation/amqp.svg :alt: Support Python implementations. :target: https://pypi.org/project/amqp/ py-amqp as part of the Tidelift Subscription ============================================ The maintainers of py-amqp and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. 
`Learn more. <https://tidelift.com/subscription/pkg/pypi-amqp?utm_source=pypi-amqp&utm_medium=referral&utm_campaign=readme&utm_term=repo>`_ amqp-5.3.1/README.rst ===================================================================== Python AMQP 0.9.1 client library ===================================================================== |build-status| |coverage| |license| |wheel| |pyversion| |pyimp| :Version: 5.3.1 :Web: https://amqp.readthedocs.io/ :Download: https://pypi.org/project/amqp/ :Source: http://github.com/celery/py-amqp/ :Keywords: amqp, rabbitmq About ===== This is a fork of amqplib_ which was originally written by Barry Pederson. It is maintained by the Celery_ project, and used by `kombu`_ as a pure python alternative when `librabbitmq`_ is not available. This library should be API compatible with `librabbitmq`_. .. _amqplib: https://pypi.org/project/amqplib/ .. _Celery: http://celeryproject.org/ .. _kombu: https://kombu.readthedocs.io/ .. _librabbitmq: https://pypi.org/project/librabbitmq/ Differences from `amqplib`_ =========================== - Supports draining events from multiple channels (``Connection.drain_events``) - Support for timeouts - Channels are restored after channel error, instead of having to close the connection. - Support for heartbeats - ``Connection.heartbeat_tick(rate=2)`` must be called at regular intervals (half of the heartbeat value if rate is 2). - Or some other scheme by using ``Connection.send_heartbeat``. - Supports RabbitMQ extensions: - Consumer Cancel Notifications - by default a cancel results in ``ChannelError`` being raised - but not if an ``on_cancel`` callback is passed to ``basic_consume``. - Publisher confirms - ``Channel.confirm_select()`` enables publisher confirms.
- ``Channel.events['basic_ack'].append(my_callback)`` adds a callback to be called when a message is confirmed. This callback is then called with the signature ``(delivery_tag, multiple)``. - Exchange-to-exchange bindings: ``exchange_bind`` / ``exchange_unbind``. - Authentication Failure Notifications Instead of just closing the connection abruptly on invalid credentials, py-amqp will raise an ``AccessRefused`` error when connected to rabbitmq-server 3.2.0 or greater. - Support for ``basic_return`` - Uses AMQP 0-9-1 instead of 0-8. - ``Channel.access_request`` and ``ticket`` arguments to methods **removed**. - Supports the ``arguments`` argument to ``basic_consume``. - ``internal`` argument to ``exchange_declare`` removed. - ``auto_delete`` argument to ``exchange_declare`` deprecated - ``insist`` argument to ``Connection`` removed. - ``Channel.alerts`` has been removed. - Support for ``Channel.basic_recover_async``. - ``Channel.basic_recover`` deprecated. - Exceptions renamed to have idiomatic names: - ``AMQPException`` -> ``AMQPError`` - ``AMQPConnectionException`` -> ``ConnectionError`` - ``AMQPChannelException`` -> ``ChannelError`` - ``Connection.known_hosts`` removed. - ``Connection`` no longer supports redirects. - ``exchange`` argument to ``queue_bind`` can now be empty to use the "default exchange". - Adds ``Connection.is_alive`` that tries to detect whether the connection can still be used. - Adds ``Connection.connection_errors`` and ``.channel_errors``, a list of recoverable errors. - Exposes the underlying socket as ``Connection.sock``. - Adds ``Channel.no_ack_consumers`` to keep track of consumer tags that set the no_ack flag.
- Slightly better at error recovery

Quick overview
==============

Simple producer publishing messages to ``test`` queue using the default exchange:

.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()
        ch.basic_publish(amqp.Message('Hello World'), routing_key='test')

Producer publishing to ``test_exchange`` exchange with publisher confirms enabled and using virtual_host ``test_vhost``:

.. code:: python

    import amqp

    with amqp.Connection(
        'broker.example.com', exchange='test_exchange',
        confirm_publish=True, virtual_host='test_vhost'
    ) as c:
        ch = c.channel()
        ch.basic_publish(amqp.Message('Hello World'), routing_key='test')

Consumer with acknowledgments enabled:

.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()

        def on_message(message):
            print('Received message (delivery tag: {}): {}'.format(
                message.delivery_tag, message.body))
            ch.basic_ack(message.delivery_tag)

        ch.basic_consume(queue='test', callback=on_message)
        while True:
            c.drain_events()

Consumer with acknowledgments disabled:

.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()

        def on_message(message):
            print('Received message (delivery tag: {}): {}'.format(
                message.delivery_tag, message.body))

        ch.basic_consume(queue='test', callback=on_message, no_ack=True)
        while True:
            c.drain_events()

Speedups
========

This library has **experimental** support for speedups. Speedups are implemented using Cython. To enable speedups, the ``CELERY_ENABLE_SPEEDUPS`` environment variable must be set during building/installation. Currently speedups can be installed:

1. using the source package (using the ``--no-binary`` switch):

.. code:: shell

    CELERY_ENABLE_SPEEDUPS=true pip install --no-binary :all: amqp

2. building directly from source code:

.. code:: shell

    CELERY_ENABLE_SPEEDUPS=true python setup.py install

Further
=======

- Differences between AMQP 0.8 and 0.9.1: http://www.rabbitmq.com/amqp-0-8-to-0-9-1.html
- AMQP 0.9.1 Quick Reference: http://www.rabbitmq.com/amqp-0-9-1-quickref.html
- RabbitMQ Extensions: http://www.rabbitmq.com/extensions.html
- For more information about AMQP, visit http://www.amqp.org
- For other Python client libraries see: http://www.rabbitmq.com/devtools.html#python-dev

.. |build-status| image:: https://github.com/celery/py-amqp/actions/workflows/ci.yaml/badge.svg
    :alt: Build status
    :target: https://github.com/celery/py-amqp/actions/workflows/ci.yaml
.. |coverage| image:: https://codecov.io/github/celery/py-amqp/coverage.svg?branch=main
    :target: https://codecov.io/github/celery/py-amqp?branch=main
.. |license| image:: https://img.shields.io/pypi/l/amqp.svg
    :alt: BSD License
    :target: https://opensource.org/licenses/BSD-3-Clause
.. |wheel| image:: https://img.shields.io/pypi/wheel/amqp.svg
    :alt: Python AMQP can be installed via wheel
    :target: https://pypi.org/project/amqp/
.. |pyversion| image:: https://img.shields.io/pypi/pyversions/amqp.svg
    :alt: Supported Python versions.
    :target: https://pypi.org/project/amqp/
.. |pyimp| image:: https://img.shields.io/pypi/implementation/amqp.svg
    :alt: Supported Python implementations.
    :target: https://pypi.org/project/amqp/

py-amqp as part of the Tidelift Subscription
============================================

The maintainers of py-amqp and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use.
`Learn more. <https://tidelift.com/subscription/pkg/pypi-amqp?utm_source=pypi-amqp&utm_medium=referral&utm_campaign=readme&utm_term=repo>`_

amqp-5.3.1/amqp/__init__.py

"""Low-level AMQP client for Python (fork of amqplib)."""
# Copyright (C) 2007-2008 Barry Pederson

import re
from collections import namedtuple

__version__ = '5.3.1'
__author__ = 'Barry Pederson'
__maintainer__ = 'Asif Saif Uddin, Matus Valo'
__contact__ = 'auvipy@gmail.com'
__homepage__ = 'http://github.com/celery/py-amqp'
__docformat__ = 'restructuredtext'

# -eof meta-

version_info_t = namedtuple('version_info_t', (
    'major', 'minor', 'micro', 'releaselevel', 'serial',
))

# bumpversion can only search for {current_version}
# so we have to parse the version here.
_temp = re.match(
    r'(\d+)\.(\d+)\.(\d+)(.+)?', __version__).groups()
VERSION = version_info = version_info_t(
    int(_temp[0]), int(_temp[1]), int(_temp[2]), _temp[3] or '', '')
del(_temp)
del(re)

from .basic_message import Message  # noqa
from .channel import Channel  # noqa
from .connection import Connection  # noqa
from .exceptions import (AccessRefused, AMQPError,  # noqa
                         AMQPNotImplementedError, ChannelError, ChannelNotOpen,
                         ConnectionError, ConnectionForced, ConsumerCancelled,
                         ContentTooLarge, FrameError, FrameSyntaxError,
                         InternalError, InvalidCommand, InvalidPath,
                         IrrecoverableChannelError,
                         IrrecoverableConnectionError, NoConsumers,
                         NotAllowed, NotFound, PreconditionFailed,
                         RecoverableChannelError, RecoverableConnectionError,
                         ResourceError, ResourceLocked, UnexpectedFrame,
                         error_for_code)
from .utils import promise  # noqa

__all__ = (
    'Connection', 'Channel', 'Message', 'promise', 'AMQPError',
    'ConnectionError', 'RecoverableConnectionError',
    'IrrecoverableConnectionError', 'ChannelError', 'RecoverableChannelError',
    'IrrecoverableChannelError', 'ConsumerCancelled', 'ContentTooLarge',
    'NoConsumers', 'ConnectionForced', 'InvalidPath', 'AccessRefused',
    'NotFound', 'ResourceLocked', 'PreconditionFailed', 'FrameError',
    'FrameSyntaxError', 'InvalidCommand', 'ChannelNotOpen', 'UnexpectedFrame',
    'ResourceError', 'NotAllowed', 'AMQPNotImplementedError', 'InternalError',
    'error_for_code',
)

amqp-5.3.1/amqp/abstract_channel.py

"""Code common to Connection and Channel objects."""
# Copyright (C) 2007-2008 Barry Pederson

import logging

from vine import ensure_promise, promise

from .exceptions import AMQPNotImplementedError, RecoverableConnectionError
from .serialization import dumps, loads

__all__ = ('AbstractChannel',)

AMQP_LOGGER = logging.getLogger('amqp')

IGNORED_METHOD_DURING_CHANNEL_CLOSE = """\
%s during closing channel %s. This method will be ignored\
"""


class AbstractChannel:
    """Superclass for Connection and Channel.

    The connection is treated as channel 0, then comes
    user-created channel objects.

    The subclasses must have a _METHOD_MAP class property, mapping
    between AMQP method signatures and Python methods.
    """

    def __init__(self, connection, channel_id):
        self.is_closing = False
        self.connection = connection
        self.channel_id = channel_id
        connection.channels[channel_id] = self
        self.method_queue = []  # Higher level queue for methods
        self.auto_decode = False
        self._pending = {}
        self._callbacks = {}

        self._setup_listeners()

    __slots__ = (
        "is_closing",
        "connection",
        "channel_id",
        "method_queue",
        "auto_decode",
        "_pending",
        "_callbacks",
        # adding '__dict__' to get dynamic assignment
        "__dict__",
        "__weakref__",
    )

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self.close()

    def send_method(self, sig, format=None, args=None, content=None,
                    wait=None, callback=None, returns_tuple=False):
        p = promise()
        conn = self.connection
        if conn is None:
            raise RecoverableConnectionError('connection already closed')
        args = dumps(format, args) if format else ''
        try:
            conn.frame_writer(1, self.channel_id, sig, args, content)
        except StopIteration:
            raise RecoverableConnectionError('connection already closed')

        # TODO temp: callback should be after write_method ... ;)
        if callback:
            p.then(callback)
        p()
        if wait:
            return self.wait(wait, returns_tuple=returns_tuple)
        return p

    def close(self):
        """Close this Channel or Connection."""
        raise NotImplementedError('Must be overridden in subclass')

    def wait(self, method, callback=None, timeout=None, returns_tuple=False):
        p = ensure_promise(callback)
        pending = self._pending
        prev_p = []
        if not isinstance(method, list):
            method = [method]

        for m in method:
            prev_p.append(pending.get(m))
            pending[m] = p

        try:
            while not p.ready:
                self.connection.drain_events(timeout=timeout)

            if p.value:
                args, kwargs = p.value
                args = args[1:]  # We are not returning method back
                return args if returns_tuple else (args and args[0])
        finally:
            for i, m in enumerate(method):
                if prev_p[i] is not None:
                    pending[m] = prev_p[i]
                else:
                    pending.pop(m, None)

    def dispatch_method(self, method_sig, payload, content):
        if self.is_closing and method_sig not in (
                self._ALLOWED_METHODS_WHEN_CLOSING):
            # When channel.close() was called we must ignore all methods
            # except Channel.close and Channel.CloseOk
            AMQP_LOGGER.warning(
                IGNORED_METHOD_DURING_CHANNEL_CLOSE,
                method_sig, self.channel_id
            )
            return

        if content and \
                self.auto_decode and \
                hasattr(content, 'content_encoding'):
            try:
                content.body = content.body.decode(content.content_encoding)
            except Exception:
                pass

        try:
            amqp_method = self._METHODS[method_sig]
        except KeyError:
            raise AMQPNotImplementedError(
                f'Unknown AMQP method {method_sig!r}')

        try:
            listeners = [self._callbacks[method_sig]]
        except KeyError:
            listeners = []

        one_shot = None
        try:
            one_shot = self._pending.pop(method_sig)
        except KeyError:
            if not listeners:
                return

        args = []
        if amqp_method.args:
            args, _ = loads(amqp_method.args, payload, 4)
        if amqp_method.content:
            args.append(content)

        for listener in listeners:
            listener(*args)

        if one_shot:
            one_shot(method_sig, *args)

    #: Placeholder, the concrete implementations will have to
    #: supply their own versions of _METHOD_MAP
    _METHODS = {}
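The dispatch pattern in ``AbstractChannel.dispatch_method`` — look up the incoming method signature, then fan out to a permanent callback and an optional one-shot pending promise (as registered by ``wait()``) — can be sketched in isolation. Below is a minimal, hypothetical ``ToyChannel`` (not part of py-amqp; all names invented for illustration, with frame decoding and promises omitted):

```python
# Hypothetical sketch of the dispatch pattern (not py-amqp code):
# a (class_id, method_id) pair selects both the permanent listener
# and any one-shot waiter registered for that method.

class ToyChannel:
    """Toy dispatcher mirroring the shape of dispatch_method."""

    def __init__(self):
        self._callbacks = {}  # method_sig -> permanent listener
        self._pending = {}    # method_sig -> one-shot waiter (wait())

    def on(self, method_sig, callback):
        self._callbacks[method_sig] = callback

    def dispatch_method(self, method_sig, *args):
        listeners = []
        cb = self._callbacks.get(method_sig)
        if cb is not None:
            listeners.append(cb)
        # one-shot waiters are consumed on first dispatch
        one_shot = self._pending.pop(method_sig, None)
        if not listeners and one_shot is None:
            # toy stand-in for AMQPNotImplementedError
            raise KeyError(f'Unknown AMQP method {method_sig!r}')
        for listener in listeners:
            listener(*args)
        if one_shot is not None:
            one_shot(method_sig, *args)


received = []
ch = ToyChannel()
# (60, 80) stands in for Basic.Ack's class/method id pair
ch.on((60, 80), lambda tag, multiple: received.append((tag, multiple)))
ch.dispatch_method((60, 80), 1, False)
print(received)  # → [(1, False)]
```

In the real class the table values also carry argument formats, so ``dispatch_method`` first decodes ``payload`` with ``loads`` before invoking the listeners.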
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/amqp/basic_message.py0000644000076500000240000000643514345343305016045 0ustar00nusnusstaff"""AMQP Messages.""" # Copyright (C) 2007-2008 Barry Pederson from .serialization import GenericContent # Intended to fix #85: ImportError: cannot import name spec # Encountered on python 2.7.3 # "The submodules often need to refer to each other. For example, the # surround [sic] module might use the echo module. In fact, such # references are so common that the import statement first looks in # the containing package before looking in the standard module search # path." # Source: # http://stackoverflow.com/a/14216937/4982251 from .spec import Basic __all__ = ('Message',) class Message(GenericContent): """A Message for use with the Channel.basic_* methods. Expected arg types body: string children: (not supported) Keyword properties may include: content_type: shortstr MIME content type content_encoding: shortstr MIME content encoding application_headers: table Message header field table, a dict with string keys, and string | int | Decimal | datetime | dict values. delivery_mode: octet Non-persistent (1) or persistent (2) priority: octet The message priority, 0 to 9 correlation_id: shortstr The application correlation identifier reply_to: shortstr The destination to reply to expiration: shortstr Message expiration specification message_id: shortstr The application message identifier timestamp: unsigned long The message timestamp type: shortstr The message type name user_id: shortstr The creating user id app_id: shortstr The creating application id cluster_id: shortstr Intra-cluster routing identifier Unicode bodies are encoded according to the 'content_encoding' argument. If that's None, it's set to 'UTF-8' automatically. 
Example:: msg = Message('hello world', content_type='text/plain', application_headers={'foo': 7}) """ CLASS_ID = Basic.CLASS_ID #: Instances of this class have these attributes, which #: are passed back and forth as message properties between #: client and server PROPERTIES = [ ('content_type', 's'), ('content_encoding', 's'), ('application_headers', 'F'), ('delivery_mode', 'o'), ('priority', 'o'), ('correlation_id', 's'), ('reply_to', 's'), ('expiration', 's'), ('message_id', 's'), ('timestamp', 'L'), ('type', 's'), ('user_id', 's'), ('app_id', 's'), ('cluster_id', 's') ] def __init__(self, body='', children=None, channel=None, **properties): super().__init__(**properties) #: set by basic_consume/basic_get self.delivery_info = None self.body = body self.channel = channel __slots__ = ( "delivery_info", "body", "channel", ) @property def headers(self): return self.properties.get('application_headers') @property def delivery_tag(self): return self.delivery_info.get('delivery_tag') ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670945627.0 amqp-5.3.1/amqp/channel.py0000644000076500000240000022135314346115533014667 0ustar00nusnusstaff"""AMQP Channels.""" # Copyright (C) 2007-2008 Barry Pederson import logging import socket from collections import defaultdict from queue import Queue from vine import ensure_promise from . import spec from .abstract_channel import AbstractChannel from .exceptions import (ChannelError, ConsumerCancelled, MessageNacked, RecoverableChannelError, RecoverableConnectionError, error_for_code) from .protocol import queue_declare_ok_t __all__ = ('Channel',) AMQP_LOGGER = logging.getLogger('amqp') REJECTED_MESSAGE_WITHOUT_CALLBACK = """\ Rejecting message with delivery tag %r for reason of having no callbacks. consumer_tag=%r exchange=%r routing_key=%r.\ """ class VDeprecationWarning(DeprecationWarning): pass class Channel(AbstractChannel): """AMQP Channel. 
The channel class provides methods for a client to establish a virtual connection - a channel - to a server and for both peers to operate the virtual connection thereafter. GRAMMAR:: channel = open-channel *use-channel close-channel open-channel = C:OPEN S:OPEN-OK use-channel = C:FLOW S:FLOW-OK / S:FLOW C:FLOW-OK / functional-class close-channel = C:CLOSE S:CLOSE-OK / S:CLOSE C:CLOSE-OK Create a channel bound to a connection and using the specified numeric channel_id, and open on the server. The 'auto_decode' parameter (defaults to True), indicates whether the library should attempt to decode the body of Messages to a Unicode string if there's a 'content_encoding' property for the message. If there's no 'content_encoding' property, or the decode raises an Exception, the message body is left as plain bytes. """ _METHODS = { spec.method(spec.Channel.Close, 'BsBB'), spec.method(spec.Channel.CloseOk), spec.method(spec.Channel.Flow, 'b'), spec.method(spec.Channel.FlowOk, 'b'), spec.method(spec.Channel.OpenOk), spec.method(spec.Exchange.DeclareOk), spec.method(spec.Exchange.DeleteOk), spec.method(spec.Exchange.BindOk), spec.method(spec.Exchange.UnbindOk), spec.method(spec.Queue.BindOk), spec.method(spec.Queue.UnbindOk), spec.method(spec.Queue.DeclareOk, 'sll'), spec.method(spec.Queue.DeleteOk, 'l'), spec.method(spec.Queue.PurgeOk, 'l'), spec.method(spec.Basic.Cancel, 's'), spec.method(spec.Basic.CancelOk, 's'), spec.method(spec.Basic.ConsumeOk, 's'), spec.method(spec.Basic.Deliver, 'sLbss', content=True), spec.method(spec.Basic.GetEmpty, 's'), spec.method(spec.Basic.GetOk, 'Lbssl', content=True), spec.method(spec.Basic.QosOk), spec.method(spec.Basic.RecoverOk), spec.method(spec.Basic.Return, 'Bsss', content=True), spec.method(spec.Tx.CommitOk), spec.method(spec.Tx.RollbackOk), spec.method(spec.Tx.SelectOk), spec.method(spec.Confirm.SelectOk), spec.method(spec.Basic.Ack, 'Lb'), spec.method(spec.Basic.Nack, 'Lb'), } _METHODS = {m.method_sig: m for m in _METHODS} 
_ALLOWED_METHODS_WHEN_CLOSING = ( spec.Channel.Close, spec.Channel.CloseOk ) def __init__(self, connection, channel_id=None, auto_decode=True, on_open=None): if channel_id: connection._claim_channel_id(channel_id) else: channel_id = connection._get_free_channel_id() AMQP_LOGGER.debug('using channel_id: %s', channel_id) super().__init__(connection, channel_id) self.is_open = False self.active = True # Flow control self.returned_messages = Queue() self.callbacks = {} self.cancel_callbacks = {} self.auto_decode = auto_decode self.events = defaultdict(set) self.no_ack_consumers = set() self.on_open = ensure_promise(on_open) # set first time basic_publish_confirm is called # and publisher confirms are enabled for this channel. self._confirm_selected = False if self.connection.confirm_publish: self.basic_publish = self.basic_publish_confirm __slots__ = ( "is_open", "active", "returned_messages", "callbacks", "cancel_callbacks", "events", "no_ack_consumers", "on_open", "_confirm_selected", ) def then(self, on_success, on_error=None): return self.on_open.then(on_success, on_error) def _setup_listeners(self): self._callbacks.update({ spec.Channel.Close: self._on_close, spec.Channel.CloseOk: self._on_close_ok, spec.Channel.Flow: self._on_flow, spec.Channel.OpenOk: self._on_open_ok, spec.Basic.Cancel: self._on_basic_cancel, spec.Basic.CancelOk: self._on_basic_cancel_ok, spec.Basic.Deliver: self._on_basic_deliver, spec.Basic.Return: self._on_basic_return, spec.Basic.Ack: self._on_basic_ack, spec.Basic.Nack: self._on_basic_nack, }) def collect(self): """Tear down this object. Best called after we've agreed to close with the server. 
""" AMQP_LOGGER.debug('Closed channel #%s', self.channel_id) self.is_open = False channel_id, self.channel_id = self.channel_id, None connection, self.connection = self.connection, None if connection: connection.channels.pop(channel_id, None) try: connection._used_channel_ids.remove(channel_id) except ValueError: # channel id already removed pass self.callbacks.clear() self.cancel_callbacks.clear() self.events.clear() self.no_ack_consumers.clear() def _do_revive(self): self.is_open = False self.open() def close(self, reply_code=0, reply_text='', method_sig=(0, 0), argsig='BsBB'): """Request a channel close. This method indicates that the sender wants to close the channel. This may be due to internal conditions (e.g. a forced shut-down) or due to an error handling a specific method, i.e. an exception. When a close is due to an exception, the sender provides the class and method id of the method which caused the exception. RULE: After sending this method any received method except Channel.Close-OK MUST be discarded. RULE: The peer sending this method MAY use a counter or timeout to detect failure of the other peer to respond correctly with Channel.Close-OK. PARAMETERS: reply_code: short The reply code. The AMQ reply codes are defined in AMQ RFC 011. reply_text: shortstr The localised reply text. This text can be logged as an aid to resolving issues. class_id: short failing method class When the close is provoked by a method exception, this is the class of the method. method_id: short failing method ID When the close is provoked by a method exception, this is the ID of the method. 
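Example (a sketch, not part of the library: the `channel` is assumed to come from an open `amqp.Connection`, and the helper name is made up for illustration)::

```python
def close_channel_safely(channel, reply_code=0, reply_text=''):
    """Close a channel, skipping channels that are already gone.

    An already-closed channel reports is_open == False after collect(),
    so sending another Channel.Close would be pointless.
    """
    if channel is None or not channel.is_open:
        return
    # Sends Channel.Close and waits for the broker's Channel.CloseOk.
    channel.close(reply_code=reply_code, reply_text=reply_text)
```

With the default arguments this is a normal shutdown; non-zero reply codes are mainly useful when closing in response to an error.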
""" try: if self.connection is None: return if self.connection.channels is None: return if not self.is_open: return self.is_closing = True return self.send_method( spec.Channel.Close, argsig, (reply_code, reply_text, method_sig[0], method_sig[1]), wait=spec.Channel.CloseOk, ) finally: self.is_closing = False self.connection = None def _on_close(self, reply_code, reply_text, class_id, method_id): """Request a channel close. This method indicates that the sender wants to close the channel. This may be due to internal conditions (e.g. a forced shut-down) or due to an error handling a specific method, i.e. an exception. When a close is due to an exception, the sender provides the class and method id of the method which caused the exception. RULE: After sending this method any received method except Channel.Close-OK MUST be discarded. RULE: The peer sending this method MAY use a counter or timeout to detect failure of the other peer to respond correctly with Channel.Close-OK. PARAMETERS: reply_code: short The reply code. The AMQ reply codes are defined in AMQ RFC 011. reply_text: shortstr The localised reply text. This text can be logged as an aid to resolving issues. class_id: short failing method class When the close is provoked by a method exception, this is the class of the method. method_id: short failing method ID When the close is provoked by a method exception, this is the ID of the method. """ self.send_method(spec.Channel.CloseOk) if not self.connection.is_closing: self._do_revive() raise error_for_code( reply_code, reply_text, (class_id, method_id), ChannelError, ) def _on_close_ok(self): """Confirm a channel close. This method confirms a Channel.Close method and tells the recipient that it is safe to release resources for the channel and close the socket. RULE: A peer that detects a socket closure without having received a Channel.Close-Ok handshake method SHOULD log the error. """ self.collect() def flow(self, active): """Enable/disable flow from peer. 
This method asks the peer to pause or restart the flow of content data. This is a simple flow-control mechanism that a peer can use to avoid overflowing its queues or otherwise finding itself receiving more messages than it can process. Note that this method is not intended for window control. The peer that receives a request to stop sending content should finish sending the current content, if any, and then wait until it receives a Flow restart method. RULE: When a new channel is opened, it is active. Some applications assume that channels are inactive until started. To emulate this behaviour a client MAY open the channel, then pause it. RULE: When sending content data in multiple frames, a peer SHOULD monitor the channel for incoming methods and respond to a Channel.Flow as rapidly as possible. RULE: A peer MAY use the Channel.Flow method to throttle incoming content data for internal reasons, for example, when exchanging data over a slower connection. RULE: The peer that requests a Channel.Flow method MAY disconnect and/or ban a peer that does not respect the request. PARAMETERS: active: boolean start/stop content frames If True, the peer starts sending content frames. If False, the peer stops sending content frames. """ return self.send_method( spec.Channel.Flow, 'b', (active,), wait=spec.Channel.FlowOk, ) def _on_flow(self, active): """Enable/disable flow from peer. This method asks the peer to pause or restart the flow of content data. This is a simple flow-control mechanism that a peer can use to avoid overflowing its queues or otherwise finding itself receiving more messages than it can process. Note that this method is not intended for window control. The peer that receives a request to stop sending content should finish sending the current content, if any, and then wait until it receives a Flow restart method. RULE: When a new channel is opened, it is active. Some applications assume that channels are inactive until started. 
To emulate this behaviour a client MAY open the channel, then pause it. RULE: When sending content data in multiple frames, a peer SHOULD monitor the channel for incoming methods and respond to a Channel.Flow as rapidly as possible. RULE: A peer MAY use the Channel.Flow method to throttle incoming content data for internal reasons, for example, when exchanging data over a slower connection. RULE: The peer that requests a Channel.Flow method MAY disconnect and/or ban a peer that does not respect the request. PARAMETERS: active: boolean start/stop content frames If True, the peer starts sending content frames. If False, the peer stops sending content frames. """ self.active = active self._x_flow_ok(self.active) def _x_flow_ok(self, active): """Confirm a flow method. Confirms to the peer that a flow command was received and processed. PARAMETERS: active: boolean current flow setting Confirms the setting of the processed flow method: True means the peer will start sending or continue to send content frames; False means it will not. """ return self.send_method(spec.Channel.FlowOk, 'b', (active,)) def open(self): """Open a channel for use. This method opens a virtual connection (a channel). RULE: This method MUST NOT be called when the channel is already open. PARAMETERS: out_of_band: shortstr (DEPRECATED) out-of-band settings Configures out-of-band transfers on this channel. The syntax and meaning of this field will be formally defined at a later date. """ if self.is_open: return return self.send_method( spec.Channel.Open, 's', ('',), wait=spec.Channel.OpenOk, ) def _on_open_ok(self): """Signal that the channel is ready. This method signals to the client that the channel is ready for use. """ self.is_open = True self.on_open(self) AMQP_LOGGER.debug('Channel open') ############# # # Exchange # # # work with exchanges # # Exchanges match and distribute messages across queues. # Exchanges can be configured in the server or created at runtime. 
# # GRAMMAR:: # # exchange = C:DECLARE S:DECLARE-OK # / C:DELETE S:DELETE-OK # # RULE: # # The server MUST implement the direct and fanout exchange # types, and predeclare the corresponding exchanges named # amq.direct and amq.fanout in each virtual host. The server # MUST also predeclare a direct exchange to act as the default # exchange for content Publish methods and for default queue # bindings. # # RULE: # # The server SHOULD implement the topic exchange type, and # predeclare the corresponding exchange named amq.topic in # each virtual host. # # RULE: # # The server MAY implement the system exchange type, and # predeclare the corresponding exchanges named amq.system in # each virtual host. If the client attempts to bind a queue to # the system exchange, the server MUST raise a connection # exception with reply code 507 (not allowed). # def exchange_declare(self, exchange, type, passive=False, durable=False, auto_delete=True, nowait=False, arguments=None, argsig='BssbbbbbF'): """Declare exchange, create if needed. This method creates an exchange if it does not already exist, and if the exchange exists, verifies that it is of the correct and expected class. RULE: The server SHOULD support a minimum of 16 exchanges per virtual host and ideally, impose no limit except as defined by available resources. PARAMETERS: exchange: shortstr RULE: Exchange names starting with "amq." are reserved for predeclared and standardised exchanges. If the client attempts to create an exchange starting with "amq.", the server MUST raise a channel exception with reply code 403 (access refused). type: shortstr exchange type Each exchange belongs to one of a set of exchange types implemented by the server. The exchange types define the functionality of the exchange - i.e. how messages are routed through it. It is not valid or meaningful to attempt to change the type of an existing exchange. 
RULE: If the exchange already exists with a different type, the server MUST raise a connection exception with a reply code 507 (not allowed). RULE: If the server does not support the requested exchange type it MUST raise a connection exception with a reply code 503 (command invalid). passive: boolean do not create exchange If set, the server will not create the exchange. The client can use this to check whether an exchange exists without modifying the server state. RULE: If set, and the exchange does not already exist, the server MUST raise a channel exception with reply code 404 (not found). durable: boolean request a durable exchange If set when creating a new exchange, the exchange will be marked as durable. Durable exchanges remain active when a server restarts. Non-durable exchanges (transient exchanges) are purged if/when a server restarts. RULE: The server MUST support both durable and transient exchanges. RULE: The server MUST ignore the durable field if the exchange already exists. auto_delete: boolean auto-delete when unused If set, the exchange is deleted when all queues have finished using it. RULE: The server SHOULD allow for a reasonable delay between the point when it determines that an exchange is not being used (or no longer used), and the point when it deletes the exchange. At the least it must allow a client to create an exchange and then bind a queue to it, with a small but non-zero delay between these two actions. RULE: The server MUST ignore the auto-delete field if the exchange already exists. nowait: boolean do not send a reply method If set, the server will not respond to the method. The client should not wait for a reply method. If the server could not complete the method it will raise a channel or connection exception. arguments: table arguments for declaration A set of arguments for the declaration. The syntax and semantics of these arguments depends on the server implementation. This field is ignored if passive is True. 
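Example (a sketch, not part of the library: the exchange name 'events' and the helper name are made up; the `channel` is assumed to be open)::

```python
def declare_events_exchange(channel):
    """Declare a durable topic exchange named 'events' (hypothetical name).

    durable=True keeps the exchange across broker restarts;
    auto_delete=False keeps it when the last bound queue goes away.
    Re-declaring with identical arguments is a harmless no-op.
    """
    channel.exchange_declare(
        'events', 'topic',
        passive=False,      # actually create it, don't just check existence
        durable=True,
        auto_delete=False,
        nowait=False,       # wait for Exchange.DeclareOk
    )
```

Using passive=True instead turns the call into a pure existence check that raises a channel exception (404) if the exchange is missing.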
""" self.send_method( spec.Exchange.Declare, argsig, (0, exchange, type, passive, durable, auto_delete, False, nowait, arguments), wait=None if nowait else spec.Exchange.DeclareOk, ) def exchange_delete(self, exchange, if_unused=False, nowait=False, argsig='Bsbb'): """Delete an exchange. This method deletes an exchange. When an exchange is deleted all queue bindings on the exchange are cancelled. PARAMETERS: exchange: shortstr RULE: The exchange MUST exist. Attempting to delete a non-existing exchange causes a channel exception. if_unused: boolean delete only if unused If set, the server will only delete the exchange if it has no queue bindings. If the exchange has queue bindings the server does not delete it but raises a channel exception instead. RULE: If set, the server SHOULD delete the exchange but only if it has no queue bindings. RULE: If set, the server SHOULD raise a channel exception if the exchange is in use. nowait: boolean do not send a reply method If set, the server will not respond to the method. The client should not wait for a reply method. If the server could not complete the method it will raise a channel or connection exception. """ return self.send_method( spec.Exchange.Delete, argsig, (0, exchange, if_unused, nowait), wait=None if nowait else spec.Exchange.DeleteOk, ) def exchange_bind(self, destination, source='', routing_key='', nowait=False, arguments=None, argsig='BsssbF'): """Bind an exchange to an exchange. RULE: A server MUST allow and ignore duplicate bindings - that is, two or more bind methods for a specific exchanges, with identical arguments - without treating these as an error. RULE: A server MUST allow cycles of exchange bindings to be created including allowing an exchange to be bound to itself. RULE: A server MUST not deliver the same message more than once to a destination exchange, even if the topology of exchanges and bindings results in multiple (even infinite) routes to that exchange. 
PARAMETERS: reserved-1: short destination: shortstr Specifies the name of the destination exchange to bind. RULE: A client MUST NOT be allowed to bind a non-existent destination exchange. RULE: The server MUST accept a blank exchange name to mean the default exchange. source: shortstr Specifies the name of the source exchange to bind. RULE: A client MUST NOT be allowed to bind a non-existent source exchange. RULE: The server MUST accept a blank exchange name to mean the default exchange. routing-key: shortstr Specifies the routing key for the binding. The routing key is used for routing messages depending on the exchange configuration. Not all exchanges use a routing key - refer to the specific exchange documentation. no-wait: bit arguments: table A set of arguments for the binding. The syntax and semantics of these arguments depends on the exchange class. """ return self.send_method( spec.Exchange.Bind, argsig, (0, destination, source, routing_key, nowait, arguments), wait=None if nowait else spec.Exchange.BindOk, ) def exchange_unbind(self, destination, source='', routing_key='', nowait=False, arguments=None, argsig='BsssbF'): """Unbind an exchange from an exchange. RULE: If an unbind fails, the server MUST raise a connection exception. PARAMETERS: reserved-1: short destination: shortstr Specifies the name of the destination exchange to unbind. RULE: The client MUST NOT attempt to unbind an exchange that does not exist from an exchange. RULE: The server MUST accept a blank exchange name to mean the default exchange. source: shortstr Specifies the name of the source exchange to unbind. RULE: The client MUST NOT attempt to unbind an exchange from an exchange that does not exist. RULE: The server MUST accept a blank exchange name to mean the default exchange. routing-key: shortstr Specifies the routing key of the binding to unbind. no-wait: bit arguments: table Specifies the arguments of the binding to unbind. 
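Example (a sketch, not part of the library: the helper name and exchange names are made up; the `channel` is assumed to be open)::

```python
def rewire_exchange(channel, destination, old_source, new_source,
                    routing_key=''):
    """Move `destination`'s upstream binding from `old_source` to `new_source`.

    The new bind is issued before the old unbind so messages keep
    flowing during the switch; the server allows and ignores a
    duplicate bind, so repeating this call is harmless.
    """
    channel.exchange_bind(destination, source=new_source,
                          routing_key=routing_key)
    channel.exchange_unbind(destination, source=old_source,
                            routing_key=routing_key)
```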
""" return self.send_method( spec.Exchange.Unbind, argsig, (0, destination, source, routing_key, nowait, arguments), wait=None if nowait else spec.Exchange.UnbindOk, ) ############# # # Queue # # # work with queues # # Queues store and forward messages. Queues can be configured in # the server or created at runtime. Queues must be attached to at # least one exchange in order to receive messages from publishers. # # GRAMMAR:: # # queue = C:DECLARE S:DECLARE-OK # / C:BIND S:BIND-OK # / C:PURGE S:PURGE-OK # / C:DELETE S:DELETE-OK # # RULE: # # A server MUST allow any content class to be sent to any # queue, in any mix, and queue and deliver these content # classes independently. Note that all methods that fetch # content off queues are specific to a given content class. # def queue_bind(self, queue, exchange='', routing_key='', nowait=False, arguments=None, argsig='BsssbF'): """Bind queue to an exchange. This method binds a queue to an exchange. Until a queue is bound it will not receive any messages. In a classic messaging model, store-and-forward queues are bound to a dest exchange and subscription queues are bound to a dest_wild exchange. RULE: A server MUST allow and ignore duplicate bindings - that is, two or more bind methods for a specific queue, with identical arguments - without treating these as an error. RULE: If a bind fails, the server MUST raise a connection exception. RULE: The server MUST NOT allow a durable queue to bind to a transient exchange. If the client attempts this the server MUST raise a channel exception. RULE: Bindings for durable queues are automatically durable and the server SHOULD restore such bindings after a server restart. RULE: The server SHOULD support at least 4 bindings per queue, and ideally, impose no limit except as defined by available resources. PARAMETERS: queue: shortstr Specifies the name of the queue to bind. If the queue name is empty, refers to the current queue for the channel, which is the last declared queue. 
RULE: If the client did not previously declare a queue, and the queue name in this method is empty, the server MUST raise a connection exception with reply code 530 (not allowed). RULE: If the queue does not exist the server MUST raise a channel exception with reply code 404 (not found). exchange: shortstr The name of the exchange to bind to. RULE: If the exchange does not exist the server MUST raise a channel exception with reply code 404 (not found). routing_key: shortstr message routing key Specifies the routing key for the binding. The routing key is used for routing messages depending on the exchange configuration. Not all exchanges use a routing key - refer to the specific exchange documentation. If the routing key is empty and the queue name is empty, the routing key will be the current queue for the channel, which is the last declared queue. nowait: boolean do not send a reply method If set, the server will not respond to the method. The client should not wait for a reply method. If the server could not complete the method it will raise a channel or connection exception. arguments: table arguments for binding A set of arguments for the binding. The syntax and semantics of these arguments depends on the exchange class. """ return self.send_method( spec.Queue.Bind, argsig, (0, queue, exchange, routing_key, nowait, arguments), wait=None if nowait else spec.Queue.BindOk, ) def queue_unbind(self, queue, exchange, routing_key='', nowait=False, arguments=None, argsig='BsssF'): """Unbind a queue from an exchange. This method unbinds a queue from an exchange. RULE: If an unbind fails, the server MUST raise a connection exception. PARAMETERS: queue: shortstr Specifies the name of the queue to unbind. RULE: The client MUST either specify a queue name or have previously declared a queue on the same channel. RULE: The client MUST NOT attempt to unbind a queue that does not exist. exchange: shortstr The name of the exchange to unbind from. 
RULE: The client MUST NOT attempt to unbind a queue from an exchange that does not exist. RULE: The server MUST accept a blank exchange name to mean the default exchange. routing_key: shortstr routing key of binding Specifies the routing key of the binding to unbind. arguments: table arguments of binding Specifies the arguments of the binding to unbind. """ return self.send_method( spec.Queue.Unbind, argsig, (0, queue, exchange, routing_key, arguments), wait=None if nowait else spec.Queue.UnbindOk, ) def queue_declare(self, queue='', passive=False, durable=False, exclusive=False, auto_delete=True, nowait=False, arguments=None, argsig='BsbbbbbF'): """Declare queue, create if needed. This method creates or checks a queue. When creating a new queue the client can specify various properties that control the durability of the queue and its contents, and the level of sharing for the queue. RULE: The server MUST create a default binding for a newly- created queue to the default exchange, which is an exchange of type 'direct'. RULE: The server SHOULD support a minimum of 256 queues per virtual host and ideally, impose no limit except as defined by available resources. PARAMETERS: queue: shortstr RULE: The queue name MAY be empty, in which case the server MUST create a new queue with a unique generated name and return this to the client in the Declare-Ok method. RULE: Queue names starting with "amq." are reserved for predeclared and standardised server queues. If the queue name starts with "amq." and the passive option is False, the server MUST raise a connection exception with reply code 403 (access refused). passive: boolean do not create queue If set, the server will not create the queue. The client can use this to check whether a queue exists without modifying the server state. RULE: If set, and the queue does not already exist, the server MUST respond with a reply code 404 (not found) and raise a channel exception. 
durable: boolean request a durable queue If set when creating a new queue, the queue will be marked as durable. Durable queues remain active when a server restarts. Non-durable queues (transient queues) are purged if/when a server restarts. Note that durable queues do not necessarily hold persistent messages, although it does not make sense to send persistent messages to a transient queue. RULE: The server MUST recreate the durable queue after a restart. RULE: The server MUST support both durable and transient queues. RULE: The server MUST ignore the durable field if the queue already exists. exclusive: boolean request an exclusive queue Exclusive queues may only be consumed from by the current connection. Setting the 'exclusive' flag always implies 'auto-delete'. RULE: The server MUST support both exclusive (private) and non-exclusive (shared) queues. RULE: The server MUST raise a channel exception if 'exclusive' is specified and the queue already exists and is owned by a different connection. auto_delete: boolean auto-delete queue when unused If set, the queue is deleted when all consumers have finished using it. Last consumer can be cancelled either explicitly or because its channel is closed. If there was no consumer ever on the queue, it won't be deleted. RULE: The server SHOULD allow for a reasonable delay between the point when it determines that a queue is not being used (or no longer used), and the point when it deletes the queue. At the least it must allow a client to create a queue and then create a consumer to read from it, with a small but non-zero delay between these two actions. The server should equally allow for clients that may be disconnected prematurely, and wish to re- consume from the same queue without losing messages. We would recommend a configurable timeout, with a suitable default value being one minute. RULE: The server MUST ignore the auto-delete field if the queue already exists. 
nowait: boolean do not send a reply method If set, the server will not respond to the method. The client should not wait for a reply method. If the server could not complete the method it will raise a channel or connection exception. arguments: table arguments for declaration A set of arguments for the declaration. The syntax and semantics of these arguments depends on the server implementation. This field is ignored if passive is True. Returns a tuple containing 3 items: the name of the queue (essential for automatically-named queues), message count and consumer count. """ self.send_method( spec.Queue.Declare, argsig, (0, queue, passive, durable, exclusive, auto_delete, nowait, arguments), ) if not nowait: return queue_declare_ok_t(*self.wait( spec.Queue.DeclareOk, returns_tuple=True, )) def queue_delete(self, queue='', if_unused=False, if_empty=False, nowait=False, argsig='Bsbbb'): """Delete a queue. This method deletes a queue. When a queue is deleted any pending messages are sent to a dead-letter queue if this is defined in the server configuration, and all consumers on the queue are cancelled. RULE: The server SHOULD use a dead-letter queue to hold messages that were pending on a deleted queue, and MAY provide facilities for a system administrator to move these messages back to an active queue. PARAMETERS: queue: shortstr Specifies the name of the queue to delete. If the queue name is empty, refers to the current queue for the channel, which is the last declared queue. RULE: If the client did not previously declare a queue, and the queue name in this method is empty, the server MUST raise a connection exception with reply code 530 (not allowed). RULE: The queue must exist. Attempting to delete a non-existing queue causes a channel exception. if_unused: boolean delete only if unused If set, the server will only delete the queue if it has no consumers. If the queue has consumers the server does not delete it but raises a channel exception instead. 
RULE: The server MUST respect the if-unused flag when deleting a queue. if_empty: boolean delete only if empty If set, the server will only delete the queue if it has no messages. If the queue is not empty the server raises a channel exception. nowait: boolean do not send a reply method If set, the server will not respond to the method. The client should not wait for a reply method. If the server could not complete the method it will raise a channel or connection exception. If nowait is False, returns the number of deleted messages. """ return self.send_method( spec.Queue.Delete, argsig, (0, queue, if_unused, if_empty, nowait), wait=None if nowait else spec.Queue.DeleteOk, ) def queue_purge(self, queue='', nowait=False, argsig='Bsb'): """Purge a queue. This method removes all messages from a queue. It does not cancel consumers. Purged messages are deleted without any formal "undo" mechanism. RULE: A call to purge MUST result in an empty queue. RULE: On transacted channels the server MUST not purge messages that have already been sent to a client but not yet acknowledged. RULE: The server MAY implement a purge queue or log that allows system administrators to recover accidentally-purged messages. The server SHOULD NOT keep purged messages in the same storage spaces as the live messages since the volumes of purged messages may get very large. PARAMETERS: queue: shortstr Specifies the name of the queue to purge. If the queue name is empty, refers to the current queue for the channel, which is the last declared queue. RULE: If the client did not previously declare a queue, and the queue name in this method is empty, the server MUST raise a connection exception with reply code 530 (not allowed). RULE: The queue must exist. Attempting to purge a non- existing queue causes a channel exception. nowait: boolean do not send a reply method If set, the server will not respond to the method. The client should not wait for a reply method. 
If the server could not complete the method it will raise a channel or connection exception. If nowait is False, returns a number of purged messages. """ return self.send_method( spec.Queue.Purge, argsig, (0, queue, nowait), wait=None if nowait else spec.Queue.PurgeOk, ) ############# # # Basic # # # work with basic content # # The Basic class provides methods that support an industry- # standard messaging model. # # GRAMMAR:: # # basic = C:QOS S:QOS-OK # / C:CONSUME S:CONSUME-OK # / C:CANCEL S:CANCEL-OK # / C:PUBLISH content # / S:RETURN content # / S:DELIVER content # / C:GET ( S:GET-OK content / S:GET-EMPTY ) # / C:ACK # / C:REJECT # # RULE: # # The server SHOULD respect the persistent property of basic # messages and SHOULD make a best-effort to hold persistent # basic messages on a reliable storage mechanism. # # RULE: # # The server MUST NOT discard a persistent basic message in # case of a queue overflow. The server MAY use the # Channel.Flow method to slow or stop a basic message # publisher when necessary. # # RULE: # # The server MAY overflow non-persistent basic messages to # persistent storage and MAY discard or dead-letter non- # persistent basic messages on a priority basis if the queue # size exceeds some configured limit. # # RULE: # # The server MUST implement at least 2 priority levels for # basic messages, where priorities 0-4 and 5-9 are treated as # two distinct levels. The server MAY implement up to 10 # priority levels. # # RULE: # # The server MUST deliver messages of the same priority in # order irrespective of their individual persistence. # # RULE: # # The server MUST support both automatic and explicit # acknowledgments on Basic content. # def basic_ack(self, delivery_tag, multiple=False, argsig='Lb'): """Acknowledge one or more messages. This method acknowledges one or more messages delivered via the Deliver or Get-Ok methods. The client can ask to confirm a single message or a set of messages up to and including a specific message. 
PARAMETERS: delivery_tag: longlong server-assigned delivery tag The server-assigned and channel-specific delivery tag RULE: The delivery tag is valid only within the channel from which the message was received. I.e. a client MUST NOT receive a message on one channel and then acknowledge it on another. RULE: The server MUST NOT use a zero value for delivery tags. Zero is reserved for client use, meaning "all messages so far received". multiple: boolean acknowledge multiple messages If set to True, the delivery tag is treated as "up to and including", so that the client can acknowledge multiple messages with a single method. If set to False, the delivery tag refers to a single message. If the multiple field is True, and the delivery tag is zero, tells the server to acknowledge all outstanding messages. RULE: The server MUST validate that a non-zero delivery- tag refers to an delivered message, and raise a channel exception if this is not the case. """ return self.send_method( spec.Basic.Ack, argsig, (delivery_tag, multiple), ) def basic_cancel(self, consumer_tag, nowait=False, argsig='sb'): """End a queue consumer. This method cancels a consumer. This does not affect already delivered messages, but it does mean the server will not send any more messages for that consumer. The client may receive an arbitrary number of messages in between sending the cancel method and receiving the cancel-ok reply. RULE: If the queue no longer exists when the client sends a cancel command, or the consumer has been cancelled for other reasons, this command has no effect. PARAMETERS: consumer_tag: shortstr consumer tag Identifier for the consumer, valid within the current connection. RULE: The consumer tag is valid only within the channel from which the consumer was created. I.e. a client MUST NOT create a consumer in one channel and then use it in another. nowait: boolean do not send a reply method If set, the server will not respond to the method. 
The client should not wait for a reply method. If the server could not complete the method it will raise a channel or connection exception. """ if self.connection is not None: self.no_ack_consumers.discard(consumer_tag) return self.send_method( spec.Basic.Cancel, argsig, (consumer_tag, nowait), wait=None if nowait else spec.Basic.CancelOk, ) def _on_basic_cancel(self, consumer_tag): """Consumer cancelled by server. Most likely the queue was deleted. """ callback = self._remove_tag(consumer_tag) if callback: callback(consumer_tag) else: raise ConsumerCancelled(consumer_tag, spec.Basic.Cancel) def _on_basic_cancel_ok(self, consumer_tag): self._remove_tag(consumer_tag) def _remove_tag(self, consumer_tag): self.callbacks.pop(consumer_tag, None) return self.cancel_callbacks.pop(consumer_tag, None) def basic_consume(self, queue='', consumer_tag='', no_local=False, no_ack=False, exclusive=False, nowait=False, callback=None, arguments=None, on_cancel=None, argsig='BssbbbbF'): """Start a queue consumer. This method asks the server to start a "consumer", which is a transient request for messages from a specific queue. Consumers last as long as the channel they were created on, or until the client cancels them. RULE: The server SHOULD support at least 16 consumers per queue, unless the queue was declared as private, and ideally, impose no limit except as defined by available resources. PARAMETERS: queue: shortstr Specifies the name of the queue to consume from. If the queue name is null, refers to the current queue for the channel, which is the last declared queue. RULE: If the client did not previously declare a queue, and the queue name in this method is empty, the server MUST raise a connection exception with reply code 530 (not allowed). consumer_tag: shortstr Specifies the identifier for the consumer. The consumer tag is local to a connection, so two clients can use the same consumer tags. If this field is empty the server will generate a unique tag. 
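The tag bookkeeping in ``_on_basic_cancel`` and ``_remove_tag`` above can be sketched standalone: a server-initiated cancel pops the consumer's delivery callback and, if the application registered an ``on_cancel`` handler, invokes it; otherwise an error is raised. Class and method names below are illustrative stand-ins, not the real Channel API.

```python
# Sketch of the consumer-tag bookkeeping around basic_cancel above.
class ConsumerCancelled(Exception):
    """Server cancelled a consumer with no on_cancel handler registered."""

class TagRegistry:
    def __init__(self):
        self.callbacks = {}          # consumer_tag -> message callback
        self.cancel_callbacks = {}   # consumer_tag -> on_cancel callback

    def remove_tag(self, consumer_tag):
        self.callbacks.pop(consumer_tag, None)
        return self.cancel_callbacks.pop(consumer_tag, None)

    def on_server_cancel(self, consumer_tag):
        on_cancel = self.remove_tag(consumer_tag)
        if on_cancel:
            on_cancel(consumer_tag)   # notify the application
        else:
            raise ConsumerCancelled(consumer_tag)
```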
RULE: The tag MUST NOT refer to an existing consumer. If the client attempts to create two consumers with the same non-empty tag the server MUST raise a connection exception with reply code 530 (not allowed). no_local: boolean do not deliver own messages If the no-local field is set the server will not send messages to the client that published them. no_ack: boolean no acknowledgment needed If this field is set the server does not expect acknowledgments for messages. That is, when a message is delivered to the client the server automatically and silently acknowledges it on behalf of the client. This functionality increases performance but at the cost of reliability. Messages can get lost if a client dies before it can deliver them to the application. exclusive: boolean request exclusive access Request exclusive consumer access, meaning only this consumer can access the queue. RULE: If the server cannot grant exclusive access to the queue when asked, - because there are other consumers active - it MUST raise a channel exception with return code 403 (access refused). nowait: boolean do not send a reply method If set, the server will not respond to the method. The client should not wait for a reply method. If the server could not complete the method it will raise a channel or connection exception. callback: Python callable function/method called with each delivered message For each message delivered by the broker, the callable will be called with a Message object as the single argument. If no callable is specified, messages are quietly discarded, no_ack should probably be set to True in that case. """ p = self.send_method( spec.Basic.Consume, argsig, ( 0, queue, consumer_tag, no_local, no_ack, exclusive, nowait, arguments ), wait=None if nowait else spec.Basic.ConsumeOk, returns_tuple=True ) if not nowait: # send_method() returns (consumer_tag,) tuple. 
# consumer_tag is returned by broker using following rules: # * consumer_tag is not specified by client, random one # is generated by Broker # * consumer_tag is provided by client, the same one # is returned by broker consumer_tag = p[0] elif nowait and not consumer_tag: raise ValueError( 'Consumer tag must be specified when nowait is True' ) self.callbacks[consumer_tag] = callback if on_cancel: self.cancel_callbacks[consumer_tag] = on_cancel if no_ack: self.no_ack_consumers.add(consumer_tag) if not nowait: return consumer_tag else: return p def _on_basic_deliver(self, consumer_tag, delivery_tag, redelivered, exchange, routing_key, msg): msg.channel = self msg.delivery_info = { 'consumer_tag': consumer_tag, 'delivery_tag': delivery_tag, 'redelivered': redelivered, 'exchange': exchange, 'routing_key': routing_key, } try: fun = self.callbacks[consumer_tag] except KeyError: AMQP_LOGGER.warning( REJECTED_MESSAGE_WITHOUT_CALLBACK, delivery_tag, consumer_tag, exchange, routing_key, ) self.basic_reject(delivery_tag, requeue=True) else: fun(msg) def basic_get(self, queue='', no_ack=False, argsig='Bsb'): """Direct access to a queue. This method provides a direct access to the messages in a queue using a synchronous dialogue that is designed for specific types of application where synchronous functionality is more important than performance. PARAMETERS: queue: shortstr Specifies the name of the queue to consume from. If the queue name is null, refers to the current queue for the channel, which is the last declared queue. RULE: If the client did not previously declare a queue, and the queue name in this method is empty, the server MUST raise a connection exception with reply code 530 (not allowed). no_ack: boolean no acknowledgment needed If this field is set the server does not expect acknowledgments for messages. That is, when a message is delivered to the client the server automatically and silently acknowledges it on behalf of the client. 
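The dispatch logic in ``_on_basic_deliver`` above follows a simple rule: look up the consumer's callback by tag, and if none is registered, reject the message with ``requeue=True`` so it is not silently lost. A minimal sketch (function and parameter names here are stand-ins, not the real Channel API):

```python
# Illustrative dispatch mirroring _on_basic_deliver's fallback behaviour.
def dispatch(callbacks, consumer_tag, msg, rejected):
    """Deliver msg to its consumer callback, or record a requeue-reject."""
    fun = callbacks.get(consumer_tag)
    if fun is None:
        rejected.append((msg, True))   # basic_reject(..., requeue=True)
        return False
    fun(msg)
    return True
```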
This functionality increases performance but at the cost of reliability. Messages can get lost if a client dies before it can deliver them to the application. Non-blocking, returns an amqp.basic_message.Message object, or None if the queue is empty. """ ret = self.send_method( spec.Basic.Get, argsig, (0, queue, no_ack), wait=[spec.Basic.GetOk, spec.Basic.GetEmpty], returns_tuple=True, ) if not ret or len(ret) < 2: return self._on_get_empty(*ret) return self._on_get_ok(*ret) def _on_get_empty(self, cluster_id=None): pass def _on_get_ok(self, delivery_tag, redelivered, exchange, routing_key, message_count, msg): msg.channel = self msg.delivery_info = { 'delivery_tag': delivery_tag, 'redelivered': redelivered, 'exchange': exchange, 'routing_key': routing_key, 'message_count': message_count } return msg def _basic_publish(self, msg, exchange='', routing_key='', mandatory=False, immediate=False, timeout=None, confirm_timeout=None, argsig='Bssbb'): """Publish a message. This method publishes a message to a specific exchange. The message will be routed to queues as defined by the exchange configuration and distributed to any active consumers when the transaction, if any, is committed. When the channel is in confirm mode (when the Connection parameter confirm_publish is set to True), each message is confirmed. When the broker rejects a published message (e.g. due to internal broker constraints), a MessageNacked exception is raised; set confirm_timeout to wait at most confirm_timeout seconds for the message to be confirmed. PARAMETERS: exchange: shortstr Specifies the name of the exchange to publish to. The exchange name can be empty, meaning the default exchange. If the exchange name is specified, and that exchange does not exist, the server will raise a channel exception. RULE: The server MUST accept a blank exchange name to mean the default exchange. RULE: The exchange MAY refuse basic content in which case it MUST raise a channel exception with reply code 540 (not implemented).
routing_key: shortstr Message routing key Specifies the routing key for the message. The routing key is used for routing messages depending on the exchange configuration. mandatory: boolean indicate mandatory routing This flag tells the server how to react if the message cannot be routed to a queue. If this flag is True, the server will return an unroutable message with a Return method. If this flag is False, the server silently drops the message. RULE: The server SHOULD implement the mandatory flag. immediate: boolean request immediate delivery This flag tells the server how to react if the message cannot be routed to a queue consumer immediately. If this flag is set, the server will return an undeliverable message with a Return method. If this flag is zero, the server will queue the message, but with no guarantee that it will ever be consumed. RULE: The server SHOULD implement the immediate flag. timeout: short timeout for publish Set timeout to wait at most timeout seconds for the message to be published. confirm_timeout: short confirm_timeout for publish in confirm mode When the channel is in confirm mode, set confirm_timeout to wait at most confirm_timeout seconds for the message to be confirmed. """ if not self.connection: raise RecoverableConnectionError( 'basic_publish: connection closed') capabilities = self.connection.
\ client_properties.get('capabilities', {}) if capabilities.get('connection.blocked', False): try: # Check if an event was sent, such as the out of memory message self.connection.drain_events(timeout=0) except socket.timeout: pass try: with self.connection.transport.having_timeout(timeout): return self.send_method( spec.Basic.Publish, argsig, (0, exchange, routing_key, mandatory, immediate), msg ) except socket.timeout: raise RecoverableChannelError('basic_publish: timed out') basic_publish = _basic_publish def basic_publish_confirm(self, *args, **kwargs): confirm_timeout = kwargs.pop('confirm_timeout', None) def confirm_handler(method, *args): # When RMQ nacks message we are raising MessageNacked exception if method == spec.Basic.Nack: raise MessageNacked() if not self._confirm_selected: self._confirm_selected = True self.confirm_select() ret = self._basic_publish(*args, **kwargs) # Waiting for confirmation of message. timeout = confirm_timeout or kwargs.get('timeout', None) self.wait([spec.Basic.Ack, spec.Basic.Nack], callback=confirm_handler, timeout=timeout) return ret def basic_qos(self, prefetch_size, prefetch_count, a_global, argsig='lBb'): """Specify quality of service. This method requests a specific quality of service. The QoS can be specified for the current channel or for all channels on the connection. The particular properties and semantics of a qos method always depend on the content class semantics. Though the qos method could in principle apply to both peers, it is currently meaningful only for the server. PARAMETERS: prefetch_size: long prefetch window in octets The client can request that messages be sent in advance so that when the client finishes processing a message, the following message is already held locally, rather than needing to be sent down the channel. Prefetching gives a performance improvement. This field specifies the prefetch window size in octets. 
The server will send a message in advance if it is equal to or smaller in size than the available prefetch size (and also falls into other prefetch limits). May be set to zero, meaning "no specific limit", although other prefetch limits may still apply. The prefetch-size is ignored if the no-ack option is set. RULE: The server MUST ignore this setting when the client is not processing any messages - i.e. the prefetch size does not limit the transfer of single messages to a client, only the sending in advance of more messages while the client still has one or more unacknowledged messages. prefetch_count: short prefetch window in messages Specifies a prefetch window in terms of whole messages. This field may be used in combination with the prefetch-size field; a message will only be sent in advance if both prefetch windows (and those at the channel and connection level) allow it. The prefetch- count is ignored if the no-ack option is set. RULE: The server MAY send less data in advance than allowed by the client's specified prefetch windows but it MUST NOT send more. a_global: boolean Defines a scope of QoS. Semantics of this parameter differs between AMQP 0-9-1 standard and RabbitMQ broker: MEANING IN AMQP 0-9-1: False: shared across all consumers on the channel True: shared across all consumers on the connection MEANING IN RABBITMQ: False: applied separately to each new consumer on the channel True: shared across all consumers on the channel """ return self.send_method( spec.Basic.Qos, argsig, (prefetch_size, prefetch_count, a_global), wait=spec.Basic.QosOk, ) def basic_recover(self, requeue=False): """Redeliver unacknowledged messages. This method asks the broker to redeliver all unacknowledged messages on a specified channel. Zero or more messages may be redelivered. This method is only allowed on non-transacted channels. RULE: The server MUST set the redelivered flag on all messages that are resent. 
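The ``prefetch_count`` window described above can be reduced to a single predicate: the server may send another message in advance only while the consumer's number of unacknowledged messages is below the window, with zero meaning "no specific limit". This is a toy model, not the broker's actual implementation.

```python
# Toy model of the basic_qos prefetch_count window.
def deliverable(unacked_count, prefetch_count):
    """True if the server may send one more message in advance."""
    if prefetch_count == 0:     # zero means "no specific limit"
        return True
    return unacked_count < prefetch_count
```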
RULE: The server MUST raise a channel exception if this is called on a transacted channel. PARAMETERS: requeue: boolean requeue the message If this field is False, the message will be redelivered to the original recipient. If this field is True, the server will attempt to requeue the message, potentially then delivering it to an alternative subscriber. """ return self.send_method(spec.Basic.Recover, 'b', (requeue,)) def basic_recover_async(self, requeue=False): return self.send_method(spec.Basic.RecoverAsync, 'b', (requeue,)) def basic_reject(self, delivery_tag, requeue, argsig='Lb'): """Reject an incoming message. This method allows a client to reject a message. It can be used to interrupt and cancel large incoming messages, or return untreatable messages to their original queue. RULE: The server SHOULD be capable of accepting and processing the Reject method while sending message content with a Deliver or Get-Ok method. I.e. the server should read and process incoming methods while sending output frames. To cancel partially-sent content, the server sends a content body frame of size 1 (i.e. with no data except the frame-end octet). RULE: The server SHOULD interpret this method as meaning that the client is unable to process the message at this time. RULE: A client MUST NOT use this method as a means of selecting messages to process. A rejected message MAY be discarded or dead-lettered, not necessarily passed to another client. PARAMETERS: delivery_tag: longlong server-assigned delivery tag The server-assigned and channel-specific delivery tag RULE: The delivery tag is valid only within the channel from which the message was received. I.e. a client MUST NOT receive a message on one channel and then acknowledge it on another. RULE: The server MUST NOT use a zero value for delivery tags. Zero is reserved for client use, meaning "all messages so far received". requeue: boolean requeue the message If this field is False, the message will be discarded.
If this field is True, the server will attempt to requeue the message. RULE: The server MUST NOT deliver the message to the same client within the context of the current channel. The recommended strategy is to attempt to deliver the message to an alternative consumer, and if that is not possible, to move the message to a dead-letter queue. The server MAY use more sophisticated tracking to hold the message on the queue and redeliver it to the same client at a later stage. """ return self.send_method( spec.Basic.Reject, argsig, (delivery_tag, requeue), ) def _on_basic_return(self, reply_code, reply_text, exchange, routing_key, message): """Return a failed message. This method returns an undeliverable message that was published with the "immediate" flag set, or an unroutable message published with the "mandatory" flag set. The reply code and text provide information about the reason that the message was undeliverable. PARAMETERS: reply_code: short The reply code. The AMQ reply codes are defined in AMQ RFC 011. reply_text: shortstr The localised reply text. This text can be logged as an aid to resolving issues. exchange: shortstr Specifies the name of the exchange that the message was originally published to. routing_key: shortstr Message routing key Specifies the routing key name specified when the message was published. """ exc = error_for_code( reply_code, reply_text, spec.Basic.Return, ChannelError, ) handlers = self.events.get('basic_return') if not handlers: raise exc for callback in handlers: callback(exc, exchange, routing_key, message) ############# # # Tx # # # work with standard transactions # # Standard transactions provide so-called "1.5 phase commit". We # can ensure that work is never lost, but there is a chance of # confirmations being lost, so that messages may be resent. # Applications that use standard transactions must be able to # detect and ignore duplicate messages. 
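The Tx comment above notes that clients using standard ("1.5 phase commit") transactions must detect and ignore duplicates, since confirmations can be lost and messages resent. A common approach is an idempotent consumer that tracks processed message ids; this is a sketch using a hypothetical message-id key, not part of the py-amqp API.

```python
# Idempotent-consumer sketch for duplicate detection under
# standard transactions (msg_id is a hypothetical dedup key).
def process_once(msg_id, seen, handler):
    """Invoke handler only the first time msg_id is observed."""
    if msg_id in seen:
        return False     # duplicate redelivery: ignore, still safe to ack
    seen.add(msg_id)
    handler(msg_id)
    return True
```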
# # GRAMMAR:: # # tx = C:SELECT S:SELECT-OK # / C:COMMIT S:COMMIT-OK # / C:ROLLBACK S:ROLLBACK-OK # # RULE: # # A client using standard transactions SHOULD be able to # track all messages received within a reasonable period, and # thus detect and reject duplicates of the same message. It # SHOULD NOT pass these to the application layer. # # def tx_commit(self): """Commit the current transaction. This method commits all messages published and acknowledged in the current transaction. A new transaction starts immediately after a commit. """ return self.send_method(spec.Tx.Commit, wait=spec.Tx.CommitOk) def tx_rollback(self): """Abandon the current transaction. This method abandons all messages published and acknowledged in the current transaction. A new transaction starts immediately after a rollback. """ return self.send_method(spec.Tx.Rollback, wait=spec.Tx.RollbackOk) def tx_select(self): """Select standard transaction mode. This method sets the channel to use standard transactions. The client must use this method at least once on a channel before using the Commit or Rollback methods. """ return self.send_method(spec.Tx.Select, wait=spec.Tx.SelectOk) def confirm_select(self, nowait=False): """Enable publisher confirms for this channel. Note: This is a RabbitMQ extension. Can now be used if the channel is in transactional mode. :param nowait: If set, the server will not respond to the method. The client should not wait for a reply method. If the server could not complete the method it will raise a channel or connection exception.
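Once publisher confirms are enabled, ``basic_publish_confirm`` (defined earlier in this class) waits for a Basic.Ack or Basic.Nack and surfaces a Nack as ``MessageNacked``. The core decision can be sketched standalone; the method-signature tuples follow the AMQP 0-9-1 class/method numbering (Basic is class 60; ack is 80, nack is 120), and the handler shape is illustrative rather than py-amqp's exact internals.

```python
# Sketch of the confirm-mode decision: a Nack from the broker becomes
# an exception; an Ack passes silently.
class MessageNacked(Exception):
    """Broker rejected the published message."""

BASIC_ACK = (60, 80)     # basic.ack  (class-id, method-id)
BASIC_NACK = (60, 120)   # basic.nack (RabbitMQ extension)

def confirm_handler(method):
    if method == BASIC_NACK:
        raise MessageNacked()
```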
""" return self.send_method( spec.Confirm.Select, 'b', (nowait,), wait=None if nowait else spec.Confirm.SelectOk, ) def _on_basic_ack(self, delivery_tag, multiple): for callback in self.events['basic_ack']: callback(delivery_tag, multiple) def _on_basic_nack(self, delivery_tag, multiple): for callback in self.events['basic_nack']: callback(delivery_tag, multiple) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1716998002.0 amqp-5.3.1/amqp/connection.py0000644000076500000240000006562514625647562015442 0ustar00nusnusstaff"""AMQP Connections.""" # Copyright (C) 2007-2008 Barry Pederson import logging import socket import uuid import warnings from array import array from time import monotonic from vine import ensure_promise from . import __version__, sasl, spec from .abstract_channel import AbstractChannel from .channel import Channel from .exceptions import (AMQPDeprecationWarning, ChannelError, ConnectionError, ConnectionForced, MessageNacked, RecoverableChannelError, RecoverableConnectionError, ResourceError, error_for_code) from .method_framing import frame_handler, frame_writer from .transport import Transport try: from ssl import SSLError except ImportError: # pragma: no cover class SSLError(Exception): # noqa pass W_FORCE_CONNECT = """\ The .{attr} attribute on the connection was accessed before the connection was established. This is supported for now, but will be deprecated in amqp 2.2.0. Since amqp 2.0 you have to explicitly call Connection.connect() before using the connection. 
""" START_DEBUG_FMT = """ Start from server, version: %d.%d, properties: %s, mechanisms: %s, locales: %s """.strip() __all__ = ('Connection',) AMQP_LOGGER = logging.getLogger('amqp') AMQP_HEARTBEAT_LOGGER = logging.getLogger( 'amqp.connection.Connection.heartbeat_tick' ) #: Default map for :attr:`Connection.library_properties` LIBRARY_PROPERTIES = { 'product': 'py-amqp', 'product_version': __version__, } #: Default map for :attr:`Connection.negotiate_capabilities` NEGOTIATE_CAPABILITIES = { 'consumer_cancel_notify': True, 'connection.blocked': True, 'authentication_failure_close': True, } class Connection(AbstractChannel): """AMQP Connection. The connection class provides methods for a client to establish a network connection to a server, and for both peers to operate the connection thereafter. GRAMMAR:: connection = open-connection *use-connection close-connection open-connection = C:protocol-header S:START C:START-OK *challenge S:TUNE C:TUNE-OK C:OPEN S:OPEN-OK challenge = S:SECURE C:SECURE-OK use-connection = *channel close-connection = C:CLOSE S:CLOSE-OK / S:CLOSE C:CLOSE-OK Create a connection to the specified host, which should be a 'host[:port]', such as 'localhost', or '1.2.3.4:5672' (defaults to 'localhost', if a port is not specified then 5672 is used) Authentication can be controlled by passing one or more `amqp.sasl.SASL` instances as the `authentication` parameter, or setting the `login_method` string to one of the supported methods: 'GSSAPI', 'EXTERNAL', 'AMQPLAIN', or 'PLAIN'. Otherwise authentication will be performed using any supported method preferred by the server. Userid and passwords apply to AMQPLAIN and PLAIN authentication, whereas on GSSAPI only userid will be used as the client name. For EXTERNAL authentication both userid and password are ignored. The 'ssl' parameter may be simply True/False, or a dictionary of options to pass to :class:`ssl.SSLContext` such as requiring certain certificates. 
For details, refer to the ``ssl`` parameter of :class:`~amqp.transport.SSLTransport`. The "socket_settings" parameter is a dictionary defining TCP settings which will be applied as socket options. When "confirm_publish" is set to True, the channel is put into confirm mode. In this mode, each published message is confirmed using the publisher confirms RabbitMQ extension. """ Channel = Channel #: Mapping of protocol extensions to enable. #: The server will report these in server_properties[capabilities], #: and if a key in this map is present the client will tell the #: server to either enable or disable the capability depending #: on the value set in this map. #: For example with: #: negotiate_capabilities = { #: 'consumer_cancel_notify': True, #: } #: The client will enable this capability if the server reports #: support for it, but if the value is False the client will #: disable the capability. negotiate_capabilities = NEGOTIATE_CAPABILITIES #: These are sent to the server to announce what features #: we support, type of client etc. library_properties = LIBRARY_PROPERTIES #: Final heartbeat interval value (in float seconds) after negotiation heartbeat = None #: Original heartbeat interval value proposed by client. client_heartbeat = None #: Original heartbeat interval proposed by server. server_heartbeat = None #: Time of last heartbeat sent (in monotonic time, if available). last_heartbeat_sent = 0 #: Time of last heartbeat received (in monotonic time, if available). last_heartbeat_received = 0 #: Number of successful writes to socket. bytes_sent = 0 #: Number of successful reads from socket. bytes_recv = 0 #: Number of bytes sent to socket at the last heartbeat check. prev_sent = None #: Number of bytes received from socket at the last heartbeat check.
prev_recv = None _METHODS = { spec.method(spec.Connection.Start, 'ooFSS'), spec.method(spec.Connection.OpenOk), spec.method(spec.Connection.Secure, 's'), spec.method(spec.Connection.Tune, 'BlB'), spec.method(spec.Connection.Close, 'BsBB'), spec.method(spec.Connection.Blocked), spec.method(spec.Connection.Unblocked), spec.method(spec.Connection.CloseOk), } _METHODS = {m.method_sig: m for m in _METHODS} _ALLOWED_METHODS_WHEN_CLOSING = ( spec.Connection.Close, spec.Connection.CloseOk ) connection_errors = ( ConnectionError, socket.error, IOError, OSError, ) channel_errors = (ChannelError,) recoverable_connection_errors = ( RecoverableConnectionError, MessageNacked, socket.error, IOError, OSError, ) recoverable_channel_errors = ( RecoverableChannelError, ) def __init__(self, host='localhost:5672', userid='guest', password='guest', login_method=None, login_response=None, authentication=(), virtual_host='/', locale='en_US', client_properties=None, ssl=False, connect_timeout=None, channel_max=None, frame_max=None, heartbeat=0, on_open=None, on_blocked=None, on_unblocked=None, confirm_publish=False, on_tune_ok=None, read_timeout=None, write_timeout=None, socket_settings=None, frame_handler=frame_handler, frame_writer=frame_writer, **kwargs): self._connection_id = uuid.uuid4().hex channel_max = channel_max or 65535 frame_max = frame_max or 131072 if authentication: if isinstance(authentication, sasl.SASL): authentication = (authentication,) self.authentication = authentication elif login_method is not None: if login_method == 'GSSAPI': auth = sasl.GSSAPI(userid) elif login_method == 'EXTERNAL': auth = sasl.EXTERNAL() elif login_method == 'AMQPLAIN': if userid is None or password is None: raise ValueError( "Must supply authentication or userid/password") auth = sasl.AMQPLAIN(userid, password) elif login_method == 'PLAIN': if userid is None or password is None: raise ValueError( "Must supply authentication or userid/password") auth = sasl.PLAIN(userid, password) elif 
login_response is not None: auth = sasl.RAW(login_method, login_response) else: raise ValueError("Invalid login method", login_method) self.authentication = (auth,) else: self.authentication = (sasl.GSSAPI(userid, fail_soft=True), sasl.EXTERNAL(), sasl.AMQPLAIN(userid, password), sasl.PLAIN(userid, password)) self.client_properties = dict( self.library_properties, **client_properties or {} ) self.locale = locale self.host = host self.virtual_host = virtual_host self.on_tune_ok = ensure_promise(on_tune_ok) self.frame_handler_cls = frame_handler self.frame_writer_cls = frame_writer self._handshake_complete = False self.channels = {} # The connection object itself is treated as channel 0 super().__init__(self, 0) self._frame_writer = None self._on_inbound_frame = None self._transport = None # Properties set in the Tune method self.channel_max = channel_max self.frame_max = frame_max self.client_heartbeat = heartbeat self.confirm_publish = confirm_publish self.ssl = ssl self.read_timeout = read_timeout self.write_timeout = write_timeout self.socket_settings = socket_settings # Callbacks self.on_blocked = on_blocked self.on_unblocked = on_unblocked self.on_open = ensure_promise(on_open) self._used_channel_ids = array('H') # Properties set in the Start method self.version_major = 0 self.version_minor = 0 self.server_properties = {} self.mechanisms = [] self.locales = [] self.connect_timeout = connect_timeout def __repr__(self): if self._transport: return f'' else: return f'' def __enter__(self): self.connect() return self def __exit__(self, *eargs): self.close() def then(self, on_success, on_error=None): return self.on_open.then(on_success, on_error) def _setup_listeners(self): self._callbacks.update({ spec.Connection.Start: self._on_start, spec.Connection.OpenOk: self._on_open_ok, spec.Connection.Secure: self._on_secure, spec.Connection.Tune: self._on_tune, spec.Connection.Close: self._on_close, spec.Connection.Blocked: self._on_blocked, spec.Connection.Unblocked: 
self._on_unblocked, spec.Connection.CloseOk: self._on_close_ok, }) def connect(self, callback=None): # Let the transport.py module setup the actual # socket connection to the broker. # if self.connected: return callback() if callback else None try: self.transport = self.Transport( self.host, self.connect_timeout, self.ssl, self.read_timeout, self.write_timeout, socket_settings=self.socket_settings, ) self.transport.connect() self.on_inbound_frame = self.frame_handler_cls( self, self.on_inbound_method) self.frame_writer = self.frame_writer_cls(self, self.transport) while not self._handshake_complete: self.drain_events(timeout=self.connect_timeout) except (OSError, SSLError): self.collect() raise def _warn_force_connect(self, attr): warnings.warn(AMQPDeprecationWarning( W_FORCE_CONNECT.format(attr=attr))) @property def transport(self): if self._transport is None: self._warn_force_connect('transport') self.connect() return self._transport @transport.setter def transport(self, transport): self._transport = transport @property def on_inbound_frame(self): if self._on_inbound_frame is None: self._warn_force_connect('on_inbound_frame') self.connect() return self._on_inbound_frame @on_inbound_frame.setter def on_inbound_frame(self, on_inbound_frame): self._on_inbound_frame = on_inbound_frame @property def frame_writer(self): if self._frame_writer is None: self._warn_force_connect('frame_writer') self.connect() return self._frame_writer @frame_writer.setter def frame_writer(self, frame_writer): self._frame_writer = frame_writer def _on_start(self, version_major, version_minor, server_properties, mechanisms, locales, argsig='FsSs'): client_properties = self.client_properties self.version_major = version_major self.version_minor = version_minor self.server_properties = server_properties if isinstance(mechanisms, str): mechanisms = mechanisms.encode('utf-8') self.mechanisms = mechanisms.split(b' ') self.locales = locales.split(' ') AMQP_LOGGER.debug( START_DEBUG_FMT, 
self.version_major, self.version_minor, self.server_properties, self.mechanisms, self.locales, ) # Negotiate protocol extensions (capabilities) scap = server_properties.get('capabilities') or {} cap = client_properties.setdefault('capabilities', {}) cap.update({ wanted_cap: enable_cap for wanted_cap, enable_cap in self.negotiate_capabilities.items() if scap.get(wanted_cap) }) if not cap: # no capabilities, server may not react well to having # this key present in client_properties, so we remove it. client_properties.pop('capabilities', None) for authentication in self.authentication: if authentication.mechanism in self.mechanisms: login_response = authentication.start(self) if login_response is not NotImplemented: break else: raise ConnectionError( "Couldn't find appropriate auth mechanism " "(can offer: {}; available: {})".format( b", ".join(m.mechanism for m in self.authentication if m.mechanism).decode(), b", ".join(self.mechanisms).decode())) self.send_method( spec.Connection.StartOk, argsig, (client_properties, authentication.mechanism, login_response, self.locale), ) def _on_secure(self, challenge): pass def _on_tune(self, channel_max, frame_max, server_heartbeat, argsig='BlB'): client_heartbeat = self.client_heartbeat or 0 self.channel_max = channel_max or self.channel_max self.frame_max = frame_max or self.frame_max self.server_heartbeat = server_heartbeat or 0 # negotiate the heartbeat interval to the smaller of the # specified values if self.server_heartbeat == 0 or client_heartbeat == 0: self.heartbeat = max(self.server_heartbeat, client_heartbeat) else: self.heartbeat = min(self.server_heartbeat, client_heartbeat) # Ignore server heartbeat if client_heartbeat is disabled if not self.client_heartbeat: self.heartbeat = 0 self.send_method( spec.Connection.TuneOk, argsig, (self.channel_max, self.frame_max, self.heartbeat), callback=self._on_tune_sent, ) def _on_tune_sent(self, argsig='ssb'): self.send_method( spec.Connection.Open, argsig, 
(self.virtual_host, '', False), ) def _on_open_ok(self): self._handshake_complete = True self.on_open(self) def Transport(self, host, connect_timeout, ssl=False, read_timeout=None, write_timeout=None, socket_settings=None, **kwargs): return Transport( host, connect_timeout=connect_timeout, ssl=ssl, read_timeout=read_timeout, write_timeout=write_timeout, socket_settings=socket_settings, **kwargs) @property def connected(self): return self._transport and self._transport.connected def collect(self): if self._transport: self._transport.close() if self.channels: # Copy all the channels except self since the channels # dictionary changes during the collection process. channels = [ ch for ch in self.channels.values() if ch is not self ] for ch in channels: ch.collect() self._transport = self.connection = self.channels = None def _get_free_channel_id(self): # Cast to a set for fast lookups, and keep stored as an array for lower memory usage. used_channel_ids = set(self._used_channel_ids) for channel_id in range(1, self.channel_max + 1): if channel_id not in used_channel_ids: self._used_channel_ids.append(channel_id) return channel_id raise ResourceError( 'No free channel ids, current={}, channel_max={}'.format( len(self.channels), self.channel_max), spec.Channel.Open) def _claim_channel_id(self, channel_id): if channel_id in self._used_channel_ids: raise ConnectionError(f'Channel {channel_id!r} already open') else: self._used_channel_ids.append(channel_id) return channel_id def channel(self, channel_id=None, callback=None): """Create new channel. Fetch a Channel object identified by the numeric channel_id, or create that object if it doesn't already exist. 
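The allocation strategy in ``_get_free_channel_id`` above can be exercised standalone: claimed ids are kept in a compact ``array('H')`` for low memory use and cast to a set for O(1) membership checks while scanning for a free id. A minimal sketch of the same logic, with a generic exception standing in for ``ResourceError``:

```python
# Standalone sketch of the channel-id allocation in _get_free_channel_id.
from array import array

def get_free_channel_id(used, channel_max):
    """Claim and return the lowest free id in 1..channel_max.

    used is an array('H') of already-claimed channel ids.
    """
    used_set = set(used)                 # set for fast lookups
    for channel_id in range(1, channel_max + 1):
        if channel_id not in used_set:
            used.append(channel_id)      # claim it
            return channel_id
    raise RuntimeError('No free channel ids')
```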
""" if self.channels is None: raise RecoverableConnectionError('Connection already closed.') try: return self.channels[channel_id] except KeyError: channel = self.Channel(self, channel_id, on_open=callback) channel.open() return channel def is_alive(self): raise NotImplementedError('Use AMQP heartbeats') def drain_events(self, timeout=None): # read until message is ready while not self.blocking_read(timeout): pass def blocking_read(self, timeout=None): with self.transport.having_timeout(timeout): frame = self.transport.read_frame() return self.on_inbound_frame(frame) def on_inbound_method(self, channel_id, method_sig, payload, content): if self.channels is None: raise RecoverableConnectionError('Connection already closed') return self.channels[channel_id].dispatch_method( method_sig, payload, content, ) def close(self, reply_code=0, reply_text='', method_sig=(0, 0), argsig='BsBB'): """Request a connection close. This method indicates that the sender wants to close the connection. This may be due to internal conditions (e.g. a forced shut-down) or due to an error handling a specific method, i.e. an exception. When a close is due to an exception, the sender provides the class and method id of the method which caused the exception. RULE: After sending this method any received method except the Close-OK method MUST be discarded. RULE: The peer sending this method MAY use a counter or timeout to detect failure of the other peer to respond correctly with the Close-OK method. RULE: When a server receives the Close method from a client it MUST delete all server-side resources associated with the client's context. A client CANNOT reconnect to a context after sending or receiving a Close method. PARAMETERS: reply_code: short The reply code. The AMQ reply codes are defined in AMQ RFC 011. reply_text: shortstr The localised reply text. This text can be logged as an aid to resolving issues. 
class_id: short failing method class When the close is provoked by a method exception, this is the class of the method. method_id: short failing method ID When the close is provoked by a method exception, this is the ID of the method. """ if self._transport is None: # already closed return try: self.is_closing = True return self.send_method( spec.Connection.Close, argsig, (reply_code, reply_text, method_sig[0], method_sig[1]), wait=spec.Connection.CloseOk, ) except (OSError, SSLError): # close connection self.collect() raise finally: self.is_closing = False def _on_close(self, reply_code, reply_text, class_id, method_id): """Request a connection close. This method indicates that the sender wants to close the connection. This may be due to internal conditions (e.g. a forced shut-down) or due to an error handling a specific method, i.e. an exception. When a close is due to an exception, the sender provides the class and method id of the method which caused the exception. RULE: After sending this method any received method except the Close-OK method MUST be discarded. RULE: The peer sending this method MAY use a counter or timeout to detect failure of the other peer to respond correctly with the Close-OK method. RULE: When a server receives the Close method from a client it MUST delete all server-side resources associated with the client's context. A client CANNOT reconnect to a context after sending or receiving a Close method. PARAMETERS: reply_code: short The reply code. The AMQ reply codes are defined in AMQ RFC 011. reply_text: shortstr The localised reply text. This text can be logged as an aid to resolving issues. class_id: short failing method class When the close is provoked by a method exception, this is the class of the method. method_id: short failing method ID When the close is provoked by a method exception, this is the ID of the method. 
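The `_on_close` handler above turns the broker's reply code into an exception via `error_for_code` (defined further down, in amqp/exceptions.py). A minimal sketch of that lookup, with only one code shown and a simplified constructor signature:

```python
# Simplified reply-code -> exception mapping, mirroring the shape of
# error_for_code/ERROR_MAP in amqp/exceptions.py.
class ConnectionForced(Exception):
    """Stand-in for amqp.exceptions.ConnectionForced (code 320)."""

ERROR_MAP = {320: ConnectionForced}

def error_for_code(code, text, default):
    # Unknown codes fall back to the supplied default exception class.
    return ERROR_MAP.get(code, default)(text)

assert isinstance(error_for_code(320, 'broker shutdown', Exception),
                  ConnectionForced)
assert type(error_for_code(999, 'unknown', Exception)) is Exception
```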
""" self._x_close_ok() raise error_for_code(reply_code, reply_text, (class_id, method_id), ConnectionError) def _x_close_ok(self): """Confirm a connection close. This method confirms a Connection.Close method and tells the recipient that it is safe to release resources for the connection and close the socket. RULE: A peer that detects a socket closure without having received a Close-Ok handshake method SHOULD log the error. """ self.send_method(spec.Connection.CloseOk, callback=self._on_close_ok) def _on_close_ok(self): """Confirm a connection close. This method confirms a Connection.Close method and tells the recipient that it is safe to release resources for the connection and close the socket. RULE: A peer that detects a socket closure without having received a Close-Ok handshake method SHOULD log the error. """ self.collect() def _on_blocked(self): """Callback called when connection blocked. Notes: This is an RabbitMQ Extension. """ reason = 'connection blocked, see broker logs' if self.on_blocked: return self.on_blocked(reason) def _on_unblocked(self): if self.on_unblocked: return self.on_unblocked() def send_heartbeat(self): self.frame_writer(8, 0, None, None, None) def heartbeat_tick(self, rate=2): """Send heartbeat packets if necessary. Raises: ~amqp.exceptions.ConnectionForvced: if none have been received recently. Note: This should be called frequently, on the order of once per second. 
Keyword Arguments: rate (int): Number of heartbeat frames to send during the heartbeat timeout """ AMQP_HEARTBEAT_LOGGER.debug('heartbeat_tick : for connection %s', self._connection_id) if not self.heartbeat: return # If rate is wrong, let's use 2 as default if rate <= 0: rate = 2 # treat actual data exchange in either direction as a heartbeat sent_now = self.bytes_sent recv_now = self.bytes_recv if self.prev_sent is None or self.prev_sent != sent_now: self.last_heartbeat_sent = monotonic() if self.prev_recv is None or self.prev_recv != recv_now: self.last_heartbeat_received = monotonic() now = monotonic() AMQP_HEARTBEAT_LOGGER.debug( 'heartbeat_tick : Prev sent/recv: %s/%s, ' 'now - %s/%s, monotonic - %s, ' 'last_heartbeat_sent - %s, heartbeat int. - %s ' 'for connection %s', self.prev_sent, self.prev_recv, sent_now, recv_now, now, self.last_heartbeat_sent, self.heartbeat, self._connection_id, ) self.prev_sent, self.prev_recv = sent_now, recv_now # send a heartbeat if it's time to do so if now > self.last_heartbeat_sent + self.heartbeat / rate: AMQP_HEARTBEAT_LOGGER.debug( 'heartbeat_tick: sending heartbeat for connection %s', self._connection_id) self.send_heartbeat() self.last_heartbeat_sent = monotonic() # if we've missed two intervals' heartbeats, fail; this gives the # server enough time to send heartbeats a little late two_heartbeats = 2 * self.heartbeat two_heartbeats_interval = self.last_heartbeat_received + two_heartbeats heartbeats_missed = two_heartbeats_interval < monotonic() if self.last_heartbeat_received and heartbeats_missed: raise ConnectionForced('Too many heartbeats missed') @property def sock(self): return self.transport.sock @property def server_capabilities(self): return self.server_properties.get('capabilities') or {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/amqp/exceptions.py0000644000076500000240000001577614345343305015451 0ustar00nusnusstaff"""Exceptions used by amqp.""" # 
Copyright (C) 2007-2008 Barry Pederson from struct import pack, unpack __all__ = ( 'AMQPError', 'ConnectionError', 'ChannelError', 'RecoverableConnectionError', 'IrrecoverableConnectionError', 'RecoverableChannelError', 'IrrecoverableChannelError', 'ConsumerCancelled', 'ContentTooLarge', 'NoConsumers', 'ConnectionForced', 'InvalidPath', 'AccessRefused', 'NotFound', 'ResourceLocked', 'PreconditionFailed', 'FrameError', 'FrameSyntaxError', 'InvalidCommand', 'ChannelNotOpen', 'UnexpectedFrame', 'ResourceError', 'NotAllowed', 'AMQPNotImplementedError', 'InternalError', 'MessageNacked', 'AMQPDeprecationWarning', ) class AMQPDeprecationWarning(UserWarning): """Warning for deprecated things.""" class MessageNacked(Exception): """Message was nacked by broker.""" class AMQPError(Exception): """Base class for all AMQP exceptions.""" code = 0 def __init__(self, reply_text=None, method_sig=None, method_name=None, reply_code=None): self.message = reply_text self.reply_code = reply_code or self.code self.reply_text = reply_text self.method_sig = method_sig self.method_name = method_name or '' if method_sig and not self.method_name: self.method_name = METHOD_NAME_MAP.get(method_sig, '') Exception.__init__(self, reply_code, reply_text, method_sig, self.method_name) def __str__(self): if self.method: return '{0.method}: ({0.reply_code}) {0.reply_text}'.format(self) return self.reply_text or '<{}: unknown error>'.format( type(self).__name__ ) @property def method(self): return self.method_name or self.method_sig class ConnectionError(AMQPError): """AMQP Connection Error.""" class ChannelError(AMQPError): """AMQP Channel Error.""" class RecoverableChannelError(ChannelError): """Exception class for recoverable channel errors.""" class IrrecoverableChannelError(ChannelError): """Exception class for irrecoverable channel errors.""" class RecoverableConnectionError(ConnectionError): """Exception class for recoverable connection errors.""" class 
IrrecoverableConnectionError(ConnectionError): """Exception class for irrecoverable connection errors.""" class Blocked(RecoverableConnectionError): """AMQP Connection Blocked Predicate.""" class ConsumerCancelled(RecoverableConnectionError): """AMQP Consumer Cancelled Predicate.""" class ContentTooLarge(RecoverableChannelError): """AMQP Content Too Large Error.""" code = 311 class NoConsumers(RecoverableChannelError): """AMQP No Consumers Error.""" code = 313 class ConnectionForced(RecoverableConnectionError): """AMQP Connection Forced Error.""" code = 320 class InvalidPath(IrrecoverableConnectionError): """AMQP Invalid Path Error.""" code = 402 class AccessRefused(IrrecoverableChannelError): """AMQP Access Refused Error.""" code = 403 class NotFound(IrrecoverableChannelError): """AMQP Not Found Error.""" code = 404 class ResourceLocked(RecoverableChannelError): """AMQP Resource Locked Error.""" code = 405 class PreconditionFailed(IrrecoverableChannelError): """AMQP Precondition Failed Error.""" code = 406 class FrameError(IrrecoverableConnectionError): """AMQP Frame Error.""" code = 501 class FrameSyntaxError(IrrecoverableConnectionError): """AMQP Frame Syntax Error.""" code = 502 class InvalidCommand(IrrecoverableConnectionError): """AMQP Invalid Command Error.""" code = 503 class ChannelNotOpen(IrrecoverableConnectionError): """AMQP Channel Not Open Error.""" code = 504 class UnexpectedFrame(IrrecoverableConnectionError): """AMQP Unexpected Frame.""" code = 505 class ResourceError(RecoverableConnectionError): """AMQP Resource Error.""" code = 506 class NotAllowed(IrrecoverableConnectionError): """AMQP Not Allowed Error.""" code = 530 class AMQPNotImplementedError(IrrecoverableConnectionError): """AMQP Not Implemented Error.""" code = 540 class InternalError(IrrecoverableConnectionError): """AMQP Internal Error.""" code = 541 ERROR_MAP = { 311: ContentTooLarge, 313: NoConsumers, 320: ConnectionForced, 402: InvalidPath, 403: AccessRefused, 404: NotFound, 405: 
ResourceLocked, 406: PreconditionFailed, 501: FrameError, 502: FrameSyntaxError, 503: InvalidCommand, 504: ChannelNotOpen, 505: UnexpectedFrame, 506: ResourceError, 530: NotAllowed, 540: AMQPNotImplementedError, 541: InternalError, } def error_for_code(code, text, method, default): try: return ERROR_MAP[code](text, method, reply_code=code) except KeyError: return default(text, method, reply_code=code) METHOD_NAME_MAP = { (10, 10): 'Connection.start', (10, 11): 'Connection.start_ok', (10, 20): 'Connection.secure', (10, 21): 'Connection.secure_ok', (10, 30): 'Connection.tune', (10, 31): 'Connection.tune_ok', (10, 40): 'Connection.open', (10, 41): 'Connection.open_ok', (10, 50): 'Connection.close', (10, 51): 'Connection.close_ok', (20, 10): 'Channel.open', (20, 11): 'Channel.open_ok', (20, 20): 'Channel.flow', (20, 21): 'Channel.flow_ok', (20, 40): 'Channel.close', (20, 41): 'Channel.close_ok', (30, 10): 'Access.request', (30, 11): 'Access.request_ok', (40, 10): 'Exchange.declare', (40, 11): 'Exchange.declare_ok', (40, 20): 'Exchange.delete', (40, 21): 'Exchange.delete_ok', (40, 30): 'Exchange.bind', (40, 31): 'Exchange.bind_ok', (40, 40): 'Exchange.unbind', (40, 41): 'Exchange.unbind_ok', (50, 10): 'Queue.declare', (50, 11): 'Queue.declare_ok', (50, 20): 'Queue.bind', (50, 21): 'Queue.bind_ok', (50, 30): 'Queue.purge', (50, 31): 'Queue.purge_ok', (50, 40): 'Queue.delete', (50, 41): 'Queue.delete_ok', (50, 50): 'Queue.unbind', (50, 51): 'Queue.unbind_ok', (60, 10): 'Basic.qos', (60, 11): 'Basic.qos_ok', (60, 20): 'Basic.consume', (60, 21): 'Basic.consume_ok', (60, 30): 'Basic.cancel', (60, 31): 'Basic.cancel_ok', (60, 40): 'Basic.publish', (60, 50): 'Basic.return', (60, 60): 'Basic.deliver', (60, 70): 'Basic.get', (60, 71): 'Basic.get_ok', (60, 72): 'Basic.get_empty', (60, 80): 'Basic.ack', (60, 90): 'Basic.reject', (60, 100): 'Basic.recover_async', (60, 110): 'Basic.recover', (60, 111): 'Basic.recover_ok', (60, 120): 'Basic.nack', (90, 10): 'Tx.select', (90, 11): 
'Tx.select_ok', (90, 20): 'Tx.commit', (90, 21): 'Tx.commit_ok', (90, 30): 'Tx.rollback', (90, 31): 'Tx.rollback_ok', (85, 10): 'Confirm.select', (85, 11): 'Confirm.select_ok', } for _method_id, _method_name in list(METHOD_NAME_MAP.items()): METHOD_NAME_MAP[unpack('>I', pack('>HH', *_method_id))[0]] = \ _method_name ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/amqp/method_framing.py0000644000076500000240000001511614345343305016237 0ustar00nusnusstaff"""Convert between frames and higher-level AMQP methods.""" # Copyright (C) 2007-2008 Barry Pederson from collections import defaultdict from struct import pack, pack_into, unpack_from from . import spec from .basic_message import Message from .exceptions import UnexpectedFrame from .utils import str_to_bytes __all__ = ('frame_handler', 'frame_writer') #: Set of methods that require both a content frame and a body frame. _CONTENT_METHODS = frozenset([ spec.Basic.Return, spec.Basic.Deliver, spec.Basic.GetOk, ]) #: Number of bytes reserved for protocol in a content frame. #: We use this to calculate when a frame exceeds the max frame size, #: and if it does not, the message will fit into the preallocated buffer. 
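The sizing rule described in the comment above decides whether `frame_writer` can take its single-write fast path. A standalone sketch of that test; the default `frame_max` of 131072 used in the example is an assumption, not something this module defines:

```python
# Sketch of frame_writer's fast-path check: a content frame fits the
# preallocated buffer when method args + properties + body + the 40
# bytes of protocol overhead stay within frame_max - 8.
OVERHEAD = 40  # mirrors FRAME_OVERHEAD in this module

def fits_in_one_frame(args_len, properties_len, body_len,
                      frame_max=131072):
    chunk_size = frame_max - 8
    return args_len + properties_len + body_len + OVERHEAD <= chunk_size

assert fits_in_one_frame(20, 30, 1024)        # small message: fast path
assert not fits_in_one_frame(20, 30, 200000)  # big body: chunked path
```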
FRAME_OVERHEAD = 40 def frame_handler(connection, callback, unpack_from=unpack_from, content_methods=_CONTENT_METHODS): """Create closure that reads frames.""" expected_types = defaultdict(lambda: 1) partial_messages = {} def on_frame(frame): frame_type, channel, buf = frame connection.bytes_recv += 1 if frame_type not in (expected_types[channel], 8): raise UnexpectedFrame( 'Received frame {} while expecting type: {}'.format( frame_type, expected_types[channel]), ) elif frame_type == 1: method_sig = unpack_from('>HH', buf, 0) if method_sig in content_methods: # Save what we've got so far and wait for the content-header partial_messages[channel] = Message( frame_method=method_sig, frame_args=buf, ) expected_types[channel] = 2 return False callback(channel, method_sig, buf, None) elif frame_type == 2: msg = partial_messages[channel] msg.inbound_header(buf) if not msg.ready: # wait for the content-body expected_types[channel] = 3 return False # bodyless message, we're done expected_types[channel] = 1 partial_messages.pop(channel, None) callback(channel, msg.frame_method, msg.frame_args, msg) elif frame_type == 3: msg = partial_messages[channel] msg.inbound_body(buf) if not msg.ready: # wait for the rest of the content-body return False expected_types[channel] = 1 partial_messages.pop(channel, None) callback(channel, msg.frame_method, msg.frame_args, msg) elif frame_type == 8: # bytes_recv already updated return False return True return on_frame class Buffer: def __init__(self, buf): self.buf = buf @property def buf(self): return self._buf @buf.setter def buf(self, buf): self._buf = buf # Using a memoryview allows slicing without copying underlying data. # Slicing this is much faster than slicing the bytearray directly. 
# More details: https://stackoverflow.com/a/34257357 self.view = memoryview(buf) def frame_writer(connection, transport, pack=pack, pack_into=pack_into, range=range, len=len, bytes=bytes, str_to_bytes=str_to_bytes, text_t=str): """Create closure that writes frames.""" write = transport.write buffer_store = Buffer(bytearray(connection.frame_max - 8)) def write_frame(type_, channel, method_sig, args, content): chunk_size = connection.frame_max - 8 offset = 0 properties = None args = str_to_bytes(args) if content: body = content.body if isinstance(body, str): encoding = content.properties.setdefault( 'content_encoding', 'utf-8') body = body.encode(encoding) properties = content._serialize_properties() bodylen = len(body) properties_len = len(properties) or 0 framelen = len(args) + properties_len + bodylen + FRAME_OVERHEAD bigbody = framelen > chunk_size else: body, bodylen, bigbody = None, 0, 0 if bigbody: # ## SLOW: string copy and write for every frame frame = (b''.join([pack('>HH', *method_sig), args]) if type_ == 1 else b'') # encode method frame framelen = len(frame) write(pack('>BHI%dsB' % framelen, type_, channel, framelen, frame, 0xce)) if body: frame = b''.join([ pack('>HHQ', method_sig[0], 0, len(body)), properties, ]) framelen = len(frame) write(pack('>BHI%dsB' % framelen, 2, channel, framelen, frame, 0xce)) for i in range(0, bodylen, chunk_size): frame = body[i:i + chunk_size] framelen = len(frame) write(pack('>BHI%dsB' % framelen, 3, channel, framelen, frame, 0xce)) else: # frame_max can be updated via connection._on_tune. If # it became larger, then we need to resize the buffer # to prevent overflow. 
if chunk_size > len(buffer_store.buf): buffer_store.buf = bytearray(chunk_size) buf = buffer_store.buf # ## FAST: pack into buffer and single write frame = (b''.join([pack('>HH', *method_sig), args]) if type_ == 1 else b'') framelen = len(frame) pack_into('>BHI%dsB' % framelen, buf, offset, type_, channel, framelen, frame, 0xce) offset += 8 + framelen if body is not None: frame = b''.join([ pack('>HHQ', method_sig[0], 0, len(body)), properties, ]) framelen = len(frame) pack_into('>BHI%dsB' % framelen, buf, offset, 2, channel, framelen, frame, 0xce) offset += 8 + framelen bodylen = len(body) if bodylen > 0: framelen = bodylen pack_into('>BHI%dsB' % framelen, buf, offset, 3, channel, framelen, body, 0xce) offset += 8 + framelen write(buffer_store.view[:offset]) connection.bytes_sent += 1 return write_frame ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/amqp/platform.py0000644000076500000240000000451314345343305015077 0ustar00nusnusstaff"""Platform compatibility.""" import platform import re import sys # Jython does not have this attribute import typing try: from socket import SOL_TCP except ImportError: # pragma: no cover from socket import IPPROTO_TCP as SOL_TCP # noqa RE_NUM = re.compile(r'(\d+).+') def _linux_version_to_tuple(s: str) -> typing.Tuple[int, int, int]: return tuple(map(_versionatom, s.split('.')[:3])) def _versionatom(s: str) -> int: if s.isdigit(): return int(s) match = RE_NUM.match(s) return int(match.groups()[0]) if match else 0 # available socket options for TCP level KNOWN_TCP_OPTS = { 'TCP_CORK', 'TCP_DEFER_ACCEPT', 'TCP_KEEPCNT', 'TCP_KEEPIDLE', 'TCP_KEEPINTVL', 'TCP_LINGER2', 'TCP_MAXSEG', 'TCP_NODELAY', 'TCP_QUICKACK', 'TCP_SYNCNT', 'TCP_USER_TIMEOUT', 'TCP_WINDOW_CLAMP', } LINUX_VERSION = None if sys.platform.startswith('linux'): LINUX_VERSION = _linux_version_to_tuple(platform.release()) if LINUX_VERSION < (2, 6, 37): KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT') # Windows Subsystem for 
Linux is an edge-case: the Python socket library # returns most TCP_* enums, but they aren't actually supported if platform.release().endswith("Microsoft"): KNOWN_TCP_OPTS = {'TCP_NODELAY', 'TCP_KEEPIDLE', 'TCP_KEEPINTVL', 'TCP_KEEPCNT'} elif sys.platform.startswith('darwin'): KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT') elif 'bsd' in sys.platform: KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT') # According to MSDN Windows platforms support getsockopt(TCP_MAXSEG) but not # setsockopt(TCP_MAXSEG) on IPPROTO_TCP sockets. elif sys.platform.startswith('win'): KNOWN_TCP_OPTS = {'TCP_NODELAY'} elif sys.platform.startswith('cygwin'): KNOWN_TCP_OPTS = {'TCP_NODELAY'} # illumos does not allow setting the TCP_MAXSEG socket option, # even if the Oracle documentation says otherwise. # TCP_USER_TIMEOUT does not exist on Solaris 11.4 elif sys.platform.startswith('sunos'): KNOWN_TCP_OPTS.remove('TCP_MAXSEG') KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT') # aix does not allow setting the TCP_MAXSEG # or the TCP_USER_TIMEOUT socket options. 
elif sys.platform.startswith('aix'): KNOWN_TCP_OPTS.remove('TCP_MAXSEG') KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT') __all__ = ( 'LINUX_VERSION', 'SOL_TCP', 'KNOWN_TCP_OPTS', ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/amqp/protocol.py0000644000076500000240000000044314345343305015112 0ustar00nusnusstaff"""Protocol data.""" from collections import namedtuple queue_declare_ok_t = namedtuple( 'queue_declare_ok_t', ('queue', 'message_count', 'consumer_count'), ) basic_return_t = namedtuple( 'basic_return_t', ('reply_code', 'reply_text', 'exchange', 'routing_key', 'message'), ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/amqp/sasl.py0000644000076500000240000001354214345343305014217 0ustar00nusnusstaff"""SASL mechanisms for AMQP authentication.""" import socket import warnings from io import BytesIO from amqp.serialization import _write_table class SASL: """The base class for all amqp SASL authentication mechanisms. You should sub-class this if you're implementing your own authentication. """ @property def mechanism(self): """Return a bytes containing the SASL mechanism name.""" raise NotImplementedError def start(self, connection): """Return the first response to a SASL challenge as a bytes object.""" raise NotImplementedError class PLAIN(SASL): """PLAIN SASL authentication mechanism. 
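The `SASL` base class above is designed for subclassing. A hypothetical mechanism might look like the following; the `ANONYMOUS` name and empty response are illustrative only (not part of this module), and a stub base class is included so the sketch is self-contained:

```python
class SASL:  # stub mirroring the amqp.sasl.SASL interface
    @property
    def mechanism(self):
        raise NotImplementedError

    def start(self, connection):
        raise NotImplementedError

class ANONYMOUS(SASL):
    """Hypothetical mechanism sending an empty initial response."""

    # A class attribute shadows the inherited property, as PLAIN does.
    mechanism = b'ANONYMOUS'

    def start(self, connection):
        return b''

auth = ANONYMOUS()
assert auth.mechanism == b'ANONYMOUS'
assert auth.start(None) == b''
```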
See https://tools.ietf.org/html/rfc4616 for details """ mechanism = b'PLAIN' def __init__(self, username, password): self.username, self.password = username, password __slots__ = ( "username", "password", ) def start(self, connection): if self.username is None or self.password is None: return NotImplemented login_response = BytesIO() login_response.write(b'\0') login_response.write(self.username.encode('utf-8')) login_response.write(b'\0') login_response.write(self.password.encode('utf-8')) return login_response.getvalue() class AMQPLAIN(SASL): """AMQPLAIN SASL authentication mechanism. This is a non-standard mechanism used by AMQP servers. """ mechanism = b'AMQPLAIN' def __init__(self, username, password): self.username, self.password = username, password __slots__ = ( "username", "password", ) def start(self, connection): if self.username is None or self.password is None: return NotImplemented login_response = BytesIO() _write_table({b'LOGIN': self.username, b'PASSWORD': self.password}, login_response.write, []) # Skip the length at the beginning return login_response.getvalue()[4:] def _get_gssapi_mechanism(): try: import gssapi import gssapi.raw.misc # Fail if the old python-gssapi is installed except ImportError: class FakeGSSAPI(SASL): """A no-op SASL mechanism for when gssapi isn't available.""" mechanism = None def __init__(self, client_name=None, service=b'amqp', rdns=False, fail_soft=False): if not fail_soft: raise NotImplementedError( "You need to install the `gssapi` module for GSSAPI " "SASL support") def start(self): # pragma: no cover return NotImplemented return FakeGSSAPI else: class GSSAPI(SASL): """GSSAPI SASL authentication mechanism. 
See https://tools.ietf.org/html/rfc4752 for details """ mechanism = b'GSSAPI' def __init__(self, client_name=None, service=b'amqp', rdns=False, fail_soft=False): if client_name and not isinstance(client_name, bytes): client_name = client_name.encode('ascii') self.client_name = client_name self.fail_soft = fail_soft self.service = service self.rdns = rdns __slots__ = ( "client_name", "fail_soft", "service", "rdns" ) def get_hostname(self, connection): sock = connection.transport.sock if self.rdns and sock.family in (socket.AF_INET, socket.AF_INET6): peer = sock.getpeername() hostname, _, _ = socket.gethostbyaddr(peer[0]) else: hostname = connection.transport.host if not isinstance(hostname, bytes): hostname = hostname.encode('ascii') return hostname def start(self, connection): try: if self.client_name: creds = gssapi.Credentials( name=gssapi.Name(self.client_name)) else: creds = None hostname = self.get_hostname(connection) name = gssapi.Name(b'@'.join([self.service, hostname]), gssapi.NameType.hostbased_service) context = gssapi.SecurityContext(name=name, creds=creds) return context.step(None) except gssapi.raw.misc.GSSError: if self.fail_soft: return NotImplemented else: raise return GSSAPI GSSAPI = _get_gssapi_mechanism() class EXTERNAL(SASL): """EXTERNAL SASL mechanism. Enables external authentication, i.e. not handled through this protocol. Only passes 'EXTERNAL' as authentication mechanism, but no further authentication data. """ mechanism = b'EXTERNAL' def start(self, connection): return b'' class RAW(SASL): """A generic custom SASL mechanism. This mechanism takes a mechanism name and response to send to the server, so can be used for simple custom authentication schemes. """ mechanism = None def __init__(self, mechanism, response): assert isinstance(mechanism, bytes) assert isinstance(response, bytes) self.mechanism, self.response = mechanism, response warnings.warn("Passing login_method and login_response to Connection " "is deprecated. 
Please implement a SASL subclass " "instead.", DeprecationWarning) def start(self, connection): return self.response ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/amqp/serialization.py0000644000076500000240000004141214345343305016127 0ustar00nusnusstaff"""Convert between bytestreams and higher-level AMQP types. 2007-11-05 Barry Pederson """ # Copyright (C) 2007 Barry Pederson import calendar from datetime import datetime from decimal import Decimal from io import BytesIO from struct import pack, unpack_from from .exceptions import FrameSyntaxError from .spec import Basic from .utils import bytes_to_str as pstr_t from .utils import str_to_bytes ILLEGAL_TABLE_TYPE = """\ Table type {0!r} not handled by amqp. """ ILLEGAL_TABLE_TYPE_WITH_KEY = """\ Table type {0!r} for key {1!r} not handled by amqp. [value: {2!r}] """ ILLEGAL_TABLE_TYPE_WITH_VALUE = """\ Table type {0!r} not handled by amqp. [value: {1!r}] """ def _read_item(buf, offset): ftype = chr(buf[offset]) offset += 1 # 'S': long string if ftype == 'S': slen, = unpack_from('>I', buf, offset) offset += 4 try: val = pstr_t(buf[offset:offset + slen]) except UnicodeDecodeError: val = buf[offset:offset + slen] offset += slen # 's': short string elif ftype == 's': slen, = unpack_from('>B', buf, offset) offset += 1 val = pstr_t(buf[offset:offset + slen]) offset += slen # 'x': Bytes Array elif ftype == 'x': blen, = unpack_from('>I', buf, offset) offset += 4 val = buf[offset:offset + blen] offset += blen # 'b': short-short int elif ftype == 'b': val, = unpack_from('>B', buf, offset) offset += 1 # 'B': short-short unsigned int elif ftype == 'B': val, = unpack_from('>b', buf, offset) offset += 1 # 'U': short int elif ftype == 'U': val, = unpack_from('>h', buf, offset) offset += 2 # 'u': short unsigned int elif ftype == 'u': val, = unpack_from('>H', buf, offset) offset += 2 # 'I': long int elif ftype == 'I': val, = unpack_from('>i', buf, offset) offset += 4 # 'i': 
long unsigned int elif ftype == 'i': val, = unpack_from('>I', buf, offset) offset += 4 # 'L': long long int elif ftype == 'L': val, = unpack_from('>q', buf, offset) offset += 8 # 'l': long long unsigned int elif ftype == 'l': val, = unpack_from('>Q', buf, offset) offset += 8 # 'f': float elif ftype == 'f': val, = unpack_from('>f', buf, offset) offset += 4 # 'd': double elif ftype == 'd': val, = unpack_from('>d', buf, offset) offset += 8 # 'D': decimal elif ftype == 'D': d, = unpack_from('>B', buf, offset) offset += 1 n, = unpack_from('>i', buf, offset) offset += 4 val = Decimal(n) / Decimal(10 ** d) # 'F': table elif ftype == 'F': tlen, = unpack_from('>I', buf, offset) offset += 4 limit = offset + tlen val = {} while offset < limit: keylen, = unpack_from('>B', buf, offset) offset += 1 key = pstr_t(buf[offset:offset + keylen]) offset += keylen val[key], offset = _read_item(buf, offset) # 'A': array elif ftype == 'A': alen, = unpack_from('>I', buf, offset) offset += 4 limit = offset + alen val = [] while offset < limit: v, offset = _read_item(buf, offset) val.append(v) # 't' (bool) elif ftype == 't': val, = unpack_from('>B', buf, offset) val = bool(val) offset += 1 # 'T': timestamp elif ftype == 'T': val, = unpack_from('>Q', buf, offset) offset += 8 val = datetime.utcfromtimestamp(val) # 'V': void elif ftype == 'V': val = None else: raise FrameSyntaxError( 'Unknown value in table: {!r} ({!r})'.format( ftype, type(ftype))) return val, offset def loads(format, buf, offset): """Deserialize amqp format. 
bit = b octet = o short = B long = l long long = L float = f shortstr = s longstr = S table = F array = A timestamp = T """ bitcount = bits = 0 values = [] append = values.append format = pstr_t(format) for p in format: if p == 'b': if not bitcount: bits = ord(buf[offset:offset + 1]) offset += 1 bitcount = 8 val = (bits & 1) == 1 bits >>= 1 bitcount -= 1 elif p == 'o': bitcount = bits = 0 val, = unpack_from('>B', buf, offset) offset += 1 elif p == 'B': bitcount = bits = 0 val, = unpack_from('>H', buf, offset) offset += 2 elif p == 'l': bitcount = bits = 0 val, = unpack_from('>I', buf, offset) offset += 4 elif p == 'L': bitcount = bits = 0 val, = unpack_from('>Q', buf, offset) offset += 8 elif p == 'f': bitcount = bits = 0 val, = unpack_from('>f', buf, offset) offset += 4 elif p == 's': bitcount = bits = 0 slen, = unpack_from('B', buf, offset) offset += 1 val = buf[offset:offset + slen].decode('utf-8', 'surrogatepass') offset += slen elif p == 'S': bitcount = bits = 0 slen, = unpack_from('>I', buf, offset) offset += 4 val = buf[offset:offset + slen].decode('utf-8', 'surrogatepass') offset += slen elif p == 'x': blen, = unpack_from('>I', buf, offset) offset += 4 val = buf[offset:offset + blen] offset += blen elif p == 'F': bitcount = bits = 0 tlen, = unpack_from('>I', buf, offset) offset += 4 limit = offset + tlen val = {} while offset < limit: keylen, = unpack_from('>B', buf, offset) offset += 1 key = pstr_t(buf[offset:offset + keylen]) offset += keylen val[key], offset = _read_item(buf, offset) elif p == 'A': bitcount = bits = 0 alen, = unpack_from('>I', buf, offset) offset += 4 limit = offset + alen val = [] while offset < limit: aval, offset = _read_item(buf, offset) val.append(aval) elif p == 'T': bitcount = bits = 0 val, = unpack_from('>Q', buf, offset) offset += 8 val = datetime.utcfromtimestamp(val) else: raise FrameSyntaxError(ILLEGAL_TABLE_TYPE.format(p)) append(val) return values, offset def _flushbits(bits, write): if bits: write(pack('B' * len(bits), 
                   *bits))
        bits[:] = []
    return 0


def dumps(format, values):
    """Serialize AMQP arguments.

    Notes:
        bit = b
        octet = o
        short = B
        long = l
        long long = L
        shortstr = s
        longstr = S
        byte array = x
        table = F
        array = A
    """
    bitcount = 0
    bits = []
    out = BytesIO()
    write = out.write

    format = pstr_t(format)

    for i, val in enumerate(values):
        p = format[i]
        if p == 'b':
            val = 1 if val else 0
            shift = bitcount % 8
            if shift == 0:
                bits.append(0)
            bits[-1] |= (val << shift)
            bitcount += 1
        elif p == 'o':
            bitcount = _flushbits(bits, write)
            write(pack('B', val))
        elif p == 'B':
            bitcount = _flushbits(bits, write)
            write(pack('>H', int(val)))
        elif p == 'l':
            bitcount = _flushbits(bits, write)
            write(pack('>I', val))
        elif p == 'L':
            bitcount = _flushbits(bits, write)
            write(pack('>Q', val))
        elif p == 'f':
            bitcount = _flushbits(bits, write)
            write(pack('>f', val))
        elif p == 's':
            val = val or ''
            bitcount = _flushbits(bits, write)
            if isinstance(val, str):
                val = val.encode('utf-8', 'surrogatepass')
            write(pack('B', len(val)))
            write(val)
        elif p == 'S' or p == 'x':
            val = val or ''
            bitcount = _flushbits(bits, write)
            if isinstance(val, str):
                val = val.encode('utf-8', 'surrogatepass')
            write(pack('>I', len(val)))
            write(val)
        elif p == 'F':
            bitcount = _flushbits(bits, write)
            _write_table(val or {}, write, bits)
        elif p == 'A':
            bitcount = _flushbits(bits, write)
            _write_array(val or [], write, bits)
        elif p == 'T':
            write(pack('>Q', int(calendar.timegm(val.utctimetuple()))))
    _flushbits(bits, write)

    return out.getvalue()


def _write_table(d, write, bits):
    out = BytesIO()
    twrite = out.write
    for k, v in d.items():
        if isinstance(k, str):
            k = k.encode('utf-8', 'surrogatepass')
        twrite(pack('B', len(k)))
        twrite(k)
        try:
            _write_item(v, twrite, bits)
        except ValueError:
            raise FrameSyntaxError(
                ILLEGAL_TABLE_TYPE_WITH_KEY.format(type(v), k, v))
    table_data = out.getvalue()
    write(pack('>I', len(table_data)))
    write(table_data)


def _write_array(list_, write, bits):
    out = BytesIO()
    awrite = out.write
    for v in list_:
        try:
            _write_item(v, awrite, bits)
        except ValueError:
            raise FrameSyntaxError(
                ILLEGAL_TABLE_TYPE_WITH_VALUE.format(type(v), v))
    array_data = out.getvalue()
    write(pack('>I', len(array_data)))
    write(array_data)


def _write_item(v, write, bits):
    if isinstance(v, (str, bytes)):
        if isinstance(v, str):
            v = v.encode('utf-8', 'surrogatepass')
        write(pack('>cI', b'S', len(v)))
        write(v)
    elif isinstance(v, bool):
        write(pack('>cB', b't', int(v)))
    elif isinstance(v, float):
        write(pack('>cd', b'd', v))
    elif isinstance(v, int):
        if v > 2147483647 or v < -2147483647:
            write(pack('>cq', b'L', v))
        else:
            write(pack('>ci', b'I', v))
    elif isinstance(v, Decimal):
        sign, digits, exponent = v.as_tuple()
        v = 0
        for d in digits:
            v = (v * 10) + d
        if sign:
            v = -v
        write(pack('>cBi', b'D', -exponent, v))
    elif isinstance(v, datetime):
        write(
            pack('>cQ', b'T', int(calendar.timegm(v.utctimetuple()))))
    elif isinstance(v, dict):
        write(b'F')
        _write_table(v, write, bits)
    elif isinstance(v, (list, tuple)):
        write(b'A')
        _write_array(v, write, bits)
    elif v is None:
        write(b'V')
    else:
        raise ValueError()


def decode_properties_basic(buf, offset):
    """Decode basic properties."""
    properties = {}

    flags, = unpack_from('>H', buf, offset)
    offset += 2

    if flags & 0x8000:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['content_type'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    if flags & 0x4000:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['content_encoding'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    if flags & 0x2000:
        _f, offset = loads('F', buf, offset)
        properties['application_headers'], = _f
    if flags & 0x1000:
        properties['delivery_mode'], = unpack_from('>B', buf, offset)
        offset += 1
    if flags & 0x0800:
        properties['priority'], = unpack_from('>B', buf, offset)
        offset += 1
    if flags & 0x0400:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['correlation_id'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    if flags & 0x0200:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['reply_to'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    if flags & 0x0100:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['expiration'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    if flags & 0x0080:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['message_id'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    if flags & 0x0040:
        properties['timestamp'], = unpack_from('>Q', buf, offset)
        offset += 8
    if flags & 0x0020:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['type'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    if flags & 0x0010:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['user_id'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    if flags & 0x0008:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['app_id'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    if flags & 0x0004:
        slen, = unpack_from('>B', buf, offset)
        offset += 1
        properties['cluster_id'] = pstr_t(buf[offset:offset + slen])
        offset += slen
    return properties, offset


PROPERTY_CLASSES = {
    Basic.CLASS_ID: decode_properties_basic,
}


class GenericContent:
    """Abstract base class for AMQP content.

    Subclasses should override the PROPERTIES attribute.
    """

    CLASS_ID = None
    PROPERTIES = [('dummy', 's')]

    def __init__(self, frame_method=None, frame_args=None, **props):
        self.frame_method = frame_method
        self.frame_args = frame_args

        self.properties = props
        self._pending_chunks = []
        self.body_received = 0
        self.body_size = 0
        self.ready = False

    __slots__ = (
        "frame_method",
        "frame_args",
        "properties",
        "_pending_chunks",
        "body_received",
        "body_size",
        "ready",
        # adding '__dict__' to get dynamic assignment
        "__dict__",
        "__weakref__",
    )

    def __getattr__(self, name):
        # Look for additional properties in the 'properties'
        # dictionary, and if present - the 'delivery_info' dictionary.
        if name == '__setstate__':
            # Allows pickling/unpickling to work
            raise AttributeError('__setstate__')

        if name in self.properties:
            return self.properties[name]
        raise AttributeError(name)

    def _load_properties(self, class_id, buf, offset):
        """Load AMQP properties.

        Given the raw bytes containing the property-flags and
        property-list from a content-frame-header, parse and insert
        into a dictionary stored in this object as an attribute named
        'properties'.
        """
        # Read 16-bit shorts until we get one with a low bit set to zero
        props, offset = PROPERTY_CLASSES[class_id](buf, offset)
        self.properties = props
        return offset

    def _serialize_properties(self):
        """Serialize AMQP properties.

        Serialize the 'properties' attribute (a dictionary) into
        the raw bytes making up a set of property flags and a
        property list, suitable for putting into a content frame header.
        """
        shift = 15
        flag_bits = 0
        flags = []
        sformat, svalues = [], []
        props = self.properties

        for key, proptype in self.PROPERTIES:
            val = props.get(key, None)
            if val is not None:
                if shift == 0:
                    flags.append(flag_bits)
                    flag_bits = 0
                    shift = 15

                flag_bits |= (1 << shift)
                if proptype != 'bit':
                    sformat.append(str_to_bytes(proptype))
                    svalues.append(val)

            shift -= 1
        flags.append(flag_bits)

        result = BytesIO()
        write = result.write
        for flag_bits in flags:
            write(pack('>H', flag_bits))
        write(dumps(b''.join(sformat), svalues))

        return result.getvalue()

    def inbound_header(self, buf, offset=0):
        class_id, self.body_size = unpack_from('>HxxQ', buf, offset)
        offset += 12
        self._load_properties(class_id, buf, offset)
        if not self.body_size:
            self.ready = True
        return offset

    def inbound_body(self, buf):
        chunks = self._pending_chunks
        self.body_received += len(buf)

        if self.body_received >= self.body_size:
            if chunks:
                chunks.append(buf)
                self.body = bytes().join(chunks)
                chunks[:] = []
            else:
                self.body = buf
            self.ready = True
        else:
            chunks.append(buf)
amqp-5.3.1/amqp/spec.py

"""AMQP Spec."""
from collections import namedtuple

method_t = namedtuple('method_t', ('method_sig', 'args', 'content'))


def method(method_sig, args=None, content=False):
    """Create amqp method specification tuple."""
    return method_t(method_sig, args, content)


class Connection:
    """AMQ Connection class."""

    CLASS_ID = 10

    Start = (10, 10)
    StartOk = (10, 11)
    Secure = (10, 20)
    SecureOk = (10, 21)
    Tune = (10, 30)
    TuneOk = (10, 31)
    Open = (10, 40)
    OpenOk = (10, 41)
    Close = (10, 50)
    CloseOk = (10, 51)
    Blocked = (10, 60)
    Unblocked = (10, 61)


class Channel:
    """AMQ Channel class."""

    CLASS_ID = 20

    Open = (20, 10)
    OpenOk = (20, 11)
    Flow = (20, 20)
    FlowOk = (20, 21)
    Close = (20, 40)
    CloseOk = (20, 41)


class Exchange:
    """AMQ Exchange class."""

    CLASS_ID = 40

    Declare = (40, 10)
    DeclareOk = (40, 11)
    Delete = (40, 20)
    DeleteOk = (40, 21)
    Bind = (40, 30)
    BindOk = (40, 31)
    Unbind = (40, 40)
    UnbindOk = (40, 51)


class Queue:
    """AMQ Queue class."""

    CLASS_ID = 50

    Declare = (50, 10)
    DeclareOk = (50, 11)
    Bind = (50, 20)
    BindOk = (50, 21)
    Purge = (50, 30)
    PurgeOk = (50, 31)
    Delete = (50, 40)
    DeleteOk = (50, 41)
    Unbind = (50, 50)
    UnbindOk = (50, 51)


class Basic:
    """AMQ Basic class."""

    CLASS_ID = 60

    Qos = (60, 10)
    QosOk = (60, 11)
    Consume = (60, 20)
    ConsumeOk = (60, 21)
    Cancel = (60, 30)
    CancelOk = (60, 31)
    Publish = (60, 40)
    Return = (60, 50)
    Deliver = (60, 60)
    Get = (60, 70)
    GetOk = (60, 71)
    GetEmpty = (60, 72)
    Ack = (60, 80)
    Nack = (60, 120)
    Reject = (60, 90)
    RecoverAsync = (60, 100)
    Recover = (60, 110)
    RecoverOk = (60, 111)


class Confirm:
    """AMQ Confirm class."""

    CLASS_ID = 85

    Select = (85, 10)
    SelectOk = (85, 11)


class Tx:
    """AMQ Tx class."""

    CLASS_ID = 90

    Select = (90, 10)
    SelectOk = (90, 11)
    Commit = (90, 20)
    CommitOk = (90, 21)
    Rollback = (90, 30)
    RollbackOk = (90, 31)
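Method signatures such as ``Connection.Start = (10, 10)`` travel inside general AMQP frames laid out as octet type, short channel, long payload size, the payload, then a ``0xCE`` frame-end octet — the format the transport code that follows reads and validates. A stdlib-only sketch of that framing (``build_frame`` and ``parse_frame`` are hypothetical helpers for illustration, not library API):

```python
# Build and re-parse a single AMQP general frame using struct packing,
# independent of the library itself.
from struct import pack, unpack

FRAME_METHOD = 1      # frame type used for method frames
FRAME_END = 0xCE      # sentinel octet terminating every frame


def build_frame(frame_type, channel, payload):
    # >BHI: 1-byte type, 2-byte channel, 4-byte payload size (big-endian)
    return (pack('>BHI', frame_type, channel, len(payload))
            + payload + bytes([FRAME_END]))


def parse_frame(data):
    frame_type, channel, size = unpack('>BHI', data[:7])
    payload = data[7:7 + size]
    if data[7 + size] != FRAME_END:
        raise ValueError('missing frame-end octet')
    return frame_type, channel, payload


# payload b'\x00\x0a\x00\x0a' encodes method signature (10, 10): Connection.Start
frame = build_frame(FRAME_METHOD, 1, b'\x00\x0a\x00\x0a')
assert parse_frame(frame) == (FRAME_METHOD, 1, b'\x00\x0a\x00\x0a')
```

The library's ``read_frame`` below performs the same header unpack and frame-end check, with extra buffering and error handling for partial socket reads.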
amqp-5.3.1/amqp/transport.py

"""Transport implementation."""
# Copyright (C) 2009 Barry Pederson

import errno
import os
import re
import socket
import ssl
from contextlib import contextmanager
from ssl import SSLError
from struct import pack, unpack

from .exceptions import UnexpectedFrame
from .platform import KNOWN_TCP_OPTS, SOL_TCP
from .utils import set_cloexec

_UNAVAIL = {errno.EAGAIN, errno.EINTR, errno.ENOENT, errno.EWOULDBLOCK}

AMQP_PORT = 5672

EMPTY_BUFFER = bytes()

SIGNED_INT_MAX = 0x7FFFFFFF

# Yes, Advanced Message Queuing Protocol Protocol is redundant
AMQP_PROTOCOL_HEADER = b'AMQP\x00\x00\x09\x01'

# Match things like: [fe80::1]:5432, from RFC 2732
IPV6_LITERAL = re.compile(r'\[([\.0-9a-f:]+)\](?::(\d+))?')

DEFAULT_SOCKET_SETTINGS = {
    'TCP_NODELAY': 1,
    'TCP_USER_TIMEOUT': 1000,
    'TCP_KEEPIDLE': 60,
    'TCP_KEEPINTVL': 10,
    'TCP_KEEPCNT': 9,
}


def to_host_port(host, default=AMQP_PORT):
    """Convert hostname:port string to host, port tuple."""
    port = default
    m = IPV6_LITERAL.match(host)
    if m:
        host = m.group(1)
        if m.group(2):
            port = int(m.group(2))
    else:
        if ':' in host:
            host, port = host.rsplit(':', 1)
            port = int(port)
    return host, port


class _AbstractTransport:
    """Common superclass for TCP and SSL transports.

    PARAMETERS:
        host: str
            Broker address in format ``HOSTNAME:PORT``.
        connect_timeout: int
            Timeout of creating new connection.
        read_timeout: int
            sets ``SO_RCVTIMEO`` parameter of socket.
        write_timeout: int
            sets ``SO_SNDTIMEO`` parameter of socket.
        socket_settings: dict
            dictionary containing ``optname`` and ``optval`` passed to
            ``setsockopt(2)``.
        raise_on_initial_eintr: bool
            when True, ``socket.timeout`` is raised when exception is
            received during first read. See ``_read()`` for details.
    """

    def __init__(self, host, connect_timeout=None,
                 read_timeout=None, write_timeout=None,
                 socket_settings=None, raise_on_initial_eintr=True, **kwargs):
        self.connected = False
        self.sock = None
        self.raise_on_initial_eintr = raise_on_initial_eintr
        self._read_buffer = EMPTY_BUFFER
        self.host, self.port = to_host_port(host)
        self.connect_timeout = connect_timeout
        self.read_timeout = read_timeout
        self.write_timeout = write_timeout
        self.socket_settings = socket_settings

    __slots__ = (
        "connection",
        "sock",
        "raise_on_initial_eintr",
        "_read_buffer",
        "host",
        "port",
        "connect_timeout",
        "read_timeout",
        "write_timeout",
        "socket_settings",
        # adding '__dict__' to get dynamic assignment
        "__dict__",
        "__weakref__",
    )

    def __repr__(self):
        if self.sock:
            src = f'{self.sock.getsockname()[0]}:{self.sock.getsockname()[1]}'
            try:
                dst = f'{self.sock.getpeername()[0]}:{self.sock.getpeername()[1]}'
            except (socket.error) as e:
                dst = f'ERROR: {e}'
            return f'<{type(self).__name__}: {src} -> {dst} at {id(self):#x}>'
        else:
            return f'<{type(self).__name__}: (disconnected) at {id(self):#x}>'

    def connect(self):
        try:
            # are we already connected?
            if self.connected:
                return
            self._connect(self.host, self.port, self.connect_timeout)
            self._init_socket(
                self.socket_settings, self.read_timeout, self.write_timeout,
            )
            # we've sent the banner; signal connect
            # EINTR, EAGAIN, EWOULDBLOCK would signal that the banner
            # has _not_ been sent
            self.connected = True
        except (OSError, SSLError):
            # if not fully connected, close socket, and reraise error
            if self.sock and not self.connected:
                self.sock.close()
                self.sock = None
            raise

    @contextmanager
    def having_timeout(self, timeout):
        if timeout is None:
            yield self.sock
        else:
            sock = self.sock
            prev = sock.gettimeout()
            if prev != timeout:
                sock.settimeout(timeout)
            try:
                yield self.sock
            except SSLError as exc:
                if 'timed out' in str(exc):
                    # http://bugs.python.org/issue10272
                    raise socket.timeout()
                elif 'The operation did not complete' in str(exc):
                    # Non-blocking SSL sockets can throw SSLError
                    raise socket.timeout()
                raise
            except OSError as exc:
                if exc.errno == errno.EWOULDBLOCK:
                    raise socket.timeout()
                raise
            finally:
                if timeout != prev:
                    sock.settimeout(prev)

    def _connect(self, host, port, timeout):
        entries = socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM, SOL_TCP,
        )
        for i, res in enumerate(entries):
            af, socktype, proto, canonname, sa = res
            try:
                self.sock = socket.socket(af, socktype, proto)
                try:
                    set_cloexec(self.sock, True)
                except NotImplementedError:
                    pass
                self.sock.settimeout(timeout)
                self.sock.connect(sa)
            except socket.error:
                if self.sock:
                    self.sock.close()
                self.sock = None
                if i + 1 >= len(entries):
                    raise
            else:
                break

    def _init_socket(self, socket_settings, read_timeout, write_timeout):
        self.sock.settimeout(None)  # set socket back to blocking mode
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        self._set_socket_options(socket_settings)

        # set socket timeouts
        for timeout, interval in ((socket.SO_SNDTIMEO, write_timeout),
                                  (socket.SO_RCVTIMEO, read_timeout)):
            if interval is not None:
                sec = int(interval)
                usec = int((interval - sec) * 1000000)
                self.sock.setsockopt(
                    socket.SOL_SOCKET, timeout,
                    pack('ll', sec, usec),
                )
        self._setup_transport()

        self._write(AMQP_PROTOCOL_HEADER)

    def _get_tcp_socket_defaults(self, sock):
        tcp_opts = {}
        for opt in KNOWN_TCP_OPTS:
            enum = None
            if opt == 'TCP_USER_TIMEOUT':
                try:
                    from socket import TCP_USER_TIMEOUT as enum
                except ImportError:
                    # should be in Python 3.6+ on Linux.
                    enum = 18
            elif hasattr(socket, opt):
                enum = getattr(socket, opt)

            if enum:
                if opt in DEFAULT_SOCKET_SETTINGS:
                    tcp_opts[enum] = DEFAULT_SOCKET_SETTINGS[opt]
                elif hasattr(socket, opt):
                    tcp_opts[enum] = sock.getsockopt(
                        SOL_TCP, getattr(socket, opt))
        return tcp_opts

    def _set_socket_options(self, socket_settings):
        tcp_opts = self._get_tcp_socket_defaults(self.sock)
        if socket_settings:
            tcp_opts.update(socket_settings)
        for opt, val in tcp_opts.items():
            self.sock.setsockopt(SOL_TCP, opt, val)

    def _read(self, n, initial=False):
        """Read exactly n bytes from the peer."""
        raise NotImplementedError('Must be overridden in subclass')

    def _setup_transport(self):
        """Do any additional initialization of the class."""
        pass

    def _shutdown_transport(self):
        """Do any preliminary work in shutting down the connection."""
        pass

    def _write(self, s):
        """Completely write a string to the peer."""
        raise NotImplementedError('Must be overridden in subclass')

    def close(self):
        if self.sock is not None:
            try:
                self._shutdown_transport()
            except OSError:
                pass
            # Call shutdown first to make sure that pending messages
            # reach the AMQP broker if the program exits after
            # calling this method.
            try:
                self.sock.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass
            try:
                self.sock.close()
            except OSError:
                pass
            self.sock = None
        self.connected = False

    def read_frame(self, unpack=unpack):
        """Parse AMQP frame.

        Frame has following format::

            0      1         3         7                 size+7  size+8
            +------+---------+---------+  +-------------+  +-----------+
            | type | channel |  size   |  |   payload   |  | frame-end |
            +------+---------+---------+  +-------------+  +-----------+
             octet   short      long       'size' octets      octet

        """
        read = self._read
        read_frame_buffer = EMPTY_BUFFER
        try:
            frame_header = read(7, True)
            read_frame_buffer += frame_header
            frame_type, channel, size = unpack('>BHI', frame_header)
            # >I is an unsigned int, but the argument to sock.recv is signed,
            # so we know the size can be at most 2 * SIGNED_INT_MAX
            if size > SIGNED_INT_MAX:
                part1 = read(SIGNED_INT_MAX)

                try:
                    part2 = read(size - SIGNED_INT_MAX)
                except (socket.timeout, OSError, SSLError):
                    # In case this read times out, we need to make sure to not
                    # lose part1 when we retry the read
                    read_frame_buffer += part1
                    raise

                payload = b''.join([part1, part2])
            else:
                payload = read(size)
            read_frame_buffer += payload
            frame_end = ord(read(1))
        except socket.timeout:
            self._read_buffer = read_frame_buffer + self._read_buffer
            raise
        except (OSError, SSLError) as exc:
            if (
                isinstance(exc, socket.error) and os.name == 'nt'
                and exc.errno == errno.EWOULDBLOCK  # noqa
            ):
                # On windows we can get a read timeout with a winsock error
                # code instead of a proper socket.timeout() error, see
                # https://github.com/celery/py-amqp/issues/320
                self._read_buffer = read_frame_buffer + self._read_buffer
                raise socket.timeout()

            if isinstance(exc, SSLError) and 'timed out' in str(exc):
                # Don't disconnect for ssl read time outs
                # http://bugs.python.org/issue10272
                self._read_buffer = read_frame_buffer + self._read_buffer
                raise socket.timeout()

            if exc.errno not in _UNAVAIL:
                self.connected = False
            raise
        # frame-end octet must contain '\xce' value
        if frame_end == 206:
            return frame_type, channel, payload
        else:
            raise UnexpectedFrame(
                f'Received frame_end {frame_end:#04x} while expecting 0xce')

    def write(self, s):
        try:
            self._write(s)
        except socket.timeout:
            raise
        except OSError as exc:
            if exc.errno not in _UNAVAIL:
                self.connected = False
            raise


class SSLTransport(_AbstractTransport):
    """Transport that works over SSL.

    PARAMETERS:
        host: str
            Broker address in format ``HOSTNAME:PORT``.
        connect_timeout: int
            Timeout of creating new connection.
        ssl: bool|dict
            parameters of TLS subsystem.
                - when ``ssl`` is not a dictionary, defaults of TLS are used
                - otherwise:
                    - if the ``ssl`` dictionary contains a ``context`` key,
                      :attr:`~SSLTransport._wrap_context` is used for wrapping
                      the socket. ``context`` is a dictionary passed to
                      :attr:`~SSLTransport._wrap_context` as the context
                      parameter. All other items from the ``ssl`` argument
                      are passed as ``sslopts``.
                    - if the ``ssl`` dictionary does not contain a ``context``
                      key, :attr:`~SSLTransport._wrap_socket_sni` is used for
                      wrapping the socket. All items in the ``ssl`` argument
                      are passed to :attr:`~SSLTransport._wrap_socket_sni` as
                      parameters.
        kwargs:
            additional arguments of
            :class:`~amqp.transport._AbstractTransport` class
    """

    def __init__(self, host, connect_timeout=None, ssl=None, **kwargs):
        self.sslopts = ssl if isinstance(ssl, dict) else {}
        self._read_buffer = EMPTY_BUFFER
        super().__init__(
            host, connect_timeout=connect_timeout, **kwargs)

    __slots__ = (
        "sslopts",
    )

    def _setup_transport(self):
        """Wrap the socket in an SSL object."""
        self.sock = self._wrap_socket(self.sock, **self.sslopts)
        # Explicitly set a timeout here to stop any hangs on handshake.
        self.sock.settimeout(self.connect_timeout)
        self.sock.do_handshake()
        self._quick_recv = self.sock.read

    def _wrap_socket(self, sock, context=None, **sslopts):
        if context:
            return self._wrap_context(sock, sslopts, **context)
        return self._wrap_socket_sni(sock, **sslopts)

    def _wrap_context(self, sock, sslopts, check_hostname=None, **ctx_options):
        """Wrap socket without SNI headers.

        PARAMETERS:
            sock: socket.socket
                Socket to be wrapped.
            sslopts: dict
                Parameters of :attr:`ssl.SSLContext.wrap_socket`.
            check_hostname
                Whether to match the peer cert's hostname. See
                :attr:`ssl.SSLContext.check_hostname` for details.
            ctx_options
                Parameters of :attr:`ssl.create_default_context`.
        """
        ctx = ssl.create_default_context(**ctx_options)
        ctx.check_hostname = check_hostname
        return ctx.wrap_socket(sock, **sslopts)

    def _wrap_socket_sni(self, sock, keyfile=None, certfile=None,
                         server_side=False, cert_reqs=None,
                         ca_certs=None, do_handshake_on_connect=False,
                         suppress_ragged_eofs=True, server_hostname=None,
                         ciphers=None, ssl_version=None):
        """Socket wrap with SNI headers.

        stdlib :attr:`ssl.SSLContext.wrap_socket` method augmented with
        support for setting the server_hostname field required for SNI
        hostname header.

        PARAMETERS:
            sock: socket.socket
                Socket to be wrapped.
            keyfile: str
                Path to the private key
            certfile: str
                Path to the certificate
            server_side: bool
                Identifies whether server-side or client-side behavior is
                desired from this socket. See
                :attr:`~ssl.SSLContext.wrap_socket` for details.
            cert_reqs: ssl.VerifyMode
                When set to other than :attr:`ssl.CERT_NONE`, the peer's
                certificate is checked. Possible values are
                :attr:`ssl.CERT_NONE`, :attr:`ssl.CERT_OPTIONAL` and
                :attr:`ssl.CERT_REQUIRED`.
            ca_certs: str
                Path to "certification authority" (CA) certificates used to
                validate other peers' certificates when ``cert_reqs`` is
                other than :attr:`ssl.CERT_NONE`.
            do_handshake_on_connect: bool
                Specifies whether to do the SSL handshake automatically.
                See :attr:`~ssl.SSLContext.wrap_socket` for details.
            suppress_ragged_eofs (bool):
                See :attr:`~ssl.SSLContext.wrap_socket` for details.
            server_hostname: str
                Specifies the hostname of the service which we are
                connecting to. See :attr:`~ssl.SSLContext.wrap_socket`
                for details.
            ciphers: str
                Available ciphers for sockets created with this context.
                See :attr:`ssl.SSLContext.set_ciphers`
            ssl_version:
                Protocol of the SSL Context. The value is one of
                ``ssl.PROTOCOL_*`` constants.
        """
        opts = {
            'sock': sock,
            'server_side': server_side,
            'do_handshake_on_connect': do_handshake_on_connect,
            'suppress_ragged_eofs': suppress_ragged_eofs,
            'server_hostname': server_hostname,
        }

        if ssl_version is None:
            ssl_version = (
                ssl.PROTOCOL_TLS_SERVER
                if server_side
                else ssl.PROTOCOL_TLS_CLIENT
            )

        context = ssl.SSLContext(ssl_version)

        if certfile is not None:
            context.load_cert_chain(certfile, keyfile)
        if ca_certs is not None:
            context.load_verify_locations(ca_certs)
        if ciphers is not None:
            context.set_ciphers(ciphers)

        # Set SNI headers if supported.
        # Must set context.check_hostname before setting context.verify_mode
        # to avoid setting context.verify_mode=ssl.CERT_NONE while
        # context.check_hostname is still True (the default value in context
        # if client-side) which results in the following exception:
        # ValueError: Cannot set verify_mode to CERT_NONE when check_hostname
        # is enabled.
        try:
            context.check_hostname = (
                ssl.HAS_SNI and server_hostname is not None
            )
        except AttributeError:
            pass  # ask forgiveness not permission

        # See note above re: ordering for context.check_hostname and
        # context.verify_mode assignments.
        if cert_reqs is not None:
            context.verify_mode = cert_reqs

        if ca_certs is None and context.verify_mode != ssl.CERT_NONE:
            purpose = (
                ssl.Purpose.CLIENT_AUTH
                if server_side
                else ssl.Purpose.SERVER_AUTH
            )
            context.load_default_certs(purpose)

        sock = context.wrap_socket(**opts)
        return sock

    def _shutdown_transport(self):
        """Unwrap a SSL socket, so we can call shutdown()."""
        if self.sock is not None:
            self.sock = self.sock.unwrap()

    def _read(self, n, initial=False,
              _errnos=(errno.ENOENT, errno.EAGAIN, errno.EINTR)):
        # According to SSL_read(3), it can at most return 16kb of data.
        # Thus, we use an internal read buffer like TCPTransport._read
        # to get the exact number of bytes wanted.
        recv = self._quick_recv
        rbuf = self._read_buffer
        try:
            while len(rbuf) < n:
                try:
                    s = recv(n - len(rbuf))  # see note above
                except OSError as exc:
                    # ssl.sock.read may cause ENOENT if the
                    # operation couldn't be performed (Issue celery#1414).
                    if exc.errno in _errnos:
                        if initial and self.raise_on_initial_eintr:
                            raise socket.timeout()
                        continue
                    raise
                if not s:
                    raise OSError('Server unexpectedly closed connection')

                rbuf += s
        except:  # noqa
            self._read_buffer = rbuf
            raise
        result, self._read_buffer = rbuf[:n], rbuf[n:]
        return result

    def _write(self, s):
        """Write a string out to the SSL socket fully."""
        write = self.sock.write
        while s:
            try:
                n = write(s)
            except ValueError:
                # AG: sock._sslobj might become null in the meantime if the
                # remote connection has hung up.
                # In python 3.4, a ValueError is raised if self._sslobj is
                # None.
                n = 0
            if not n:
                raise OSError('Socket closed')
            s = s[n:]


class TCPTransport(_AbstractTransport):
    """Transport that deals directly with TCP socket.

    All parameters are :class:`~amqp.transport._AbstractTransport` class.
    """

    def _setup_transport(self):
        # Setup to _write() directly to the socket, and
        # do our own buffered reads.
        self._write = self.sock.sendall
        self._read_buffer = EMPTY_BUFFER
        self._quick_recv = self.sock.recv

    def _read(self, n, initial=False, _errnos=(errno.EAGAIN, errno.EINTR)):
        """Read exactly n bytes from the socket."""
        recv = self._quick_recv
        rbuf = self._read_buffer
        try:
            while len(rbuf) < n:
                try:
                    s = recv(n - len(rbuf))
                except OSError as exc:
                    if exc.errno in _errnos:
                        if initial and self.raise_on_initial_eintr:
                            raise socket.timeout()
                        continue
                    raise
                if not s:
                    raise OSError('Server unexpectedly closed connection')

                rbuf += s
        except:  # noqa
            self._read_buffer = rbuf
            raise
        result, self._read_buffer = rbuf[:n], rbuf[n:]
        return result


def Transport(host, connect_timeout=None, ssl=False, **kwargs):
    """Create transport.
    Given a few parameters from the Connection constructor, select and
    create a subclass of :class:`~amqp.transport._AbstractTransport`.

    PARAMETERS:
        host: str
            Broker address in format ``HOSTNAME:PORT``.
        connect_timeout: int
            Timeout of creating new connection.
        ssl: bool|dict
            If set, :class:`~amqp.transport.SSLTransport` is used and the
            ``ssl`` parameter is passed to it. Otherwise
            :class:`~amqp.transport.TCPTransport` is used.
        kwargs:
            additional arguments of
            :class:`~amqp.transport._AbstractTransport` class
    """
    transport = SSLTransport if ssl else TCPTransport
    return transport(host, connect_timeout=connect_timeout, ssl=ssl, **kwargs)

amqp-5.3.1/amqp/utils.py

"""Compatibility utilities."""
import logging
from logging import NullHandler

# enables celery 3.1.23 to start again
from vine import promise  # noqa
from vine.utils import wraps

try:
    import fcntl
except ImportError:  # pragma: no cover
    fcntl = None  # noqa


def set_cloexec(fd, cloexec):
    """Set flag to close fd after exec."""
    if fcntl is None:
        return
    try:
        FD_CLOEXEC = fcntl.FD_CLOEXEC
    except AttributeError:
        raise NotImplementedError(
            'close-on-exec flag not supported on this platform',
        )
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    if cloexec:
        flags |= FD_CLOEXEC
    else:
        flags &= ~FD_CLOEXEC
    return fcntl.fcntl(fd, fcntl.F_SETFD, flags)


def coro(gen):
    """Decorator to mark generator as a co-routine."""
    @wraps(gen)
    def _boot(*args, **kwargs):
        co = gen(*args, **kwargs)
        next(co)
        return co

    return _boot


def str_to_bytes(s):
    """Convert str to bytes."""
    if isinstance(s, str):
        return s.encode('utf-8', 'surrogatepass')
    return s


def bytes_to_str(s):
    """Convert bytes to str."""
    if isinstance(s, bytes):
        return s.decode('utf-8', 'surrogatepass')
    return s


def get_logger(logger):
    """Get logger by name."""
    if isinstance(logger, str):
        logger = logging.getLogger(logger)
    if not logger.handlers:
        logger.addHandler(NullHandler())
    return logger

amqp-5.3.1/amqp.egg-info/PKG-INFO

Metadata-Version: 2.1
Name: amqp
Version: 5.3.1
Summary: Low-level AMQP client for Python (fork of amqplib).
Home-page: http://github.com/celery/py-amqp
Author: Barry Pederson
Author-email: auvipy@gmail.com
Maintainer: Asif Saif Uddin, Matus Valo
License: BSD
Keywords: amqp rabbitmq cloudamqp messaging
Platform: any
Classifier: Development Status :: 5 - Production/Stable
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: License :: OSI Approved :: BSD License
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Requires-Python: >=3.6
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: vine<6.0.0,>=5.0.0

=====================================================================
 Python AMQP 0.9.1 client library
=====================================================================

|build-status| |coverage| |license| |wheel| |pyversion| |pyimp|

:Version: 5.3.1
:Web: https://amqp.readthedocs.io/
:Download: https://pypi.org/project/amqp/
:Source: http://github.com/celery/py-amqp/
:Keywords: amqp, rabbitmq

About
=====

This is a fork of amqplib_
which was originally written by Barry Pederson. It is maintained by the
Celery_ project, and used by `kombu`_ as a pure python alternative when
`librabbitmq`_ is not available.

This library should be API compatible with `librabbitmq`_.

.. _amqplib: https://pypi.org/project/amqplib/
.. _Celery: http://celeryproject.org/
.. _kombu: https://kombu.readthedocs.io/
.. _librabbitmq: https://pypi.org/project/librabbitmq/

Differences from `amqplib`_
===========================

- Supports draining events from multiple channels
  (``Connection.drain_events``)
- Support for timeouts
- Channels are restored after channel error, instead of having to close the
  connection.
- Support for heartbeats

    - ``Connection.heartbeat_tick(rate=2)`` must be called at regular
      intervals (half of the heartbeat value if rate is 2).
    - Or some other scheme by using ``Connection.send_heartbeat``.

- Supports RabbitMQ extensions:

    - Consumer Cancel Notifications

        - by default a cancel results in ``ChannelError`` being raised
        - but not if an ``on_cancel`` callback is passed to
          ``basic_consume``.

    - Publisher confirms

        - ``Channel.confirm_select()`` enables publisher confirms.
        - ``Channel.events['basic_ack'].append(my_callback)`` adds a callback
          to be called when a message is confirmed. This callback is then
          called with the signature ``(delivery_tag, multiple)``.

    - Exchange-to-exchange bindings: ``exchange_bind`` / ``exchange_unbind``.

    - Authentication Failure Notifications

        Instead of just closing the connection abruptly on invalid
        credentials, py-amqp will raise an ``AccessRefused`` error when
        connected to rabbitmq-server 3.2.0 or greater.

- Support for ``basic_return``
- Uses AMQP 0-9-1 instead of 0-8.
- ``Channel.access_request`` and ``ticket`` arguments to methods
  **removed**.
- Supports the ``arguments`` argument to ``basic_consume``.
- ``internal`` argument to ``exchange_declare`` removed.
- ``auto_delete`` argument to ``exchange_declare`` deprecated
- ``insist`` argument to ``Connection`` removed.
- ``Channel.alerts`` has been removed.
- Support for ``Channel.basic_recover_async``.
- ``Channel.basic_recover`` deprecated.
- Exceptions renamed to have idiomatic names:

    - ``AMQPException`` -> ``AMQPError``
    - ``AMQPConnectionException`` -> ``ConnectionError``
    - ``AMQPChannelException`` -> ``ChannelError``

- ``Connection.known_hosts`` removed.
- ``Connection`` no longer supports redirects.
- ``exchange`` argument to ``queue_bind`` can now be empty to use the
  "default exchange".
- Adds ``Connection.is_alive`` that tries to detect whether the connection
  can still be used.
- Adds ``Connection.connection_errors`` and ``.channel_errors``, a list of
  recoverable errors.
- Exposes the underlying socket as ``Connection.sock``.
- Adds ``Channel.no_ack_consumers`` to keep track of consumer tags that set
  the no_ack flag.
- Slightly better at error recovery

Quick overview
==============

Simple producer publishing messages to ``test`` queue using default exchange:

.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()
        ch.basic_publish(amqp.Message('Hello World'), routing_key='test')

Producer publishing to ``test_exchange`` exchange with publisher confirms
enabled and using virtual_host ``test_vhost``:

.. code:: python

    import amqp

    with amqp.Connection(
        'broker.example.com', exchange='test_exchange',
        confirm_publish=True, virtual_host='test_vhost'
    ) as c:
        ch = c.channel()
        ch.basic_publish(amqp.Message('Hello World'), routing_key='test')

Consumer with acknowledgments enabled:

.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()

        def on_message(message):
            print('Received message (delivery tag: {}): {}'.format(
                message.delivery_tag, message.body))
            ch.basic_ack(message.delivery_tag)

        ch.basic_consume(queue='test', callback=on_message)
        while True:
            c.drain_events()

Consumer with acknowledgments disabled:

.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()

        def on_message(message):
            print('Received message (delivery tag: {}): {}'.format(
                message.delivery_tag, message.body))

        ch.basic_consume(queue='test', callback=on_message, no_ack=True)
        while True:
            c.drain_events()

Speedups
========

This library has **experimental** support of speedups. Speedups are
implemented using Cython. To enable speedups, the
``CELERY_ENABLE_SPEEDUPS`` environment variable must be set during
building/installation. Currently speedups can be installed:

1. using source package (using ``--no-binary`` switch):

.. code:: shell

    CELERY_ENABLE_SPEEDUPS=true pip install --no-binary :all: amqp

2. building directly from source code:

.. code:: shell

    CELERY_ENABLE_SPEEDUPS=true python setup.py install

Further
=======

- Differences between AMQP 0.8 and 0.9.1
  http://www.rabbitmq.com/amqp-0-8-to-0-9-1.html
- AMQP 0.9.1 Quick Reference
  http://www.rabbitmq.com/amqp-0-9-1-quickref.html
- RabbitMQ Extensions
  http://www.rabbitmq.com/extensions.html
- For more information about AMQP, visit http://www.amqp.org
- For other Python client libraries see:
  http://www.rabbitmq.com/devtools.html#python-dev

.. |build-status| image:: https://github.com/celery/py-amqp/actions/workflows/ci.yaml/badge.svg
   :alt: Build status
   :target: https://github.com/celery/py-amqp/actions/workflows/ci.yaml

.. |coverage| image:: https://codecov.io/github/celery/py-amqp/coverage.svg?branch=main
   :target: https://codecov.io/github/celery/py-amqp?branch=main

.. |license| image:: https://img.shields.io/pypi/l/amqp.svg
   :alt: BSD License
   :target: https://opensource.org/licenses/BSD-3-Clause

.. |wheel| image:: https://img.shields.io/pypi/wheel/amqp.svg
   :alt: Python AMQP can be installed via wheel
   :target: https://pypi.org/project/amqp/

.. |pyversion| image:: https://img.shields.io/pypi/pyversions/amqp.svg
   :alt: Supported Python versions.
   :target: https://pypi.org/project/amqp/

.. |pyimp| image:: https://img.shields.io/pypi/implementation/amqp.svg
   :alt: Supported Python implementations.
   :target: https://pypi.org/project/amqp/

py-amqp as part of the Tidelift Subscription
============================================

The maintainers of py-amqp and thousands of other packages are working with
Tidelift to deliver commercial support and maintenance for the open source
dependencies you use to build your applications. Save time, reduce risk, and
improve code health, while paying the maintainers of the exact dependencies
you use. `Learn more <https://tidelift.com/subscription/pkg/pypi-amqp?utm_source=pypi-amqp&utm_medium=referral&utm_campaign=readme&utm_term=repo>`_.

amqp-5.3.1/amqp.egg-info/SOURCES.txt

Changelog
LICENSE
MANIFEST.in
README.rst
setup.cfg
setup.py
amqp/__init__.py
amqp/abstract_channel.py
amqp/basic_message.py
amqp/channel.py
amqp/connection.py
amqp/exceptions.py
amqp/method_framing.py
amqp/platform.py
amqp/protocol.py
amqp/sasl.py
amqp/serialization.py
amqp/spec.py
amqp/transport.py
amqp/utils.py
amqp.egg-info/PKG-INFO
amqp.egg-info/SOURCES.txt
amqp.egg-info/dependency_links.txt
amqp.egg-info/not-zip-safe
amqp.egg-info/requires.txt
amqp.egg-info/top_level.txt
docs/Makefile
docs/changelog.rst
docs/conf.py
docs/index.rst
docs/make.bat
docs/_static/.keep
docs/_templates/sidebardonations.html
docs/images/celery_128.png
docs/images/favicon.ico
docs/includes/introduction.txt docs/reference/amqp.abstract_channel.rst docs/reference/amqp.basic_message.rst docs/reference/amqp.channel.rst docs/reference/amqp.connection.rst docs/reference/amqp.exceptions.rst docs/reference/amqp.method_framing.rst docs/reference/amqp.platform.rst docs/reference/amqp.protocol.rst docs/reference/amqp.sasl.rst docs/reference/amqp.serialization.rst docs/reference/amqp.spec.rst docs/reference/amqp.transport.rst docs/reference/amqp.utils.rst docs/reference/index.rst docs/templates/readme.txt extra/update_comments_from_spec.py requirements/default.txt requirements/docs.txt requirements/pkgutils.txt requirements/test-ci.txt requirements/test.txt t/__init__.py t/mocks.py t/integration/__init__.py t/integration/conftest.py t/integration/test_integration.py t/integration/test_rmq.py t/unit/__init__.py t/unit/conftest.py t/unit/test_abstract_channel.py t/unit/test_basic_message.py t/unit/test_channel.py t/unit/test_connection.py t/unit/test_exceptions.py t/unit/test_method_framing.py t/unit/test_platform.py t/unit/test_sasl.py t/unit/test_serialization.py t/unit/test_transport.py t/unit/test_utils.py././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731441334.0 amqp-5.3.1/amqp.egg-info/dependency_links.txt0000644000076500000240000000000114714731266020445 0ustar00nusnusstaff ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731441334.0 amqp-5.3.1/amqp.egg-info/not-zip-safe0000644000076500000240000000000114714731266016625 0ustar00nusnusstaff ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731441334.0 amqp-5.3.1/amqp.egg-info/requires.txt0000644000076500000240000000002314714731266016772 0ustar00nusnusstaffvine<6.0.0,>=5.0.0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731441334.0 amqp-5.3.1/amqp.egg-info/top_level.txt0000644000076500000240000000000514714731266017124 0ustar00nusnusstaffamqp 
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731441334.1568458 amqp-5.3.1/docs/0000755000076500000240000000000014714731266012677 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/Makefile0000644000076500000240000001751114345343305014335 0ustar00nusnusstaff# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # User-friendly check for sphinx-build ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don\'t have Sphinx installed, grab it from http://sphinx-doc.org/) endif # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
.PHONY: help help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " applehelp to make an Apple Help Book" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " epub3 to make an epub3" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @echo " coverage to run coverage check of the documentation (if enabled)" @echo " apicheck to verify that all modules are present in autodoc" .PHONY: clean clean: rm -rf $(BUILDDIR)/* .PHONY: html html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." .PHONY: dirhtml dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." .PHONY: singlehtml singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. 
The HTML page is in $(BUILDDIR)/singlehtml." .PHONY: pickle pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." .PHONY: json json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." .PHONY: htmlhelp htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." .PHONY: qthelp qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/PROJ.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/PROJ.qhc" .PHONY: applehelp applehelp: $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp @echo @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." @echo "N.B. You won't be able to view it unless you put it in" \ "~/Library/Documentation/Help or install it in your application" \ "bundle." .PHONY: devhelp devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/PROJ" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/PROJ" @echo "# devhelp" .PHONY: epub epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." .PHONY: epub3 epub3: $(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3 @echo @echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3." .PHONY: latex latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 
@echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." .PHONY: latexpdf latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." .PHONY: latexpdfja latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." .PHONY: text text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." .PHONY: man man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." .PHONY: texinfo texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." .PHONY: info info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." .PHONY: gettext gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." .PHONY: changes changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." .PHONY: linkcheck linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." 
.PHONY: doctest doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." .PHONY: coverage coverage: $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage @echo "Testing of coverage in the sources finished, look at the " \ "results in $(BUILDDIR)/coverage/python.txt." .PHONY: apicheck apicheck: $(SPHINXBUILD) -b apicheck $(ALLSPHINXOPTS) $(BUILDDIR)/apicheck .PHONY: xml xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." .PHONY: pseudoxml pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731441334.1571097 amqp-5.3.1/docs/_static/0000755000076500000240000000000014714731266014325 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/_static/.keep0000644000076500000240000000000014345343305015231 0ustar00nusnusstaff././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731441334.1572406 amqp-5.3.1/docs/_templates/0000755000076500000240000000000014714731266015034 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/_templates/sidebardonations.html0000644000076500000240000000601214345343305021242 0ustar00nusnusstaff ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/changelog.rst0000644000076500000240000000003214345343305015344 0ustar00nusnusstaff.. 
include:: ../Changelog ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731441042.0 amqp-5.3.1/docs/conf.py0000644000076500000240000000131014714730622014164 0ustar00nusnusstafffrom sphinx_celery import conf

globals().update(conf.build_config(
    'amqp', __file__,
    project='py-amqp',
    description='Python Promises',
    version_dev='5.3',
    version_stable='5.3',
    canonical_url='https://amqp.readthedocs.io',
    webdomain='celeryproject.org',
    github_project='celery/py-amqp',
    author='Ask Solem & contributors',
    author_name='Ask Solem',
    copyright='2016',
    publisher='Celery Project',
    html_logo='images/celery_128.png',
    html_favicon='images/favicon.ico',
    html_prepend_sidebars=['sidebardonations.html'],
    extra_extensions=[],
    include_intersphinx={'python', 'sphinx'},
    apicheck_package='amqp',
    apicheck_ignore_modules=['amqp'],
))

././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731441334.1576421 amqp-5.3.1/docs/images/0000755000076500000240000000000014714731266014144 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/images/celery_128.png0000644000076500000240000006321214345343305016524 0ustar00nusnusstaff[binary PNG image data omitted]
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/images/favicon.ico0000644000076500000240000000644414345343305016266 0ustar00nusnusstaff[binary PNG image data omitted]
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731441334.157822 amqp-5.3.1/docs/includes/0000755000076500000240000000000014714731266014505 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731441318.0 amqp-5.3.1/docs/includes/introduction.txt0000644000076500000240000000717414714731246017776 0ustar00nusnusstaff:Version: 5.3.1
:Web: https://amqp.readthedocs.io/
:Download: https://pypi.org/project/amqp/
:Source: http://github.com/celery/py-amqp/
:Keywords: amqp, rabbitmq

About
=====

This is a fork of amqplib_ which was originally written by Barry Pederson.
It is maintained by the Celery_ project, and used by `kombu`_ as a pure
Python alternative when `librabbitmq`_ is not available.

This library should be API compatible with `librabbitmq`_.

.. _amqplib: https://pypi.org/project/amqplib/
.. _Celery: http://celeryproject.org/
.. _kombu: https://kombu.readthedocs.io/
.. _librabbitmq: https://pypi.org/project/librabbitmq/

Differences from `amqplib`_
===========================

- Supports draining events from multiple channels (``Connection.drain_events``)

- Support for timeouts

- Channels are restored after a channel error, instead of having to close the
  connection.

- Support for heartbeats

    - ``Connection.heartbeat_tick(rate=2)`` must be called at regular
      intervals (half of the heartbeat value if rate is 2).
    - Or some other scheme by using ``Connection.send_heartbeat``.

- Supports RabbitMQ extensions:

    - Consumer Cancel Notifications

        - by default a cancel results in ``ChannelError`` being raised
        - but not if an ``on_cancel`` callback is passed to ``basic_consume``.

    - Publisher confirms

        - ``Channel.confirm_select()`` enables publisher confirms.
        - ``Channel.events['basic_ack'].append(my_callback)`` adds a callback
          to be called when a message is confirmed. This callback is then
          called with the signature ``(delivery_tag, multiple)``.
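As a loose illustration of the publisher-confirm callback described above: the ``(delivery_tag, multiple)`` signature comes from the text, while the unconfirmed-tag bookkeeping around it is a hypothetical sketch, not part of the py-amqp API.

```python
# Hedged sketch: a ``basic_ack`` callback with the (delivery_tag, multiple)
# signature described above. The set of outstanding tags is illustrative.

unconfirmed = {1, 2, 3}  # delivery tags of messages published so far

def on_basic_ack(delivery_tag, multiple):
    """Drop delivery tags the broker has confirmed."""
    if multiple:
        # The broker confirmed every tag up to and including delivery_tag.
        unconfirmed.difference_update(
            {tag for tag in unconfirmed if tag <= delivery_tag})
    else:
        unconfirmed.discard(delivery_tag)

# With a live channel it would be registered like this (requires a broker):
#     ch.confirm_select()
#     ch.events['basic_ack'].append(on_basic_ack)

on_basic_ack(2, multiple=True)   # confirms tags 1 and 2
on_basic_ack(3, multiple=False)  # confirms tag 3
print(sorted(unconfirmed))       # -> []
```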
    - Exchange-to-exchange bindings: ``exchange_bind`` / ``exchange_unbind``.

- Support for ``basic_return``

- Uses AMQP 0-9-1 instead of 0-8.

- ``Channel.access_request`` and ``ticket`` arguments to methods **removed**.

- Supports the ``arguments`` argument to ``basic_consume``.

- ``internal`` argument to ``exchange_declare`` removed.

- ``auto_delete`` argument to ``exchange_declare`` deprecated.

- ``insist`` argument to ``Connection`` removed.

- ``Channel.alerts`` has been removed.

- Support for ``Channel.basic_recover_async``.

- ``Channel.basic_recover`` deprecated.

- Exceptions renamed to have idiomatic names:

    - ``AMQPException`` -> ``AMQPError``
    - ``AMQPConnectionException`` -> ``ConnectionError``
    - ``AMQPChannelException`` -> ``ChannelError``

- ``Connection.known_hosts`` removed.

- ``Connection`` no longer supports redirects.

- ``exchange`` argument to ``queue_bind`` can now be empty
  to use the "default exchange".

- Adds ``Connection.is_alive`` that tries to detect whether the
  connection can still be used.

- Adds ``Connection.connection_errors`` and ``.channel_errors``,
  lists of recoverable errors.

- Exposes the underlying socket as ``Connection.sock``.

- Adds ``Channel.no_ack_consumers`` to keep track of consumer tags
  that set the no_ack flag.
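The heartbeat rule described above (call ``heartbeat_tick`` at intervals of half the heartbeat value when ``rate=2``) can be sketched as a pacing calculation plus an event loop; the 10-second heartbeat is a hypothetical value (the real one comes from connection negotiation), and the networked part is left as comments since it requires a live broker:

```python
# Hedged sketch of heartbeat pacing, assuming a negotiated heartbeat of
# 10 seconds (hypothetical value).
heartbeat = 10.0
rate = 2
interval = heartbeat / rate  # call heartbeat_tick() at least this often

# Against a live broker the drain loop could look like this (not run here):
#     import socket
#     while True:
#         try:
#             c.drain_events(timeout=interval)
#         except socket.timeout:
#             pass
#         c.heartbeat_tick(rate=rate)

print(interval)  # -> 5.0
```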
- Slightly better at error recovery Further ======= - Differences between AMQP 0.8 and 0.9.1 http://www.rabbitmq.com/amqp-0-8-to-0-9-1.html - AMQP 0.9.1 Quick Reference http://www.rabbitmq.com/amqp-0-9-1-quickref.html - RabbitMQ Extensions http://www.rabbitmq.com/extensions.html - For more information about AMQP, visit http://www.amqp.org - For other Python client libraries see: http://www.rabbitmq.com/devtools.html#python-dev ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/index.rst0000644000076500000240000000054314345343305014533 0ustar00nusnusstaff============================================= amqp - Python AMQP low-level client library ============================================= .. include:: includes/introduction.txt Contents ======== .. toctree:: :maxdepth: 2 reference/index changelog Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/make.bat0000644000076500000240000001646414345343305014310 0ustar00nusnusstaff@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . set I18NSPHINXOPTS=%SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. 
epub to make an epub echo. epub3 to make an epub3 echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. xml to make Docutils-native XML files echo. pseudoxml to make pseudoxml-XML files for display purposes echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled echo. coverage to run coverage check of the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) REM Check if sphinx-build is available and fallback to Python version if any %SPHINXBUILD% 1>NUL 2>NUL if errorlevel 9009 goto sphinx_python goto sphinx_ok :sphinx_python set SPHINXBUILD=python -m sphinx.__init__ %SPHINXBUILD% 2> nul if errorlevel 9009 ( echo. echo.The 'sphinx-build' command was not found. Make sure you have Sphinx echo.installed, then set the SPHINXBUILD environment variable to point echo.to the full path of the 'sphinx-build' executable. Alternatively you echo.may add the Sphinx directory to PATH. echo. echo.If you don't have Sphinx installed, grab it from echo.http://sphinx-doc.org/ exit /b 1 ) :sphinx_ok if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 
goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\PROJ.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\PROJ.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "epub3" ( %SPHINXBUILD% -b epub3 %ALLSPHINXOPTS% %BUILDDIR%/epub3 if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub3 file is in %BUILDDIR%/epub3. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdf" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf cd %~dp0 echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. 
goto end ) if "%1" == "latexpdfja" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf-ja cd %~dp0 echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) if "%1" == "coverage" ( %SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage if errorlevel 1 exit /b 1 echo. echo.Testing of coverage in the sources finished, look at the ^ results in %BUILDDIR%/coverage/python.txt. goto end ) if "%1" == "xml" ( %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml if errorlevel 1 exit /b 1 echo. echo.Build finished. The XML files are in %BUILDDIR%/xml. 
goto end ) if "%1" == "pseudoxml" ( %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml if errorlevel 1 exit /b 1 echo. echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. goto end ) :end ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731441334.1606975 amqp-5.3.1/docs/reference/0000755000076500000240000000000014714731266014635 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.abstract_channel.rst0000644000076500000240000000042414345343305021610 0ustar00nusnusstaff===================================================== ``amqp.abstract_channel`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.abstract_channel .. automodule:: amqp.abstract_channel :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.basic_message.rst0000644000076500000240000000041314345343305021100 0ustar00nusnusstaff===================================================== ``amqp.basic_message`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.basic_message .. automodule:: amqp.basic_message :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.channel.rst0000644000076500000240000000037114345343305017726 0ustar00nusnusstaff===================================================== ``amqp.channel`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.channel .. 
automodule:: amqp.channel :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.connection.rst0000644000076500000240000000040214345343305020450 0ustar00nusnusstaff===================================================== ``amqp.connection`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.connection .. automodule:: amqp.connection :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.exceptions.rst0000644000076500000240000000040214345343305020472 0ustar00nusnusstaff===================================================== ``amqp.exceptions`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.exceptions .. automodule:: amqp.exceptions :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.method_framing.rst0000644000076500000240000000041614345343305021301 0ustar00nusnusstaff===================================================== ``amqp.method_framing`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.method_framing .. automodule:: amqp.method_framing :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.platform.rst0000644000076500000240000000037414345343305020145 0ustar00nusnusstaff===================================================== ``amqp.platform`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.platform .. 
automodule:: amqp.platform :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.protocol.rst0000644000076500000240000000037414345343305020162 0ustar00nusnusstaff===================================================== ``amqp.protocol`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.protocol .. automodule:: amqp.protocol :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.sasl.rst0000644000076500000240000000035414345343305017261 0ustar00nusnusstaff===================================================== ``amqp.sasl`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.sasl .. automodule:: amqp.sasl :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.serialization.rst0000644000076500000240000000041314345343305021170 0ustar00nusnusstaff===================================================== ``amqp.serialization`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.serialization .. automodule:: amqp.serialization :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.spec.rst0000644000076500000240000000036014345343305017246 0ustar00nusnusstaff===================================================== ``amqp.spec`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.spec ..
automodule:: amqp.spec :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.transport.rst0000644000076500000240000000102214345343305020344 0ustar00nusnusstaff===================================================== ``amqp.transport`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.transport .. automodule:: amqp.transport .. autoclass:: _AbstractTransport :members: :undoc-members: .. autoclass:: SSLTransport :members: :private-members: _wrap_context, _wrap_socket_sni :undoc-members: .. autoclass:: TCPTransport :members: :undoc-members: .. autoclass:: Transport :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/amqp.utils.rst0000644000076500000240000000036314345343305017457 0ustar00nusnusstaff===================================================== ``amqp.utils`` ===================================================== .. contents:: :local: .. currentmodule:: amqp.utils .. automodule:: amqp.utils :members: :undoc-members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/docs/reference/index.rst0000644000076500000240000000057314345343305016474 0ustar00nusnusstaff.. _apiref: =============== API Reference =============== :Release: |version| :Date: |today| .. 
toctree:: :maxdepth: 1 amqp.connection amqp.channel amqp.basic_message amqp.exceptions amqp.abstract_channel amqp.transport amqp.method_framing amqp.platform amqp.protocol amqp.sasl amqp.serialization amqp.spec amqp.utils ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731441334.160871 amqp-5.3.1/docs/templates/0000755000076500000240000000000014714731266014675 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1685012980.0 amqp-5.3.1/docs/templates/readme.txt0000644000076500000240000000232614433640764016676 0ustar00nusnusstaff===================================================================== Python AMQP 0.9.1 client library ===================================================================== |build-status| |coverage| |license| |wheel| |pyversion| |pyimp| .. include:: ../includes/introduction.txt .. |build-status| image:: https://github.com/celery/py-amqp/actions/workflows/ci.yaml/badge.svg :alt: Build status :target: https://github.com/celery/py-amqp/actions/workflows/ci.yaml .. |coverage| image:: https://codecov.io/github/celery/py-amqp/coverage.svg?branch=main :target: https://codecov.io/github/celery/py-amqp?branch=main .. |license| image:: https://img.shields.io/pypi/l/amqp.svg :alt: BSD License :target: https://opensource.org/licenses/BSD-3-Clause .. |wheel| image:: https://img.shields.io/pypi/wheel/amqp.svg :alt: Python AMQP can be installed via wheel :target: https://pypi.org/project/amqp/ .. |pyversion| image:: https://img.shields.io/pypi/pyversions/amqp.svg :alt: Supported Python versions. :target: https://pypi.org/project/amqp/ .. |pyimp| image:: https://img.shields.io/pypi/implementation/amqp.svg :alt: Support Python implementations. 
:target: https://pypi.org/project/amqp/

amqp-5.3.1/extra/update_comments_from_spec.py:

import os
import sys
import re

default_source_file = os.path.join(
    os.path.dirname(__file__), '../amqp/channel.py',
)

# NOTE: the '<name>' parts of the named groups were stripped in this copy;
# 'mname' and 'comment' are confirmed by the match.group() calls below,
# while 'defsig' is a reconstructed placeholder for the unreferenced group.
RE_COMMENTS = re.compile(
    r'(?P<defsig>def\s+(?P<mname>[a-zA-Z0-9_]+)\(.*?\)'
    ':\n+\\s+""")(?P<comment>.*?)(?=""")',
    re.MULTILINE | re.DOTALL
)

USAGE = """\
Usage: %s <comments_file> <result_file> [<impl_file>]\
"""


def update_comments(comments_file, impl_file, result_file):
    text_file = open(impl_file)
    source = text_file.read()
    comments = get_comments(comments_file)
    for def_name, comment in comments.items():
        source = replace_comment_per_def(
            source, result_file, def_name, comment
        )
    new_file = open(result_file, 'w+')
    new_file.write(source)


def get_comments(filename):
    text_file = open(filename)
    whole_source = text_file.read()
    comments = {}
    all_matches = RE_COMMENTS.finditer(whole_source)
    for match in all_matches:
        comments[match.group('mname')] = match.group('comment')
        # print('method: %s \ncomment: %s' % (
        #     match.group('mname'), match.group('comment')))
    return comments


def replace_comment_per_def(source, result_file, def_name, new_comment):
    # NOTE: the '<sig>' group name here is likewise a reconstructed
    # placeholder; it only has to match the \g<sig> backreference below.
    regex = (r'(?P<sig>def\s+' + def_name +
             '\\(.*?\\):\n+\\s+""".*?\n).*?(?=""")')
    # print('method and comment:' + def_name + new_comment)
    result = re.sub(regex, r'\g<sig>' + new_comment, source, 0,
                    re.MULTILINE | re.DOTALL)
    return result


def main(argv=None):
    if argv is None:
        argv = sys.argv
    if len(argv) < 3:
        print(USAGE % argv[0])
        return 1
    impl_file = default_source_file
    if len(argv) >= 4:
        impl_file = argv[3]
    update_comments(argv[1], impl_file, argv[2])


if __name__ == '__main__':
    sys.exit(main())

././@PaxHeader0000000000000000000000000000003400000000000010212
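The update_comments_from_spec.py helper above harvests method docstrings from channel.py with a named-group regex before splicing replacement comments back in. A minimal, self-contained sketch of that pattern in action (the outer group name `defsig` is illustrative; `mname` and `comment` follow the script's own `match.group()` calls):

```python
import re

# Capture a 'def ...():' signature plus its docstring body as named groups,
# mirroring the RE_COMMENTS pattern in update_comments_from_spec.py.
RE_COMMENTS = re.compile(
    r'(?P<defsig>def\s+(?P<mname>[a-zA-Z0-9_]+)\(.*?\):\n+\s+""")'
    r'(?P<comment>.*?)(?=""")',
    re.MULTILINE | re.DOTALL,
)

source = (
    'def basic_ack(self, delivery_tag, multiple=False):\n'
    '    """Acknowledge one or more messages."""\n'
)
match = RE_COMMENTS.search(source)
print(match.group('mname'))    # basic_ack
print(match.group('comment'))  # Acknowledge one or more messages.
```

Iterating with `finditer()` over a whole module, as `get_comments()` does, yields one such match per documented method.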
xustar0028 mtime=1731441334.1631246 amqp-5.3.1/requirements/0000755000076500000240000000000014714731266014472 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1716998002.0 amqp-5.3.1/requirements/default.txt0000644000076500000240000000002314625647562016660 0ustar00nusnusstaffvine>=5.0.0,<6.0.0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731441042.0 amqp-5.3.1/requirements/docs.txt0000644000076500000240000000002514714730622016153 0ustar00nusnusstaffsphinx_celery>=2.1.3 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/requirements/pkgutils.txt0000644000076500000240000000015014345343305017062 0ustar00nusnusstaffsetuptools>=20.6.7 wheel>=0.29.0 flake8>=3.8.3 tox>=2.3.1 sphinx2rst>=1.0 bumpversion pydocstyle==1.1.1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1716998002.0 amqp-5.3.1/requirements/test-ci.txt0000644000076500000240000000004014625647562016603 0ustar00nusnusstaffpytest-cov codecov pytest-xdist ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/requirements/test.txt0000644000076500000240000000010414345343305016176 0ustar00nusnusstaffpytest>=6.2.5,<=8.0.0 pytest-sugar>=0.9.1 pytest-rerunfailures>=6.0 ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731441334.167433 amqp-5.3.1/setup.cfg0000644000076500000240000000044314714731266013571 0ustar00nusnusstaff[tool:pytest] testpaths = t/unit/ t/integration/ python_classes = test_* [bdist_rpm] requires = vine [flake8] ignore = N806, N802, N801, N803 [pep257] ignore = D102,D104,D203,D105,D213 [bdist_wheel] universal = 0 [metadata] license_file = LICENSE [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/setup.py0000644000076500000240000000716414345343305013462 
0ustar00nusnusstaff#!/usr/bin/env python3 import re import sys from os import environ from pathlib import Path import setuptools import setuptools.command.test NAME = 'amqp' # -*- Classifiers -*- classes = """ Development Status :: 5 - Production/Stable Programming Language :: Python Programming Language :: Python :: 3 :: Only Programming Language :: Python :: 3 Programming Language :: Python :: 3.7 Programming Language :: Python :: 3.8 Programming Language :: Python :: 3.9 Programming Language :: Python :: 3.10 Programming Language :: Python :: Implementation :: CPython Programming Language :: Python :: Implementation :: PyPy License :: OSI Approved :: BSD License Intended Audience :: Developers Operating System :: OS Independent """ classifiers = [s.strip() for s in classes.split('\n') if s] # -*- Distribution Meta -*- re_meta = re.compile(r'__(\w+?)__\s*=\s*(.*)') re_doc = re.compile(r'^"""(.+?)"""') def add_default(m): attr_name, attr_value = m.groups() return (attr_name, attr_value.strip("\"'")), def add_doc(m): return ('doc', m.groups()[0]), pats = {re_meta: add_default, re_doc: add_doc} here = Path(__file__).parent meta = {} for line in (here / 'amqp/__init__.py').read_text().splitlines(): if line.strip() == '# -eof meta-': break for pattern, handler in pats.items(): m = pattern.match(line.strip()) if m: meta.update(handler(m)) # -*- Installation Requires -*- py_version = sys.version_info is_jython = sys.platform.startswith('java') is_pypy = hasattr(sys, 'pypy_version_info') def strip_comments(l): return l.split('#', 1)[0].strip() def reqs(f): lines = (here / 'requirements' / f).read_text().splitlines() reqs = [strip_comments(l) for l in lines] return list(filter(None, reqs)) # -*- %%% -*- class pytest(setuptools.command.test.test): user_options = [('pytest-args=', 'a', 'Arguments to pass to py.test')] def initialize_options(self): setuptools.command.test.test.initialize_options(self) self.pytest_args = '' def run_tests(self): import pytest pytest_args = 
self.pytest_args.split(' ') sys.exit(pytest.main(pytest_args)) if environ.get("CELERY_ENABLE_SPEEDUPS"): setup_requires = ['Cython'] ext_modules = [ setuptools.Extension( 'amqp.serialization', ["amqp/serialization.py"], ), setuptools.Extension( 'amqp.basic_message', ["amqp/basic_message.py"], ), setuptools.Extension( 'amqp.method_framing', ["amqp/method_framing.py"], ), setuptools.Extension( 'amqp.abstract_channel', ["amqp/abstract_channel.py"], ), setuptools.Extension( 'amqp.utils', ["amqp/utils.py"], ), ] else: setup_requires = [] ext_modules = [] setuptools.setup( name=NAME, packages=setuptools.find_packages(exclude=['ez_setup', 't', 't.*']), version=meta['version'], description=meta['doc'], long_description=(here / 'README.rst').read_text(), long_description_content_type="text/x-rst", keywords='amqp rabbitmq cloudamqp messaging', author=meta['author'], author_email=meta['contact'], maintainer=meta['maintainer'], url=meta['homepage'], platforms=['any'], license='BSD', classifiers=classifiers, python_requires=">=3.6", install_requires=reqs('default.txt'), setup_requires=setup_requires, tests_require=reqs('test.txt'), cmdclass={'test': pytest}, zip_safe=False, ext_modules=ext_modules, ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731441334.1633806 amqp-5.3.1/t/0000755000076500000240000000000014714731266012212 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/t/__init__.py0000644000076500000240000000000014345343305014302 0ustar00nusnusstaff././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731441334.1640053 amqp-5.3.1/t/integration/0000755000076500000240000000000014714731266014535 5ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/t/integration/__init__.py0000644000076500000240000000000014345343305016625 
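setup.py builds `install_requires` by feeding each requirements file through `strip_comments()` and `reqs()`: inline `#` comments and blank lines are dropped, leaving bare specifiers. A self-contained sketch of that parsing, operating on an in-memory string instead of a file under `requirements/`:

```python
def strip_comments(line):
    # Drop everything from the first '#' onward, then trim whitespace.
    return line.split('#', 1)[0].strip()

def parse_reqs(text):
    # Keep only non-empty requirement specifiers, as reqs() does per file.
    return [r for r in (strip_comments(l) for l in text.splitlines()) if r]

text = "vine>=5.0.0,<6.0.0  # messaging primitives\n\n# comment-only line\n"
print(parse_reqs(text))  # ['vine>=5.0.0,<6.0.0']
```

Applied to requirements/default.txt this yields exactly the `vine>=5.0.0,<6.0.0` pin shown above.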
0ustar00nusnusstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/t/integration/conftest.py0000644000076500000240000000033114345343305016722 0ustar00nusnusstaffimport os import subprocess def pytest_sessionfinish(session, exitstatus): tox_env_dir = os.environ.get('TOX_WORK_DIR') if exitstatus and tox_env_dir: subprocess.call(["bash", "./rabbitmq_logs.sh"]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/t/integration/test_integration.py0000644000076500000240000012173014345343305020466 0ustar00nusnusstaffimport socket from array import array from struct import pack from unittest.mock import ANY, Mock, call, patch import pytest import amqp from amqp import Channel, Connection, Message, sasl, spec from amqp.exceptions import (AccessRefused, ConnectionError, InvalidCommand, NotFound, PreconditionFailed, ResourceLocked) from amqp.protocol import queue_declare_ok_t from amqp.serialization import dumps, loads connection_testdata = ( (spec.Connection.Blocked, '_on_blocked'), (spec.Connection.Unblocked, '_on_unblocked'), (spec.Connection.Secure, '_on_secure'), (spec.Connection.CloseOk, '_on_close_ok'), ) channel_testdata = ( (spec.Basic.Ack, '_on_basic_ack'), (spec.Basic.Nack, '_on_basic_nack'), (spec.Basic.CancelOk, '_on_basic_cancel_ok'), ) exchange_declare_error_testdata = ( ( 503, "COMMAND_INVALID - " "unknown exchange type 'exchange-type'", InvalidCommand ), ( 403, "ACCESS_REFUSED - " "exchange name 'amq.foo' contains reserved prefix 'amq.*'", AccessRefused ), ( 406, "PRECONDITION_FAILED - " "inequivalent arg 'type' for exchange 'foo' in vhost '/':" "received 'direct' but current is 'fanout'", PreconditionFailed ), ) queue_declare_error_testdata = ( ( 403, "ACCESS_REFUSED - " "queue name 'amq.foo' contains reserved prefix 'amq.*", AccessRefused ), ( 404, "NOT_FOUND - " "no queue 'foo' in vhost '/'", NotFound ), ( 405, "RESOURCE_LOCKED - " "cannot obtain 
exclusive access to locked queue 'foo' in vhost '/'", ResourceLocked ), ) CLIENT_PROPERTIES = { 'product': 'py-amqp', 'product_version': amqp.__version__, 'capabilities': { 'consumer_cancel_notify': True, 'connection.blocked': True, 'authentication_failure_close': True }, } SERVER_PROPERTIES = { 'capabilities': { 'publisher_confirms': True, 'exchange_exchange_bindings': True, 'basic.nack': True, 'consumer_cancel_notify': True, 'connection.blocked': True, 'consumer_priorities': True, 'authentication_failure_close': True, 'per_consumer_qos': True, 'direct_reply_to': True }, 'cluster_name': 'rabbit@broker.com', 'copyright': 'Copyright (C) 2007-2018 Pivotal Software, Inc.', 'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'platform': 'Erlang/OTP 20.3.8.9', 'product': 'RabbitMQ', 'version': '3.7.8' } def build_frame_type_1(method, channel=0, args=b'', arg_format=None): if len(args) > 0: args = dumps(arg_format, args) else: args = b'' frame = (b''.join([pack('>HH', *method), args])) return 1, channel, frame def build_frame_type_2(body_len, channel, properties): frame = (b''.join( [pack('>HxxQ', spec.Basic.CLASS_ID, body_len), properties]) ) return 2, channel, frame def build_frame_type_3(channel, body): return 3, channel, body class DataComparator: # Comparator used for asserting serialized data. 
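The `build_frame_type_1` helper above packs an AMQP method frame payload as a big-endian class-id/method-id pair followed by the serialized arguments. A dependency-free sketch of the same packing for an argument-less method (the `(10, 51)` pair is Connection.CloseOk per the AMQP 0-9-1 class/method numbering):

```python
from struct import pack

def build_frame_type_1(method, channel=0, args=b''):
    # Method frame payload: '>HH' class-id, method-id, then raw argument bytes.
    return 1, channel, pack('>HH', *method) + args

CONNECTION_CLOSE_OK = (10, 51)  # class-id 10 (connection), method-id 51 (close-ok)
frame = build_frame_type_1(CONNECTION_CLOSE_OK)
print(frame)  # (1, 0, b'\x00\n\x003')
```

The real helper additionally runs non-empty argument tuples through `amqp.serialization.dumps()` with the given `arg_format` before appending them.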
It can be used # in cases when direct comparison of bytestream cannot be used # (mainly cases of Table type where order of items can vary) def __init__(self, argsig, items): self.argsig = argsig self.items = items def __eq__(self, other): values, offset = loads(self.argsig, other, 0) return tuple(values) == tuple(self.items) def handshake(conn, transport_mock, server_properties=None): # Helper function simulating connection handshake with server if server_properties is None: server_properties = SERVER_PROPERTIES transport_mock().read_frame.side_effect = [ build_frame_type_1( spec.Connection.Start, channel=0, args=( 0, 9, server_properties, 'AMQPLAIN PLAIN', 'en_US' ), arg_format='ooFSS' ), build_frame_type_1( spec.Connection.Tune, channel=0, args=(2047, 131072, 60), arg_format='BlB' ), build_frame_type_1( spec.Connection.OpenOk, channel=0 ) ] conn.connect() transport_mock().read_frame.side_effect = None def create_channel(channel_id, conn, transport_mock): transport_mock().read_frame.side_effect = [ build_frame_type_1( spec.Channel.OpenOk, channel=channel_id, args=(1, False), arg_format='Lb' ) ] ch = conn.channel(channel_id=channel_id) transport_mock().read_frame.side_effect = None return ch class test_connection: # Integration tests. Tests verify the correctness of communication between # library and broker. 
# * tests mocks broker responses mocking return values of # amqp.transport.Transport.read_frame() method # * tests asserts expected library responses to broker via calls of # amqp.method_framing.frame_writer() function def test_connect(self): # Test checking connection handshake frame_writer_cls_mock = Mock() on_open_mock = Mock() frame_writer_mock = frame_writer_cls_mock() conn = Connection( frame_writer=frame_writer_cls_mock, on_open=on_open_mock ) with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) on_open_mock.assert_called_once_with(conn) security_mechanism = sasl.AMQPLAIN( 'guest', 'guest' ).start(conn).decode('utf-8', 'surrogatepass') # Expected responses from client frame_writer_mock.assert_has_calls( [ call( 1, 0, spec.Connection.StartOk, # Due Table type, we cannot compare bytestream directly DataComparator( 'FsSs', ( CLIENT_PROPERTIES, 'AMQPLAIN', security_mechanism, 'en_US' ) ), None ), call( 1, 0, spec.Connection.TuneOk, dumps( 'BlB', (conn.channel_max, conn.frame_max, conn.heartbeat) ), None ), call( 1, 0, spec.Connection.Open, dumps('ssb', (conn.virtual_host, '', False)), None ) ] ) assert conn.client_properties == CLIENT_PROPERTIES def test_connect_no_capabilities(self): # Test checking connection handshake with broker # not supporting capabilities frame_writer_cls_mock = Mock() on_open_mock = Mock() frame_writer_mock = frame_writer_cls_mock() conn = Connection( frame_writer=frame_writer_cls_mock, on_open=on_open_mock ) with patch.object(conn, 'Transport') as transport_mock: server_properties = dict(SERVER_PROPERTIES) del server_properties['capabilities'] client_properties = dict(CLIENT_PROPERTIES) del client_properties['capabilities'] handshake( conn, transport_mock, server_properties=server_properties ) on_open_mock.assert_called_once_with(conn) security_mechanism = sasl.AMQPLAIN( 'guest', 'guest' ).start(conn).decode('utf-8', 'surrogatepass') # Expected responses from client frame_writer_mock.assert_has_calls( 
[ call( 1, 0, spec.Connection.StartOk, # Due Table type, we cannot compare bytestream directly DataComparator( 'FsSs', ( client_properties, 'AMQPLAIN', security_mechanism, 'en_US' ) ), None ), call( 1, 0, spec.Connection.TuneOk, dumps( 'BlB', (conn.channel_max, conn.frame_max, conn.heartbeat) ), None ), call( 1, 0, spec.Connection.Open, dumps('ssb', (conn.virtual_host, '', False)), None ) ] ) assert conn.client_properties == client_properties def test_connect_missing_capabilities(self): # Test checking connection handshake with broker # supporting subset of capabilities frame_writer_cls_mock = Mock() on_open_mock = Mock() frame_writer_mock = frame_writer_cls_mock() conn = Connection( frame_writer=frame_writer_cls_mock, on_open=on_open_mock ) with patch.object(conn, 'Transport') as transport_mock: server_properties = dict(SERVER_PROPERTIES) server_properties['capabilities'] = { # This capability is not supported by client 'basic.nack': True, 'consumer_cancel_notify': True, 'connection.blocked': False, # server does not support 'authentication_failure_close' # which is supported by client } client_properties = dict(CLIENT_PROPERTIES) client_properties['capabilities'] = { 'consumer_cancel_notify': True, } handshake( conn, transport_mock, server_properties=server_properties ) on_open_mock.assert_called_once_with(conn) security_mechanism = sasl.AMQPLAIN( 'guest', 'guest' ).start(conn).decode('utf-8', 'surrogatepass') # Expected responses from client frame_writer_mock.assert_has_calls( [ call( 1, 0, spec.Connection.StartOk, # Due Table type, we cannot compare bytestream directly DataComparator( 'FsSs', ( client_properties, 'AMQPLAIN', security_mechanism, 'en_US' ) ), None ), call( 1, 0, spec.Connection.TuneOk, dumps( 'BlB', (conn.channel_max, conn.frame_max, conn.heartbeat) ), None ), call( 1, 0, spec.Connection.Open, dumps('ssb', (conn.virtual_host, '', False)), None ) ] ) assert conn.client_properties == client_properties def test_connection_close(self): # Test 
checking closing connection frame_writer_cls_mock = Mock() frame_writer_mock = frame_writer_cls_mock() conn = Connection(frame_writer=frame_writer_cls_mock) with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) frame_writer_mock.reset_mock() # Inject CloseOk response from broker transport_mock().read_frame.return_value = build_frame_type_1( spec.Connection.CloseOk ) t = conn.transport conn.close() frame_writer_mock.assert_called_once_with( 1, 0, spec.Connection.Close, dumps('BsBB', (0, '', 0, 0)), None ) t.close.assert_called_once_with() @patch('amqp.Connection._on_blocked') def test_connecion_ignore_methods_during_close(self, on_blocked_mock): # Test checking that py-amqp will discard any received methods # except Close and Close-OK after sending Connection.Close method # to server. frame_writer_cls_mock = Mock() frame_writer_mock = frame_writer_cls_mock() conn = Connection(frame_writer=frame_writer_cls_mock) with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) frame_writer_mock.reset_mock() # Inject CloseOk response from broker transport_mock().read_frame.side_effect = [ build_frame_type_1( spec.Connection.Blocked, channel=0 ), build_frame_type_1( spec.Connection.CloseOk ) ] t = conn.transport conn.close() on_blocked_mock.assert_not_called() frame_writer_mock.assert_called_once_with( 1, 0, spec.Connection.Close, dumps('BsBB', (0, '', 0, 0)), None ) t.close.assert_called_once_with() def test_connection_closed_by_broker(self): # Test that library response correctly CloseOk when # close method is received and _on_close_ok() method is called. 
frame_writer_cls_mock = Mock() frame_writer_mock = frame_writer_cls_mock() with patch.object(Connection, '_on_close_ok') as callback_mock: conn = Connection(frame_writer=frame_writer_cls_mock) with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) frame_writer_mock.reset_mock() # Inject Close response from broker transport_mock().read_frame.return_value = build_frame_type_1( spec.Connection.Close, args=(1, False), arg_format='Lb' ) with pytest.raises(ConnectionError): conn.drain_events(0) frame_writer_mock.assert_called_once_with( 1, 0, spec.Connection.CloseOk, '', None ) callback_mock.assert_called_once_with() def test_send_heartbeat(self): """The send_heartbeat method writes the expected output.""" conn = Connection() with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) transport_mock().write.reset_mock() conn.send_heartbeat() transport_mock().write.assert_called_once_with( memoryview(bytearray(b'\x08\x00\x00\x00\x00\x00\x00\xce')) ) class test_channel: # Integration tests. Tests verify the correctness of communication between # library and broker. # * tests mocks broker responses mocking return values of # amqp.transport.Transport.read_frame() method # * tests asserts expected library responses to broker via calls of # amqp.method_framing.frame_writer() function @pytest.mark.parametrize("method, callback", connection_testdata) def test_connection_methods(self, method, callback): # Test verifying that proper Connection callback is called when # given method arrived from Broker. 
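`test_send_heartbeat` above asserts the exact bytes `b'\x08\x00\x00\x00\x00\x00\x00\xce'` on the wire. Those eight octets decompose per AMQP 0-9-1 framing: frame type 8 (heartbeat), channel 0, zero payload length, then the frame-end octet 0xCE. That layout can be checked directly:

```python
from struct import pack

FRAME_HEARTBEAT = 8
FRAME_END = 0xCE

# General frame layout: '>BHI' type, channel, payload size, then payload
# (empty for heartbeats) and the terminating frame-end octet.
heartbeat = pack('>BHI', FRAME_HEARTBEAT, 0, 0) + bytes([FRAME_END])
print(heartbeat)  # b'\x08\x00\x00\x00\x00\x00\x00\xce'
```

This is why the test can compare against a fixed bytestring: a heartbeat frame has no variable content at all.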
with patch.object(Connection, callback) as callback_mock: conn = Connection() with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) # Inject desired method transport_mock().read_frame.return_value = build_frame_type_1( method, channel=0, args=(1, False), arg_format='Lb' ) conn.drain_events(0) callback_mock.assert_called_once() def test_channel_ignore_methods_during_close(self): # Test checking that py-amqp will discard any received methods # except Close and Close-OK after sending Channel.Close method # to server. frame_writer_cls_mock = Mock() conn = Connection(frame_writer=frame_writer_cls_mock) consumer_tag = 'amq.ctag-PCmzXGkhCw_v0Zq7jXyvkg' with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) channel_id = 1 transport_mock().read_frame.side_effect = [ # Inject Open Handshake build_frame_type_1( spec.Channel.OpenOk, channel=channel_id, args=(1, False), arg_format='Lb' ), # Inject basic-deliver response build_frame_type_1( spec.Basic.Deliver, channel=1, arg_format='sLbss', args=( # consumer-tag, delivery-tag, redelivered, consumer_tag, 1, False, # exchange-name, routing-key 'foo_exchange', 'routing-key' ) ), build_frame_type_2( channel=1, body_len=12, properties=b'0\x00\x00\x00\x00\x00\x01' ), build_frame_type_3( channel=1, body=b'Hello World!' 
), # Inject close method build_frame_type_1( spec.Channel.CloseOk, channel=channel_id ), ] frame_writer_mock = frame_writer_cls_mock() frame_writer_mock.reset_mock() with patch('amqp.Channel._on_basic_deliver') as on_deliver_mock: ch = conn.channel(channel_id=channel_id) ch.close() on_deliver_mock.assert_not_called() frame_writer_mock.assert_has_calls( [ call( 1, 1, spec.Channel.Open, dumps('s', ('',)), None ), call( 1, 1, spec.Channel.Close, dumps('BsBB', (0, '', 0, 0)), None ) ] ) assert ch.is_open is False def test_channel_open_close(self): # Test checking opening and closing channel frame_writer_cls_mock = Mock() conn = Connection(frame_writer=frame_writer_cls_mock) with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) channel_id = 1 transport_mock().read_frame.side_effect = [ # Inject Open Handshake build_frame_type_1( spec.Channel.OpenOk, channel=channel_id, args=(1, False), arg_format='Lb' ), # Inject close method build_frame_type_1( spec.Channel.CloseOk, channel=channel_id ) ] frame_writer_mock = frame_writer_cls_mock() frame_writer_mock.reset_mock() on_open_mock = Mock() assert conn._used_channel_ids == array('H') ch = conn.channel(channel_id=channel_id, callback=on_open_mock) on_open_mock.assert_called_once_with(ch) assert ch.is_open is True assert conn._used_channel_ids == array('H', (1,)) ch.close() frame_writer_mock.assert_has_calls( [ call( 1, 1, spec.Channel.Open, dumps('s', ('',)), None ), call( 1, 1, spec.Channel.Close, dumps('BsBB', (0, '', 0, 0)), None ) ] ) assert ch.is_open is False assert conn._used_channel_ids == array('H') def test_received_channel_Close_during_connection_close(self): # This test verifies that library handles correctly closing channel # during closing of connection: # 1. User requests closing connection - client sends Connection.Close # 2. Broker requests closing Channel - client receives Channel.Close # 3. 
Broker sends Connection.CloseOk # see GitHub issue #218 conn = Connection() with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) channel_id = 1 create_channel(channel_id, conn, transport_mock) # Replies sent by broker transport_mock().read_frame.side_effect = [ # Inject close methods build_frame_type_1( spec.Channel.Close, channel=channel_id, args=(1, False), arg_format='Lb' ), build_frame_type_1( spec.Connection.CloseOk ) ] conn.close() @pytest.mark.parametrize("method, callback", channel_testdata) def test_channel_methods(self, method, callback): # Test verifying that proper Channel callback is called when # given method arrived from Broker with patch.object(Channel, callback) as callback_mock: conn = Connection() with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) create_channel(1, conn, transport_mock) # Inject desired method transport_mock().read_frame.return_value = build_frame_type_1( method, channel=1, args=(1, False), arg_format='Lb' ) conn.drain_events(0) callback_mock.assert_called_once() def test_basic_publish(self): # Test verifying publishing message. 
        frame_writer_cls_mock = Mock()
        conn = Connection(frame_writer=frame_writer_cls_mock)
        with patch.object(conn, 'Transport') as transport_mock:
            handshake(conn, transport_mock)
            ch = create_channel(1, conn, transport_mock)
            frame_writer_mock = frame_writer_cls_mock()
            frame_writer_mock.reset_mock()
            msg = Message('test')
            # we need to mock socket timeout due to checks in
            # Channel._basic_publish
            transport_mock().read_frame.side_effect = socket.timeout
            ch.basic_publish(msg)
            frame_writer_mock.assert_called_once_with(
                1, 1, spec.Basic.Publish,
                dumps('Bssbb', (0, '', '', False, False)),
                msg
            )

    def test_consume_no_consumer_tag(self):
        # Test verifying starting consuming without a specified consumer_tag
        callback_mock = Mock()
        frame_writer_cls_mock = Mock()
        conn = Connection(frame_writer=frame_writer_cls_mock)
        consumer_tag = 'amq.ctag-PCmzXGkhCw_v0Zq7jXyvkg'
        with patch.object(conn, 'Transport') as transport_mock:
            handshake(conn, transport_mock)
            ch = create_channel(1, conn, transport_mock)
            # Inject ConsumeOk response from Broker
            transport_mock().read_frame.return_value = build_frame_type_1(
                spec.Basic.ConsumeOk, channel=1, args=(consumer_tag,),
                arg_format='s'
            )
            frame_writer_mock = frame_writer_cls_mock()
            frame_writer_mock.reset_mock()
            ret = ch.basic_consume('my_queue', callback=callback_mock)
            frame_writer_mock.assert_called_once_with(
                1, 1, spec.Basic.Consume,
                dumps(
                    'BssbbbbF',
                    (0, 'my_queue', '', False, False, False, False, None)
                ),
                None
            )
            assert ch.callbacks[consumer_tag] == callback_mock
            assert ret == 'amq.ctag-PCmzXGkhCw_v0Zq7jXyvkg'

    def test_consume_with_consumer_tag(self):
        # Test verifying starting consuming with a specified consumer_tag
        callback_mock = Mock()
        frame_writer_cls_mock = Mock()
        conn = Connection(frame_writer=frame_writer_cls_mock)
        with patch.object(conn, 'Transport') as transport_mock:
            handshake(conn, transport_mock)
            ch = create_channel(1, conn, transport_mock)
            # Inject ConsumeOk response from Broker
            transport_mock().read_frame.return_value = build_frame_type_1(
                spec.Basic.ConsumeOk, channel=1, args=('my_tag',),
                arg_format='s'
            )
            frame_writer_mock = frame_writer_cls_mock()
            frame_writer_mock.reset_mock()
            ret = ch.basic_consume(
                'my_queue', callback=callback_mock, consumer_tag='my_tag'
            )
            frame_writer_mock.assert_called_once_with(
                1, 1, spec.Basic.Consume,
                dumps(
                    'BssbbbbF',
                    (
                        0, 'my_queue', 'my_tag',
                        False, False, False, False, None
                    )
                ),
                None
            )
            assert ch.callbacks['my_tag'] == callback_mock
            assert ret == 'my_tag'

    def test_queue_declare(self):
        # Test verifying declaring queue
        frame_writer_cls_mock = Mock()
        conn = Connection(frame_writer=frame_writer_cls_mock)
        with patch.object(conn, 'Transport') as transport_mock:
            handshake(conn, transport_mock)
            ch = create_channel(1, conn, transport_mock)
            transport_mock().read_frame.return_value = build_frame_type_1(
                spec.Queue.DeclareOk, channel=1, arg_format='sll',
                args=('foo', 1, 2)
            )
            frame_writer_mock = frame_writer_cls_mock()
            frame_writer_mock.reset_mock()
            ret = ch.queue_declare('foo')
            assert ret == queue_declare_ok_t(
                queue='foo', message_count=1, consumer_count=2
            )
            frame_writer_mock.assert_called_once_with(
                1, 1, spec.Queue.Declare,
                dumps(
                    'BsbbbbbF',
                    (
                        0,
                        # queue, passive, durable, exclusive,
                        'foo', False, False, False,
                        # auto_delete, nowait, arguments
                        True, False, None
                    )
                ),
                None
            )

    @pytest.mark.parametrize(
        "reply_code, reply_text, exception", queue_declare_error_testdata)
    def test_queue_declare_error(self, reply_code, reply_text, exception):
        # Test verifying an erroneous queue declaration
        frame_writer_cls_mock = Mock()
        conn = Connection(frame_writer=frame_writer_cls_mock)
        with patch.object(conn, 'Transport') as transport_mock:
            handshake(conn, transport_mock)
            ch = create_channel(1, conn, transport_mock)
            transport_mock().read_frame.return_value = build_frame_type_1(
                spec.Connection.Close,
                args=(reply_code, reply_text) + spec.Exchange.Declare,
                arg_format='BsBB'
            )
            frame_writer_mock = frame_writer_cls_mock()
            frame_writer_mock.reset_mock()
            with pytest.raises(exception) as excinfo:
                ch.queue_declare('foo')
            assert excinfo.value.code == reply_code
            assert excinfo.value.message == reply_text
            assert excinfo.value.method == 'Exchange.declare'
            assert excinfo.value.method_name == 'Exchange.declare'
            assert excinfo.value.method_sig == spec.Exchange.Declare
            # Client is sending to broker:
            # 1. Queue.Declare
            # 2. Connection.CloseOk as reply to received Connection.Close
            frame_writer_calls = [
                call(
                    1, 1, spec.Queue.Declare,
                    dumps(
                        'BsbbbbbF',
                        (
                            0,
                            # queue, passive, durable, exclusive,
                            'foo', False, False, False,
                            # auto_delete, nowait, arguments
                            True, False, None
                        )
                    ),
                    None
                ),
                call(
                    1, 0, spec.Connection.CloseOk, '', None
                ),
            ]
            frame_writer_mock.assert_has_calls(frame_writer_calls)

    def test_queue_delete(self):
        # Test verifying deleting queue
        frame_writer_cls_mock = Mock()
        conn = Connection(frame_writer=frame_writer_cls_mock)
        with patch.object(conn, 'Transport') as transport_mock:
            handshake(conn, transport_mock)
            ch = create_channel(1, conn, transport_mock)
            transport_mock().read_frame.return_value = build_frame_type_1(
                spec.Queue.DeleteOk, channel=1, arg_format='l', args=(5,)
            )
            frame_writer_mock = frame_writer_cls_mock()
            frame_writer_mock.reset_mock()
            msg_count = ch.queue_delete('foo')
            assert msg_count == 5
            frame_writer_mock.assert_called_once_with(
                1, 1, spec.Queue.Delete,
                dumps(
                    'Bsbbb',
                    # queue, if_unused, if_empty, nowait
                    (0, 'foo', False, False, False)
                ),
                None
            )

    def test_queue_purge(self):
        # Test verifying purging queue
        frame_writer_cls_mock = Mock()
        conn = Connection(frame_writer=frame_writer_cls_mock)
        with patch.object(conn, 'Transport') as transport_mock:
            handshake(conn, transport_mock)
            ch = create_channel(1, conn, transport_mock)
            transport_mock().read_frame.return_value = build_frame_type_1(
                spec.Queue.PurgeOk, channel=1, arg_format='l', args=(4,)
            )
            frame_writer_mock = frame_writer_cls_mock()
            frame_writer_mock.reset_mock()
            msg_count = ch.queue_purge('foo')
            assert msg_count == 4
            frame_writer_mock.assert_called_once_with(
                1, 1, spec.Queue.Purge,
                dumps(
'Bsb', # queue, nowait (0, 'foo', False) ), None ) def test_basic_deliver(self): # Test checking delivering single message callback_mock = Mock() frame_writer_cls_mock = Mock() conn = Connection(frame_writer=frame_writer_cls_mock) consumer_tag = 'amq.ctag-PCmzXGkhCw_v0Zq7jXyvkg' with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) ch = create_channel(1, conn, transport_mock) # Inject ConsumeOk response from Broker transport_mock().read_frame.side_effect = [ # Inject Consume-ok response build_frame_type_1( spec.Basic.ConsumeOk, channel=1, args=(consumer_tag,), arg_format='s' ), # Inject basic-deliver response build_frame_type_1( spec.Basic.Deliver, channel=1, arg_format='sLbss', args=( # consumer-tag, delivery-tag, redelivered, consumer_tag, 1, False, # exchange-name, routing-key 'foo_exchange', 'routing-key' ) ), build_frame_type_2( channel=1, body_len=12, properties=b'0\x00\x00\x00\x00\x00\x01' ), build_frame_type_3( channel=1, body=b'Hello World!' ), ] frame_writer_mock = frame_writer_cls_mock() frame_writer_mock.reset_mock() ch.basic_consume('my_queue', callback=callback_mock) conn.drain_events() callback_mock.assert_called_once_with(ANY) msg = callback_mock.call_args[0][0] assert isinstance(msg, Message) assert msg.body_size == 12 assert msg.body == b'Hello World!' 
assert msg.frame_method == spec.Basic.Deliver assert msg.delivery_tag == 1 assert msg.ready is True assert msg.delivery_info == { 'consumer_tag': 'amq.ctag-PCmzXGkhCw_v0Zq7jXyvkg', 'delivery_tag': 1, 'redelivered': False, 'exchange': 'foo_exchange', 'routing_key': 'routing-key' } assert msg.properties == { 'application_headers': {}, 'delivery_mode': 1 } def test_queue_get(self): # Test verifying getting message from queue frame_writer_cls_mock = Mock() conn = Connection(frame_writer=frame_writer_cls_mock) with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) ch = create_channel(1, conn, transport_mock) transport_mock().read_frame.side_effect = [ build_frame_type_1( spec.Basic.GetOk, channel=1, arg_format='Lbssl', args=( # delivery_tag, redelivered, exchange_name 1, False, 'foo_exchange', # routing_key, message_count 'routing_key', 1 ) ), build_frame_type_2( channel=1, body_len=12, properties=b'0\x00\x00\x00\x00\x00\x01' ), build_frame_type_3( channel=1, body=b'Hello World!' ) ] frame_writer_mock = frame_writer_cls_mock() frame_writer_mock.reset_mock() msg = ch.basic_get('foo') assert msg.body_size == 12 assert msg.body == b'Hello World!' 
assert msg.frame_method == spec.Basic.GetOk assert msg.delivery_tag == 1 assert msg.ready is True assert msg.delivery_info == { 'delivery_tag': 1, 'redelivered': False, 'exchange': 'foo_exchange', 'routing_key': 'routing_key', 'message_count': 1 } assert msg.properties == { 'application_headers': {}, 'delivery_mode': 1 } frame_writer_mock.assert_called_once_with( 1, 1, spec.Basic.Get, dumps( 'Bsb', # queue, nowait (0, 'foo', False) ), None ) def test_queue_get_empty(self): # Test verifying getting message from empty queue frame_writer_cls_mock = Mock() conn = Connection(frame_writer=frame_writer_cls_mock) with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) ch = create_channel(1, conn, transport_mock) transport_mock().read_frame.return_value = build_frame_type_1( spec.Basic.GetEmpty, channel=1, arg_format='s', args=('s') ) frame_writer_mock = frame_writer_cls_mock() frame_writer_mock.reset_mock() ret = ch.basic_get('foo') assert ret is None frame_writer_mock.assert_called_once_with( 1, 1, spec.Basic.Get, dumps( 'Bsb', # queue, nowait (0, 'foo', False) ), None ) def test_exchange_declare(self): # Test verifying declaring exchange frame_writer_cls_mock = Mock() conn = Connection(frame_writer=frame_writer_cls_mock) with patch.object(conn, 'Transport') as transport_mock: handshake(conn, transport_mock) ch = create_channel(1, conn, transport_mock) transport_mock().read_frame.return_value = build_frame_type_1( spec.Exchange.DeclareOk, channel=1 ) frame_writer_mock = frame_writer_cls_mock() frame_writer_mock.reset_mock() ret = ch.exchange_declare('foo', 'fanout') assert ret is None frame_writer_mock.assert_called_once_with( 1, 1, spec.Exchange.Declare, dumps( 'BssbbbbbF', ( 0, # exchange, type, passive, durable, 'foo', 'fanout', False, False, # auto_delete, internal, nowait, arguments True, False, False, None ) ), None ) @pytest.mark.parametrize( "reply_code, reply_text, exception", exchange_declare_error_testdata) def 
test_exchange_declare_error(self, reply_code, reply_text, exception):
        # Test verifying an erroneous exchange declaration
        frame_writer_cls_mock = Mock()
        conn = Connection(frame_writer=frame_writer_cls_mock)
        with patch.object(conn, 'Transport') as transport_mock:
            handshake(conn, transport_mock)
            ch = create_channel(1, conn, transport_mock)
            transport_mock().read_frame.return_value = build_frame_type_1(
                spec.Connection.Close,
                args=(reply_code, reply_text) + spec.Exchange.Declare,
                arg_format='BsBB'
            )
            frame_writer_mock = frame_writer_cls_mock()
            frame_writer_mock.reset_mock()
            with pytest.raises(exception) as excinfo:
                ch.exchange_declare('exchange', 'exchange-type')
            assert excinfo.value.code == reply_code
            assert excinfo.value.message == reply_text
            assert excinfo.value.method == 'Exchange.declare'
            assert excinfo.value.method_name == 'Exchange.declare'
            assert excinfo.value.method_sig == spec.Exchange.Declare
            # Client is sending to broker:
            # 1. Exchange.Declare
            # 2. Connection.CloseOk as reply to received Connection.Close
            frame_writer_calls = [
                call(
                    1, 1, spec.Exchange.Declare,
                    dumps(
                        'BssbbbbbF',
                        (
                            0,
                            # exchange, type, passive, durable,
                            'exchange', 'exchange-type', False, False,
                            # auto_delete, internal, nowait, arguments
                            True, False, False, None
                        )
                    ),
                    None
                ),
                call(
                    1, 0, spec.Connection.CloseOk, '', None
                ),
            ]
            frame_writer_mock.assert_has_calls(frame_writer_calls)

    def test_exchange_delete(self):
        # Test verifying deleting exchange
        frame_writer_cls_mock = Mock()
        conn = Connection(frame_writer=frame_writer_cls_mock)
        with patch.object(conn, 'Transport') as transport_mock:
            handshake(conn, transport_mock)
            ch = create_channel(1, conn, transport_mock)
            transport_mock().read_frame.return_value = build_frame_type_1(
                spec.Exchange.DeleteOk, channel=1
            )
            frame_writer_mock = frame_writer_cls_mock()
            frame_writer_mock.reset_mock()
            ret = ch.exchange_delete('foo')
            assert ret == ()
            frame_writer_mock.assert_called_once_with(
                1, 1, spec.Exchange.Delete,
                dumps(
                    'Bsbb',
                    (
                        0,
                        # exchange, if-unused, no-wait
'foo', False, False ) ), None ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/t/integration/test_rmq.py0000644000076500000240000001724414345343305016746 0ustar00nusnusstaffimport os import ssl from unittest.mock import ANY, Mock import pytest import amqp from amqp import transport def get_connection( hostname, port, vhost, use_tls=False, keyfile=None, certfile=None, ca_certs=None ): host = f'{hostname}:{port}' if use_tls: return amqp.Connection(host=host, vhost=vhost, ssl={ 'keyfile': keyfile, 'certfile': certfile, 'ca_certs': ca_certs, } ) else: return amqp.Connection(host=host, vhost=vhost) @pytest.fixture(params=['plain', 'tls']) def connection(request): # this fixture yields plain connections to broker and TLS encrypted if request.param == 'plain': return get_connection( hostname=os.environ.get('RABBITMQ_HOST', 'localhost'), port=os.environ.get('RABBITMQ_5672_TCP', '5672'), vhost=getattr( request.config, "slaveinput", {} ).get("slaveid", None), ) elif request.param == 'tls': return get_connection( hostname=os.environ.get('RABBITMQ_HOST', 'localhost'), port=os.environ.get('RABBITMQ_5671_TCP', '5671'), vhost=getattr( request.config, "slaveinput", {} ).get("slaveid", None), use_tls=True, keyfile='t/certs/client_key.pem', certfile='t/certs/client_certificate.pem', ca_certs='t/certs/ca_certificate.pem', ) @pytest.mark.env('rabbitmq') @pytest.mark.flaky(reruns=5, reruns_delay=2) def test_connect(connection): connection.connect() repr(connection) connection.close() @pytest.mark.env('rabbitmq') @pytest.mark.flaky(reruns=5, reruns_delay=2) def test_tls_connect_fails(): # testing that wrong client key/certificate yields SSLError # when encrypted connection is used connection = get_connection( hostname=os.environ.get('RABBITMQ_HOST', 'localhost'), port=os.environ.get('RABBITMQ_5671_TCP', '5671'), vhost='/', use_tls=True, keyfile='t/certs/client_key_broken.pem', certfile='t/certs/client_certificate_broken.pem' ) 
with pytest.raises(ssl.SSLError): connection.connect() @pytest.mark.env('rabbitmq') @pytest.mark.flaky(reruns=5, reruns_delay=2) def test_tls_default_certs(): # testing TLS connection against badssl.com with default certs connection = transport.Transport( host="tls-v1-2.badssl.com:1012", ssl=True, ) assert type(connection) == transport.SSLTransport connection.connect() @pytest.mark.env('rabbitmq') @pytest.mark.flaky(reruns=5, reruns_delay=2) def test_tls_no_default_certs_fails(): # testing TLS connection fails against badssl.com without default certs connection = transport.Transport( host="tls-v1-2.badssl.com:1012", ssl={ "ca_certs": 't/certs/ca_certificate.pem', }, ) with pytest.raises(ssl.SSLError): connection.connect() @pytest.mark.env('rabbitmq') class test_rabbitmq_operations(): @pytest.fixture(autouse=True) def setup_conn(self, connection): self.connection = connection self.connection.connect() self.channel = self.connection.channel() yield self.channel.close() self.connection.close() @pytest.mark.parametrize( "publish_method,mandatory,immediate", ( ('basic_publish', False, True), ('basic_publish', True, True), ('basic_publish', False, False), ('basic_publish', True, False), ('basic_publish_confirm', False, True), ('basic_publish_confirm', True, True), ('basic_publish_confirm', False, False), ('basic_publish_confirm', True, False), ) ) @pytest.mark.flaky(reruns=5, reruns_delay=2) def test_publish_consume(self, publish_method, mandatory, immediate): callback = Mock() self.channel.queue_declare( queue='py-amqp-unittest', durable=False, exclusive=True ) # RabbitMQ 3 removed the support for the immediate flag # Since we confirm the message, RabbitMQ complains # See # http://www.rabbitmq.com/blog/2012/11/19/breaking-things-with-rabbitmq-3-0/ if immediate and publish_method == "basic_publish_confirm": with pytest.raises(amqp.exceptions.AMQPNotImplementedError) as exc: getattr(self.channel, publish_method)( amqp.Message('Unittest'), routing_key='py-amqp-unittest', 
mandatory=mandatory, immediate=immediate ) assert exc.value.reply_code == 540 assert exc.value.method_name == 'Basic.publish' assert exc.value.reply_text == 'NOT_IMPLEMENTED - immediate=true' return else: getattr(self.channel, publish_method)( amqp.Message('Unittest'), routing_key='py-amqp-unittest', mandatory=mandatory, immediate=immediate ) # RabbitMQ 3 removed the support for the immediate flag # See # http://www.rabbitmq.com/blog/2012/11/19/breaking-things-with-rabbitmq-3-0/ if immediate: with pytest.raises(amqp.exceptions.AMQPNotImplementedError) as exc: self.channel.basic_consume( queue='py-amqp-unittest', callback=callback, consumer_tag='amq.ctag-PCmzXGkhCw_v0Zq7jXyvkg' ) assert exc.value.reply_code == 540 assert exc.value.method_name == 'Basic.publish' assert exc.value.reply_text == 'NOT_IMPLEMENTED - immediate=true' return else: self.channel.basic_consume( queue='py-amqp-unittest', callback=callback, consumer_tag='amq.ctag-PCmzXGkhCw_v0Zq7jXyvkg' ) self.connection.drain_events() callback.assert_called_once_with(ANY) msg = callback.call_args[0][0] assert isinstance(msg, amqp.Message) assert msg.body_size == len('Unittest') assert msg.body == 'Unittest' assert msg.frame_method == amqp.spec.Basic.Deliver assert msg.delivery_tag == 1 assert msg.ready is True assert msg.delivery_info == { 'consumer_tag': 'amq.ctag-PCmzXGkhCw_v0Zq7jXyvkg', 'delivery_tag': 1, 'redelivered': False, 'exchange': '', 'routing_key': 'py-amqp-unittest' } assert msg.properties == {'content_encoding': 'utf-8'} self.channel.basic_ack(msg.delivery_tag) @pytest.mark.flaky(reruns=5, reruns_delay=2) def test_publish_get(self): self.channel.queue_declare( queue='py-amqp-unittest', durable=False, exclusive=True ) self.channel.basic_publish( amqp.Message('Unittest'), routing_key='py-amqp-unittest' ) msg = self.channel.basic_get( queue='py-amqp-unittest', ) assert msg.body_size == 8 assert msg.body == 'Unittest' assert msg.frame_method == amqp.spec.Basic.GetOk assert msg.delivery_tag == 1 assert 
msg.ready is True
        assert msg.delivery_info == {
            'delivery_tag': 1,
            'redelivered': False,
            'exchange': '',
            'routing_key': 'py-amqp-unittest',
            'message_count': 0
        }
        assert msg.properties == {
            'content_encoding': 'utf-8'
        }
        self.channel.basic_ack(msg.delivery_tag)

        msg = self.channel.basic_get(
            queue='py-amqp-unittest',
        )
        assert msg is None

# ---- amqp-5.3.1/t/mocks.py ----
from unittest.mock import Mock


class _ContextMock(Mock):
    """Dummy class implementing __enter__ and __exit__.

    The :keyword:`with` statement requires these to be implemented
    in the class, not just the instance.
    """

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        pass


def ContextMock(*args, **kwargs):
    """Mock that mocks :keyword:`with` statement contexts."""
    obj = _ContextMock(*args, **kwargs)
    obj.attach_mock(_ContextMock(), '__enter__')
    obj.attach_mock(_ContextMock(), '__exit__')
    obj.__enter__.return_value = obj
    # if __exit__ returns a value the exception is ignored,
    # so it must return None here.
    obj.__exit__.return_value = None
    return obj

# ---- amqp-5.3.1/t/unit/__init__.py (empty) ----

# ---- amqp-5.3.1/t/unit/conftest.py ----
from unittest.mock import MagicMock

import pytest

sentinel = object()


class _patching:

    def __init__(self, monkeypatch, request):
        self.monkeypatch = monkeypatch
        self.request = request

    def __getattr__(self, name):
        return getattr(self.monkeypatch, name)

    def __call__(self, path, value=sentinel, name=None,
                 new=MagicMock, **kwargs):
        value = self._value_or_mock(value, new, name, path, **kwargs)
        self.monkeypatch.setattr(path, value)
        return value

    def _value_or_mock(self, value, new, name, path, **kwargs):
        if value is sentinel:
            value = new(name=name or path.rpartition('.')[2])
        for k, v in kwargs.items():
            setattr(value, k, v)
        return value

    def setattr(self, target, name=sentinel, value=sentinel, **kwargs):
        # alias to __call__ with the interface of pytest.monkeypatch.setattr
        if value is sentinel:
            value, name = name, None
        return self(target, value, name=name)

    def setitem(self, dic, name, value=sentinel, new=MagicMock, **kwargs):
        # same as pytest.monkeypatch.setattr but default value is MagicMock
        value = self._value_or_mock(value, new, name, dic, **kwargs)
        self.monkeypatch.setitem(dic, name, value)
        return value


@pytest.fixture
def patching(monkeypatch, request):
    """Monkeypatch.setattr shortcut.

    Example:

    .. code-block:: python

        def test_foo(patching):
            # execv value here will be mock.MagicMock by default.
execv = patching('os.execv') patching('sys.platform', 'darwin') # set concrete value patching.setenv('DJANGO_SETTINGS_MODULE', 'x.settings') # val will be of type mock.MagicMock by default val = patching.setitem('path.to.dict', 'KEY') """ return _patching(monkeypatch, request) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/t/unit/test_abstract_channel.py0000644000076500000240000001375414345343305020100 0ustar00nusnusstafffrom unittest.mock import Mock, patch, sentinel import pytest from vine import promise from amqp import spec from amqp.abstract_channel import (IGNORED_METHOD_DURING_CHANNEL_CLOSE, AbstractChannel) from amqp.exceptions import AMQPNotImplementedError, RecoverableConnectionError from amqp.serialization import dumps class test_AbstractChannel: class Channel(AbstractChannel): def _setup_listeners(self): pass @pytest.fixture(autouse=True) def setup_conn(self): self.conn = Mock(name='connection') self.conn.channels = {} self.channel_id = 1 self.c = self.Channel(self.conn, self.channel_id) self.method = Mock(name='method') self.content = Mock(name='content') self.content.content_encoding = 'utf-8' self.c._METHODS = {(50, 61): self.method} def test_enter_exit(self): self.c.close = Mock(name='close') with self.c: pass self.c.close.assert_called_with() def test_send_method(self): self.c.send_method((50, 60), 'iB', (30, 0)) self.conn.frame_writer.assert_called_with( 1, self.channel_id, (50, 60), dumps('iB', (30, 0)), None, ) def test_send_method__callback(self): callback = Mock(name='callback') p = promise(callback) self.c.send_method((50, 60), 'iB', (30, 0), callback=p) callback.assert_called_with() def test_send_method__wait(self): self.c.wait = Mock(name='wait') self.c.send_method((50, 60), 'iB', (30, 0), wait=(50, 61)) self.c.wait.assert_called_with((50, 61), returns_tuple=False) def test_send_method__no_connection(self): self.c.connection = None with pytest.raises(RecoverableConnectionError): 
self.c.send_method((50, 60)) def test_send_method__connection_dropped(self): self.c.connection.frame_writer.side_effect = StopIteration with pytest.raises(RecoverableConnectionError): self.c.send_method((50, 60)) def test_close(self): with pytest.raises(NotImplementedError): self.c.close() def test_wait(self): with patch('amqp.abstract_channel.ensure_promise') as ensure_promise: p = ensure_promise.return_value p.ready = False def on_drain(*args, **kwargs): p.ready = True self.conn.drain_events.side_effect = on_drain p.value = (1,), {'arg': 2} self.c.wait((50, 61), timeout=1) self.conn.drain_events.assert_called_with(timeout=1) prev = self.c._pending[(50, 61)] = Mock(name='p2') p.value = None self.c.wait([(50, 61)]) assert self.c._pending[(50, 61)] is prev def test_dispatch_method__content_encoding(self): self.c.auto_decode = True self.method.args = None self.c.dispatch_method((50, 61), 'payload', self.content) self.content.body.decode.side_effect = KeyError() self.c.dispatch_method((50, 61), 'payload', self.content) def test_dispatch_method__unknown_method(self): with pytest.raises(AMQPNotImplementedError): self.c.dispatch_method((100, 131), 'payload', self.content) def test_dispatch_method__one_shot(self): self.method.args = None p = self.c._pending[(50, 61)] = Mock(name='oneshot') self.c.dispatch_method((50, 61), 'payload', self.content) p.assert_called_with((50, 61), self.content) def test_dispatch_method__one_shot_no_content(self): self.method.args = None self.method.content = None p = self.c._pending[(50, 61)] = Mock(name='oneshot') self.c.dispatch_method((50, 61), 'payload', self.content) p.assert_called_with((50, 61)) assert not self.c._pending def test_dispatch_method__listeners(self): with patch('amqp.abstract_channel.loads') as loads: loads.return_value = [1, 2, 3], 'foo' p = self.c._callbacks[(50, 61)] = Mock(name='p') self.c.dispatch_method((50, 61), 'payload', self.content) p.assert_called_with(1, 2, 3, self.content) def 
test_dispatch_method__listeners_and_one_shot(self): with patch('amqp.abstract_channel.loads') as loads: loads.return_value = [1, 2, 3], 'foo' p1 = self.c._callbacks[(50, 61)] = Mock(name='p') p2 = self.c._pending[(50, 61)] = Mock(name='oneshot') self.c.dispatch_method((50, 61), 'payload', self.content) p1.assert_called_with(1, 2, 3, self.content) p2.assert_called_with((50, 61), 1, 2, 3, self.content) assert not self.c._pending assert self.c._callbacks[(50, 61)] @pytest.mark.parametrize( "method", ( spec.Channel.Close, spec.Channel.CloseOk, spec.Basic.Deliver ) ) def test_dispatch_method__closing_connection(self, method, caplog): self.c._ALLOWED_METHODS_WHEN_CLOSING = ( spec.Channel.Close, spec.Channel.CloseOk ) self.c.is_closing = True with patch.object(self.c, '_METHODS'), \ patch.object(self.c, '_callbacks'): self.c.dispatch_method( method, sentinel.PAYLOAD, sentinel.CONTENT ) if method in (spec.Channel.Close, spec.Channel.CloseOk): self.c._METHODS.__getitem__.assert_called_once_with(method) self.c._callbacks[method].assert_called_once_with( sentinel.CONTENT ) else: self.c._METHODS.__getitem__.assert_not_called() self.c._callbacks[method].assert_not_called() assert caplog.records[0].msg == \ IGNORED_METHOD_DURING_CHANNEL_CLOSE assert caplog.records[0].args[0] == method assert caplog.records[0].args[1] == self.channel_id assert caplog.records[0].levelname == 'WARNING' ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/t/unit/test_basic_message.py0000644000076500000240000000067414345343305017367 0ustar00nusnusstafffrom unittest.mock import Mock from amqp.basic_message import Message class test_Message: def test_message(self): m = Message( 'foo', channel=Mock(name='channel'), application_headers={'h': 'v'}, ) m.delivery_info = {'delivery_tag': '1234'} assert m.body == 'foo' assert m.channel assert m.headers == {'h': 'v'} assert m.delivery_tag == '1234' 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1670760133.0 amqp-5.3.1/t/unit/test_channel.py0000644000076500000240000005116414345343305016212 0ustar00nusnusstaffimport socket from struct import pack from unittest.mock import ANY, MagicMock, Mock, patch import pytest from vine import promise from amqp import spec from amqp.basic_message import Message from amqp.channel import Channel from amqp.exceptions import (ConsumerCancelled, MessageNacked, NotFound, RecoverableConnectionError) from amqp.serialization import dumps from t.mocks import ContextMock class test_Channel: @pytest.fixture(autouse=True) def setup_conn(self): self.conn = MagicMock(name='connection') self.conn.is_closing = False self.conn.channels = {} self.conn._get_free_channel_id.return_value = 2 self.c = Channel(self.conn, 1) self.c.send_method = Mock(name='send_method') def test_init_confirm_enabled(self): self.conn.confirm_publish = True c = Channel(self.conn, 2) assert c.basic_publish == c.basic_publish_confirm def test_init_confirm_disabled(self): self.conn.confirm_publish = False c = Channel(self.conn, 2) assert c.basic_publish == c._basic_publish def test_init_auto_channel(self): c = Channel(self.conn, None) self.conn._get_free_channel_id.assert_called_with() assert c.channel_id is self.conn._get_free_channel_id() def test_init_explicit_channel(self): Channel(self.conn, 3) self.conn._claim_channel_id.assert_called_with(3) def test_then(self): self.c.on_open = Mock(name='on_open') on_success = Mock(name='on_success') on_error = Mock(name='on_error') self.c.then(on_success, on_error) self.c.on_open.then.assert_called_with(on_success, on_error) def test_collect(self): self.c.callbacks[(50, 61)] = Mock() self.c.cancel_callbacks['foo'] = Mock() self.c.events['bar'].add(Mock()) self.c.no_ack_consumers.add('foo') self.c.collect() assert not self.c.callbacks assert not self.c.cancel_callbacks assert not self.c.events assert not self.c.no_ack_consumers assert not 
self.c.is_open self.c.collect() def test_do_revive(self): self.c.open = Mock(name='open') self.c.is_open = True self.c._do_revive() assert not self.c.is_open self.c.open.assert_called_with() def test_close__not_open(self): self.c.is_open = False self.c.close() def test_close__no_connection(self): self.c.connection = None self.c.close() def test_close(self): self.c.is_open = True self.c.close(30, 'text', spec.Queue.Declare) self.c.send_method.assert_called_with( spec.Channel.Close, 'BsBB', (30, 'text', spec.Queue.Declare[0], spec.Queue.Declare[1]), wait=spec.Channel.CloseOk, ) assert self.c.is_closing is False assert self.c.connection is None def test_on_close(self): self.c._do_revive = Mock(name='_do_revive') with pytest.raises(NotFound): self.c._on_close(404, 'text', 50, 61) self.c.send_method.assert_called_with(spec.Channel.CloseOk) self.c._do_revive.assert_called_with() def test_on_close_ok(self): self.c.collect = Mock(name='collect') self.c._on_close_ok() self.c.collect.assert_called_with() def test_flow(self): self.c.flow(0) self.c.send_method.assert_called_with( spec.Channel.Flow, 'b', (0,), wait=spec.Channel.FlowOk, ) def test_on_flow(self): self.c._x_flow_ok = Mock(name='_x_flow_ok') self.c._on_flow(0) assert not self.c.active self.c._x_flow_ok.assert_called_with(0) def test_x_flow_ok(self): self.c._x_flow_ok(1) self.c.send_method.assert_called_with(spec.Channel.FlowOk, 'b', (1,)) def test_open(self): self.c.is_open = True self.c.open() self.c.is_open = False self.c.open() self.c.send_method.assert_called_with( spec.Channel.Open, 's', ('',), wait=spec.Channel.OpenOk, ) def test_on_open_ok(self): self.c.on_open = Mock(name='on_open') self.c.is_open = False self.c._on_open_ok() assert self.c.is_open self.c.on_open.assert_called_with(self.c) def test_exchange_declare(self): self.c.exchange_declare( 'foo', 'direct', False, True, auto_delete=False, nowait=False, arguments={'x': 1}, ) self.c.send_method.assert_called_with( spec.Exchange.Declare, 'BssbbbbbF', (0, 
'foo', 'direct', False, True, False, False, False, {'x': 1}), wait=spec.Exchange.DeclareOk, ) def test_exchange_declare__auto_delete(self): self.c.exchange_declare( 'foo', 'direct', False, True, auto_delete=True, nowait=False, arguments={'x': 1}, ) def test_exchange_delete(self): self.c.exchange_delete('foo') self.c.send_method.assert_called_with( spec.Exchange.Delete, 'Bsbb', (0, 'foo', False, False), wait=spec.Exchange.DeleteOk, ) def test_exchange_bind(self): self.c.exchange_bind('dest', 'source', 'rkey', arguments={'x': 1}) self.c.send_method.assert_called_with( spec.Exchange.Bind, 'BsssbF', (0, 'dest', 'source', 'rkey', False, {'x': 1}), wait=spec.Exchange.BindOk, ) def test_exchange_unbind(self): self.c.exchange_unbind('dest', 'source', 'rkey', arguments={'x': 1}) self.c.send_method.assert_called_with( spec.Exchange.Unbind, 'BsssbF', (0, 'dest', 'source', 'rkey', False, {'x': 1}), wait=spec.Exchange.UnbindOk, ) def test_queue_bind(self): self.c.queue_bind('q', 'ex', 'rkey', arguments={'x': 1}) self.c.send_method.assert_called_with( spec.Queue.Bind, 'BsssbF', (0, 'q', 'ex', 'rkey', False, {'x': 1}), wait=spec.Queue.BindOk, ) def test_queue_unbind(self): self.c.queue_unbind('q', 'ex', 'rkey', arguments={'x': 1}) self.c.send_method.assert_called_with( spec.Queue.Unbind, 'BsssF', (0, 'q', 'ex', 'rkey', {'x': 1}), wait=spec.Queue.UnbindOk, ) def test_queue_declare(self): self.c.queue_declare('q', False, True, False, False, True, {'x': 1}) self.c.send_method.assert_called_with( spec.Queue.Declare, 'BsbbbbbF', (0, 'q', False, True, False, False, True, {'x': 1}), ) def test_queue_declare__sync(self): self.c.wait = Mock(name='wait') self.c.wait.return_value = ('name', 123, 45) ret = self.c.queue_declare( 'q', False, True, False, False, False, {'x': 1}, ) self.c.send_method.assert_called_with( spec.Queue.Declare, 'BsbbbbbF', (0, 'q', False, True, False, False, False, {'x': 1}), ) assert ret.queue == 'name' assert ret.message_count == 123 assert ret.consumer_count == 45 
self.c.wait.assert_called_with( spec.Queue.DeclareOk, returns_tuple=True) def test_queue_delete(self): self.c.queue_delete('q') self.c.send_method.assert_called_with( spec.Queue.Delete, 'Bsbbb', (0, 'q', False, False, False), wait=spec.Queue.DeleteOk, ) def test_queue_purge(self): self.c.queue_purge('q') self.c.send_method.assert_called_with( spec.Queue.Purge, 'Bsb', (0, 'q', False), wait=spec.Queue.PurgeOk, ) def test_basic_ack(self): self.c.basic_ack(123, multiple=1) self.c.send_method.assert_called_with( spec.Basic.Ack, 'Lb', (123, 1), ) def test_basic_cancel(self): self.c.basic_cancel(123) self.c.send_method.assert_called_with( spec.Basic.Cancel, 'sb', (123, False), wait=spec.Basic.CancelOk, ) self.c.connection = None self.c.basic_cancel(123) def test_on_basic_cancel(self): self.c._remove_tag = Mock(name='_remove_tag') self.c._on_basic_cancel(123) self.c._remove_tag.return_value.assert_called_with(123) self.c._remove_tag.return_value = None with pytest.raises(ConsumerCancelled): self.c._on_basic_cancel(123) def test_on_basic_cancel_ok(self): self.c._remove_tag = Mock(name='remove_tag') self.c._on_basic_cancel_ok(123) self.c._remove_tag.assert_called_with(123) def test_remove_tag(self): self.c.callbacks[123] = Mock() p = self.c.cancel_callbacks[123] = Mock() assert self.c._remove_tag(123) is p assert 123 not in self.c.callbacks assert 123 not in self.c.cancel_callbacks def test_basic_consume(self): callback = Mock() on_cancel = Mock() self.c.send_method.return_value = (123, ) self.c.basic_consume( 'q', 123, arguments={'x': 1}, callback=callback, on_cancel=on_cancel, ) self.c.send_method.assert_called_with( spec.Basic.Consume, 'BssbbbbF', (0, 'q', 123, False, False, False, False, {'x': 1}), wait=spec.Basic.ConsumeOk, returns_tuple=True ) assert self.c.callbacks[123] is callback assert self.c.cancel_callbacks[123] is on_cancel def test_basic_consume__no_ack(self): self.c.send_method.return_value = (123,) self.c.basic_consume( 'q', 123, arguments={'x': 1}, 
no_ack=True, ) assert 123 in self.c.no_ack_consumers def test_basic_consume_no_consumer_tag(self): callback = Mock() self.c.send_method.return_value = (123,) ret = self.c.basic_consume( 'q', arguments={'x': 1}, callback=callback, ) self.c.send_method.assert_called_with( spec.Basic.Consume, 'BssbbbbF', (0, 'q', '', False, False, False, False, {'x': 1}), wait=spec.Basic.ConsumeOk, returns_tuple=True ) assert self.c.callbacks[123] is callback assert ret == 123 def test_basic_consume_no_wait(self): callback = Mock() ret_promise = promise() self.c.send_method.return_value = ret_promise ret = self.c.basic_consume( 'q', 123, arguments={'x': 1}, callback=callback, nowait=True ) self.c.send_method.assert_called_with( spec.Basic.Consume, 'BssbbbbF', (0, 'q', 123, False, False, False, True, {'x': 1}), wait=None, returns_tuple=True ) assert self.c.callbacks[123] is callback assert ret == ret_promise def test_basic_consume_no_wait_no_consumer_tag(self): callback = Mock() with pytest.raises(ValueError): self.c.basic_consume( 'q', arguments={'x': 1}, callback=callback, nowait=True ) assert 123 not in self.c.callbacks def test_on_basic_deliver(self): msg = Message() self.c._on_basic_deliver(123, '321', False, 'ex', 'rkey', msg) callback = self.c.callbacks[123] = Mock(name='cb') self.c._on_basic_deliver(123, '321', False, 'ex', 'rkey', msg) callback.assert_called_with(msg) assert msg.channel == self.c assert msg.delivery_info == { 'consumer_tag': 123, 'delivery_tag': '321', 'redelivered': False, 'exchange': 'ex', 'routing_key': 'rkey', } def test_basic_get(self): self.c._on_get_empty = Mock() self.c._on_get_ok = Mock() self.c.send_method.return_value = ('cluster_id',) self.c.basic_get('q') self.c.send_method.assert_called_with( spec.Basic.Get, 'Bsb', (0, 'q', False), wait=[spec.Basic.GetOk, spec.Basic.GetEmpty], returns_tuple=True, ) self.c._on_get_empty.assert_called_with('cluster_id') self.c.send_method.return_value = ( 'dtag', 'redelivered', 'ex', 'rkey', 'mcount', 'msg', ) 
self.c.basic_get('q') self.c._on_get_ok.assert_called_with( 'dtag', 'redelivered', 'ex', 'rkey', 'mcount', 'msg', ) def test_on_get_empty(self): self.c._on_get_empty(1) def test_on_get_ok(self): msg = Message() m = self.c._on_get_ok( 'dtag', 'redelivered', 'ex', 'rkey', 'mcount', msg, ) assert m is msg assert m.channel == self.c assert m.delivery_info == { 'delivery_tag': 'dtag', 'redelivered': 'redelivered', 'exchange': 'ex', 'routing_key': 'rkey', 'message_count': 'mcount', } def test_basic_publish(self): self.c.connection.transport.having_timeout = ContextMock() self.c._basic_publish('msg', 'ex', 'rkey') self.c.send_method.assert_called_with( spec.Basic.Publish, 'Bssbb', (0, 'ex', 'rkey', False, False), 'msg', ) def test_basic_publish_confirm(self): self.c._confirm_selected = False self.c.confirm_select = Mock(name='confirm_select') self.c._basic_publish = Mock(name='_basic_publish') self.c.wait = Mock(name='wait') ret = self.c.basic_publish_confirm(1, 2, arg=1) self.c.confirm_select.assert_called_with() assert self.c._confirm_selected self.c._basic_publish.assert_called_with(1, 2, arg=1) assert ret is self.c._basic_publish() self.c.wait.assert_called_with( [spec.Basic.Ack, spec.Basic.Nack], callback=ANY, timeout=None ) self.c.basic_publish_confirm(1, 2, arg=1) def test_basic_publish_confirm_nack(self): # Test checking that the library correctly handles Nack confirms # sent by RabbitMQ: it must raise MessageNacked when the server # sends a Nack message.
# Nack frame construction args = dumps('Lb', (1, False)) frame = (b''.join([pack('>HH', *spec.Basic.Nack), args])) def wait(method, *args, **kwargs): # Simple mock simulating how the real wait method registers callbacks for m in method: self.c._pending[m] = kwargs['callback'] self.c._basic_publish = Mock(name='_basic_publish') self.c.wait = Mock(name='wait', side_effect=wait) self.c.basic_publish_confirm(1, 2, arg=1) with pytest.raises(MessageNacked): # Inject a Nack into the message handler self.c.dispatch_method( spec.Basic.Nack, frame, None ) def test_basic_publish_connection_blocked(self): # Basic test checking that drain_events() is called # before publishing the message and send_method() is called self.c._basic_publish('msg', 'ex', 'rkey') self.conn.drain_events.assert_called_once_with(timeout=0) self.c.send_method.assert_called_once_with( spec.Basic.Publish, 'Bssbb', (0, 'ex', 'rkey', False, False), 'msg', ) self.c.send_method.reset_mock() # Basic test checking that a socket.timeout exception # is ignored and send_method() is called.
self.conn.drain_events.side_effect = socket.timeout self.c._basic_publish('msg', 'ex', 'rkey') self.c.send_method.assert_called_once_with( spec.Basic.Publish, 'Bssbb', (0, 'ex', 'rkey', False, False), 'msg', ) def test_basic_publish_connection_blocked_not_supported(self): # Test verifying that when the server does not have the # connection.blocked capability, drain_events() is not called self.conn.client_properties = { 'capabilities': { 'connection.blocked': False } } self.c._basic_publish('msg', 'ex', 'rkey') self.conn.drain_events.assert_not_called() self.c.send_method.assert_called_once_with( spec.Basic.Publish, 'Bssbb', (0, 'ex', 'rkey', False, False), 'msg', ) def test_basic_publish_connection_blocked_not_supported_missing(self): # Test verifying that when the server does not have the # connection.blocked capability, drain_events() is not called self.conn.client_properties = { 'capabilities': {} } self.c._basic_publish('msg', 'ex', 'rkey') self.conn.drain_events.assert_not_called() self.c.send_method.assert_called_once_with( spec.Basic.Publish, 'Bssbb', (0, 'ex', 'rkey', False, False), 'msg', ) def test_basic_publish_connection_blocked_no_capabilities(self): # Test verifying that when the server does not support # capabilities at all, drain_events() is not called self.conn.client_properties = { } self.c._basic_publish('msg', 'ex', 'rkey') self.conn.drain_events.assert_not_called() self.c.send_method.assert_called_once_with( spec.Basic.Publish, 'Bssbb', (0, 'ex', 'rkey', False, False), 'msg', ) def test_basic_publish_confirm_callback(self): def wait_nack(method, *args, **kwargs): kwargs['callback'](spec.Basic.Nack) def wait_ack(method, *args, **kwargs): kwargs['callback'](spec.Basic.Ack) self.c._basic_publish = Mock(name='_basic_publish') self.c.wait = Mock(name='wait_nack', side_effect=wait_nack) with pytest.raises(MessageNacked): # when callback is called with spec.Basic.Nack it must raise # MessageNacked exception self.c.basic_publish_confirm(1, 2, arg=1) self.c.wait =
Mock(name='wait_ack', side_effect=wait_ack) # when callback is called with spec.Basic.Ack # it must not raise an exception self.c.basic_publish_confirm(1, 2, arg=1) def test_basic_publish_connection_closed(self): self.c.collect() with pytest.raises(RecoverableConnectionError) as excinfo: self.c._basic_publish('msg', 'ex', 'rkey') assert 'basic_publish: connection closed' in str(excinfo.value) self.c.send_method.assert_not_called() def test_basic_qos(self): self.c.basic_qos(0, 123, False) self.c.send_method.assert_called_with( spec.Basic.Qos, 'lBb', (0, 123, False), wait=spec.Basic.QosOk, ) def test_basic_recover(self): self.c.basic_recover(requeue=True) self.c.send_method.assert_called_with( spec.Basic.Recover, 'b', (True,), ) def test_basic_recover_async(self): self.c.basic_recover_async(requeue=True) self.c.send_method.assert_called_with( spec.Basic.RecoverAsync, 'b', (True,), ) def test_basic_reject(self): self.c.basic_reject(123, requeue=True) self.c.send_method.assert_called_with( spec.Basic.Reject, 'Lb', (123, True), ) def test_on_basic_return(self): with pytest.raises(NotFound): self.c._on_basic_return(404, 'text', 'ex', 'rkey', 'msg') def test_on_basic_return__handled(self): with patch('amqp.channel.error_for_code') as error_for_code: callback = Mock(name='callback') self.c.events['basic_return'].add(callback) self.c._on_basic_return(404, 'text', 'ex', 'rkey', 'msg') callback.assert_called_with( error_for_code(), 'ex', 'rkey', 'msg', ) def test_tx_commit(self): self.c.tx_commit() self.c.send_method.assert_called_with( spec.Tx.Commit, wait=spec.Tx.CommitOk, ) def test_tx_rollback(self): self.c.tx_rollback() self.c.send_method.assert_called_with( spec.Tx.Rollback, wait=spec.Tx.RollbackOk, ) def test_tx_select(self): self.c.tx_select() self.c.send_method.assert_called_with( spec.Tx.Select, wait=spec.Tx.SelectOk, ) def test_confirm_select(self): self.c.confirm_select() self.c.send_method.assert_called_with( spec.Confirm.Select, 'b', (False,),
wait=spec.Confirm.SelectOk, ) def test_on_basic_ack(self): callback = Mock(name='callback') self.c.events['basic_ack'].add(callback) self.c._on_basic_ack(123, True) callback.assert_called_with(123, True) def test_on_basic_nack(self): callback = Mock(name='callback') self.c.events['basic_nack'].add(callback) self.c._on_basic_nack(123, True) callback.assert_called_with(123, True)

# amqp-5.3.1/t/unit/test_connection.py

import re import socket import warnings from unittest.mock import Mock, call, patch import pytest import time from amqp import Connection, spec from amqp.connection import SSLError from amqp.exceptions import (ConnectionError, NotFound, RecoverableConnectionError, ResourceError) from amqp.sasl import AMQPLAIN, EXTERNAL, GSSAPI, PLAIN, SASL from amqp.transport import TCPTransport from t.mocks import ContextMock class test_Connection: @pytest.fixture(autouse=True) def setup_conn(self): self.frame_handler = Mock(name='frame_handler') self.frame_writer = Mock(name='frame_writer_cls') self.conn = Connection( frame_handler=self.frame_handler, frame_writer=self.frame_writer, authentication=AMQPLAIN('foo', 'bar'), ) self.conn.Channel = Mock(name='Channel') self.conn.Transport = Mock(name='Transport') self.conn.transport = self.conn.Transport.return_value self.conn.send_method = Mock(name='send_method') self.conn.frame_writer = Mock(name='frame_writer') def test_sasl_authentication(self): authentication = SASL() self.conn = Connection(authentication=authentication) assert self.conn.authentication == (authentication,) def test_sasl_authentication_iterable(self): authentication = SASL() self.conn = Connection(authentication=(authentication,)) assert self.conn.authentication == (authentication,) def test_gssapi(self): self.conn = Connection() assert isinstance(self.conn.authentication[0], GSSAPI) def
test_external(self): self.conn = Connection() assert isinstance(self.conn.authentication[1], EXTERNAL) def test_amqplain(self): self.conn = Connection(userid='foo', password='bar') auth = self.conn.authentication[2] assert isinstance(auth, AMQPLAIN) assert auth.username == 'foo' assert auth.password == 'bar' def test_plain(self): self.conn = Connection(userid='foo', password='bar') auth = self.conn.authentication[3] assert isinstance(auth, PLAIN) assert auth.username == 'foo' assert auth.password == 'bar' def test_login_method_gssapi(self): try: self.conn = Connection(userid=None, password=None, login_method='GSSAPI') except NotImplementedError: pass else: auths = self.conn.authentication assert len(auths) == 1 assert isinstance(auths[0], GSSAPI) def test_login_method_external(self): self.conn = Connection(userid=None, password=None, login_method='EXTERNAL') auths = self.conn.authentication assert len(auths) == 1 assert isinstance(auths[0], EXTERNAL) def test_login_method_amqplain(self): self.conn = Connection(login_method='AMQPLAIN') auths = self.conn.authentication assert len(auths) == 1 assert isinstance(auths[0], AMQPLAIN) def test_login_method_plain(self): self.conn = Connection(login_method='PLAIN') auths = self.conn.authentication assert len(auths) == 1 assert isinstance(auths[0], PLAIN) def test_enter_exit(self): self.conn.connect = Mock(name='connect') self.conn.close = Mock(name='close') with self.conn: self.conn.connect.assert_called_with() self.conn.close.assert_called_with() def test__enter__socket_error(self): # test when entering self.conn = Connection() self.conn.close = Mock(name='close') reached = False with patch('socket.socket', side_effect=socket.error): with pytest.raises(socket.error): with self.conn: reached = True assert not reached and not self.conn.close.called assert self.conn._transport is None and not self.conn.connected def test__exit__socket_error(self): # test when exiting connection = self.conn transport = connection._transport 
transport.connected = True connection.send_method = Mock(name='send_method', side_effect=socket.error) reached = False with pytest.raises(socket.error): with connection: reached = True assert reached assert connection.send_method.called and transport.close.called assert self.conn._transport is None and not self.conn.connected def test_then(self): self.conn.on_open = Mock(name='on_open') on_success = Mock(name='on_success') on_error = Mock(name='on_error') self.conn.then(on_success, on_error) self.conn.on_open.then.assert_called_with(on_success, on_error) def test_connect(self): self.conn.transport.connected = False self.conn.drain_events = Mock(name='drain_events') def on_drain(*args, **kwargs): self.conn._handshake_complete = True self.conn.drain_events.side_effect = on_drain self.conn.connect() self.conn.Transport.assert_called_with( self.conn.host, self.conn.connect_timeout, self.conn.ssl, self.conn.read_timeout, self.conn.write_timeout, socket_settings=self.conn.socket_settings, ) def test_connect__already_connected(self): callback = Mock(name='callback') self.conn.transport.connected = True assert self.conn.connect(callback) == callback.return_value callback.assert_called_with() def test_connect__socket_error(self): # check Transport.Connect error # socket.error derives from IOError # ssl.SSLError derives from socket.error self.conn = Connection() self.conn.Transport = Mock(name='Transport') transport = self.conn.Transport.return_value transport.connect.side_effect = IOError assert self.conn._transport is None and not self.conn.connected with pytest.raises(IOError): self.conn.connect() transport.connect.assert_called() assert self.conn._transport is None and not self.conn.connected def test_on_start(self): self.conn._on_start(3, 4, {'foo': 'bar'}, b'x y z AMQPLAIN PLAIN', 'en_US en_GB') assert self.conn.version_major == 3 assert self.conn.version_minor == 4 assert self.conn.server_properties == {'foo': 'bar'} assert self.conn.mechanisms == [b'x', b'y', b'z',
b'AMQPLAIN', b'PLAIN'] assert self.conn.locales == ['en_US', 'en_GB'] self.conn.send_method.assert_called_with( spec.Connection.StartOk, 'FsSs', ( self.conn.client_properties, b'AMQPLAIN', self.conn.authentication[0].start(self.conn), self.conn.locale, ), ) def test_on_start_string_mechanisms(self): self.conn._on_start(3, 4, {'foo': 'bar'}, 'x y z AMQPLAIN PLAIN', 'en_US en_GB') assert self.conn.version_major == 3 assert self.conn.version_minor == 4 assert self.conn.server_properties == {'foo': 'bar'} assert self.conn.mechanisms == [b'x', b'y', b'z', b'AMQPLAIN', b'PLAIN'] assert self.conn.locales == ['en_US', 'en_GB'] self.conn.send_method.assert_called_with( spec.Connection.StartOk, 'FsSs', ( self.conn.client_properties, b'AMQPLAIN', self.conn.authentication[0].start(self.conn), self.conn.locale, ), ) def test_missing_credentials(self): with pytest.raises(ValueError): self.conn = Connection(userid=None, password=None, login_method='AMQPLAIN') with pytest.raises(ValueError): self.conn = Connection(password=None, login_method='PLAIN') def test_invalid_method(self): with pytest.raises(ValueError): self.conn = Connection(login_method='any') def test_mechanism_mismatch(self): with pytest.raises(ConnectionError): self.conn._on_start(3, 4, {'foo': 'bar'}, b'x y z', 'en_US en_GB') def test_login_method_response(self): # An old way of doing things: login_method, login_response = b'foo', b'bar' with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") self.conn = Connection(login_method=login_method, login_response=login_response) self.conn.send_method = Mock(name='send_method') self.conn._on_start(3, 4, {'foo': 'bar'}, login_method, 'en_US en_GB') assert len(w) == 1 assert issubclass(w[0].category, DeprecationWarning) self.conn.send_method.assert_called_with( spec.Connection.StartOk, 'FsSs', ( self.conn.client_properties, login_method, login_response, self.conn.locale, ), ) def test_on_start__consumer_cancel_notify(self): self.conn._on_start( 3, 4,
{'capabilities': {'consumer_cancel_notify': 1}}, b'AMQPLAIN', '', ) cap = self.conn.client_properties['capabilities'] assert cap['consumer_cancel_notify'] def test_on_start__connection_blocked(self): self.conn._on_start( 3, 4, {'capabilities': {'connection.blocked': 1}}, b'AMQPLAIN', '', ) cap = self.conn.client_properties['capabilities'] assert cap['connection.blocked'] def test_on_start__authentication_failure_close(self): self.conn._on_start( 3, 4, {'capabilities': {'authentication_failure_close': 1}}, b'AMQPLAIN', '', ) cap = self.conn.client_properties['capabilities'] assert cap['authentication_failure_close'] def test_on_start__authentication_failure_close__disabled(self): self.conn._on_start( 3, 4, {'capabilities': {}}, b'AMQPLAIN', '', ) assert 'capabilities' not in self.conn.client_properties def test_on_secure(self): self.conn._on_secure('vfz') def test_on_tune(self): self.conn.client_heartbeat = 16 self.conn._on_tune(345, 16, 10) assert self.conn.channel_max == 345 assert self.conn.frame_max == 16 assert self.conn.server_heartbeat == 10 assert self.conn.heartbeat == 10 self.conn.send_method.assert_called_with( spec.Connection.TuneOk, 'BlB', ( self.conn.channel_max, self.conn.frame_max, self.conn.heartbeat, ), callback=self.conn._on_tune_sent, ) def test_on_tune__client_heartbeat_disabled(self): self.conn.client_heartbeat = 0 self.conn._on_tune(345, 16, 10) assert self.conn.heartbeat == 0 def test_on_tune_sent(self): self.conn._on_tune_sent() self.conn.send_method.assert_called_with( spec.Connection.Open, 'ssb', (self.conn.virtual_host, '', False), ) def test_on_open_ok(self): self.conn.on_open = Mock(name='on_open') self.conn._on_open_ok() assert self.conn._handshake_complete self.conn.on_open.assert_called_with(self.conn) def test_connected(self): self.conn.transport.connected = False assert not self.conn.connected self.conn.transport.connected = True assert self.conn.connected self.conn.transport = None assert not self.conn.connected def 
test_collect(self): channels = self.conn.channels = { 0: self.conn, 1: Mock(name='c1'), 2: Mock(name='c2'), } transport = self.conn.transport self.conn.collect() transport.close.assert_called_with() for i, channel in channels.items(): if i: channel.collect.assert_called_with() assert self.conn._transport is None def test_collect__transport_socket_raises_os_error(self): self.conn.transport = TCPTransport('localhost:5672') sock = self.conn.transport.sock = Mock(name='sock') channel = Mock(name='c1') self.conn.channels = {1: channel} sock.shutdown.side_effect = OSError self.conn.collect() channel.collect.assert_called_with() sock.close.assert_called_with() assert self.conn._transport is None assert self.conn.channels is None def test_collect_no_transport(self): self.conn = Connection() self.conn.connect = Mock(name='connect') assert not self.conn.connected self.conn.collect() assert not self.conn.connect.called def test_collect_again(self): self.conn = Connection() self.conn.collect() self.conn.collect() def test_get_free_channel_id(self): assert self.conn._get_free_channel_id() == 1 assert self.conn._get_free_channel_id() == 2 def test_get_free_channel_id__raises_ResourceError(self): self.conn.channel_max = 2 self.conn._get_free_channel_id() self.conn._get_free_channel_id() with pytest.raises(ResourceError): self.conn._get_free_channel_id() def test_claim_channel_id(self): self.conn._claim_channel_id(30) with pytest.raises(ConnectionError): self.conn._claim_channel_id(30) def test_channel(self): callback = Mock(name='callback') c = self.conn.channel(3, callback) self.conn.Channel.assert_called_with(self.conn, 3, on_open=callback) c2 = self.conn.channel(3, callback) assert c2 is c def test_channel_when_connection_is_closed(self): self.conn.collect() callback = Mock(name='callback') with pytest.raises(RecoverableConnectionError): self.conn.channel(3, callback) def test_is_alive(self): with pytest.raises(NotImplementedError): self.conn.is_alive() def 
test_drain_events(self): self.conn.blocking_read = Mock(name='blocking_read') self.conn.drain_events(30) self.conn.blocking_read.assert_called_with(30) def test_blocking_read__no_timeout(self): self.conn.on_inbound_frame = Mock(name='on_inbound_frame') self.conn.transport.having_timeout = ContextMock() ret = self.conn.blocking_read(None) self.conn.transport.read_frame.assert_called_with() self.conn.on_inbound_frame.assert_called_with( self.conn.transport.read_frame(), ) assert ret is self.conn.on_inbound_frame() def test_blocking_read__timeout(self): self.conn.transport = TCPTransport('localhost:5672') sock = self.conn.transport.sock = Mock(name='sock') sock.gettimeout.return_value = 1 self.conn.transport.read_frame = Mock(name='read_frame') self.conn.on_inbound_frame = Mock(name='on_inbound_frame') self.conn.blocking_read(3) sock.gettimeout.assert_called_with() sock.settimeout.assert_has_calls([call(3), call(1)]) self.conn.transport.read_frame.assert_called_with() self.conn.on_inbound_frame.assert_called_with( self.conn.transport.read_frame(), ) sock.gettimeout.return_value = 3 self.conn.blocking_read(3) def test_blocking_read__SSLError(self): self.conn.on_inbound_frame = Mock(name='on_inbound_frame') self.conn.transport = TCPTransport('localhost:5672') sock = self.conn.transport.sock = Mock(name='sock') sock.gettimeout.return_value = 1 self.conn.transport.read_frame = Mock(name='read_frame') self.conn.transport.read_frame.side_effect = SSLError( 'operation timed out') with pytest.raises(socket.timeout): self.conn.blocking_read(3) self.conn.transport.read_frame.side_effect = SSLError( 'The operation did not complete foo bar') with pytest.raises(socket.timeout): self.conn.blocking_read(3) self.conn.transport.read_frame.side_effect = SSLError( 'oh noes') with pytest.raises(SSLError): self.conn.blocking_read(3) def test_on_inbound_method(self): self.conn.channels[1] = self.conn.channel(1) self.conn.on_inbound_method(1, (50, 60), 'payload', 'content') 
self.conn.channels[1].dispatch_method.assert_called_with( (50, 60), 'payload', 'content', ) def test_on_inbound_method_when_connection_is_closed(self): self.conn.collect() with pytest.raises(RecoverableConnectionError): self.conn.on_inbound_method(1, (50, 60), 'payload', 'content') def test_close(self): self.conn.collect = Mock(name='collect') self.conn.close(reply_text='foo', method_sig=spec.Channel.Open) self.conn.send_method.assert_called_with( spec.Connection.Close, 'BsBB', (0, 'foo', spec.Channel.Open[0], spec.Channel.Open[1]), wait=spec.Connection.CloseOk, ) def test_close__already_closed(self): self.conn.transport = None self.conn.close() def test_close__socket_error(self): self.conn.send_method = Mock(name='send_method', side_effect=socket.error) with pytest.raises(socket.error): self.conn.close() self.conn.send_method.assert_called() assert self.conn._transport is None and not self.conn.connected def test_on_close(self): self.conn._x_close_ok = Mock(name='_x_close_ok') with pytest.raises(NotFound): self.conn._on_close(404, 'bah not found', 50, 60) def test_x_close_ok(self): self.conn._x_close_ok() self.conn.send_method.assert_called_with( spec.Connection.CloseOk, callback=self.conn._on_close_ok, ) def test_on_close_ok(self): self.conn.collect = Mock(name='collect') self.conn._on_close_ok() self.conn.collect.assert_called_with() def test_on_blocked(self): self.conn._on_blocked() self.conn.on_blocked = Mock(name='on_blocked') self.conn._on_blocked() self.conn.on_blocked.assert_called_with( 'connection blocked, see broker logs') def test_on_unblocked(self): self.conn._on_unblocked() self.conn.on_unblocked = Mock(name='on_unblocked') self.conn._on_unblocked() self.conn.on_unblocked.assert_called_with() def test_send_heartbeat(self): self.conn.send_heartbeat() self.conn.frame_writer.assert_called_with( 8, 0, None, None, None, ) def test_heartbeat_tick__no_heartbeat(self): self.conn.heartbeat = 0 self.conn.heartbeat_tick() def test_heartbeat_tick(self): 
self.conn.heartbeat = 3 self.conn.heartbeat_tick() self.conn.bytes_sent = 3124 self.conn.bytes_recv = 123 self.conn.heartbeat_tick() self.conn.last_heartbeat_received -= 1000 self.conn.last_heartbeat_sent -= 1000 with pytest.raises(ConnectionError): self.conn.heartbeat_tick() def _test_heartbeat_rate_tick(self, rate): # Doing 22 calls, # the first one sets the variables, # each subsequent call may send a heartbeat, depending on rate for i in range(1, 22): self.conn.heartbeat_tick(rate) time.sleep(0.1) def test_heartbeat_check_rate_default(self): # Heartbeat set to 2 secs self.conn.heartbeat = 2 # Default rate is 2 --> should send frames every sec self._test_heartbeat_rate_tick(2) # Verify that we wrote 2 frames assert self.conn.frame_writer.call_count == 2 def test_heartbeat_check_rate_four(self): # Heartbeat set to 2 secs self.conn.heartbeat = 2 # Rate 4 --> should send frames every 0.5sec self._test_heartbeat_rate_tick(4) # Verify that we wrote 4 frames assert self.conn.frame_writer.call_count == 4 def test_heartbeat_check_rate_wrong(self): # Heartbeat set to 2 secs self.conn.heartbeat = 2 # Invalid rate falls back to the default of 2 --> frames every sec self._test_heartbeat_rate_tick(-42) # Verify that we wrote 2 frames assert self.conn.frame_writer.call_count == 2 def test_server_capabilities(self): self.conn.server_properties['capabilities'] = {'foo': 1} assert self.conn.server_capabilities == {'foo': 1} @pytest.mark.parametrize( 'conn_kwargs,expected_vhost', [ ({}, '/'), ({'user_id': 'test_user', 'password': 'test_pass'}, '/'), ({'virtual_host': 'test_vhost'}, 'test_vhost') ] ) def test_repr_disconnected(self, conn_kwargs, expected_vhost): assert re.fullmatch( r'<AMQP Connection: broker.com:1234/{} \(disconnected\) at 0x.*>'.format(expected_vhost), repr(Connection(host='broker.com:1234', **conn_kwargs)) ) @pytest.mark.parametrize( 'conn_kwargs,expected_vhost', [ ({}, '/'), ({'user_id': 'test_user', 'password': 'test_pass'}, '/'), ({'virtual_host': 'test_vhost'}, 'test_vhost') ] ) def test_repr_connected(self, conn_kwargs, expected_vhost): c =
Connection(host='broker.com:1234', **conn_kwargs) c._transport = Mock(name='transport') assert re.fullmatch( r'<AMQP Connection: broker.com:1234/{} using {} at 0x.*>'.format( expected_vhost, repr(c.transport) ), repr(c) )

# amqp-5.3.1/t/unit/test_exceptions.py

from unittest.mock import Mock import pytest import amqp.exceptions from amqp.exceptions import AMQPError, error_for_code AMQP_EXCEPTIONS = ( 'ConnectionError', 'ChannelError', 'RecoverableConnectionError', 'IrrecoverableConnectionError', 'RecoverableChannelError', 'IrrecoverableChannelError', 'ConsumerCancelled', 'ContentTooLarge', 'NoConsumers', 'ConnectionForced', 'InvalidPath', 'AccessRefused', 'NotFound', 'ResourceLocked', 'PreconditionFailed', 'FrameError', 'FrameSyntaxError', 'InvalidCommand', 'ChannelNotOpen', 'UnexpectedFrame', 'ResourceError', 'NotAllowed', 'AMQPNotImplementedError', 'InternalError', ) class test_AMQPError: def test_str(self): assert str(AMQPError()) == '<AMQPError: unknown error>' x = AMQPError(method_sig=(50, 60)) assert str(x) == '(50, 60): (0) None' x = AMQPError('Test Exception') assert str(x) == 'Test Exception' @pytest.mark.parametrize("amqp_exception", AMQP_EXCEPTIONS) def test_str_subclass(self, amqp_exception): exp = f'<{amqp_exception}: unknown error>' exception_class = getattr(amqp.exceptions, amqp_exception) assert str(exception_class()) == exp class test_error_for_code: def test_unknown_error(self): default = Mock(name='default') x = error_for_code(2134214314, 't', 'm', default) default.assert_called_with('t', 'm', reply_code=2134214314) assert x is default()

# amqp-5.3.1/t/unit/test_method_framing.py

from struct import pack from unittest.mock import Mock import pytest from amqp import spec from amqp.basic_message import Message from
amqp.exceptions import UnexpectedFrame from amqp.method_framing import frame_handler, frame_writer class test_frame_handler: @pytest.fixture(autouse=True) def setup_conn(self): self.conn = Mock(name='connection') self.conn.bytes_recv = 0 self.callback = Mock(name='callback') self.g = frame_handler(self.conn, self.callback) def test_header(self): buf = pack('>HH', 60, 51) assert self.g((1, 1, buf)) self.callback.assert_called_with(1, (60, 51), buf, None) assert self.conn.bytes_recv def test_header_message_empty_body(self): assert not self.g((1, 1, pack('>HH', *spec.Basic.Deliver))) self.callback.assert_not_called() with pytest.raises(UnexpectedFrame): self.g((1, 1, pack('>HH', *spec.Basic.Deliver))) m = Message() m.properties = {} buf = pack('>HxxQ', m.CLASS_ID, 0) buf += m._serialize_properties() assert self.g((2, 1, buf)) self.callback.assert_called() msg = self.callback.call_args[0][3] self.callback.assert_called_with( 1, msg.frame_method, msg.frame_args, msg, ) def test_header_message_content(self): assert not self.g((1, 1, pack('>HH', *spec.Basic.Deliver))) self.callback.assert_not_called() m = Message() m.properties = {} buf = pack('>HxxQ', m.CLASS_ID, 16) buf += m._serialize_properties() assert not self.g((2, 1, buf)) self.callback.assert_not_called() assert not self.g((3, 1, b'thequick')) self.callback.assert_not_called() assert self.g((3, 1, b'brownfox')) self.callback.assert_called() msg = self.callback.call_args[0][3] self.callback.assert_called_with( 1, msg.frame_method, msg.frame_args, msg, ) assert msg.body == b'thequickbrownfox' def test_heartbeat_frame(self): assert not self.g((8, 1, '')) self.callback.assert_not_called() assert self.conn.bytes_recv class test_frame_writer: @pytest.fixture(autouse=True) def setup_conn(self): self.connection = Mock(name='connection') self.transport = self.connection.Transport() self.connection.frame_max = 512 self.connection.bytes_sent = 0 self.g = frame_writer(self.connection, self.transport) self.write = 
self.transport.write def test_write_fast_header(self): frame = 1, 1, spec.Queue.Declare, b'x' * 30, None self.g(*frame) self.write.assert_called() def test_write_fast_content(self): msg = Message(body=b'y' * 10, content_type='utf-8') frame = 2, 1, spec.Basic.Publish, b'x' * 10, msg self.g(*frame) self.write.assert_called() assert 'content_encoding' not in msg.properties def test_write_slow_content(self): msg = Message(body=b'y' * 2048, content_type='utf-8') frame = 2, 1, spec.Basic.Publish, b'x' * 10, msg self.g(*frame) self.write.assert_called() assert 'content_encoding' not in msg.properties def test_write_zero_len_body(self): msg = Message(body=b'', content_type='application/octet-stream') frame = 2, 1, spec.Basic.Publish, b'x' * 10, msg self.g(*frame) self.write.assert_called() assert 'content_encoding' not in msg.properties def test_write_fast_unicode(self): msg = Message(body='\N{CHECK MARK}') frame = 2, 1, spec.Basic.Publish, b'x' * 10, msg self.g(*frame) self.write.assert_called() memory = self.write.call_args[0][0] assert isinstance(memory, memoryview) assert '\N{CHECK MARK}'.encode() in memory.tobytes() assert msg.properties['content_encoding'] == 'utf-8' def test_write_slow_unicode(self): msg = Message(body='y' * 2048 + '\N{CHECK MARK}') frame = 2, 1, spec.Basic.Publish, b'x' * 10, msg self.g(*frame) self.write.assert_called() memory = self.write.call_args[0][0] assert isinstance(memory, bytes) assert '\N{CHECK MARK}'.encode() in memory assert msg.properties['content_encoding'] == 'utf-8' def test_write_non_utf8(self): msg = Message(body='body', content_encoding='utf-16') frame = 2, 1, spec.Basic.Publish, b'x' * 10, msg self.g(*frame) self.write.assert_called() memory = self.write.call_args[0][0] assert isinstance(memory, memoryview) assert 'body'.encode('utf-16') in memory.tobytes() assert msg.properties['content_encoding'] == 'utf-16' def test_write_frame__fast__buffer_store_resize(self): """The buffer_store is resized when the connection's frame_max 
is increased.""" small_msg = Message(body='t') small_frame = 2, 1, spec.Basic.Publish, b'x' * 10, small_msg self.g(*small_frame) self.write.assert_called_once() write_arg = self.write.call_args[0][0] assert isinstance(write_arg, memoryview) assert len(write_arg) < self.connection.frame_max self.connection.reset_mock() # write a larger message to the same frame_writer after increasing frame_max large_msg = Message(body='t' * (self.connection.frame_max + 10)) large_frame = 2, 1, spec.Basic.Publish, b'x' * 10, large_msg original_frame_max = self.connection.frame_max self.connection.frame_max += 100 self.g(*large_frame) self.write.assert_called_once() write_arg = self.write.call_args[0][0] assert isinstance(write_arg, memoryview) assert len(write_arg) > original_frame_max

# amqp-5.3.1/t/unit/test_platform.py
import importlib import itertools import operator import pytest from amqp.platform import _linux_version_to_tuple def test_struct_argument_type(): from amqp.exceptions import FrameSyntaxError FrameSyntaxError() @pytest.mark.parametrize('s,expected', [ ('3.13.0-46-generic', (3, 13, 0)), ('3.19.43-1-amd64', (3, 19, 43)), ('4.4.34+', (4, 4, 34)), ('4.4.what', (4, 4, 0)), ('4.what.what', (4, 0, 0)), ('4.4.0-43-Microsoft', (4, 4, 0)), ]) def test_linux_version_to_tuple(s, expected): assert _linux_version_to_tuple(s) == expected def monkeypatch_platform(monkeypatch, sys_platform, platform_release): monkeypatch.setattr("sys.platform", sys_platform) def release(): return platform_release monkeypatch.setattr("platform.release", release) def test_tcp_opts_change(monkeypatch): monkeypatch_platform(monkeypatch, 'linux', '2.6.36-1-amd64') import amqp.platform importlib.reload(amqp.platform) old_linux = amqp.platform.KNOWN_TCP_OPTS monkeypatch_platform(monkeypatch, 'linux', '2.6.37-0-41-generic') 
importlib.reload(amqp.platform) new_linux = amqp.platform.KNOWN_TCP_OPTS monkeypatch_platform(monkeypatch, 'win32', '7') importlib.reload(amqp.platform) win = amqp.platform.KNOWN_TCP_OPTS monkeypatch_platform(monkeypatch, 'linux', '4.4.0-43-Microsoft') importlib.reload(amqp.platform) win_bash = amqp.platform.KNOWN_TCP_OPTS li = [old_linux, new_linux, win, win_bash] assert all(operator.ne(*i) for i in itertools.combinations(li, 2)) assert len(win) <= len(win_bash) < len(old_linux) < len(new_linux)

# amqp-5.3.1/t/unit/test_sasl.py
import contextlib import socket import sys from io import BytesIO from unittest.mock import Mock, call, patch import pytest from amqp import sasl from amqp.serialization import _write_table class test_SASL: def test_sasl_notimplemented(self): mech = sasl.SASL() with pytest.raises(NotImplementedError): mech.mechanism with pytest.raises(NotImplementedError): mech.start(None) def test_plain(self): username, password = 'foo', 'bar' mech = sasl.PLAIN(username, password) response = mech.start(None) assert isinstance(response, bytes) assert response.split(b'\0') == \ [b'', username.encode('utf-8'), password.encode('utf-8')] def test_plain_no_password(self): username, password = 'foo', None mech = sasl.PLAIN(username, password) response = mech.start(None) assert response == NotImplemented def test_amqplain(self): username, password = 'foo', 'bar' mech = sasl.AMQPLAIN(username, password) response = mech.start(None) assert isinstance(response, bytes) login_response = BytesIO() _write_table({b'LOGIN': username, b'PASSWORD': password}, login_response.write, []) expected_response = login_response.getvalue()[4:] assert response == expected_response def test_amqplain_no_password(self): username, password = 'foo', None mech = sasl.AMQPLAIN(username, password) response = mech.start(None) assert 
response == NotImplemented def test_gssapi_missing(self): gssapi = sys.modules.pop('gssapi', None) GSSAPI = sasl._get_gssapi_mechanism() with pytest.raises(NotImplementedError): GSSAPI() if gssapi is not None: sys.modules['gssapi'] = gssapi @contextlib.contextmanager def fake_gssapi(self): orig_gssapi = sys.modules.pop('gssapi', None) orig_gssapi_raw = sys.modules.pop('gssapi.raw', None) orig_gssapi_raw_misc = sys.modules.pop('gssapi.raw.misc', None) gssapi = sys.modules['gssapi'] = Mock() sys.modules['gssapi.raw'] = gssapi.raw sys.modules['gssapi.raw.misc'] = gssapi.raw.misc class GSSError(Exception): pass gssapi.raw.misc.GSSError = GSSError try: yield gssapi finally: if orig_gssapi is None: del sys.modules['gssapi'] else: sys.modules['gssapi'] = orig_gssapi if orig_gssapi_raw is None: del sys.modules['gssapi.raw'] else: sys.modules['gssapi.raw'] = orig_gssapi_raw if orig_gssapi_raw_misc is None: del sys.modules['gssapi.raw.misc'] else: sys.modules['gssapi.raw.misc'] = orig_gssapi_raw_misc def test_gssapi_rdns(self): with self.fake_gssapi() as gssapi, \ patch('socket.gethostbyaddr') as gethostbyaddr: connection = Mock() connection.transport.sock.getpeername.return_value = ('192.0.2.0', 5672) connection.transport.sock.family = socket.AF_INET gethostbyaddr.return_value = ('broker.example.org', (), ()) GSSAPI = sasl._get_gssapi_mechanism() mech = GSSAPI(rdns=True) mech.start(connection) connection.transport.sock.getpeername.assert_called_with() gethostbyaddr.assert_called_with('192.0.2.0') gssapi.Name.assert_called_with(b'amqp@broker.example.org', gssapi.NameType.hostbased_service) def test_gssapi_no_rdns(self): with self.fake_gssapi() as gssapi: connection = Mock() connection.transport.host = 'broker.example.org' GSSAPI = sasl._get_gssapi_mechanism() mech = GSSAPI() mech.start(connection) gssapi.Name.assert_called_with(b'amqp@broker.example.org', gssapi.NameType.hostbased_service) def test_gssapi_step_without_client_name(self): with self.fake_gssapi() as gssapi: 
context = Mock() context.step.return_value = b'secrets' name = Mock() gssapi.SecurityContext.return_value = context gssapi.Name.return_value = name connection = Mock() connection.transport.host = 'broker.example.org' GSSAPI = sasl._get_gssapi_mechanism() mech = GSSAPI() response = mech.start(connection) gssapi.SecurityContext.assert_called_with(name=name, creds=None) context.step.assert_called_with(None) assert response == b'secrets' def test_gssapi_step_with_client_name(self): with self.fake_gssapi() as gssapi: context = Mock() context.step.return_value = b'secrets' client_name, service_name, credentials = Mock(), Mock(), Mock() gssapi.SecurityContext.return_value = context gssapi.Credentials.return_value = credentials gssapi.Name.side_effect = [client_name, service_name] connection = Mock() connection.transport.host = 'broker.example.org' GSSAPI = sasl._get_gssapi_mechanism() mech = GSSAPI(client_name='amqp-client/client.example.org') response = mech.start(connection) gssapi.Name.assert_has_calls([ call(b'amqp-client/client.example.org'), call(b'amqp@broker.example.org', gssapi.NameType.hostbased_service)]) gssapi.Credentials.assert_called_with(name=client_name) gssapi.SecurityContext.assert_called_with(name=service_name, creds=credentials) context.step.assert_called_with(None) assert response == b'secrets' def test_external(self): mech = sasl.EXTERNAL() response = mech.start(None) assert isinstance(response, bytes) assert response == b''

# amqp-5.3.1/t/unit/test_serialization.py
import pickle from datetime import datetime from decimal import Decimal from math import ceil from struct import pack import pytest from amqp.basic_message import Message from amqp.exceptions import FrameSyntaxError from amqp.serialization import GenericContent, _read_item, dumps, loads class _ANY: def __eq__(self, other): return 
other is not None def __ne__(self, other): return other is None class test_serialization: @pytest.mark.parametrize('descr,frame,expected,cast', [ ('S', b's8thequick', 'thequick', None), ('S', b'S\x00\x00\x00\x03\xc0\xc0\x00', b'\xc0\xc0\x00', None), ('x', b'x\x00\x00\x00\x09thequick\xffIGNORED', b'thequick\xff', None), ('b', b'b' + pack('>B', True), True, None), ('B', b'B' + pack('>b', 123), 123, None), ('U', b'U' + pack('>h', -321), -321, None), ('u', b'u' + pack('>H', 321), 321, None), ('i', b'i' + pack('>I', 1234), 1234, None), ('L', b'L' + pack('>q', -32451), -32451, None), ('l', b'l' + pack('>Q', 32451), 32451, None), ('f', b'f' + pack('>f', 33.3), 34.0, ceil), ]) def test_read_item(self, descr, frame, expected, cast): actual = _read_item(frame, 0)[0] actual = cast(actual) if cast else actual assert actual == expected def test_read_item_V(self): assert _read_item(b'V', 0)[0] is None def test_roundtrip(self): format = b'bobBlLbsbSTx' x = dumps(format, [ True, 32, False, 3415, 4513134, 13241923419, True, b'thequickbrownfox', False, 'jumpsoverthelazydog', datetime(2015, 3, 13, 10, 23), b'thequick\xff' ]) y = loads(format, x, 0) assert [ True, 32, False, 3415, 4513134, 13241923419, True, 'thequickbrownfox', False, 'jumpsoverthelazydog', datetime(2015, 3, 13, 10, 23), b'thequick\xff' ] == y[0] def test_int_boundaries(self): format = b'F' x = dumps(format, [ {'a': -2147483649, 'b': 2147483648}, # celery/celery#3121 ]) y = loads(format, x, 0) assert y[0] == [{ 'a': -2147483649, 'b': 2147483648, # celery/celery#3121 }] def test_loads_unknown_type(self): with pytest.raises(FrameSyntaxError): loads('y', 'asdsad', 0) def test_float(self): data = int(loads(b'fb', dumps(b'fb', [32.31, False]), 0)[0][0] * 100) assert(data == 3231) def test_table(self): table = { 'foo': 32, 'bar': 'baz', 'nil': None, 'array': [ 1, True, 'bar' ] } assert loads(b'F', dumps(b'F', [table]), 0)[0][0] == table def test_table__unknown_type(self): table = { 'foo': object(), 'bar': 'baz', 'nil': 
None, 'array': [ 1, True, 'bar' ] } with pytest.raises(FrameSyntaxError): dumps(b'F', [table]) def test_array(self): array = [ 'A', 1, True, 33.3, Decimal('55.5'), Decimal('-3.4'), datetime(2015, 3, 13, 10, 23), {'quick': 'fox', 'amount': 1}, [3, 'hens'], None, ] expected = list(array) expected[6] = _ANY() assert expected == loads('A', dumps('A', [array]), 0)[0][0] def test_array_unknown_type(self): with pytest.raises(FrameSyntaxError): dumps('A', [[object()]]) def test_bit_offset_adjusted_correctly(self): expected = [50, "quick", "fox", True, False, False, True, True, {"prop1": True}] buf = dumps('BssbbbbbF', expected) actual, _ = loads('BssbbbbbF', buf, 0) assert actual == expected def test_sixteen_bitflags(self): expected = [True, False] * 8 format = 'b' * len(expected) buf = dumps(format, expected) actual, _ = loads(format, buf, 0) assert actual == expected class test_GenericContent: @pytest.fixture(autouse=True) def setup_content(self): self.g = GenericContent() def test_getattr(self): self.g.properties['foo'] = 30 assert self.g.foo == 30 with pytest.raises(AttributeError): self.g.bar def test_pickle(self): pickle.loads(pickle.dumps(self.g)) def test_load_properties(self): m = Message() m.properties = { 'content_type': 'application/json', 'content_encoding': 'utf-8', 'application_headers': { 'foo': 1, 'id': 'id#1', }, 'delivery_mode': 1, 'priority': 255, 'correlation_id': 'df31-142f-34fd-g42d', 'reply_to': 'cosmo', 'expiration': '2015-12-23', 'message_id': '3312', 'timestamp': 3912491234, 'type': 'generic', 'user_id': 'george', 'app_id': 'vandelay', 'cluster_id': 'NYC', } s = m._serialize_properties() m2 = Message() m2._load_properties(m2.CLASS_ID, s, 0) assert m2.properties == m.properties def test_load_properties__some_missing(self): m = Message() m.properties = { 'content_type': 'application/json', 'content_encoding': 'utf-8', 'delivery_mode': 1, 'correlation_id': 'df31-142f-34fd-g42d', 'reply_to': 'cosmo', 'expiration': '2015-12-23', 'message_id': '3312', 
'type': None, 'app_id': None, 'cluster_id': None, } s = m._serialize_properties() m2 = Message() m2._load_properties(m2.CLASS_ID, s, 0) def test_inbound_header(self): m = Message() m.properties = { 'content_type': 'application/json', 'content_encoding': 'utf-8', } body = 'the quick brown fox' buf = b'\0' * 30 + pack('>HxxQ', m.CLASS_ID, len(body)) buf += m._serialize_properties() assert m.inbound_header(buf, offset=30) == 42 assert m.body_size == len(body) assert m.properties['content_type'] == 'application/json' assert not m.ready def test_inbound_header__empty_body(self): m = Message() m.properties = {} buf = pack('>HxxQ', m.CLASS_ID, 0) buf += m._serialize_properties() assert m.inbound_header(buf, offset=0) == 12 assert m.ready def test_inbound_body(self): m = Message() m.body_size = 16 m.body_received = 8 m._pending_chunks = [b'the', b'quick'] m.inbound_body(b'brown') assert not m.ready m.inbound_body(b'fox') assert m.ready assert m.body == b'thequickbrownfox' def test_inbound_body__no_chunks(self): m = Message() m.body_size = 16 m.inbound_body('thequickbrownfox') assert m.ready

# amqp-5.3.1/t/unit/test_transport.py
import errno import os import re import ssl import socket import struct from struct import pack from unittest.mock import ANY, MagicMock, Mock, call, patch, sentinel import pytest from amqp import transport from amqp.exceptions import UnexpectedFrame from amqp.transport import _AbstractTransport SIGNED_INT_MAX = 0x7FFFFFFF class DummyException(Exception): pass class MockSocket: options = {} def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.connected = False self.sa = None def setsockopt(self, family, key, value): is_sol_socket = family == socket.SOL_SOCKET is_receive_or_send_timeout = key in (socket.SO_RCVTIMEO, socket.SO_SNDTIMEO) if is_sol_socket and 
is_receive_or_send_timeout: self.options[key] = value elif not isinstance(value, int): raise OSError() self.options[key] = value def getsockopt(self, family, key): return self.options.get(key, 0) def settimeout(self, timeout): self.timeout = timeout def fileno(self): return 10 def connect(self, sa): self.connected = True self.sa = sa def close(self): self.connected = False self.sa = None def getsockname(self): return ('127.0.0.1', 1234) def getpeername(self): if self.connected: return ('1.2.3.4', 5671) else: raise socket.error TCP_KEEPIDLE = 4 TCP_KEEPINTVL = 5 TCP_KEEPCNT = 6 class test_socket_options: @pytest.fixture(autouse=True) def setup_self(self, patching): self.host = '127.0.0.1' self.connect_timeout = 3 self.socket = MockSocket() socket = patching('socket.socket') socket().getsockopt = self.socket.getsockopt socket().setsockopt = self.socket.setsockopt self.tcp_keepidle = 20 self.tcp_keepintvl = 30 self.tcp_keepcnt = 40 self.socket.setsockopt( socket.SOL_TCP, socket.TCP_NODELAY, 1, ) self.socket.setsockopt( socket.SOL_TCP, TCP_KEEPIDLE, self.tcp_keepidle, ) self.socket.setsockopt( socket.SOL_TCP, TCP_KEEPINTVL, self.tcp_keepintvl, ) self.socket.setsockopt( socket.SOL_TCP, TCP_KEEPCNT, self.tcp_keepcnt, ) patching('amqp.transport.TCPTransport._write') patching('amqp.transport.TCPTransport._setup_transport') patching('amqp.transport.SSLTransport._write') patching('amqp.transport.SSLTransport._setup_transport') patching('amqp.transport.set_cloexec') def test_backward_compatibility_tcp_transport(self): self.transp = transport.Transport( self.host, self.connect_timeout, ssl=False, ) self.transp.connect() expected = 1 result = self.socket.getsockopt(socket.SOL_TCP, socket.TCP_NODELAY) assert result == expected def test_backward_compatibility_SSL_transport(self): self.transp = transport.Transport( self.host, self.connect_timeout, ssl=True, ) assert self.transp.sslopts is not None self.transp.connect() assert self.transp.sock is not None def 
test_use_default_sock_tcp_opts(self): self.transp = transport.Transport( self.host, self.connect_timeout, socket_settings={}, ) self.transp.connect() assert (socket.TCP_NODELAY in self.transp._get_tcp_socket_defaults(self.transp.sock)) def test_set_single_sock_tcp_opt_tcp_transport(self): tcp_keepidle = self.tcp_keepidle + 5 socket_settings = {TCP_KEEPIDLE: tcp_keepidle} self.transp = transport.Transport( self.host, self.connect_timeout, ssl=False, socket_settings=socket_settings, ) self.transp.connect() expected = tcp_keepidle result = self.socket.getsockopt(socket.SOL_TCP, TCP_KEEPIDLE) assert result == expected def test_set_single_sock_tcp_opt_SSL_transport(self): self.tcp_keepidle += 5 socket_settings = {TCP_KEEPIDLE: self.tcp_keepidle} self.transp = transport.Transport( self.host, self.connect_timeout, ssl=True, socket_settings=socket_settings, ) self.transp.connect() expected = self.tcp_keepidle result = self.socket.getsockopt(socket.SOL_TCP, TCP_KEEPIDLE) assert result == expected def test_values_are_set(self): socket_settings = { TCP_KEEPIDLE: 10, TCP_KEEPINTVL: 4, TCP_KEEPCNT: 2 } self.transp = transport.Transport( self.host, self.connect_timeout, socket_settings=socket_settings, ) self.transp.connect() expected = socket_settings tcp_keepidle = self.socket.getsockopt(socket.SOL_TCP, TCP_KEEPIDLE) tcp_keepintvl = self.socket.getsockopt(socket.SOL_TCP, TCP_KEEPINTVL) tcp_keepcnt = self.socket.getsockopt(socket.SOL_TCP, TCP_KEEPCNT) result = { TCP_KEEPIDLE: tcp_keepidle, TCP_KEEPINTVL: tcp_keepintvl, TCP_KEEPCNT: tcp_keepcnt } assert result == expected def test_passing_wrong_options(self): socket_settings = object() self.transp = transport.Transport( self.host, self.connect_timeout, socket_settings=socket_settings, ) with pytest.raises(TypeError): self.transp.connect() def test_passing_wrong_value_options(self): socket_settings = {TCP_KEEPINTVL: b'a'} self.transp = transport.Transport( self.host, self.connect_timeout, socket_settings=socket_settings, ) with 
pytest.raises(socket.error): self.transp.connect() def test_passing_value_as_string(self): socket_settings = {TCP_KEEPIDLE: b'5'} self.transp = transport.Transport( self.host, self.connect_timeout, socket_settings=socket_settings, ) with pytest.raises(socket.error): self.transp.connect() def test_passing_tcp_nodelay(self): socket_settings = {socket.TCP_NODELAY: 0} self.transp = transport.Transport( self.host, self.connect_timeout, socket_settings=socket_settings, ) self.transp.connect() expected = 0 result = self.socket.getsockopt(socket.SOL_TCP, socket.TCP_NODELAY) assert result == expected def test_platform_socket_opts(self): s = socket.socket() opts = _AbstractTransport(self.host)._get_tcp_socket_defaults(s) assert opts def test_set_sockopt_opts_timeout(self): # tests socket options SO_RCVTIMEO and SO_SNDTIMEO self.transp = transport.Transport( self.host, self.connect_timeout, ) read_timeout_sec, read_timeout_usec = 0xdead, 0xbeef write_timeout_sec = 0x42 read_timeout = read_timeout_sec + read_timeout_usec * 0.000001 self.transp.read_timeout = read_timeout self.transp.write_timeout = write_timeout_sec self.transp.connect() expected_rcvtimeo = struct.pack('ll', read_timeout_sec, read_timeout_usec) expected_sndtimeo = struct.pack('ll', write_timeout_sec, 0) assert expected_rcvtimeo == self.socket.getsockopt(socket.SOL_TCP, socket.SO_RCVTIMEO) assert expected_sndtimeo == self.socket.getsockopt(socket.SOL_TCP, socket.SO_SNDTIMEO) def test_transport_repr_issue_361(self): "Regression test for https://github.com/celery/py-amqp/issues/361" self.t = transport.Transport(self.host) self.t.sock = MockSocket() self.t.sock.connect(None) assert '127.0.0.1:1234 -> 1.2.3.4:5671' in repr(self.t) self.t.sock.connected = False self.t.sock.close() assert '127.0.0.1:1234 -> ERROR:' in repr(self.t) class test_AbstractTransport: class Transport(transport._AbstractTransport): def _connect(self, *args): pass def _init_socket(self, *args): pass @pytest.fixture(autouse=True) def 
setup_transport(self, patching): self.t = self.Transport('localhost:5672', 10) self.t.connect() patching('amqp.transport.set_cloexec') def test_port(self): assert self.Transport('localhost').port == 5672 assert self.Transport('localhost:5672').port == 5672 assert self.Transport('[fe80::1]:5432').port == 5432 def test_read(self): with pytest.raises(NotImplementedError): self.t._read(1024) def test_setup_transport(self): self.t._setup_transport() def test_shutdown_transport(self): self.t._shutdown_transport() def test_write(self): with pytest.raises(NotImplementedError): self.t._write('foo') def test_close(self): sock = self.t.sock = Mock() self.t.close() sock.shutdown.assert_called_with(socket.SHUT_RDWR) sock.close.assert_called_with() assert self.t.sock is None and self.t.connected is False self.t.close() assert self.t.sock is None and self.t.connected is False def test_close_os_error(self): sock = self.t.sock = Mock() sock.shutdown.side_effect = OSError self.t.close() sock.close.assert_called_with() assert self.t.sock is None and self.t.connected is False def test_read_frame__timeout(self): self.t._read = Mock() self.t._read.side_effect = socket.timeout() with pytest.raises(socket.timeout): self.t.read_frame() def test_read_frame__SSLError(self): self.t._read = Mock() self.t._read.side_effect = transport.SSLError('timed out') with pytest.raises(socket.timeout): self.t.read_frame() def test_read_frame__EINTR(self): self.t._read = Mock() self.t.connected = True exc = OSError() exc.errno = errno.EINTR self.t._read.side_effect = exc with pytest.raises(OSError): self.t.read_frame() assert self.t.connected def test_read_frame__EBADF(self): self.t._read = Mock() self.t.connected = True exc = OSError() exc.errno = errno.EBADF self.t._read.side_effect = exc with pytest.raises(OSError): self.t.read_frame() assert not self.t.connected def test_read_frame__simple(self): self.t._read = Mock() checksum = [b'\xce'] def on_read2(size, *args): return checksum[0] def on_read1(size, 
*args): ret = self.t._read.return_value self.t._read.return_value = b'thequickbrownfox' self.t._read.side_effect = on_read2 return ret self.t._read.return_value = pack('>BHI', 1, 1, 16) self.t._read.side_effect = on_read1 self.t.read_frame() self.t._read.return_value = pack('>BHI', 1, 1, 16) self.t._read.side_effect = on_read1 checksum[0] = b'\x13' with pytest.raises(UnexpectedFrame) as ex: self.t.read_frame() assert ex.value.code == 505 assert ex.value.message == \ 'Received frame_end 0x13 while expecting 0xce' def test_read_frame__long(self): self.t._read = Mock() self.t._read.side_effect = [pack('>BHI', 1, 1, SIGNED_INT_MAX + 16), b'read1', b'read2', b'\xce'] frame_type, channel, payload = self.t.read_frame() assert frame_type == 1 assert channel == 1 assert payload == b'read1read2' def transport_read_EOF(self): for host, is_ssl in (('localhost:5672', False), ('localhost:5671', True),): self.t = transport.Transport(host, is_ssl) self.t.sock = Mock(name='socket') self.t.connected = True self.t._quick_recv = Mock(name='recv', return_value='') with pytest.raises( IOError, match=r'.*Server unexpectedly closed connection.*' ): self.t.read_frame() def test_write__success(self): self.t._write = Mock() self.t.write('foo') self.t._write.assert_called_with('foo') def test_write__socket_timeout(self): self.t._write = Mock() self.t._write.side_effect = socket.timeout with pytest.raises(socket.timeout): self.t.write('foo') def test_write__EINTR(self): self.t.connected = True self.t._write = Mock() exc = OSError() exc.errno = errno.EINTR self.t._write.side_effect = exc with pytest.raises(OSError): self.t.write('foo') assert self.t.connected exc.errno = errno.EBADF with pytest.raises(OSError): self.t.write('foo') assert not self.t.connected def test_having_timeout_none(self): # Checks that context manager does nothing when no timeout is provided with self.t.having_timeout(None) as actual_sock: assert actual_sock == self.t.sock def test_set_timeout(self): # Checks that context 
manager sets and reverts timeout properly with patch.object(self.t, 'sock') as sock_mock: sock_mock.gettimeout.return_value = 3 with self.t.having_timeout(5) as actual_sock: assert actual_sock == self.t.sock sock_mock.gettimeout.assert_called_with() sock_mock.settimeout.assert_has_calls( [ call(5), call(3), ] ) def test_set_timeout_exception_raised(self): # Checks that context manager sets and reverts timeout properly # when exception is raised. with patch.object(self.t, 'sock') as sock_mock: sock_mock.gettimeout.return_value = 3 with pytest.raises(DummyException): with self.t.having_timeout(5) as actual_sock: assert actual_sock == self.t.sock raise DummyException() sock_mock.gettimeout.assert_called_with() sock_mock.settimeout.assert_has_calls( [ call(5), call(3), ] ) def test_set_same_timeout(self): # Checks that context manager does not set timeout when # it is same as currently set. with patch.object(self.t, 'sock') as sock_mock: sock_mock.gettimeout.return_value = 5 with self.t.having_timeout(5) as actual_sock: assert actual_sock == self.t.sock sock_mock.gettimeout.assert_called_with() sock_mock.settimeout.assert_not_called() def test_set_timeout_ewouldblock_exc(self): # We expect EWOULDBLOCK to be handled as a timeout. with patch.object(self.t, 'sock') as sock_mock: sock_mock.gettimeout.return_value = 3 with pytest.raises(socket.timeout): with self.t.having_timeout(5): err = socket.error() err.errno = errno.EWOULDBLOCK raise err class DummySocketError(socket.error): pass # Other socket errors shouldn't be converted. 
with pytest.raises(DummySocketError): with self.t.having_timeout(5): raise DummySocketError() class test_AbstractTransport_connect: class Transport(transport._AbstractTransport): def _init_socket(self, *args): pass @pytest.fixture(autouse=True) def setup_transport(self, patching): self.t = self.Transport('localhost:5672', 10) patching('amqp.transport.set_cloexec') def test_connect_socket_fails(self): with patch('socket.socket', side_effect=socket.error): with pytest.raises(socket.error): self.t.connect() assert self.t.sock is None and self.t.connected is False def test_connect_socket_initialization_fails(self): with patch('socket.socket', side_effect=socket.error), \ patch('socket.getaddrinfo', return_value=[ (socket.AF_INET, 1, socket.IPPROTO_TCP, '', ('127.0.0.1', 5672)), (socket.AF_INET, 1, socket.IPPROTO_TCP, '', ('127.0.0.2', 5672)) ]): with pytest.raises(socket.error): self.t.connect() assert self.t.sock is None and self.t.connected is False def test_connect_multiple_addr_entries_fails(self): with patch('socket.socket', return_value=MockSocket()) as sock_mock, \ patch('socket.getaddrinfo', return_value=[ (socket.AF_INET, 1, socket.IPPROTO_TCP, '', ('127.0.0.1', 5672)), (socket.AF_INET, 1, socket.IPPROTO_TCP, '', ('127.0.0.2', 5672)) ]): self.t.sock = Mock() self.t.close() with patch.object(sock_mock.return_value, 'connect', side_effect=socket.error): with pytest.raises(socket.error): self.t.connect() def test_connect_multiple_addr_entries_succeed(self): with patch('socket.socket', return_value=MockSocket()) as sock_mock, \ patch('socket.getaddrinfo', return_value=[ (socket.AF_INET, 1, socket.IPPROTO_TCP, '', ('127.0.0.1', 5672)), (socket.AF_INET, 1, socket.IPPROTO_TCP, '', ('127.0.0.2', 5672)) ]): self.t.sock = Mock() self.t.close() with patch.object(sock_mock.return_value, 'connect', side_effect=(socket.error, None)): self.t.connect() def test_connect_calls_getaddrinfo_with_af_unspec(self): with patch('socket.socket', return_value=MockSocket()), \ 
patch('socket.getaddrinfo') as getaddrinfo: self.t.sock = Mock() self.t.close() self.t.connect() getaddrinfo.assert_called_with( 'localhost', 5672, socket.AF_UNSPEC, ANY, ANY) def test_connect_getaddrinfo_raises_gaierror(self): with patch('socket.getaddrinfo', side_effect=socket.gaierror): with pytest.raises(socket.error): self.t.connect() def test_connect_survives_not_implemented_set_cloexec(self): with patch('socket.socket', return_value=MockSocket()), \ patch('socket.getaddrinfo', return_value=[(socket.AF_INET, 1, socket.IPPROTO_TCP, '', ('127.0.0.1', 5672))]): with patch('amqp.transport.set_cloexec', side_effect=NotImplementedError) as cloexec_mock: self.t.connect() assert cloexec_mock.called def test_connect_already_connected(self): assert not self.t.connected with patch('socket.socket', return_value=MockSocket()): self.t.connect() assert self.t.connected sock_obj = self.t.sock self.t.connect() assert self.t.connected and self.t.sock is sock_obj def test_close__close_error(self): # sock.close() can raise an error if the fd is invalid # make sure the socket is properly deallocated sock = self.t.sock = Mock() sock.unwrap.return_value = sock sock.close.side_effect = OSError self.t.close() sock.close.assert_called_with() assert self.t.sock is None and self.t.connected is False class test_SSLTransport: class Transport(transport.SSLTransport): def _connect(self, *args): pass def _init_socket(self, *args): pass def test_repr_disconnected(self): assert re.fullmatch( r'<SSLTransport: \(disconnected\) at 0x.*>', repr(transport.SSLTransport('host', 3)) ) def test_repr_connected(self): t = transport.SSLTransport('host', 3) t.sock = MockSocket() re.fullmatch( '<SSLTransport: 127.0.0.1:1234 -> 1.2.3.4:5671 at 0x.*>', repr(t) ) @pytest.fixture(autouse=True) def setup_transport(self): self.t = self.Transport( 'fe80::9a5a:ebff::fecb::ad1c:30', 3, ssl={'foo': 30}, ) def test_setup_transport(self): sock = self.t.sock = Mock() self.t._wrap_socket = Mock() self.t._setup_transport() self.t._wrap_socket.assert_called_with(sock, foo=30) 
        self.t.sock.do_handshake.assert_called_with()
        assert self.t._quick_recv is self.t.sock.read

    def test_wrap_socket(self):
        sock = Mock()
        self.t._wrap_context = Mock()
        self.t._wrap_socket_sni = Mock()
        self.t._wrap_socket(sock, foo=1)
        self.t._wrap_socket_sni.assert_called_with(sock, foo=1)
        self.t._wrap_socket(sock, {'c': 2}, foo=1)
        self.t._wrap_context.assert_called_with(sock, {'foo': 1}, c=2)

    def test_wrap_context(self):
        with patch('ssl.create_default_context',
                   create=True) as create_default_context:
            sock = Mock()
            self.t._wrap_context(sock, {'f': 1}, check_hostname=True, bar=3)
            create_default_context.assert_called_with(bar=3)
            ctx = create_default_context()
            assert ctx.check_hostname
            ctx.wrap_socket.assert_called_with(sock, f=1)

    def test_wrap_socket_sni(self):
        # testing default values of _wrap_socket_sni()
        with patch('ssl.SSLContext') as mock_ssl_context_class:
            sock = Mock()
            context = mock_ssl_context_class()
            context.wrap_socket.return_value = sentinel.WRAPPED_SOCKET
            ret = self.t._wrap_socket_sni(sock)
            context.load_cert_chain.assert_not_called()
            context.load_verify_locations.assert_not_called()
            context.set_ciphers.assert_not_called()
            context.verify_mode.assert_not_called()
            context.load_default_certs.assert_called_with(
                ssl.Purpose.SERVER_AUTH
            )
            context.wrap_socket.assert_called_with(
                sock=sock,
                server_side=False,
                do_handshake_on_connect=False,
                suppress_ragged_eofs=True,
                server_hostname=None
            )
            assert ret == sentinel.WRAPPED_SOCKET

    def test_wrap_socket_sni_certfile(self):
        # testing _wrap_socket_sni() with parameters certfile and keyfile
        with patch('ssl.SSLContext') as mock_ssl_context_class:
            sock = Mock()
            context = mock_ssl_context_class()
            self.t._wrap_socket_sni(
                sock, keyfile=sentinel.KEYFILE, certfile=sentinel.CERTFILE
            )
            context.load_default_certs.assert_called_with(
                ssl.Purpose.SERVER_AUTH
            )
            context.load_cert_chain.assert_called_with(
                sentinel.CERTFILE, sentinel.KEYFILE
            )

    def test_wrap_socket_ca_certs(self):
        # testing _wrap_socket_sni() with parameter ca_certs
        with patch('ssl.SSLContext') as mock_ssl_context_class:
            sock = Mock()
            context = mock_ssl_context_class()
            self.t._wrap_socket_sni(sock, ca_certs=sentinel.CA_CERTS)
            context.load_default_certs.assert_not_called()
            context.load_verify_locations.assert_called_with(
                sentinel.CA_CERTS
            )

    def test_wrap_socket_ciphers(self):
        # testing _wrap_socket_sni() with parameter ciphers
        with patch('ssl.SSLContext') as mock_ssl_context_class:
            sock = Mock()
            context = mock_ssl_context_class()
            set_ciphers_method_mock = context.set_ciphers
            self.t._wrap_socket_sni(sock, ciphers=sentinel.CIPHERS)
            set_ciphers_method_mock.assert_called_with(sentinel.CIPHERS)

    def test_wrap_socket_sni_cert_reqs(self):
        # testing _wrap_socket_sni() with parameter cert_reqs == ssl.CERT_NONE
        with patch('ssl.SSLContext') as mock_ssl_context_class:
            sock = Mock()
            context = mock_ssl_context_class()
            self.t._wrap_socket_sni(sock, cert_reqs=ssl.CERT_NONE)
            context.load_default_certs.assert_not_called()
            assert context.verify_mode == ssl.CERT_NONE

        # testing _wrap_socket_sni() with parameter cert_reqs != ssl.CERT_NONE
        with patch('ssl.SSLContext') as mock_ssl_context_class:
            sock = Mock()
            context = mock_ssl_context_class()
            self.t._wrap_socket_sni(sock, cert_reqs=sentinel.CERT_REQS)
            context.load_default_certs.assert_called_with(
                ssl.Purpose.SERVER_AUTH
            )
            assert context.verify_mode == sentinel.CERT_REQS

        # testing context creation inside _wrap_socket_sni() with parameter
        # cert_reqs == ssl.CERT_NONE. This previously raised ValueError
        # because the code path attempted to set
        # context.verify_mode = ssl.CERT_NONE before setting
        # context.check_hostname = False
        with patch('ssl.SSLContext.wrap_socket') as mock_wrap_socket:
            with patch('ssl.SSLContext.load_default_certs') as \
                    mock_load_default_certs:
                sock = Mock()
                self.t._wrap_socket_sni(
                    sock, server_side=True, cert_reqs=ssl.CERT_NONE
                )
                mock_load_default_certs.assert_not_called()
                mock_wrap_socket.assert_called_once()

        with patch('ssl.SSLContext.wrap_socket') as mock_wrap_socket:
            with patch('ssl.SSLContext.load_default_certs') as \
                    mock_load_default_certs:
                sock = Mock()
                self.t._wrap_socket_sni(
                    sock, server_side=False, cert_reqs=ssl.CERT_NONE
                )
                mock_load_default_certs.assert_not_called()
                mock_wrap_socket.assert_called_once()

        with patch('ssl.SSLContext.wrap_socket') as mock_wrap_socket:
            with patch('ssl.SSLContext.load_default_certs') as \
                    mock_load_default_certs:
                sock = Mock()
                self.t._wrap_socket_sni(
                    sock, server_side=True, cert_reqs=ssl.CERT_REQUIRED
                )
                mock_load_default_certs.assert_called_with(
                    ssl.Purpose.CLIENT_AUTH
                )
                mock_wrap_socket.assert_called_once()

        with patch('ssl.SSLContext.wrap_socket') as mock_wrap_socket:
            with patch('ssl.SSLContext.load_default_certs') as \
                    mock_load_default_certs:
                sock = Mock()
                self.t._wrap_socket_sni(
                    sock, server_side=False, cert_reqs=ssl.CERT_REQUIRED
                )
                mock_load_default_certs.assert_called_once_with(
                    ssl.Purpose.SERVER_AUTH
                )
                mock_wrap_socket.assert_called_once()

    def test_wrap_socket_sni_setting_sni_header(self):
        # testing _wrap_socket_sni() without parameter server_hostname
        # SSL module supports SNI
        with patch('ssl.SSLContext') as mock_ssl_context_class, \
                patch('ssl.HAS_SNI', new=True):
            sock = Mock()
            context = mock_ssl_context_class()
            self.t._wrap_socket_sni(sock)
            assert context.check_hostname is False

        # SSL module does not support SNI
        with patch('ssl.SSLContext') as mock_ssl_context_class, \
                patch('ssl.HAS_SNI', new=False):
            sock = Mock()
            context = mock_ssl_context_class()
            self.t._wrap_socket_sni(sock)
            assert context.check_hostname is False

        # testing _wrap_socket_sni() with parameter server_hostname
        # SSL module supports SNI
        with patch('ssl.SSLContext') as mock_ssl_context_class, \
                patch('ssl.HAS_SNI', new=True):
            sock = Mock()
            context = mock_ssl_context_class()
            self.t._wrap_socket_sni(
                sock, server_hostname=sentinel.SERVER_HOSTNAME
            )
            context.wrap_socket.assert_called_with(
                sock=sock,
                server_side=False,
                do_handshake_on_connect=False,
                suppress_ragged_eofs=True,
                server_hostname=sentinel.SERVER_HOSTNAME
            )
            assert context.check_hostname is True

        # SSL module does not support SNI
        with patch('ssl.SSLContext') as mock_ssl_context_class, \
                patch('ssl.HAS_SNI', new=False):
            sock = Mock()
            context = mock_ssl_context_class()
            self.t._wrap_socket_sni(
                sock, server_hostname=sentinel.SERVER_HOSTNAME
            )
            context.wrap_socket.assert_called_with(
                sock=sock,
                server_side=False,
                do_handshake_on_connect=False,
                suppress_ragged_eofs=True,
                server_hostname=sentinel.SERVER_HOSTNAME
            )
            assert context.check_hostname is False

    def test_shutdown_transport(self):
        self.t.sock = None
        self.t._shutdown_transport()
        sock = self.t.sock = Mock()
        self.t._shutdown_transport()
        assert self.t.sock is sock.unwrap()

    def test_close__unwrap_error(self):
        # sock.unwrap() can raise an error if there was a connection failure;
        # make sure the socket is properly closed and deallocated
        sock = self.t.sock = Mock()
        sock.unwrap.side_effect = OSError
        self.t.close()
        assert self.t.sock is None

    def test_read_EOF(self):
        self.t.sock = Mock(name='SSLSocket')
        self.t.connected = True
        self.t._quick_recv = Mock(name='recv', return_value='')
        with pytest.raises(
                IOError,
                match=r'.*Server unexpectedly closed connection.*'):
            self.t._read(64)

    def test_write_success(self):
        self.t.sock = Mock(name='SSLSocket')
        self.t.sock.write.return_value = 2
        self.t._write('foo')
        self.t.sock.write.assert_called_with(ANY)

    def test_write_socket_closed(self):
        self.t.sock = Mock(name='SSLSocket')
        self.t.sock.write.return_value = ''
        with pytest.raises(IOError, match=r'.*Socket closed.*'):
            self.t._write('foo')

    def test_write_ValueError(self):
        self.t.sock = Mock(name='SSLSocket')
        self.t.sock.write.return_value = 2
        self.t.sock.write.side_effect = ValueError("Some error")
        with pytest.raises(IOError, match=r'.*Socket closed.*'):
            self.t._write('foo')

    def test_read_timeout(self):
        self.t.sock = Mock(name='SSLSocket')
        self.t._quick_recv = Mock(name='recv', return_value='4')
        self.t._quick_recv.side_effect = socket.timeout()
        self.t._read_buffer = MagicMock(return_value='AA')
        with pytest.raises(socket.timeout):
            self.t._read(64)

    def test_read_SSLError(self):
        self.t.sock = Mock(name='SSLSocket')
        self.t._quick_recv = Mock(name='recv', return_value='4')
        # an SSLError whose message contains 'timed out' is converted
        # to socket.timeout by SSLTransport._read
        self.t._quick_recv.side_effect = ssl.SSLError('timed out')
        self.t._read_buffer = MagicMock(return_value='AA')
        with pytest.raises(socket.timeout):
            self.t._read(64)

    def test_handshake_timeout(self):
        self.t.sock = Mock()
        self.t._wrap_socket = Mock()
        self.t._wrap_socket.return_value = self.t.sock
        self.t.sock.do_handshake.side_effect = socket.timeout()
        with pytest.raises(socket.timeout):
            self.t._setup_transport()


class test_TCPTransport:

    class Transport(transport.TCPTransport):

        def _connect(self, *args):
            pass

        def _init_socket(self, *args):
            pass

    def test_repr_disconnected(self):
        assert re.fullmatch(
            r'<TCPTransport: \(disconnected\) at 0x.*>',
            repr(transport.TCPTransport('host', 3))
        )

    def test_repr_connected(self):
        t = transport.SSLTransport('host', 3)
        t.sock = MockSocket()
        re.fullmatch(
            '<SSLTransport: 1.2.3.4:5671 at 0x.*>',
            repr(t)
        )

    @pytest.fixture(autouse=True)
    def setup_transport(self):
        self.t = self.Transport('host', 3)

    def test_setup_transport(self):
        self.t.sock = Mock()
        self.t._setup_transport()
        assert self.t._write is self.t.sock.sendall
        assert self.t._read_buffer is not None
        assert self.t._quick_recv is self.t.sock.recv

    def test_read_EOF(self):
        self.t.sock = Mock(name='socket')
        self.t.connected = True
        self.t._quick_recv = Mock(name='recv', return_value='')
        with pytest.raises(
                IOError,
                match=r'.*Server unexpectedly closed connection.*'):
            self.t._read(64)

    def test_read_frame__windowstimeout(self, monkeypatch):
        """Make sure BlockingIOError on Windows properly saves off partial reads.

        See https://github.com/celery/py-amqp/issues/320
        """
        self.t._quick_recv = Mock()
        self.t._quick_recv.side_effect = [
            pack('>BHI', 1, 1, 16),
            socket.error(
                10035,
                "A non-blocking socket operation could "
                "not be completed immediately"
            ),
            b'thequickbrownfox',
            b'\xce'
        ]
        monkeypatch.setattr(os, 'name', 'nt')
        monkeypatch.setattr(errno, 'EWOULDBLOCK', 10035)

        assert len(self.t._read_buffer) == 0
        with pytest.raises(socket.timeout):
            self.t.read_frame()
        # the 7-byte frame header read before EWOULDBLOCK is preserved
        assert len(self.t._read_buffer) == 7

        frame_type, channel, payload = self.t.read_frame()
        assert len(self.t._read_buffer) == 0
        assert frame_type == 1
        assert channel == 1
        assert payload == b'thequickbrownfox'


# amqp-5.3.1/t/unit/test_utils.py

from unittest.mock import Mock, patch

from amqp.utils import bytes_to_str, coro, get_logger, str_to_bytes


class test_coro:

    def test_advances(self):
        @coro
        def x():
            yield 1
            yield 2
        it = x()
        assert next(it) == 2


class test_str_to_bytes:

    def test_from_unicode(self):
        assert isinstance(str_to_bytes('foo'), bytes)

    def test_from_bytes(self):
        assert isinstance(str_to_bytes(b'foo'), bytes)

    def test_supports_surrogates(self):
        bytes_with_surrogates = '\ud83d\ude4f'.encode('utf-8', 'surrogatepass')
        assert str_to_bytes('\ud83d\ude4f') == bytes_with_surrogates


class test_bytes_to_str:

    def test_from_unicode(self):
        assert isinstance(bytes_to_str('foo'), str)

    def test_from_bytes(self):
        assert bytes_to_str(b'foo')

    def test_support_surrogates(self):
        assert bytes_to_str('\ud83d\ude4f') == '\ud83d\ude4f'


class test_get_logger:

    def test_as_str(self):
        with patch('logging.getLogger') as getLogger:
            x = get_logger('foo.bar')
            getLogger.assert_called_with('foo.bar')
            assert x is getLogger()

    def test_as_logger(self):
        with patch('amqp.utils.NullHandler') as _NullHandler:
            m = Mock(name='logger')
            m.handlers = None
            x = get_logger(m)
            assert x is m
            x.addHandler.assert_called_with(_NullHandler())
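The `coro` helper exercised by `test_advances` above primes a generator: it advances the generator one step before handing it back, which is why the very first `next()` in the test yields 2, not 1. A minimal sketch of such a priming decorator (a hypothetical re-implementation for illustration only, not the actual `amqp.utils.coro` source):

```python
from functools import wraps


def coro(gen_func):
    """Prime a generator function: advance the generator one step so the
    caller's first next() resumes execution *after* the first yield."""
    @wraps(gen_func)
    def wrapper(*args, **kwargs):
        it = gen_func(*args, **kwargs)
        next(it)  # priming step: consumes (and discards) the first yield
        return it
    return wrapper


@coro
def example():
    yield 1
    yield 2


# The priming step already consumed `yield 1`, matching test_advances above.
assert next(example()) == 2
```

Priming matters for generators used as coroutines: Python forbids `send()` with a non-None value on a just-started generator, so advancing to the first `yield` up front lets callers `send()` immediately.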