=========
 AUTHORS
=========
:order: sorted

Adam Nelson
Adam Wentz
Alex Koshelev
Alexandre Bourget
Andrew Watts
Andrey Antukh
Andrii Kostenko
Andy McCurdy
Antoine Legrand
Anton Gyllenberg
Ask Solem
Basil Mironenko
Bobby Beever
Brian Bernstein
C Anthony Risinger
Christophe Chauvet
Christopher Grebs
Clay Gerrard
Corentin Ardeois
Dan LaMotte
Dan McGee
Dane Guempel
Davanum Srinivas
David Clymer
David Gelvin
David Strauss
David Ziegler
Dhananjay Nene
Ephemera
Fabrice Rabaute
Flavio [FlaPer87] Percoco Premoli
Florian Munz
Franck Cuny
Germán M. Bravo
Gregory Haskins
Hong Minhee
Ian Eure
Ian Struble
Ionel Maries Cristian
James Saryerwinnie
James Turk
Jason Cater
Jasper Bryant-Greene
Jeff Balogh
Jesper Thomschütz
John Shuping
John Spray
John Watson
Jonathan Halcrow
Joseph Crosland
Keith Fitzgerald
Kevin McCarthy
Kevin McDonald
Mahendra M
Marcin Lulek (ergo)
Mark Lavin
Maxime Rouyrre
Mher Movsisyan
Michael Barrett
Nitzan Miron
Noah Kantrowitz
Ollie Walsh
Pascal Hartig
Patrick Schneider
Paul McLanahan
Petar Radosevic
Peter Hoffmann
Pierre Riteau
Rafael Duran Castaneda
Rafal Malinowski
Ralf Nyren
Rob Ottaway
Rumyana Neykova
Rune Halvorsen
Ryan Petrello
Sascha Peilicke
Scott Lyons
Sean Bleier
Sean Creeley
Seb Insua
Shane Caraveo
Steeve Morin
Stefan Eletzhofer
Stephan Jaekel
Stephen Day
Tareque Hossain
Thomas Johansson
Tomaž Muraus
Tommie McAfee
Travis Cline
Travis Swicegood
Victor Garcia
Viet Hung Nguyen
Vince Gonzalez
Vincent Driessen
Zach Smith
Zhao Xiaohong
haridsv
iSlava

.. _changelog:

================
 Change history
================

.. _version-3.0.7:

3.0.7
=====
:release-date: 2013-12-02 04:00 P.M UTC
:release-by: Ask Solem

- Fixes Python 2.6 compatibility.

- Redis: Fixes 'bad file descriptor' issue.

.. _version-3.0.6:

3.0.6
=====
:release-date: 2013-11-21 04:50 P.M UTC
:release-by: Ask Solem

- Timer: No longer attempts to hash keyword arguments (Issue #275).

- Async: Did not account for the long type for file descriptors.

    Fix contributed by Fabrice Rabaute.

- PyPy: kqueue support was broken.

- Redis: Bad pub/sub payloads no longer crash the consumer.

- Redis: Unix socket URLs can now specify a virtual host by including
  it as a query parameter.

    Example URL specifying a virtual host using database number 3::

        redis+socket:///tmp/redis.sock?virtual_host=3

- ``kombu.VERSION`` is now a named tuple.

.. _version-3.0.5:

3.0.5
=====
:release-date: 2013-11-15 11:00 P.M UTC
:release-by: Ask Solem

- Now depends on :mod:`amqp` 1.3.3.

- Redis: Fixed Python 3 compatibility problem (Issue #270).

- MongoDB: Fixed problem with URL parsing when authentication used.

    Fix contributed by dongweiming.

- pyamqp: Fixed small issue when publishing the message and the property
  dictionary was set to None.

    Fix contributed by Victor Garcia.

- Fixed problem in ``repr(LaxBoundedSemaphore)``.

    Fix contributed by Antoine Legrand.

- Tests now passing on Python 3.3.

.. _version-3.0.4:

3.0.4
=====
:release-date: 2013-11-08 1:00 P.M UTC
:release-by: Ask Solem

- common.QoS: ``decrement_eventually`` now makes sure the value does not
  go below 1 if a prefetch count is enabled.

.. _version-3.0.3:

3.0.3
=====
:release-date: 2013-11-04 3:00 P.M UTC
:release-by: Ask Solem

- SQS: Properly reverted patch that caused delays between messages.
    Contributed by James Saryerwinnie.

- select: Clear all registered fds on poller.close.

- Eventloop: unregister if EBADF raised.

.. _version-3.0.2:

3.0.2
=====
:release-date: 2013-10-29 2:00 P.M UTC
:release-by: Ask Solem

- Now depends on :mod:`amqp` version 1.3.2.

- select: Fixed problem where unregister did not properly remove the fd.

.. _version-3.0.1:

3.0.1
=====
:release-date: 2013-10-24 04:00 P.M UTC
:release-by: Ask Solem

- Now depends on :mod:`amqp` version 1.3.1.

- Redis: New option ``fanout_keyprefix``

    This transport option is recommended for all users as it ensures that
    broadcast (fanout) messages sent are only seen by the current virtual
    host::

        Connection('redis://', transport_options={'fanout_keyprefix': True})

    However, enabling this means that you cannot send or receive messages
    from older Kombu versions, so make sure all of your participants are
    upgraded and have the transport option enabled.

    This will be the default behavior in Kombu 4.0.

- Distribution: Removed file ``requirements/py25.txt``.

- MongoDB: Now disables ``auto_start_request``.

- MongoDB: Enables ``use_greenlets`` if eventlet/gevent is used.

- Pidbox: Fixes problem where the expires header was None, which is a
  value not supported by the AMQP protocol.

- ConsumerMixin: New ``consumer_context`` method for starting the consumer
  without draining events.

.. _version-3.0.0:

3.0.0
=====
:release-date: 2013-10-14 04:00 P.M BST
:release-by: Ask Solem

- Now depends on :mod:`amqp` version 1.3.

- No longer supports Python 2.5.

    The minimum Python version supported is now Python 2.6.0 for Python 2,
    and Python 3.3 for Python 3.

- Dual codebase supporting both Python 2 and 3.

    No longer using ``2to3``, making it easier to maintain support for
    both versions.

- pickle, yaml and msgpack deserialization is now disabled by default.

    This means that Kombu will by default refuse to handle any content
    type other than json.

    Pickle is known to be a security concern as it will happily load any
    object that is embedded in a pickle payload, and payloads can be
    crafted to do almost anything you want. The default serializer in
    Kombu is json, but it also supports a number of other serialization
    formats that it will evaluate if received, including pickle.

    It was always assumed that users were educated about the security
    implications of pickle, but in hindsight we don't think users should
    be expected to secure their services if we have the ability to be
    secure by default.

    By disabling any content type that the user did not explicitly want
    enabled, we ensure that the user must be conscious when they add
    pickle as a serialization format to support.

    The other built-in serializers (yaml and msgpack) are also disabled
    even though they aren't considered insecure [#f1]_ at this point.
    Instead they're disabled so that if a security flaw is found in one of
    these libraries in the future, you will only be affected if you have
    explicitly enabled them.
    To have your consumer accept formats other than json you have to
    explicitly add the wanted formats to a whitelist of accepted content
    types::

        >>> c = Consumer(conn, accept=['json', 'pickle', 'msgpack'])

    or when using synchronous access::

        >>> msg = queue.get(accept=['json', 'pickle', 'msgpack'])

    The ``accept`` argument was first supported for consumers in version
    2.5.10, and first supported by ``Queue.get`` in version 2.5.15, so to
    stay compatible with previous versions you can enable the previous
    behavior::

        >>> from kombu import enable_insecure_serializers
        >>> enable_insecure_serializers()

    But note that this has global effect, so be very careful should you
    use it.

    .. rubric:: Footnotes

    .. [#f1] The PyYAML library has a :func:`yaml.load` function with some
        of the same security implications as pickle, but Kombu uses the
        :func:`yaml.safe_load` function, which is not known to be affected.

- kombu.async: Experimental event loop implementation.

    This code was previously in Celery but was moved here to make it
    easier for async transport implementations.

    The API is meant to match the Tulip API, which will be included in
    Python 3.4 as the ``asyncio`` module. It's not a complete
    implementation obviously, but the goal is that it will be easy to
    change to it once that is possible.

- Utility function ``kombu.common.ipublish`` has been removed.

    Use ``Producer(..., retry=True)`` instead.

- Utility function ``kombu.common.isend_reply`` has been removed.

    Use ``send_reply(..., retry=True)`` instead.

- ``kombu.common.entry_to_queue`` and ``kombu.messaging.entry_to_queue``
  have been removed.

    Use ``Queue.from_dict(name, **options)`` instead.

- Redis: Messages are now restored at the end of the list.

    Contributed by Mark Lavin.

- ``StdConnectionError`` and ``StdChannelError`` are removed, and
  :exc:`amqp.ConnectionError` and :exc:`amqp.ChannelError` are used
  instead.

- The Message object implementation has moved to
  :class:`kombu.message.Message`.

- Serialization: Renamed functions encode/decode to
  :func:`~kombu.serialization.dumps` and
  :func:`~kombu.serialization.loads` (see the sketch below).

    For backward compatibility the old names are still available as
    aliases.

- The ``kombu.log.anon_logger`` function has been removed.

    Use :func:`~kombu.log.get_logger` instead.

- ``queue_declare`` now returns a namedtuple with ``queue``,
  ``message_count``, and ``consumer_count`` fields.

- LamportClock: Can now set the lock class.

- :mod:`kombu.utils.clock`: Utilities for ordering events added.

- :class:`~kombu.simple.SimpleQueue` now allows you to override the
  exchange type used.

    Contributed by Vince Gonzales.

- Zookeeper transport updated to support new changes in the :mod:`kazoo`
  library.

    Contributed by Mahendra M.

- pyamqp/librabbitmq: Transport options are now forwarded as keyword
  arguments to the underlying connection (Issue #214).

- Transports may now distinguish between recoverable and irrecoverable
  connection and channel errors.

- ``kombu.utils.Finalize`` has been removed: Use
  :mod:`multiprocessing.util.Finalize` instead.

- Memory transport now supports the fanout exchange type.

    Contributed by Davanum Srinivas.

- Experimental new `Pyro`_ transport (:mod:`kombu.transport.pyro`).

    Contributed by Tommie McAfee.

.. _`Pyro`: http://pythonhosted.org/Pyro

- Experimental new `SoftLayer MQ`_ transport
  (:mod:`kombu.transport.SLMQ`).

    Contributed by Kevin McDonald.

.. _`SoftLayer MQ`: http://www.softlayer.com/services/additional/message-queue

- Eventio: Kqueue breaks in subtle ways, so select is now used instead.
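A minimal sketch of the renamed serialization functions mentioned above,
with illustrative values (json is enabled in the registry by default):

.. code-block:: python

    from kombu.serialization import dumps, loads

    # dumps() returns a (content_type, content_encoding, payload) triple.
    content_type, encoding, payload = dumps({'hello': 'world'},
                                            serializer='json')

    # loads() uses the content type and encoding to pick the right decoder.
    assert loads(payload, content_type, encoding) == {'hello': 'world'}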
- SQLAlchemy transport: Can now specify table names using the
  ``queue_tablename`` and ``message_tablename`` transport options.

    Contributed by Ryan Petrello.

- Redis transport: Now supports using local UNIX sockets to communicate
  with the Redis server (Issue #1283).

    To connect using a UNIX socket you have to use the ``redis+socket``
    URL-prefix: ``redis+socket:///tmp/redis.sock``.

    This functionality was merged from the `celery-redis-unixsocket`_
    project. Contributed by Maxime Rouyrre.

- ZeroMQ transport: drain_events now supports timeout.

    Contributed by Jesper Thomschütz.

.. _`celery-redis-unixsocket`:
    https://github.com/piquadrat/celery-redis-unixsocket

.. _version-2.5.16:

2.5.16
======
:release-date: 2013-10-04 03:30 P.M BST
:release-by: Ask Solem

- Python 3: Fixed problem with dependencies not being installed.

.. _version-2.5.15:

2.5.15
======
:release-date: 2013-10-04 03:30 P.M BST
:release-by: Ask Solem

- Declaration cache: Now only keeps a hash of the declaration, so that it
  does not keep a reference to the channel.

- Declaration cache: Now respects the ``entity.can_cache_declaration``
  attribute.

- Fixes Python 2.5 compatibility.

- Fixes tests after python-msgpack changes.

- ``Queue.get``: Now supports the ``accept`` argument.

.. _version-2.5.14:

2.5.14
======
:release-date: 2013-08-23 17:00 P.M BST
:release-by: Ask Solem

- safe_str did not work properly, resulting in
  :exc:`UnicodeDecodeError` (Issue #248).

.. _version-2.5.13:

2.5.13
======
:release-date: 2013-08-16 16:00 P.M BST
:release-by: Ask Solem

- Now depends on :mod:`amqp` 1.0.13.

- Fixed typo in Django functional tests.

- safe_str now returns Unicode in Python 2.x.

    Fix contributed by Germán M. Bravo.

- amqp: Transport options are now merged with arguments supplied to the
  connection.

- Tests no longer depend on distribute, which was deprecated and merged
  back into setuptools.

    Fix contributed by Sascha Peilicke.

- ConsumerMixin now also restarts on channel-related errors.

    Fix contributed by Corentin Ardeois.

.. _version-2.5.12:

2.5.12
======
:release-date: 2013-06-28 15:30 P.M BST
:release-by: Ask Solem

- Redis: Ignore errors about keys missing in the round-robin cycle.

- Fixed test suite errors on Python 3.

- Fixed msgpack test failures.

.. _version-2.5.11:

2.5.11
======
:release-date: 2013-06-25 14:30 P.M BST
:release-by: Ask Solem

- Now depends on amqp 1.0.12 (Py3 compatibility issues).

- MongoDB: Removed cause of a "database name in URI is being ignored"
  warning.

    Fix by Flavio Percoco Premoli.

- Adds a ``passive`` option to :class:`~kombu.Exchange` (see the sketch
  below).

    Setting this flag means that the exchange will not be declared by
    kombu, but that it must exist already (or an exception will be
    raised).

    Contributed by Rafal Malinowski.

- Connection.info() now gives the current hostname and not the list of
  available hostnames.

    Fix contributed by John Shuping.

- pyamqp: Transport options are now forwarded as kwargs to
  ``amqp.Connection``.

- librabbitmq: Transport options are now forwarded as kwargs to
  ``librabbitmq.Connection``.

- librabbitmq: Now raises :exc:`NotImplementedError` if SSL is enabled.

    The librabbitmq library does not support SSL, but you can use stunnel
    or change to the ``pyamqp://`` transport instead.

    Fix contributed by Dan LaMotte.

- librabbitmq: Fixed a cyclic reference at connection close.

- eventio: select implementation now removes bad file descriptors.

- eventio: Fixed Py3 compatibility problems.

- Functional tests added for py-amqp and librabbitmq transports.

- Resource.force_close_all no longer uses a mutex.
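A hedged sketch of the ``passive`` exchange flag described above; the
exchange name and broker URL are placeholders:

.. code-block:: python

    from kombu import Connection, Exchange

    # passive=True means kombu will not create the exchange;
    # declaring it only asserts that it already exists on the broker.
    exchange = Exchange('logs', type='fanout', passive=True)

    with Connection('amqp://') as connection:
        bound = exchange(connection.default_channel)
        bound.declare()  # raises if 'logs' does not already exist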
- Pidbox: Now ignores ``InconsistencyError`` when sending replies, as
  this error simply means that the client may no longer be alive.

- Adds new :meth:`~kombu.Connection.collect` method, that can be used to
  clean up after connections without I/O.

- ``queue_bind`` is no longer called for queues bound to the "default
  exchange" (Issue #209).

    Contributed by Jonathan Halcrow.

- The max_retries setting for retries was not respected correctly
  (off by one).

.. _version-2.5.10:

2.5.10
======
:release-date: 2013-04-11 18:10 P.M BST
:release-by: Ask Solem

Note about upcoming changes for Kombu 3.0
-----------------------------------------

Kombu 3 consumers will no longer accept pickle/yaml or msgpack by
default, and you will have to explicitly enable untrusted deserializers
either globally using :func:`kombu.enable_insecure_serializers`, or using
the ``accept`` argument to :class:`~kombu.Consumer`.

Changes
-------

- New utility functions to disable/enable untrusted serializers:

    - :func:`kombu.disable_insecure_serializers`
    - :func:`kombu.enable_insecure_serializers`

- Consumer: ``accept`` can now be used to specify a whitelist of content
  types to accept.

    If the accept whitelist is set and a message is received with a
    content type that is not in the whitelist, then a
    :exc:`~kombu.exceptions.ContentDisallowed` exception is raised. Note
    that this error can be handled by the already existing
    ``on_decode_error`` callback.

    Examples::

        Consumer(accept=['application/json'])
        Consumer(accept=['pickle', 'json'])

- Now depends on amqp 1.0.11.

- pidbox: Mailbox now supports the ``accept`` argument.

- Redis: More friendly error for when keys are missing.

- Connection URLs: The parser did not work well when there were multiple
  '+' tokens.

.. _version-2.5.9:

2.5.9
=====
:release-date: 2013-04-08 05:07 P.M BST
:release-by: Ask Solem

- Pidbox: Now warns if there are multiple nodes consuming from the same
  pidbox.

- Adds :attr:`~kombu.Queue.on_declared`.

    A callback to be called when the queue is declared, with signature
    ``(name, messages, consumers)``.

- Now uses fuzzy matching to suggest alternatives to typos in transport
  names.

- SQS: Adds new transport option ``queue_prefix``.

    Contributed by j0hnsmith.

- pyamqp: No longer overrides verify_connection.

- SQS: Now specifies the ``driver_type`` and ``driver_name`` attributes.

    Fix contributed by Mher Movsisyan.

- Fixed bug with ``kombu.utils.retry_over_time`` when no errback
  specified.

.. _version-2.5.8:

2.5.8
=====
:release-date: 2013-03-21 04:00 P.M UTC
:release-by: Ask Solem

- Now depends on :mod:`amqp` 1.0.10, which fixes a Python 3 compatibility
  error.

- Redis: Fixed a possible race condition (Issue #171).

- Redis: Ack emulation/visibility_timeout can now be disabled using a
  transport option.

    Ack emulation adds quite a lot of overhead to ensure data is safe
    even in the event of an unclean shutdown. If data loss does not worry
    you, there is now an ``ack_emulation`` transport option you can use
    to disable it::

        Connection('redis://', transport_options={'ack_emulation': False})

- SQS: Fixed :mod:`boto` v2.7 compatibility (Issue #207).

- Exchange: Should not try to re-declare the default exchange (``""``)
  (Issue #209).

- SQS: Long polling is now disabled by default, as it was not implemented
  correctly, resulting in long delays between receiving messages
  (Issue #202).

- Fixed Python 2.6 incompatibility depending on ``exc.errno`` being
  available.

    Fix contributed by Ephemera.
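A hedged sketch of handling :exc:`~kombu.exceptions.ContentDisallowed`
through the ``on_decode_error`` callback from the 2.5.10 notes above; the
queue name and broker URL are placeholders:

.. code-block:: python

    from kombu import Connection, Consumer, Queue
    from kombu.exceptions import ContentDisallowed

    def on_decode_error(message, exc):
        # Called for messages that cannot be decoded, including messages
        # whose content type is not in the accept whitelist.
        if isinstance(exc, ContentDisallowed):
            message.reject()

    with Connection('amqp://') as connection:
        consumer = Consumer(connection, [Queue('tasks')],
                            accept=['json'],
                            on_decode_error=on_decode_error)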
.. _version-2.5.7:

2.5.7
=====
:release-date: 2013-03-08 01:00 P.M UTC
:release-by: Ask Solem

- Now depends on amqp 1.0.9.

- Redis: A regression in 2.5.6 caused the redis transport to ignore
  options set in ``transport_options``.

- Redis: New ``socket_timeout`` transport option.

- Redis: ``InconsistencyError`` is now regarded as a recoverable error.

- Resource pools: Will no longer attempt to release a resource that was
  never acquired.

- MongoDB: Now supports the ``ssl`` option.

    Contributed by Sebastian Pawlus.

.. _version-2.5.6:

2.5.6
=====
:release-date: 2013-02-08 01:00 P.M UTC
:release-by: Ask Solem

- Now depends on amqp 1.0.8, which works around a bug found on some
  Python 2.5 installations where 2**32 overflows to 0.

.. _version-2.5.5:

2.5.5
=====
:release-date: 2013-02-07 17:00 P.M UTC
:release-by: Ask Solem

- SQS: Now supports long polling (Issue #176).

    The polling interval default has been changed to 0 and a new
    transport option (``wait_time_seconds``) has been added. This
    parameter specifies how long to wait for a message from SQS, and
    defaults to 20 seconds, which is the maximum value currently allowed
    by Amazon SQS.

    Contributed by James Saryerwinnie.

- SQS: Now removes unpickleable fields before restoring messages.

- Consumer.__exit__ now ignores exceptions occurring while cancelling the
  consumer.

- Virtual: Routing keys can now consist of characters also used in
  regular expressions (e.g. parens) (Issue #194).

- Virtual: Fixed compression header when restoring messages.

    Fix contributed by Alex Koshelev.

- Virtual: ack/reject/requeue now works while using ``basic_get``.

- Virtual: Message.reject is now supported by virtual transports
  (requeue depends on individual transport support).

- Fixed typo in hack used for static analyzers.

    Fix contributed by Basil Mironenko.

.. _version-2.5.4:

2.5.4
=====
:release-date: 2012-12-10 12:35 P.M UTC
:release-by: Ask Solem

- Fixed problem with connection clone and multiple URLs (Issue #182).

    Fix contributed by Dane Guempel.

- zeromq: Now compatible with libzmq 3.2.x.

    Fix contributed by Andrey Antukh.

- Fixed Python 3 installation problem (Issue #187).

.. _version-2.5.3:

2.5.3
=====
:release-date: 2012-11-29 12:35 P.M UTC
:release-by: Ask Solem

- Pidbox: Fixed compatibility with Python 2.6.

.. _version-2.5.2:

2.5.2
=====
:release-date: 2012-11-29 12:35 P.M UTC
:release-by: Ask Solem

- [Redis] Fixed connection leak and added a new 'max_connections'
  transport option.

.. _version-2.5.1:

2.5.1
=====
:release-date: 2012-11-28 12:45 P.M UTC
:release-by: Ask Solem

- Fixed bug where the return value of Queue.as_dict could not be
  serialized with JSON (Issue #177).

.. _version-2.5.0:

2.5.0
=====
:release-date: 2012-11-27 04:00 P.M UTC
:release-by: Ask Solem

- `py-amqp`_ is now the new default transport, replacing ``amqplib``.

    The new `py-amqp`_ library is a fork of amqplib started with the
    following goals:

    - Uses AMQP 0.9.1 instead of 0.8.

    - Support for heartbeats (Issue #79 + Issue #131).

    - Automatically revives channels on channel errors.

    - Support for all RabbitMQ extensions:

        - Consumer Cancel Notifications (Issue #131)
        - Publisher Confirms (Issue #131)
        - Exchange-to-exchange bindings: ``exchange_bind`` /
          ``exchange_unbind``

    - API compatible with :mod:`librabbitmq`, so that it can be used as a
      pure-Python replacement in environments where rabbitmq-c cannot be
      compiled.

        librabbitmq will be updated to support all the same features as
        py-amqp.
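A minimal sketch of connecting through the new default transport; the
credentials and host are the usual placeholders:

.. code-block:: python

    from kombu import Connection

    # 'pyamqp://' selects py-amqp explicitly; the plain 'amqp://' alias
    # resolves to librabbitmq when installed (and, from Kombu 3.0, falls
    # back to py-amqp).
    with Connection('pyamqp://guest:guest@localhost:5672//') as connection:
        connection.connect()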
- Support for using multiple connection URLs for failover.

    The first argument to :class:`~kombu.Connection` can now be a list of
    connection URLs:

    .. code-block:: python

        Connection(['amqp://foo', 'amqp://bar'])

    or it can be a single string argument with several URLs separated by
    semicolon:

    .. code-block:: python

        Connection('amqp://foo;amqp://bar')

    There is also a new keyword argument ``failover_strategy`` that
    defines how :meth:`~kombu.Connection.ensure_connection`/
    :meth:`~kombu.Connection.ensure`/:meth:`kombu.Connection.autoretry`
    will reconnect in the event of connection failures.

    The default reconnection strategy is ``round-robin``, which will
    simply cycle through the list forever, and there's also a ``shuffle``
    strategy that will select random hosts from the list. Custom
    strategies can also be used, in that case the argument must be a
    generator yielding the URL to connect to.

    Example:

    .. code-block:: python

        Connection('amqp://foo;amqp://bar', failover_strategy='shuffle')

- Now supports PyDev, PyCharm, pylint and other static code analysis
  tools.

- :class:`~kombu.Queue` now supports multiple bindings.

    You can now have multiple bindings in the same queue by having the
    second argument be a list:

    .. code-block:: python

        from kombu import binding, Exchange, Queue

        Queue('name', [
            binding(Exchange('E1'), routing_key='foo'),
            binding(Exchange('E1'), routing_key='bar'),
            binding(Exchange('E2'), routing_key='baz'),
        ])

    To enable this, helper methods have been added:

    - :meth:`~kombu.Queue.bind_to`
    - :meth:`~kombu.Queue.unbind_from`

    Contributed by Rumyana Neykova.

- Custom serializers can now be registered using Setuptools entry-points.

    See :ref:`serialization-entrypoints`.

- New :class:`kombu.common.QoS` class used as a thread-safe way to manage
  changes to a consumer or channel's prefetch_count.

    This was previously an internal class used in Celery, now moved to
    the :mod:`kombu.common` module.

- Consumer now supports an ``on_message`` callback that can be used to
  process raw messages (not decoded); see the sketch below.

    Other callbacks specified using the ``callbacks`` argument, and the
    ``receive`` method, will not be called when an ``on_message``
    callback is present.

- New utility :func:`kombu.common.ignore_errors` ignores connection and
  channel errors.

    Must only be used for cleanup actions at shutdown or on connection
    loss.

- Support for exchange-to-exchange bindings.

    The :class:`~kombu.Exchange` entity gained ``bind_to`` and
    ``unbind_from`` methods:

    .. code-block:: python

        e1 = Exchange('A')(connection)
        e2 = Exchange('B')(connection)
        e2.bind_to(e1, routing_key='rkey', arguments=None)
        e2.unbind_from(e1, routing_key='rkey', arguments=None)

    This is currently only supported by the ``pyamqp`` transport.

    Contributed by Rumyana Neykova.

.. _version-2.4.10:

2.4.10
======
:release-date: 2012-11-22 06:00 P.M UTC
:release-by: Ask Solem

- The previous version's connection pool changes broke Redis support, so
  that it would always connect to localhost (default setting) no matter
  what connection parameters were provided (Issue #176).

.. _version-2.4.9:

2.4.9
=====
:release-date: 2012-11-21 03:00 P.M UTC
:release-by: Ask Solem

- Redis: Fixed race condition that could occur while trying to restore
  messages (Issue #171).

    Fix contributed by Ollie Walsh.

- Redis: Each channel is now using a specific connection pool instance,
  which is disconnected on connection failure.

- ProducerPool: Fixed possible deadlock in the acquire method.

- ProducerPool: ``force_close_all`` no longer tries to call the
  non-existent ``Producer._close``.
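A hedged sketch of the ``on_message`` callback from the 2.5.0 notes above;
the queue name and broker URL are placeholders:

.. code-block:: python

    from kombu import Connection, Consumer, Queue

    def on_message(message):
        # The raw message is not decoded; inspect body and properties
        # directly.
        print('%s %r' % (message.content_type, message.body))
        message.ack()

    with Connection('amqp://') as connection:
        with Consumer(connection, [Queue('tasks')], on_message=on_message):
            # Waits for one event; raises socket.timeout if nothing
            # arrives within a second.
            connection.drain_events(timeout=1)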
- librabbitmq: Now implements ``transport.verify_connection`` so that
  connection pools will not give back connections that are no longer
  working.

- New and better ``repr()`` for Queue and Exchange objects.

- Python 3: Fixed problem with running the unit test suite.

- Python 3: Fixed problem with JSON codec.

.. _version-2.4.8:

2.4.8
=====
:release-date: 2012-11-02 05:00 P.M UTC
:release-by: Ask Solem

- Redis: Improved fair queue cycle implementation (Issue #166).

    Contributed by Kevin McCarthy.

- Redis: Unacked message restore limit is now unlimited by default.

    Also, the limit can now be configured using the
    ``unacked_restore_limit`` transport option:

    .. code-block:: python

        Connection('redis://', transport_options={
            'unacked_restore_limit': 100,
        })

    A limit of 100 means that the consumer will restore at most 100
    messages at each pass.

- Redis: Now uses a mutex to ensure only one consumer restores messages
  at a time.

    The mutex expires after 5 minutes by default, but can be configured
    using the ``unacked_mutex_expire`` transport option (see the sketch
    below).

- LamportClock.adjust now returns the new clock value.

- Heartbeats can now be specified in URLs.

    Fix contributed by Mher Movsisyan.

- Kombu can now be used with PyDev, PyCharm and other static analysis
  tools.

- Fixes problem with msgpack on Python 3 (Issue #162).

    Fix contributed by Jasper Bryant-Greene.

- amqplib: Fixed bug with timeouts when SSL is used in non-blocking mode.

    Fix contributed by Mher Movsisyan.

.. _version-2.4.7:

2.4.7
=====
:release-date: 2012-09-18 03:00 P.M BST
:release-by: Ask Solem

- Virtual: Unknown exchanges now default to 'direct' when sending a
  message.

- MongoDB: Fixed memory leak when merging keys stored in the db
  (Issue #159).

    Fix contributed by Michael Korbakov.

- MongoDB: Better index for MongoDB transport (Issue #158).

    This improvement will create a new compound index for queue and _id
    in order to be able to use both indexed fields for getting a new
    message (using the queue field) and sorting by _id. It'll be
    necessary to manually delete the old index from the collection.

    Improvement contributed by rmihael.

.. _version-2.4.6:

2.4.6
=====
:release-date: 2012-09-12 03:00 P.M BST
:release-by: Ask Solem

- Adds additional compatibility dependencies:

    - Python <= 2.6:

        - importlib
        - ordereddict

    - Python <= 2.5:

        - simplejson

.. _version-2.4.5:

2.4.5
=====
:release-date: 2012-08-30 03:36 P.M BST
:release-by: Ask Solem

- Last version broke installation on PyPy and Jython due to test
  requirements clean-up.

.. _version-2.4.4:

2.4.4
=====
:release-date: 2012-08-29 04:00 P.M BST
:release-by: Ask Solem

- amqplib: Fixed a bug with asynchronously reading large messages.

- pyamqp: Now requires amqp 0.9.3.

- Cleaned up test requirements.

.. _version-2.4.3:

2.4.3
=====
:release-date: 2012-08-25 10:30 P.M BST
:release-by: Ask Solem

- Fixed problem with amqp transport alias (Issue #154).

.. _version-2.4.2:

2.4.2
=====
:release-date: 2012-08-24 05:00 P.M BST
:release-by: Ask Solem

- Having an empty transport name broke in 2.4.1.

.. _version-2.4.1:

2.4.1
=====
:release-date: 2012-08-24 04:00 P.M BST
:release-by: Ask Solem

- Redis: Fixed race condition that could cause the consumer to crash
  (Issue #151), often leading to the error message ``"could not convert
  string to float"``.

- Connection retry could cause an infinite loop (Issue #145).

- The ``amqp`` alias is now resolved at runtime, so that eventlet
  detection works even if patching was done later.
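For completeness, a sketch of tuning the restore-mutex expiry mentioned in
the 2.4.8 notes above; the value shown is just the documented default:

.. code-block:: python

    from kombu import Connection

    connection = Connection('redis://', transport_options={
        'unacked_mutex_expire': 300,  # seconds; 5 minutes is the default
    })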
.. _version-2.4.0:

2.4.0
=====
:release-date: 2012-08-17 08:00 P.M BST
:release-by: Ask Solem

- New experimental :mod:`ZeroMQ <kombu.transport.zmq>` transport.

- New ``pyamqp://`` transport::

    >>> conn = Connection('pyamqp://guest:guest@localhost//')

    The ``pyamqp://`` transport will be the default fallback transport in
    Kombu version 3.0, when :mod:`librabbitmq` is not installed, and
    librabbitmq will also be updated to support the same features.

- Connection now supports a heartbeat argument.

    If enabled you must make sure to manually maintain heartbeats by
    calling ``Connection.heartbeat_check`` at twice the rate of the
    specified heartbeat interval.

    E.g. if you have ``Connection(heartbeat=10)``, then you must call
    ``Connection.heartbeat_check()`` every 5 seconds.

    If the server has not sent heartbeats at a suitable rate, then the
    heartbeat check method must raise an error that is listed in
    ``Connection.connection_errors``.

    The attribute ``Connection.supports_heartbeats`` has been added so
    you can inspect whether a transport supports heartbeats or not.

    Calling ``heartbeat_check`` on a transport that does not support
    heartbeats is a no-op.

- SQS: Fixed bug with invalid characters in queue names.

    Fix contributed by Zach Smith.

- utils.reprcall: Fixed typo where the kwargs argument was an empty tuple
  by default, and not an empty dict.

.. _version-2.2.6:

2.2.6
=====
:release-date: 2012-07-10 17:00 P.M BST
:release-by: Ask Solem

- Adds ``kombu.messaging.entry_to_queue`` for compat with previous
  versions.

.. _version-2.2.5:

2.2.5
=====
:release-date: 2012-07-10 17:00 P.M BST
:release-by: Ask Solem

- Pidbox: Now sets queue expire at 10 seconds for reply queues.

- EventIO: Now ignores ``ValueError`` raised by epoll unregister.

- MongoDB: Fixes Issue #142.

    Fix by Flavio Percoco Premoli.

.. _version-2.2.4:

2.2.4
=====
:release-date: 2012-07-05 16:00 P.M BST
:release-by: Ask Solem

- Support for msgpack-python 0.2.0 (Issue #143).

    The latest msgpack version no longer supports Python 2.5, so if
    you're still using that you need to depend on an earlier
    msgpack-python version.

    Fix contributed by Sebastian Insua.

- :func:`~kombu.common.maybe_declare` no longer caches entities with the
  ``auto_delete`` flag set.

- New experimental filesystem transport.

    Contributed by Bobby Beever.

- Virtual Transports: Now support anonymous queues and exchanges.

.. _version-2.2.3:

2.2.3
=====
:release-date: 2012-06-24 17:00 P.M BST
:release-by: Ask Solem

- ``BrokerConnection`` now renamed to ``Connection``.

    The name ``Connection`` has been an alias for a very long time, but
    now the rename is official in the documentation as well.

    The Connection alias has been available since version 1.1.3, and
    ``BrokerConnection`` will still work and is not deprecated.

- ``Connection.clone()`` now works for the sqlalchemy transport.

- :func:`kombu.common.eventloop`, :func:`kombu.utils.uuid`, and
  :func:`kombu.utils.url.parse_url` can now be imported from the
  :mod:`kombu` module directly.

- Pidbox transport callback ``after_reply_message_received`` now happens
  in a finally block.

- Trying to use the ``librabbitmq://`` transport will now show the right
  name in the :exc:`ImportError` if :mod:`librabbitmq` is not installed.

    The librabbitmq transport falls back to the older ``pylibrabbitmq``
    name for compatibility reasons, and would therefore show ``No module
    named pylibrabbitmq`` instead of librabbitmq.
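A minimal sketch of the heartbeat contract described in the 2.4.0 notes
above; the interval is illustrative, and a real application would drive
this from its I/O loop:

.. code-block:: python

    import time

    from kombu import Connection

    # heartbeat=10 asks for a 10 second heartbeat interval, so the check
    # must run at twice that rate, i.e. every 5 seconds.
    connection = Connection('pyamqp://', heartbeat=10)
    connection.connect()

    while True:  # sketch only: loops forever
        connection.heartbeat_check()  # raises if heartbeats were missed
        time.sleep(5)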
.. _version-2.2.2:

2.2.2
=====
:release-date: 2012-06-22 02:30 P.M BST
:release-by: Ask Solem

- Now depends on :mod:`anyjson` 0.3.3.

- Json serializer: Now passes :class:`buffer` objects directly, since
  this is supported in the latest :mod:`anyjson` version.

- Fixes blocking epoll call if timeout was set to 0.

    Fix contributed by John Watson.

- setup.py now takes requirements from the :file:`requirements/`
  directory.

- The distribution directory :file:`contrib/` is now renamed to
  :file:`extra/`.

.. _version-2.2.1:

2.2.1
=====
:release-date: 2012-06-21 01:00 P.M BST
:release-by: Ask Solem

- SQS: Default visibility timeout is now 30 minutes.

    Since we have ack emulation, the visibility timeout is only in effect
    if the consumer is abruptly terminated.

- The retry argument to ``Producer.publish`` now works properly when the
  declare argument is specified.

- Json serializer: didn't handle buffer objects (Issue #135).

    Fix contributed by Jens Hoffrichter.

- Virtual: Now supports the passive argument to ``exchange_declare``.

- Exchange & Queue can now be bound to connections (which will use the
  default channel)::

    >>> exchange = Exchange('name')
    >>> bound_exchange = exchange(connection)
    >>> bound_exchange.declare()

- ``SimpleQueue`` & ``SimpleBuffer`` can now be bound to connections
  (which will use the default channel).

- ``Connection.manager.get_bindings`` now works for librabbitmq and pika.

- Adds new transport info attributes:

    - ``Transport.driver_type``

        Type of underlying driver, e.g. "amqp", "redis", "sql".

    - ``Transport.driver_name``

        Name of library used, e.g. "amqplib", "redis", "pymongo".

    - ``Transport.driver_version()``

        Version of underlying library.

.. _version-2.2.0:

2.2.0
=====
:release-date: 2012-06-07 3:10 P.M BST
:release-by: Ask Solem

.. _v220-important:

Important Notes
---------------

- The canonical source code repository has been moved to
  http://github.com/celery/kombu

- Pidbox: Exchanges used by pidbox are no longer auto_delete.

    Auto delete has been described as a misfeature, and therefore we have
    disabled it.

    For RabbitMQ users, old exchanges used by pidbox must be removed.
    These are named ``mailbox_name.pidbox`` and
    ``reply.mailbox_name.pidbox``.

    The following command can be used to clean up these exchanges::

        VHOST=/ URL=amqp:// python -c'import sys,kombu;[kombu.Connection(
            sys.argv[-1]).channel().exchange_delete(x)
                for x in sys.argv[1:-1]]' \
            $(sudo rabbitmqctl -q list_exchanges -p "$VHOST" \
            | grep \.pidbox | awk '{print $1}') "$URL"

    The :envvar:`VHOST` variable must be set to the target RabbitMQ
    virtual host, and the :envvar:`URL` must be the AMQP URL to the
    server.

- The ``amqp`` transport alias will now use :mod:`librabbitmq` if
  installed.

    `py-librabbitmq`_ is a fast AMQP client for Python using the
    librabbitmq C library. It can be installed by::

        $ pip install librabbitmq

    It will not be used if the process is monkey patched by
    eventlet/gevent.

.. _`py-librabbitmq`: https://github.com/celery/librabbitmq

.. _v220-news:

News
----

- Redis: Ack emulation improvements.

    Reducing the possibility of data loss.

    Acks are now implemented by storing a copy of the message when the
    message is consumed. The copy is not removed until the consumer
    acknowledges or rejects it.

    This means that unacknowledged messages will be redelivered either
    when the connection is closed, or when the visibility timeout is
    exceeded.

- Visibility timeout

    This is a timeout for acks, so that if the consumer does not ack the
    message within this time limit, the message is redelivered to another
    consumer.
    The timeout is set to one hour by default, but can be changed by
    configuring a transport option::

        >>> Connection('redis://', transport_options={
        ...     'visibility_timeout': 1800,  # 30 minutes
        ... })

    **NOTE**: Messages that have not been acked will be redelivered if
    the visibility timeout is exceeded. For Celery users this means that
    ETA/countdown tasks that are scheduled to execute with a time that
    exceeds the visibility timeout will be executed twice (or more).

    If you plan on using long ETA/countdowns you should tweak the
    visibility timeout accordingly::

        BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 18000}  # 5 hours

    Setting a long timeout means that it will take a long time for
    messages to be redelivered in the event of a power failure, but if
    that happens you could temporarily set the visibility timeout lower
    to flush out messages when you start up the systems again.

- Experimental `Apache ZooKeeper`_ transport

    More information is in the module reference:
    :mod:`kombu.transport.zookeeper`.

    Contributed by Mahendra M.

.. _`Apache ZooKeeper`: http://zookeeper.apache.org/

- Redis: Priority support.

    The message's ``priority`` field is now respected by the Redis
    transport by having multiple lists for each named queue. The queues
    are then consumed in order of priority.

    The priority field is a number in the range of 0 - 9, where 0 is the
    default and highest priority.

    The priority range is collapsed into four steps by default, since it
    is unlikely that nine steps will yield more benefit than using four
    steps. The number of steps can be configured by setting the
    ``priority_steps`` transport option, which must be a list of numbers
    in **sorted order**::

        >>> x = Connection('redis://', transport_options={
        ...     'priority_steps': [0, 2, 4, 6, 8, 9],
        ... })

    Priorities implemented in this way are not as reliable as priorities
    on the server side, which is why we nickname the feature
    "quasi-priorities"; **Using routing is still the suggested way of
    ensuring quality of service**, as client-implemented priorities fall
    short in a number of ways, e.g. if the worker is busy with long
    running tasks, has prefetched many messages, or the queues are
    congested.

    Still, it is possible that using priorities in combination with
    routing can be more beneficial than using routing or priorities
    alone. Experimentation and monitoring should be used to prove this.

    Contributed by Germán M. Bravo.

- Redis: Now cycles queues so that consuming is fair.

    This ensures that a very busy queue won't block messages from other
    queues, and ensures that all queues have an equal chance of being
    consumed from.

    This used to be the case before, but the behavior was accidentally
    changed while switching to using blocking pop.

- Redis: Auto delete queues that are bound to fanout exchanges are now
  deleted at channel.close.

- amqplib: Refactored the drain_events implementation.

- Pidbox: Now uses ``connection.default_channel``.

- Pickle serialization: Can now decode buffer objects.

- Exchange/Queue declarations can now be cached even if the entity is
  non-durable.

    This is possible because the list of cached declarations is now kept
    with the connection, so that the entities will be redeclared if the
    connection is lost.

- Kombu source code now only uses one level of explicit relative imports.

.. _v220-fixes:

Fixes
-----

- eventio: Now ignores ENOENT raised by ``epoll.register``, and EEXIST
  from ``epoll.unregister``.

- eventio: kqueue now ignores :exc:`KeyError` on unregister.

- Redis: ``Message.reject`` now supports the ``requeue`` argument (see
  the sketch below).
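A hedged sketch of the new ``requeue`` flag; ``TemporaryError`` and
``process`` are placeholders for application code:

.. code-block:: python

    class TemporaryError(Exception):
        """Placeholder for an application-defined transient failure."""

    def process(body):
        """Placeholder for application logic."""

    def on_message(body, message):
        try:
            process(body)
        except TemporaryError:
            message.reject(requeue=True)  # hand the message back
        else:
            message.ack()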
- Redis: Remove superfluous pipeline call.

    Fix contributed by Thomas Johansson.

- Redis: Now sets the redelivered header for redelivered messages.

- Now always makes sure references to :func:`sys.exc_info` are removed.

- Virtual: The compression header is now removed before restoring
  messages.

- More tests for the SQLAlchemy backend.

    Contributed by Franck Cuny.

- Url parsing did not handle MongoDB URLs properly.

    Fix contributed by Flavio Percoco Premoli.

- Beanstalk: Ignore default tube when reserving.

    Fix contributed by Zhao Xiaohong.

Nonblocking consume support
---------------------------

librabbitmq, amqplib and redis transports can now be used non-blocking.

The interface is very manual, and only consuming messages is non-blocking
so far.

The API should not be regarded as stable or final in any way. It is used
by Celery, which has very limited needs at this point. Hopefully we can
introduce a proper callback-based API later.

- ``Transport.eventmap``

    Is a map of ``fd -> callback(fileno, event)`` to register in an
    eventloop.

- ``Transport.on_poll_start()``

    Is called before every call to poll. The poller must support
    ``register(fd, callback)`` and ``unregister(fd)`` methods.

- ``Transport.on_poll_init(poller)``

    Is called when the hub is initialized. The poller argument must
    support the same interface as :class:`kombu.utils.eventio.poll`.

- ``Connection.ensure_connection`` now takes a callback argument which is
  called for every loop while the connection is down.

- Adds ``connection.drain_nowait``.

    This is a non-blocking alternative to drain_events, but only
    supported by amqplib/librabbitmq.

- drain_events now sets ``connection.more_to_read`` if there is more data
  to read.

    This is to support eventloops where other things must be handled
    between draining events.

.. _version-2.1.8:

2.1.8
=====
:release-date: 2012-05-06 3:06 P.M BST
:release-by: Ask Solem

* Bound Exchange/Queue's are now pickleable.

* Consumer/Producer can now be instantiated without a channel, and only
  later bound using ``.revive(channel)``.

* ProducerPool now takes ``Producer`` argument.

* :func:`~kombu.utils.fxrange` now counts forever if the stop argument is
  set to None. (fxrange is like xrange but for decimals).

* Auto delete support for virtual transports was incomplete and could
  lead to problems, so it was removed.

* Cached declarations (:func:`~kombu.common.maybe_declare`) are now bound
  to the underlying connection, so that entities are redeclared if the
  connection is lost (see the sketch below).

    This also means that previously uncacheable entities
    (e.g. non-durable) can now be cached.

* compat ConsumerSet: can now specify channel.

.. _version-2.1.7:

2.1.7
=====
:release-date: 2012-04-27 6:00 P.M BST
:release-by: Ask Solem

* compat consumerset now accepts optional channel argument.

.. _version-2.1.6:

2.1.6
=====
:release-date: 2012-04-23 1:30 P.M BST
:release-by: Ask Solem

* SQLAlchemy transport was not working correctly after the URL parser
  change.

* maybe_declare now stores cached declarations per underlying connection
  instead of globally, in the rare case that data disappears from the
  broker after connection loss.

* Django: Added South migrations.

    Contributed by Joseph Crosland.

.. _version-2.1.5:

2.1.5
=====
:release-date: 2012-04-13 3:30 P.M BST
:release-by: Ask Solem

* The url parser removed more than the first leading slash (Issue #121).

* SQLAlchemy: Can now specify the url using a + separator.

    Example::

        Connection('sqla+mysql://localhost/db')

* Better support for anonymous queues (Issue #116).

    Contributed by Michael Barrett.
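A minimal sketch of the connection-bound declaration cache around
:func:`~kombu.common.maybe_declare` mentioned in the 2.1.8 and 2.1.6 notes
above; the entity name and broker URL are placeholders:

.. code-block:: python

    from kombu import Connection, Exchange
    from kombu.common import maybe_declare

    exchange = Exchange('events', type='topic')

    with Connection('amqp://') as connection:
        channel = connection.default_channel
        maybe_declare(exchange, channel)  # declared on the broker
        maybe_declare(exchange, channel)  # cached; no I/O this time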
* ``Connection.as_uri`` now quotes url parts (Issue #117).

* Beanstalk: Can now set message TTR as a message property.

    Contributed by Andrii Kostenko.

.. _version-2.1.4:

2.1.4
=====
:release-date: 2012-04-03 4:00 P.M GMT
:release-by: Ask Solem

* MongoDB: URL parsing is now delegated to the pymongo library
  (Fixes Issue #103 and Issue #87).

    Fix contributed by Flavio Percoco Premoli and James Sullivan.

* SQS: A bug caused SimpleDB to be used even if sdb persistence was not
  enabled (Issue #108).

    Fix contributed by Anand Kumria.

* Django: Transaction was committed in the wrong place, causing data
  cleanup to fail (Issue #115).

    Fix contributed by Daisuke Fujiwara.

* MongoDB: Now supports replica set URLs.

    Contributed by Flavio Percoco Premoli.

* Redis: Now raises a channel error if a queue key that is currently
  being consumed from disappears.

    Fix contributed by Stephan Jaekel.

* All transport 'channel_errors' lists now include
  ``kombu.exceptions.StdChannelError``.

* All kombu exceptions now inherit from a common
  :exc:`~kombu.exceptions.KombuError`.

.. _version-2.1.3:

2.1.3
=====
:release-date: 2012-03-20 3:00 P.M GMT
:release-by: Ask Solem

* Fixes Jython compatibility issues.

* Fixes Python 2.5 compatibility issues.

.. _version-2.1.2:

2.1.2
=====
:release-date: 2012-03-01 01:00 P.M GMT
:release-by: Ask Solem

* amqplib: Last version broke SSL support.

.. _version-2.1.1:

2.1.1
=====
:release-date: 2012-02-24 02:00 P.M GMT
:release-by: Ask Solem

* Connection URLs now support encoded characters.

* Fixed a case where the connection pool could not recover from
  connection loss.

    Fix contributed by Florian Munz.

* We now patch amqplib's ``__del__`` method to skip trying to close the
  socket if it is not connected, as this resulted in an annoying warning.

* Compression can now be used with binary message payloads.

    Fix contributed by Steeve Morin.

.. _version-2.1.0:

2.1.0
=====
:release-date: 2012-02-04 10:38 P.M GMT
:release-by: Ask Solem

* MongoDB: Now supports fanout (broadcast) (Issue #98).

    Contributed by Scott Lyons.

* amqplib: Now detects broken connections by using ``MSG_PEEK``.

* pylibrabbitmq: Now supports ``basic_get`` (Issue #97).

* gevent: Now always uses the ``select`` polling backend.

* pika transport: Now works with pika 0.9.5 and 0.9.6dev.

    The old pika transport (supporting 0.5.x) is now available as alias
    ``oldpika``.

    (Note: terrible latency has been experienced with the new pika
    versions, so this is still an experimental transport.)

* Virtual transports: can now set the polling interval via the transport
  options (Issue #96).

    Example::

        >>> Connection('sqs://', transport_options={
        ...     'polling_interval': 5.0})

    The default interval is transport specific, but usually 1.0s (or 5.0s
    for the Django database transport, which can also be set using the
    ``KOMBU_POLLING_INTERVAL`` setting).

* Adds convenience function: :func:`kombu.common.eventloop`.

.. _version-2.0.0:

2.0.0
=====
:release-date: 2012-01-15 18:34 P.M GMT
:release-by: Ask Solem

.. _v200-important:

Important Notes
---------------

.. _v200-python-compatibility:

Python Compatibility
~~~~~~~~~~~~~~~~~~~~

* No longer supports Python 2.4.

    Users of Python 2.4 can still use the 1.x series.

    The 1.x series has entered bugfix-only maintenance mode, and will
    stay that way as long as there is demand, and a willingness to
    maintain it.

.. _v200-new-transports:

New Transports
~~~~~~~~~~~~~~

* ``django-kombu`` is now part of Kombu core.

    The Django message transport uses the Django ORM to store messages.
    It uses polling, with a default polling interval of 5 seconds. The
    polling interval can be increased or decreased by configuring the
    ``KOMBU_POLLING_INTERVAL`` Django setting, which is the polling
    interval in seconds as an int or a float.

    Note that shorter polling intervals can cause extreme strain on the
    database: if responsiveness is needed you should consider switching
    to a non-polling transport.

    To use it you must use transport alias ``"django"``, or as a URL::

        django://

    and then add ``kombu.transport.django`` to ``INSTALLED_APPS``, and
    run ``manage.py syncdb`` to create the necessary database tables.

    **Upgrading**

    If you have previously used ``django-kombu``, then the entry in
    ``INSTALLED_APPS`` must be changed from ``djkombu`` to
    ``kombu.transport.django``::

        INSTALLED_APPS = (…, 'kombu.transport.django')

    If you have previously used django-kombu, then there is no need to
    recreate the tables, as the old tables will be fully compatible with
    the new version.

* ``kombu-sqlalchemy`` is now part of Kombu core.

    This change requires no code changes given that the ``sqlalchemy``
    transport alias is used.

.. _v200-news:

News
----

* :class:`kombu.mixins.ConsumerMixin` is a mixin class that lets you
  easily write consumer programs and threads.

    See :ref:`examples` and :ref:`guide-consumers`.

* SQS Transport: Added support for SQS queue prefixes (Issue #84).

    The queue prefix can be set using the transport option
    ``queue_name_prefix``::

        Connection('SQS://', transport_options={
            'queue_name_prefix': 'myapp'})

    Contributed by Nitzan Miron.

* ``Producer.publish`` now supports automatic retry.

    Retry is enabled by the ``retry`` argument, and retry options are set
    by the ``retry_policy`` argument::

        exchange = Exchange('foo')
        producer.publish(message, exchange=exchange, retry=True,
                         declare=[exchange],
                         retry_policy={'interval_start': 1.0})

    See :meth:`~kombu.Connection.ensure` for a list of supported retry
    policy options.

* ``Producer.publish`` now supports a ``declare`` keyword argument.

    This is a list of entities (:class:`Exchange`, or :class:`Queue`)
    that should be declared before the message is published.

.. _v200-fixes:

Fixes
-----

* Redis transport: Timeout was multiplied by 1000 when using ``select``
  for event I/O (Issue #86).

.. _version-1.5.1:

1.5.1
=====
:release-date: 2011-11-30 01:00 P.M GMT
:release-by: Ask Solem

* Fixes issue with ``kombu.compat`` introduced in 1.5.0 (Issue #83).

* Adds the ability to disable content types in the serializer registry.

    Any message with a content type that is disabled will be refused.
    One example would be to disable the Pickle serializer::

        >>> from kombu.serialization import registry
        # by name
        >>> registry.disable('pickle')
        # or by mime-type.
        >>> registry.disable('application/x-python-serialize')

.. _version-1.5.0:

1.5.0
=====
:release-date: 2011-11-27 06:00 P.M GMT
:release-by: Ask Solem

* kombu.pools: Fixed a bug resulting in resources not being properly
  released.

    This was caused by the use of ``__hash__`` to distinguish them.

* Virtual transports: Dead-letter queue is now disabled by default.

    The dead-letter queue was enabled by default to help application
    authors, but now that Kombu is stable it should be removed. There
    are, after all, many cases where messages should just be dropped when
    there are no queues to buffer them, and keeping them without
    supporting automatic cleanup is considered more of a resource leak
    than a feature.
    If wanted, the dead-letter queue can still be enabled, by using the
    ``deadletter_queue`` transport option::

        >>> x = Connection('redis://',
        ...                transport_options={
        ...                    'deadletter_queue': 'ae.undeliver'})

    In addition, an :class:`UndeliverableWarning` is now emitted when the
    dead-letter queue is enabled and a message ends up there.

    Contributed by Ionel Maries Cristian.

* MongoDB transport now supports Replicasets (Issue #81).

    Contributed by Ivan Metzlar.

* The ``Connection.ensure`` methods now accept a ``max_retries`` value
  of 0.

    A value of 0 now means *do not retry*, which is distinct from
    :const:`None` which means *retry indefinitely*.

    Contributed by Dan McGee.

* SQS Transport: Now has a lowercase ``sqs`` alias, so that it can be
  used with broker URLs (Issue #82).

    Fix contributed by Hong Minhee.

* SQS Transport: Fixes KeyError on message acknowledgements (Issue #73).

    The SQS transport now uses UUIDs for delivery tags, rather than a
    counter.

    Fix contributed by Brian Bernstein.

* SQS Transport: Unicode related fixes (Issue #82).

    Fix contributed by Hong Minhee.

* Redis version check could crash because of improper handling of types
  (Issue #63).

* Fixed error with `Resource.force_close_all` when resources were not
  yet properly initialized (Issue #78).

.. _version-1.4.3:

1.4.3
=====
:release-date: 2011-10-27 10:00 P.M BST
:release-by: Ask Solem

* Fixes bug in ProducerPool where too many resources would be acquired.

.. _version-1.4.2:

1.4.2
=====
:release-date: 2011-10-26 05:00 P.M BST
:release-by: Ask Solem

* Eventio: Polling should ignore `errno.EINTR`.

* SQS: str.encode only started accepting keyword arguments in Python 2.7.

* The simple_task_queue example didn't run correctly (Issue #72).

    Fix contributed by Stefan Eletzhofer.

* Empty messages raised an exception that could not be handled by
  `on_decode_error` (Issue #72).

    Fix contributed by Christophe Chauvet.

* CouchDB: Properly authenticate if user/password set (Issue #70).

    Fix contributed by Rafael Duran Castaneda.

* Connection.Consumer had the wrong signature.

    Fix contributed by Pavel Skvazh.

.. _version-1.4.1:

1.4.1
=====
:release-date: 2011-09-26 04:00 P.M BST
:release-by: Ask Solem

* 1.4.0 broke the producer pool, resulting in new connections being
  established for every acquire.

.. _version-1.4.0:

1.4.0
=====
:release-date: 2011-09-22 05:00 P.M BST
:release-by: Ask Solem

* Adds module :mod:`kombu.mixins`.

    This module contains a :class:`~kombu.mixins.ConsumerMixin` class
    that can be used to easily implement a message consumer thread that
    consumes messages from one or more :class:`kombu.Consumer` instances
    (see the sketch below).

* New example: :ref:`task-queue-example`

    Using the ``ConsumerMixin``, default channels and the global
    connection pool to demonstrate new Kombu features.

* MongoDB transport did not work with MongoDB >= 2.0 (Issue #66).

    Fix contributed by James Turk.

* Redis-py version check did not account for beta identifiers in the
  version string.

    Fix contributed by David Ziegler.

* Producer and Consumer now accept a connection instance as the first
  argument.

    The connection's default channel will then be used.

    In addition, shortcut methods have been added to Connection::

        >>> connection.Producer(exchange)
        >>> connection.Consumer(queues=..., callbacks=...)

* Connection has acquired a ``connected`` attribute that can be used to
  check if the connection instance has established a connection.

* ``ConnectionPool.acquire_channel`` now returns the connection's default
  channel rather than establishing a new channel that must be manually
  handled.
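A minimal sketch of a :class:`~kombu.mixins.ConsumerMixin` consumer, as
promised above; the queue name and broker URL are placeholders:

.. code-block:: python

    from kombu import Connection, Queue
    from kombu.mixins import ConsumerMixin

    class Worker(ConsumerMixin):

        def __init__(self, connection):
            self.connection = connection  # required by the mixin

        def get_consumers(self, Consumer, channel):
            return [Consumer(queues=[Queue('tasks')],
                             callbacks=[self.on_message])]

        def on_message(self, body, message):
            print('received: %r' % (body,))
            message.ack()

    Worker(Connection('amqp://')).run()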
* Added ``kombu.common.maybe_declare``.

    ``maybe_declare(entity)`` declares an entity if it has not previously
    been declared in the same process.

* :func:`kombu.compat.entry_to_queue` has been moved to
  :mod:`kombu.common`.

* New module :mod:`kombu.clocks` now contains an implementation of
  Lamport's logical clock.

.. _version-1.3.5:

1.3.5
=====
:release-date: 2011-09-16 06:00 P.M BST
:release-by: Ask Solem

* Python 3: AMQP_PROTOCOL_HEADER must be bytes, not str.

.. _version-1.3.4:

1.3.4
=====
:release-date: 2011-09-16 06:00 P.M BST
:release-by: Ask Solem

* Fixes syntax error in pools.reset.

.. _version-1.3.3:

1.3.3
=====
:release-date: 2011-09-15 02:00 P.M BST
:release-by: Ask Solem

* pools.reset did not support 'after fork' arguments.

.. _version-1.3.2:

1.3.2
=====
:release-date: 2011-09-10 01:00 P.M BST
:release-by: Mher Movsisyan

* Broke Python 2.5 compatibility by importing ``parse_qsl`` from
  ``urlparse``.

* Connection.default_channel is now closed when the connection is
  revived after connection failures.

* Pika: Channel now supports the ``connection.client`` attribute as
  required by the simple interface.

* pools.set_limit now raises an exception if the limit is lower than the
  previous limit.

* pools.set_limit no longer resets the pools.

.. _version-1.3.1:

1.3.1
=====
:release-date: 2011-10-07 03:00 P.M BST
:release-by: Ask Solem

* Last release broke after fork for pool reinitialization.

* Producer/Consumer now have a ``connection`` attribute, giving access to
  the :class:`Connection` of the instance.

* Pika: Channels now have access to the underlying :class:`Connection`
  instance using ``channel.connection.client``.

    This was previously required by the ``Simple`` classes and is now
    also required by :class:`Consumer` and :class:`Producer`.

* Connection.default_channel is now closed at object revival.

* Adds kombu.clocks.LamportClock.

* compat.entry_to_queue has been moved to new module :mod:`kombu.common`.

.. _version-1.3.0:

1.3.0
=====
:release-date: 2011-10-05 01:00 P.M BST
:release-by: Ask Solem

* Broker connection info can now be specified using URLs.

    The broker hostname can now be given as a URL of the format::

        transport://user:password@hostname:port/virtual_host

    for example the default broker is expressed as::

        >>> Connection('amqp://guest:guest@localhost:5672//')

    Transport defaults to amqp, and is not required. user, password, port
    and virtual_host are also not mandatory and will default to the
    corresponding transport's defaults.

    .. note::

        Note that the path component (virtual_host) always starts with a
        forward-slash. This is necessary to distinguish between the
        virtual host ``''`` (empty) and ``'/'``, which are both
        acceptable virtual host names.

        A virtual host of ``'/'`` becomes::

            amqp://guest:guest@localhost:5672//

        and a virtual host of ``''`` (empty) becomes::

            amqp://guest:guest@localhost:5672/

        So the leading slash in the path component is **always
        required**.

* Now comes with default global connection and producer pools.

    To acquire a connection using the connection parameters from a
    :class:`Connection`::

        >>> from kombu import Connection, connections
        >>> connection = Connection('amqp://guest:guest@localhost//')
        >>> with connections[connection].acquire(block=True):
        ...     # do something with connection

    To acquire a producer using the connection parameters from a
    :class:`Connection`::

        >>> from kombu import Connection, producers
        >>> connection = Connection('amqp://guest:guest@localhost//')
        >>> with producers[connection].acquire(block=True):
        ...     producer.publish({'hello': 'world'}, exchange='hello')

    Acquiring a producer will in turn also acquire a connection from the
    associated pool in ``connections``, so the number of producers is
    bound by the same limit as the number of connections.

    The default limit of 100 connections per connection instance can be
    changed by doing::

        >>> from kombu import pools
        >>> pools.set_limit(10)

    The pool can also be forcefully closed by doing::

        >>> from kombu import pools
        >>> pools.reset()

* SQS Transport: Persistence using SimpleDB is now disabled by default,
  after reports of unstable SimpleDB connections leading to errors.

* :class:`Producer` can now be used as a context manager.

* ``Producer.__exit__`` now properly calls ``release`` instead of close.

    The previous behavior would lead to a memory leak when using the
    :class:`kombu.pools.ProducerPool`.

* Now silences all exceptions from `import ctypes` to match the behaviour
  of the standard Python uuid module, and to avoid passing on MemoryError
  exceptions on SELinux-enabled systems (Issue #52 + Issue #53).

* ``amqp`` is now an alias to the ``amqplib`` transport.

* ``kombu.syn.detect_environment`` now returns 'default', 'eventlet', or
  'gevent' depending on what monkey patches have been installed.

* Serialization registry has a new attribute ``type_to_name``, so it is
  possible to look up the serializer name by content type.

* The exchange argument to ``Producer.publish`` can now be an
  :class:`Exchange` instance.

* ``compat.Publisher`` now supports the ``channel`` keyword argument.

* Acking a message on some transports could lead to :exc:`KeyError`
  being raised (Issue #57).

* Connection pool: Connections are no longer instantiated when the pool
  is created, but instantiated as needed instead.

* Tests now pass on PyPy.

* ``Connection.as_uri`` now includes the password if the keyword argument
  ``include_password`` is set.

* Virtual transports now come with a default
  ``default_connection_params`` attribute.

.. _version-1.2.1:

1.2.1
=====
:release-date: 2011-07-29 12:52 P.M BST
:release-by: Ask Solem

* Now depends on amqplib >= 1.0.0.

* Redis: Now automatically deletes auto_delete queues at
  ``basic_cancel``.

* ``serialization.unregister`` added so it is possible to remove unwanted
  serializers.

* Fixes MemoryError while importing ctypes on SELinux (Issue #52).

* ``Connection.autoretry`` is a version of ``ensure`` that works with
  arbitrary functions (i.e. it does not need an associated object that
  implements the ``revive`` method).

    Example usage:

    .. code-block:: python

        channel = connection.channel()
        try:
            ret, channel = connection.autoretry(send_messages,
                                                channel=channel)
        finally:
            channel.close()

* ``ConnectionPool.acquire`` no longer force-establishes the connection.

    The connection will be established as needed.

* ``Connection.ensure`` now supports an ``on_revive`` callback that is
  applied whenever the connection is re-established.

* ``Consumer.consuming_from(queue)`` returns True if the Consumer is
  consuming from ``queue``.

* ``Consumer.cancel_by_queue`` did not remove the queue from ``queues``.

* ``compat.ConsumerSet.add_queue_from_dict`` now automatically declares
  the queue if ``auto_declare`` is set.

.. _version-1.2.0:

1.2.0
=====
:release-date: 2011-07-15 12:00 P.M BST
:release-by: Ask Solem

* Virtual: Fixes cyclic reference in Channel.close (Issue #49).

* Producer.publish: Can now set additional properties using keyword
  arguments (Issue #48) (see the sketch below).

* Adds Queue.no_ack option to control the no_ack option for individual
  queues.

* Recent versions broke pylibrabbitmq support.
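A hedged sketch of passing extra message properties to
``Producer.publish`` as keyword arguments, using the present-day API; the
routing key and property values are placeholders:

.. code-block:: python

    from kombu import Connection

    with Connection('amqp://') as connection:
        producer = connection.Producer()
        producer.publish({'hello': 'world'},
                         routing_key='tasks',
                         correlation_id='0001',  # extra kwargs become
                         reply_to='results')     # message properties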
.. _version-1.2.0:

1.2.0
=====
:release-date: 2011-07-15 12:00 P.M BST
:release-by: Ask Solem

* Virtual: Fixes cyclic reference in Channel.close (Issue #49).

* Producer.publish: Can now set additional properties using keyword
  arguments (Issue #48).

* Adds Queue.no_ack option to control the no_ack option for individual
  queues.

* Recent versions broke pylibrabbitmq support.

* SimpleQueue and SimpleBuffer can now be used as context managers.

* Test requirements now specify PyYAML==3.09, as version 3.10 dropped
  Python 2.4 support.

* Now properly reports default values in Connection.info/.as_uri.

.. _version-1.1.6:

1.1.6
=====
:release-date: 2011-06-13 04:00 P.M BST
:release-by: Ask Solem

* Redis: Fixes issue introduced in 1.1.4, where a redis connection
  failure could leave the consumer hanging forever.

* SQS: Now supports fanout messaging by using SimpleDB to store routing
  tables.

    This can be disabled by setting the `supports_fanout` transport
    option::

        >>> Connection(transport='SQS',
        ...            transport_options={'supports_fanout': False})

* SQS: Now properly deletes a message when a message is acked.

* SQS: Can now set the Amazon AWS region, by using the ``region``
  transport option.

* amqplib: Now uses `localhost` as the default hostname instead of
  raising an error.

.. _version-1.1.5:

1.1.5
=====
:release-date: 2011-06-07 06:00 P.M BST
:release-by: Ask Solem

* Fixes compatibility with redis-py 2.4.4.

.. _version-1.1.4:

1.1.4
=====
:release-date: 2011-06-07 04:00 P.M BST
:release-by: Ask Solem

* Redis transport: Now requires redis-py version 2.4.4 or later.

* New Amazon SQS transport added.

    Usage::

        >>> conn = Connection(transport='SQS',
        ...                   userid=aws_access_key_id,
        ...                   password=aws_secret_access_key)

    The environment variables :envvar:`AWS_ACCESS_KEY_ID` and
    :envvar:`AWS_SECRET_ACCESS_KEY` are also supported.

* librabbitmq transport: Fixes default credentials support.

* amqplib transport: Now supports `login_method` for SSL auth.

    :class:`Connection` now supports the `login_method` keyword argument.
    The default `login_method` is ``AMQPLAIN``.

.. _version-1.1.3:

1.1.3
=====
:release-date: 2011-04-21 16:00 P.M CEST
:release-by: Ask Solem

* Redis: Consuming from multiple connections now works with Eventlet.

* Redis: Can now perform channel operations while the channel is in
  BRPOP/LISTEN mode (Issue #35).

    Also, the async BRPOP now times out after 1 second; this means that
    cancelling consuming from a queue/starting consuming from additional
    queues has a latency of up to one second (BRPOP does not support
    subsecond timeouts).

* Virtual: Allow channel objects to be closed multiple times without
  error.

* amqplib: ``AttributeError`` has been added to the list of known
  connection-related errors (:attr:`Connection.connection_errors`).

* amqplib: Now converts :exc:`SSLError` timeout errors to
  :exc:`socket.timeout` (http://bugs.python.org/issue10272).

* Ensures cyclic references are destroyed when the connection is closed.

.. _version-1.1.2:

1.1.2
=====
:release-date: 2011-04-06 16:00 P.M CEST
:release-by: Ask Solem

* Redis: Fixes serious issue where messages could be lost.

    The issue could happen if the message exceeded a certain number of
    kilobytes in size.

    It is recommended that all users of the Redis transport upgrade to
    this version, even if not currently experiencing any issues.

.. _version-1.1.1:

1.1.1
=====
:release-date: 2011-04-05 15:51 P.M CEST
:release-by: Ask Solem

* 1.1.0 started using ``Queue.LifoQueue``, which is only available in
  Python 2.6+ (Issue #33).  We now ship with our own LifoQueue.

.. _version-1.1.0:

1.1.0
=====
:release-date: 2011-04-05 01:05 P.M CEST
:release-by: Ask Solem

.. _v110-important:

Important Notes
---------------

* Virtual transports: Message body is now base64 encoded by default
  (Issue #27).

    This should solve problems sending binary data with virtual
    transports.

    Message compatibility is handled by adding a ``body_encoding``
    property, so messages sent by older versions are compatible with
    this release.  However, if you are accessing the messages directly,
    not using Kombu, then you have to respect the ``body_encoding``
    property.

    If you need to disable base64 encoding then you can do so via the
    transport options::

        Connection(transport='...',
                   transport_options={'body_encoding': None})

    **For transport authors**:

        You don't have to change anything in your custom transports,
        as this is handled automatically by the base class.

        If you want to use a different encoder you can do so by adding
        a key to ``Channel.codecs``.  The default encoding is specified
        by the ``Channel.body_encoding`` attribute.

        A new codec must provide two methods: ``encode(data)`` and
        ``decode(data)``.
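
For illustration, here is a minimal sketch of such a codec.  The
``HexCodec`` name and the registration shown in the comments are
hypothetical, not part of Kombu itself:

.. code-block:: python

    import binascii

    class HexCodec(object):
        """Encode/decode message bodies as hexadecimal text."""

        def encode(self, data):
            return binascii.hexlify(data)

        def decode(self, data):
            return binascii.unhexlify(data)

    # A custom virtual channel could then (hypothetically) register it:
    #     codecs = dict(Channel.codecs, hex=HexCodec())
    #     body_encoding = 'hex'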
* ConnectionPool/ChannelPool/Resource: Setting ``limit=None`` (or 0) now
  disables pool semantics, and will establish and close the resource
  whenever acquired or released.

* ConnectionPool/ChannelPool/Resource: Now uses a LIFO queue instead of
  the previous FIFO behavior.

    This means that the last resource released will be the one acquired
    next.  I.e. if only a single thread is using the pool, only a single
    connection will ever be used.

* Connection: Cloned connections did not inherit transport_options
  (``__copy__``).

* contrib/requirements is now located in the top directory of the
  distribution.

* MongoDB: Now supports authentication using the ``userid`` and
  ``password`` arguments to :class:`Connection` (Issue #30).

* Connection: Default authentication credentials are now delegated to
  the individual transports.

    This means that the ``userid`` and ``password`` arguments to
    Connection are no longer *guest/guest* by default.  The amqplib and
    pika transports will still have the default credentials.

* :meth:`Consumer.__exit__` did not have the correct signature
  (Issue #32).

* Channel objects now have a ``channel_id`` attribute.

* MongoDB: Version sniffing broke with development versions of mongod
  (Issue #29).

* The new environment variable :envvar:`KOMBU_LOG_CONNECTION` will now
  emit debug log messages for connection-related actions.
  :envvar:`KOMBU_LOG_DEBUG` will also enable
  :envvar:`KOMBU_LOG_CONNECTION`.

.. _version-1.0.7:

1.0.7
=====
:release-date: 2011-03-28 05:45 P.M CEST
:release-by: Ask Solem

* Now depends on anyjson 0.3.1.

    cjson is no longer a recommended json implementation, and anyjson
    will now emit a deprecation warning if used.

* Please note that the Pika backend only works with version 0.5.2.

    The latest version (0.9.x) drastically changed the API, and it is
    not compatible yet.

* on_decode_error is now called for exceptions in message_to_python
  (Issue #24).

* Redis: did not respect QoS settings.

* Redis: Creating a connection now ensures the connection is
  established.

    This means ``Connection.ensure_connection`` works properly with
    Redis.

* The consumer_tag argument to ``Queue.consume`` can't be :const:`None`
  (Issue #21).

    A None value is now automatically converted to the empty string.
    An empty string will make the server generate a unique tag.

* Connection now supports a ``transport_options`` argument.

    This can be used to pass additional arguments to transports
    (see the sketch below).

* Pika: ``drain_events`` raised :exc:`socket.timeout` even if no timeout
  was set (Issue #8).
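
For illustration, a minimal sketch of ``transport_options``, reusing the
``region`` SQS option described in the 1.1.6 notes above (the region
value here is just an example):

.. code-block:: python

    from kombu import Connection

    connection = Connection(transport='SQS',
                            transport_options={'region': 'eu-west-1'})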
.. _version-1.0.6:

1.0.6
=====
:release-date: 2011-03-22 04:00 P.M CET
:release-by: Ask Solem

* The ``delivery_mode`` aliases (persistent/transient) were not
  automatically converted to integer, and would cause a crash if using
  the amqplib transport.

* Redis: The redis-py :exc:`InvalidData` exception suddenly changed name
  to :exc:`DataError`.

* The :envvar:`KOMBU_LOG_DEBUG` environment variable can now be set to
  log all channel method calls.

    Support for the following environment variables has been added:

    * :envvar:`KOMBU_LOG_CHANNEL` will wrap channels in an object that
      logs every method call.

    * :envvar:`KOMBU_LOG_DEBUG` both enables channel logging and
      configures the root logger to emit messages to standard error.

    **Example Usage**::

        $ KOMBU_LOG_DEBUG=1 python
        >>> from kombu import Connection
        >>> conn = Connection()
        >>> channel = conn.channel()
        Start from server, version: 8.0, properties:
            {u'product': 'RabbitMQ',.............. }
        Open OK! known_hosts []
        using channel_id: 1
        Channel open
        >>> channel.queue_declare('myq', passive=True)
        [Kombu channel:1] queue_declare('myq', passive=True)
        (u'myq', 0, 1)

.. _version-1.0.5:

1.0.5
=====
:release-date: 2011-03-17 04:00 P.M CET
:release-by: Ask Solem

* Fixed memory leak when creating virtual channels.  All virtual
  transports affected (redis, mongodb, memory, django, sqlalchemy,
  couchdb, beanstalk).

* Virtual Transports: Fixed potential race condition when acking
  messages.

    If you have been affected by this, the error would show itself as an
    exception raised by the OrderedDict implementation
    (``object no longer exists``).

* The MongoDB transport requires the ``findandmodify`` command, only
  available in MongoDB 1.3+, so it now raises an exception if connected
  to an incompatible server version.

* Virtual Transports: ``basic.cancel`` should not try to remove an
  unknown consumer tag.

.. _version-1.0.4:

1.0.4
=====
:release-date: 2011-02-28 04:00 P.M CET
:release-by: Ask Solem

* Added Transport.polling_interval.

    Used by django-kombu to increase the time to sleep between SELECTs
    when there are no messages in the queue.

    Users of django-kombu should upgrade to django-kombu v0.9.2.

.. _version-1.0.3:

1.0.3
=====
:release-date: 2011-02-12 04:00 P.M CET
:release-by: Ask Solem

* ConnectionPool: Re-connects if the amqplib connection is closed.

* Adds ``Queue.as_dict`` + ``Exchange.as_dict``.

* Copyright headers updated to include 2011.

.. _version-1.0.2:

1.0.2
=====
:release-date: 2011-01-31 10:45 P.M CET
:release-by: Ask Solem

* amqplib: Message properties were not set properly.

* Ghettoq backend names are now automatically translated to the new
  names.

.. _version-1.0.1:

1.0.1
=====
:release-date: 2011-01-28 12:00 P.M CET
:release-by: Ask Solem

* Redis: Now works with Linux (epoll).

.. _version-1.0.0:

1.0.0
=====
:release-date: 2011-01-27 12:00 P.M CET
:release-by: Ask Solem

* Initial release.

.. _version-0.1.0:

0.1.0
=====
:release-date: 2010-07-22 04:20 P.M CET
:release-by: Ask Solem

* Initial fork of carrot.
kombu-3.0.7/docs/0000755000076500000000000000000012247127370014205 5ustar asksolwheel00000000000000
kombu-3.0.7/docs/.static/0000755000076500000000000000000012247127370015552 5ustar asksolwheel00000000000000
kombu-3.0.7/docs/.static/.keep0000644000076500000000000000000012064115765016467 0ustar asksolwheel00000000000000
kombu-3.0.7/docs/.templates/0000755000076500000000000000000012247127370016261 5ustar asksolwheel00000000000000
kombu-3.0.7/docs/.templates/sidebarintro.html0000644000076500000000000000037612241157622021637 0ustar asksolwheel00000000000000

Kombu

Kombu is a messaging library for Python.

kombu-3.0.7/docs/.templates/sidebarlogo.html0000644000076500000000000000027312064115765021445 0ustar asksolwheel00000000000000
kombu-3.0.7/docs/_ext/0000755000076500000000000000000012247127370015144 5ustar asksolwheel00000000000000
kombu-3.0.7/docs/_ext/applyxrefs.py0000644000076500000000000000413012223041316017706 0ustar asksolwheel00000000000000"""Adds xref targets to the top of files."""
from __future__ import print_function  # consistent print() on Python 2/3

import sys
import os

testing = False

DONT_TOUCH = ('./index.txt', )


def target_name(fn):
    if fn.endswith('.txt'):
        fn = fn[:-4]
    return '_' + fn.lstrip('./').replace('/', '-')


def process_file(fn, lines):
    lines.insert(0, '\n')
    lines.insert(0, '.. %s:\n' % target_name(fn))
    try:
        f = open(fn, 'w')
    except IOError:
        print("Can't open %s for writing. Not touching it." % fn)
        return
    try:
        f.writelines(lines)
    except IOError:
        print("Can't write to %s. Not touching it." % fn)
    finally:
        f.close()


def has_target(fn):
    try:
        f = open(fn, 'r')
    except IOError:
        print("Can't open %s. Not touching it." % fn)
        return (True, None)
    readok = True
    try:
        lines = f.readlines()
    except IOError:
        print("Can't read %s. Not touching it." % fn)
        readok = False
    finally:
        f.close()
    if not readok:
        return (True, None)

    if len(lines) < 1:
        print("Not touching empty file %s." % fn)
        return (True, None)
    if lines[0].startswith('.. _'):
        return (True, None)
    return (False, lines)


def main(argv=None):
    if argv is None:
        argv = sys.argv
    if len(argv) == 1:
        argv.append('.')  # default to the current directory

    files = []
    for root in argv[1:]:
        for (dirpath, dirnames, filenames) in os.walk(root):
            files.extend([(dirpath, f) for f in filenames])
    files.sort()
    files = [os.path.join(p, fn) for p, fn in files if fn.endswith('.txt')]

    for fn in files:
        if fn in DONT_TOUCH:
            print("Skipping blacklisted file %s." % fn)
            continue

        target_found, lines = has_target(fn)
        if not target_found:
            if testing:
                print('%s: %s' % (fn, lines[0]), end='')
            else:
                print("Adding xref to %s" % fn)
                process_file(fn, lines)
        else:
            print("Skipping %s: already has a xref" % fn)

if __name__ == '__main__':
    sys.exit(main())
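
# For illustration (not part of the original script): given a hypothetical
# file ``./userguide/index.txt``, the helpers above behave like this:
#
#     target_name('./userguide/index.txt')  -> '_userguide-index'
#
# and process_file() would then insert these two lines at the very top of
# the file (an xref target followed by a blank line):
#
#     .. _userguide-index:
#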
""" import re import sys import shelve try: input = input except NameError: input = raw_input # noqa refre = re.compile(r'``([^`\s]+?)``') ROLES = ( 'attr', 'class', "djadmin", 'data', 'exc', 'file', 'func', 'lookup', 'meth', 'mod', "djadminopt", "ref", "setting", "term", "tfilter", "ttag", # special "skip", ) ALWAYS_SKIP = [ "NULL", "True", "False", ] def fixliterals(fname): data = open(fname).read() last = 0 new = [] storage = shelve.open("/tmp/literals_to_xref.shelve") lastvalues = storage.get("lastvalues", {}) for m in refre.finditer(data): new.append(data[last:m.start()]) last = m.end() line_start = data.rfind("\n", 0, m.start()) line_end = data.find("\n", m.end()) prev_start = data.rfind("\n", 0, line_start) next_end = data.find("\n", line_end + 1) # Skip always-skip stuff if m.group(1) in ALWAYS_SKIP: new.append(m.group(0)) continue # skip when the next line is a title next_line = data[m.end():next_end].strip() if next_line[0] in "!-/:-@[-`{-~" and \ all(c == next_line[0] for c in next_line): new.append(m.group(0)) continue sys.stdout.write("\n" + "-" * 80 + "\n") sys.stdout.write(data[prev_start + 1:m.start()]) sys.stdout.write(colorize(m.group(0), fg="red")) sys.stdout.write(data[m.end():next_end]) sys.stdout.write("\n\n") replace_type = None while replace_type is None: replace_type = input( colorize("Replace role: ", fg="yellow")).strip().lower() if replace_type and replace_type not in ROLES: replace_type = None if replace_type == "": new.append(m.group(0)) continue if replace_type == "skip": new.append(m.group(0)) ALWAYS_SKIP.append(m.group(1)) continue default = lastvalues.get(m.group(1), m.group(1)) if default.endswith("()") and \ replace_type in ("class", "func", "meth"): default = default[:-2] replace_value = input( colorize("Text [", fg="yellow") + default + colorize("]: ", fg="yellow"), ).strip() if not replace_value: replace_value = default new.append(":%s:`%s`" % (replace_type, replace_value)) lastvalues[m.group(1)] = replace_value new.append(data[last:]) open(fname, "w").write("".join(new)) storage["lastvalues"] = lastvalues storage.close() def colorize(text='', opts=(), **kwargs): """ Returns your text, enclosed in ANSI graphics codes. Depends on the keyword arguments 'fg' and 'bg', and the contents of the opts tuple/list. Returns the RESET code if no parameters are given. 
    Valid colors:
        'black', 'red', 'green', 'yellow', 'blue', 'magenta', 'cyan',
        'white'

    Valid options:
        'bold'
        'underscore'
        'blink'
        'reverse'
        'conceal'
        'noreset' - string will not be auto-terminated with the RESET code

    Examples:
        colorize('hello', fg='red', bg='blue', opts=('blink',))
        colorize()
        colorize('goodbye', opts=('underscore',))
        print colorize('first line', fg='red', opts=('noreset',))
        print 'this should be red too'
        print colorize('and so should this')
        print 'this should not be red'
    """
    color_names = ('black', 'red', 'green', 'yellow',
                   'blue', 'magenta', 'cyan', 'white')
    foreground = dict([(color_names[x], '3%s' % x) for x in range(8)])
    background = dict([(color_names[x], '4%s' % x) for x in range(8)])

    RESET = '0'
    opt_dict = {'bold': '1', 'underscore': '4', 'blink': '5',
                'reverse': '7', 'conceal': '8'}

    text = str(text)
    code_list = []
    if text == '' and len(opts) == 1 and opts[0] == 'reset':
        return '\x1b[%sm' % RESET
    for k, v in kwargs.items():  # items() works on both Python 2 and 3
        if k == 'fg':
            code_list.append(foreground[v])
        elif k == 'bg':
            code_list.append(background[v])
    for o in opts:
        if o in opt_dict:
            code_list.append(opt_dict[o])
    if 'noreset' not in opts:
        text = text + '\x1b[%sm' % RESET
    return ('\x1b[%sm' % ';'.join(code_list)) + text


if __name__ == '__main__':
    try:
        fixliterals(sys.argv[1])
    except (KeyboardInterrupt, SystemExit):
        print('')
kombu-3.0.7/docs/_theme/0000755000076500000000000000000012247127370015446 5ustar asksolwheel00000000000000
kombu-3.0.7/docs/_theme/celery/0000755000076500000000000000000012247127370016731 5ustar asksolwheel00000000000000
kombu-3.0.7/docs/_theme/celery/static/0000755000076500000000000000000012247127370020220 5ustar asksolwheel00000000000000
kombu-3.0.7/docs/_theme/celery/static/celery.css_t0000644000076500000000000001476712064115765022551 0ustar asksolwheel00000000000000/*
 * celery.css_t
 * ~~~~~~~~~~~~
 *
 * :copyright: Copyright 2010 by Armin Ronacher.
 * :license: BSD, see LICENSE for details.
*/ {% set page_width = 940 %} {% set sidebar_width = 220 %} {% set body_font_stack = 'Optima, Segoe, "Segoe UI", Candara, Calibri, Arial, sans-serif' %} {% set headline_font_stack = 'Futura, "Trebuchet MS", Arial, sans-serif' %} {% set code_font_stack = "'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace" %} @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { align: left; font-family: {{ body_font_stack }}; font-size: 17px; background-color: white; color: #000; margin: 30px 0 0 0; padding: 0; } div.document { width: {{ page_width }}px; margin: 0 auto; } div.related { width: {{ page_width - 20 }}px; padding: 5px 10px; background: #F2FCEE; margin: 15px auto 15px auto; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 {{ sidebar_width }}px; } div.sphinxsidebar { width: {{ sidebar_width }}px; } hr { border: 1px solid #B1B4B6; } div.body { background-color: #ffffff; color: #3E4349; padding: 0 30px 0 30px; } img.celerylogo { padding: 0 0 10px 10px; float: right; } div.footer { width: {{ page_width - 15 }}px; margin: 10px auto 30px auto; padding-right: 15px; font-size: 14px; color: #888; text-align: right; } div.footer a { color: #888; } div.sphinxsidebar a { color: #444; text-decoration: none; border-bottom: 1px dashed #DCF0D5; } div.sphinxsidebar a:hover { border-bottom: 1px solid #999; } div.sphinxsidebar { font-size: 14px; line-height: 1.5; } div.sphinxsidebarwrapper { padding: 7px 10px; } div.sphinxsidebarwrapper p.logo { padding: 0 0 20px 0; margin: 0; } div.sphinxsidebar h3, div.sphinxsidebar h4 { font-family: {{ headline_font_stack }}; color: #444; font-size: 24px; font-weight: normal; margin: 0 0 5px 0; padding: 0; } div.sphinxsidebar h4 { font-size: 20px; } div.sphinxsidebar h3 a { color: #444; } div.sphinxsidebar p.logo a, div.sphinxsidebar h3 a, div.sphinxsidebar p.logo a:hover, div.sphinxsidebar h3 a:hover { border: none; } div.sphinxsidebar p { color: #555; margin: 10px 0; } div.sphinxsidebar ul { margin: 10px 0; padding: 0; color: #000; } div.sphinxsidebar input { border: 1px solid #ccc; font-family: {{ body_font_stack }}; font-size: 1em; } /* -- body styles ----------------------------------------------------------- */ a { color: #348613; text-decoration: underline; } a:hover { color: #59B833; text-decoration: underline; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: {{ headline_font_stack }}; font-weight: normal; margin: 30px 0px 10px 0px; padding: 0; } div.body h1 { margin-top: 0; padding-top: 0; font-size: 200%; } div.body h2 { font-size: 180%; } div.body h3 { font-size: 150%; } div.body h4 { font-size: 130%; } div.body h5 { font-size: 100%; } div.body h6 { font-size: 100%; } div.body h1 a.toc-backref, div.body h2 a.toc-backref, div.body h3 a.toc-backref, div.body h4 a.toc-backref, div.body h5 a.toc-backref, div.body h6 a.toc-backref { color: inherit!important; text-decoration: none; } a.headerlink { color: #ddd; padding: 0 4px; text-decoration: none; } a.headerlink:hover { color: #444; background: #eaeaea; } div.body p, div.body dd, div.body li { line-height: 1.4em; } div.admonition { background: #fafafa; margin: 20px -30px; padding: 10px 30px; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; } div.admonition p.admonition-title { font-family: {{ headline_font_stack }}; font-weight: normal; font-size: 24px; margin: 0 0 10px 0; padding: 0; line-height: 1; } div.admonition p.last { margin-bottom: 0; } 
div.highlight{ background-color: white; } dt:target, .highlight { background: #FAF3E8; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } div.warning { background-color: #ffe4e4; border: 1px solid #f66; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre, tt { font-family: {{ code_font_stack }}; font-size: 0.9em; } img.screenshot { } tt.descname, tt.descclassname { font-size: 0.95em; } tt.descname { padding-right: 0.08em; } img.screenshot { -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils { border: 1px solid #888; -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils td, table.docutils th { border: 1px solid #888; padding: 0.25em 0.7em; } table.field-list, table.footnote { border: none; -moz-box-shadow: none; -webkit-box-shadow: none; box-shadow: none; } table.footnote { margin: 15px 0; width: 100%; border: 1px solid #eee; background: #fdfdfd; font-size: 0.9em; } table.footnote + table.footnote { margin-top: -15px; border-top: none; } table.field-list th { padding: 0 0.8em 0 0; } table.field-list td { padding: 0; } table.footnote td.label { width: 0px; padding: 0.3em 0 0.3em 0.5em; } table.footnote td { padding: 0.3em 0.5em; } dl { margin: 0; padding: 0; } dl dd { margin-left: 30px; } blockquote { margin: 0 0 0 30px; padding: 0; } ul { margin: 10px 0 10px 30px; padding: 0; } pre { background: #F0FFEB; padding: 7px 10px; margin: 15px 0; border: 1px solid #C7ECB8; border-radius: 2px; -moz-border-radius: 2px; -webkit-border-radius: 2px; line-height: 1.3em; } tt { background: #F0FFEB; color: #222; /* padding: 1px 2px; */ } tt.xref, a tt { background: #F0FFEB; border-bottom: 1px solid white; } a.reference { text-decoration: none; border-bottom: 1px dashed #DCF0D5; } a.reference:hover { border-bottom: 1px solid #6D4100; } a.footnote-reference { text-decoration: none; font-size: 0.7em; vertical-align: top; border-bottom: 1px dashed #DCF0D5; } a.footnote-reference:hover { border-bottom: 1px solid #6D4100; } a:hover tt { background: #EEE; } kombu-3.0.7/docs/_theme/celery/theme.conf0000644000076500000000000000007312064115765020704 0ustar asksolwheel00000000000000[theme] inherit = basic stylesheet = celery.css [options] kombu-3.0.7/docs/changelog.rst0000644000076500000000000000000012247127071022243 1kombu-3.0.7/Changelogustar asksolwheel00000000000000kombu-3.0.7/docs/conf.py0000644000076500000000000000411712237554371015513 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- import sys import os # If your extensions are in another directory, add it here. If the directory # is relative to the documentation root, use os.path.abspath to make it # absolute, like shown here. sys.path.append(os.path.join(os.pardir, "tests")) import kombu from django.conf import settings if not settings.configured: settings.configure() # General configuration # --------------------- extensions = ['sphinx.ext.autodoc', 'sphinx.ext.coverage'] # Add any paths that contain templates here, relative to this directory. templates_path = ['.templates'] # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. 
project = 'Kombu'
copyright = '2009-2013, Ask Solem'

# The version info for the project you're documenting, acts as replacement
# for |version| and |release|, also used in various other places throughout
# the built documents.
#
# The short X.Y version.
version = ".".join(map(str, kombu.VERSION[0:2]))
# The full version, including alpha/beta/rc tags.
release = kombu.__version__

exclude_trees = ['.build']

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'colorful'

# Add any paths that contain custom static files (such as style sheets)
# here, relative to this directory.  They are copied after the builtin
# static files, so a file named "default.css" will overwrite the builtin
# "default.css".
html_static_path = ['.static']

html_use_smartypants = True

# If false, no module index is generated.
html_use_modindex = True

# If false, no index is generated.
html_use_index = True

latex_documents = [
    ('index', 'Kombu.tex', 'Kombu Documentation', 'Ask Solem', 'manual'),
]

html_theme = "celery"
html_theme_path = ["_theme"]
html_sidebars = {
    'index': ['sidebarintro.html', 'sourcelink.html', 'searchbox.html'],
    '**': ['sidebarlogo.html', 'localtoc.html', 'relations.html',
           'sourcelink.html', 'searchbox.html'],
}
kombu-3.0.7/docs/faq.rst0000644000076500000000000000066412064115765015516 0ustar asksolwheel00000000000000============================
 Frequently Asked Questions
============================

Questions
=========

Q: Message.reject doesn't work?
-------------------------------

**Answer**: Earlier versions of RabbitMQ did not implement
``basic.reject``, so make sure your version is recent enough
to support it.

Q: Message.requeue doesn't work?
--------------------------------

**Answer**: See `Q: Message.reject doesn't work?`_
kombu-3.0.7/docs/images/0000755000076500000000000000000012247127370015452 5ustar asksolwheel00000000000000
kombu-3.0.7/docs/images/kombu.jpg0000644000076500000000000034206312064115765017303 0ustar asksolwheel00000000000000[binary JPEG image data omitted]
kombu-3.0.7/docs/images/kombusmall.jpg0000644000076500000000000007012012064115765020324 0ustar asksolwheel00000000000000[binary JPEG image data omitted]
ݥUm W|#ƪ5+Պ4|||8MXRCoY4Z4oλbz^܎qײ/4^P&)t)vz<څ%4ܒXj> _YImtۙa˞$x;)2[W.Y[]@y'n;c5pee#n_~(]Nךfmg> ൙0D6khowr5bҳFO1^RuF$11I>Vqs&r5ZStb+.=ɬ"uiVvye%na-T;װ|!H-"!c+̛kr &!v69EAffQE2B(((((#mゾ{ر7}Ok~+Yj׌?^vgqȗ$^,9vq_ׅIƪfz= c=3X:wfdd/rNp}nHȪ-O1[̓0 1Ѓͮy u]^,:EIwVLጓ@ \UuK;t)of!ϰ)%3D}٪6ۮg$4GvIau0I t#c8z}-RN͒c^ 桪նjgiJM; C Z"74X,V7lSۙQϩ7ќwr xP1n:U+4|Qv,>zE>!6$G>2F KlVЁoq ~U>w!g"]'f\}m%,V9=0O66 f_2_,rykڠ¢YucO0>> </uOwrSzo|BۿxoZuŷ|%[Kr6 B&6$`1', X V[r>(:Ǐ|C7ÑB{ת|`.|@0>56^S%TqSr..%krIL~49kz"\1A/yos&#| rޠ@0WϘs-ʵ"m|4N>o ݇ju]†>]td_TQEQQEQEQEQEQE_n%~>JmXg?ֽg.yop 1h3IS%{z˖RIgQ;|ʕ`bg|ą T_~N~=^ n]ֵilV"w!=g%]x;vhv𧞽skG8484{D53gC|0=[5|N_ď}k>Լ~Pày1{֗3j.5?in7N Py MJ=*]B[\KӌrJd3F?uI.llzd,/h.yueusƤ8$t84{ tX?d<x4_vr[{KF@= qUf{kS[#GbNz?iޛ'o67!E[8)<ⴉќ'/[|F+(/&>rAi~i .o<%,9ms̀7qx=;ޖ~EKscx)kkAs aϸ47g%̭sK_>o&-"g%E(_yN=+^\s +A𬖑nZ?&ю?yyÏ4v:5u!8cLmIFz[j4It˨k Gh> znqz;]sPyaf&+Hhy?"OdTd*t"-,qc^զMkV7zeըi m,dB69cowS٧YUnW" =8cQQ[;2Ͳ,f9|i kMgucysI{ֽ7<v/@]:2Ow7cտ:vemky咶d0$S4N۱JwZ-=힛难[zV 1G*I$Q OmKB{%[EWt^&a)V{zV~D%uMc *IrTt7Dh*.!~Uڑb#Ui-@BR1E5y1T45i\, d]?Q (ϫ4ig1"?`8 Ir?5kO|*(((((.|+=WG~,fcJz' hG1l?+ͼHmUZK4~~~Csּ+ŶgfQ'skݣO|r4w>`Ө1&"X[wT='ws>7qp <_Za-$*CwPvJǻ #F]H&2+y'z'8RɷFWd\gJ-I~4xKl͖7Cs+<1/lB0d{V[oPk|]?xA̚=p }Hٰjl֥s'> )[u.H2= _3k]+,mĬGu K6[ҹA`u EHc)g})pwm|l ΔLpݣ][Gq]4AnUS7cg>h7 m-"?}M{흝W'51 'ufX:Rއ x}- kH]OnxkZ0vu7{`x5 b˖5Oh->(h3@Ny56S)I̩oE*<+H#bq ן|=dOO=&)H3.s#z ]TRM-JM;Fm25S-ðb>|`T[7AGraam-M$9,n-|Ǎuv4_(QZQ@Q@Q@Q@Q@KA_/qDžyұÖ7P^w7ċP?luSZ?o/:Od>ʼO Tw^s5&0xXB>vd Sw9ˌnݤU &D$v9J!ǵqu8-ݝ~ŗkxk"A@u}95Fy5 mS_ ]4'hn :=$lv +(J>%#דV.~!t7AvH٩"U;kiOnn7\]ʄUg=SN.<{3?4tVFnA)e2IVX^OnsGCW},-vr+ƨIgo,FX9Z>.7jfK ݮ3䵜ཹ?0Ȫ徨kh/ƀtԮSLmՀߍzGgvЄeN~W~_Fc$3gT?&=T _=|tk R]yȪ3youeh!?cGuwv³MEEDQ@Q@Q@Q@Q@[+ -x<$/S֯aֆO5䶏9k<ta<:?uSZ;O/;1lu^^ ?$ d|uqa^->4f6P}4屄w8X gn:Zt'xX Ğs2#=J,|G ۷k,gq%E2mt{ka !{ ׷XxS 5/Okw$yq x.dqTr\Wu^Eίs> fF졜r?TzTzld9\/Ǩ5oşKƒ/ Z 221wpD#h,|zZIbI,IbrIMSzYg4- _|A2ۋhܿ* <ֻWxkIdPO{ PoO:e~uӧ5Vլ0,}UG2Jbj|aM%i y:lByMq47~!+ͺ*vk>?R+b6έwq #(}c5mkL81J ^BĿ/ߕסVU20b v~xMR²\m`氼cEUΝ imhgc՘/ʠt +9IE6jM&tHvvZc}xjK>}*d z+G}#Shiu:dWgjpdWtooqWxIjaR2=GqW呥^ SQ4Dx)S~q&#T/NNko}6&MCV+"FUe,UI[/ES1  < i-j*0NGԓ\KbZD8I⾇gx4v_5 /W$n'89ϕ%H&hH^$R>ONj{irlx,)g~5Gǖ]Yb.d[2M+nfz\V{_~7a-*C2fKy#ZN់ v3iBƋujʥUq}yu { X+ؿetUaa~yQ8(mؠ3ެ(A ڠ'Uq&w$dC܁^sOc [ۅV{6aP>rkWb4qS\]\Y"P6t7ޒMANJ馭j4o;(}5fakjm}`zsV4I!/=%OkqwV@9sWY1Qo#ګs$橫kT_rZBɂkf6Bv6[=Z ^kO8u.X21`qּet+XAZ5nuR!n#P$0Q[mݤjgLN^&': c[Ao{r0>kW`etKCA~"k,ew׉2m>]jYEKl1Wgsr#tksfNl# ďCJNՕ1'CfOp*f3%yG`~EZIYs)'j~·n Gd)+i=ͺJg'<++Ze6kQVvR\ГopQ@QEQEQEQEQE\9m>ѷ8V&$|Uϓ%dV=AUZX)+{m,;OcMҶmwss& 1xSy_mNʍ^n%q\$^ov)ƝK-C$x $~ku%]C[CIϵqLkg !~GQҴ{\Z4y%Qd(+P]*Syn~^AuQC.qQ͘eT Vk{m/m̆hO΍=EQ>r296JdedpOb=X.ݤIRv \'X|uml쩰q3+uwc6S`W|,4[7_Y7c~oHyDZ 23!>t]QiiZq iq].W~ KXViD@Ŕg ҽmo_,u-yŖ!d)Akz\TuukSWӦ#P`y#U* W9pއ?LYȟnyzs8x{WA*o?WYWc;O1R[yu,,Ak?º_ < jk")lXԀYI6y-麗m) rj;"'nI)1ji8Bn@g#V.efcЌ>¸ZzsY<ʨUnv4&a|v qM#tEe]'+2{f r~{翼Wb1$:bHW/4D8,GU[ uI3ن gYÒxVH"k;F&O_۱qa .-)IX{ntiZ7.[Ԧr a]*A%ֱtZVTMB(Ygq >?ʪGM{]݄ k5% <{Dx[=FDqpzVZcYZIu30hE1@<t=^>]Mj6ܸhwGf7/Au̒ >P 27Y|ugYŹ"h$m{)-XMڤ8=~ķ6,gs%ucҺpǩ%)?eUn? +|"DC=Ν|AnXrY+Rԝch Cj^J}  v+w|mR?4+Kso rK~q&ww-acNnUIXq^Pu^.os| Q{7L_F$8PPcoSѬ<*&$bIvhm6gku{5y4M;Lm,`x+AUByb ,&լ)/ Ib1=4  n't_ _x1T_>L~l[ۯ#0C-q!@$:}7vT҉7i5f yTM=Zy:Hܱ$jmXN@9U܎W#-n51=v\,j'PT֙d?ڗR+v3W)?Hk֍d:+* do&@?w'N=\NoZ3˓QuReFĐ,wƮ?J$؀䜐@Md܀`Ѧ&[M[LpvjKgjI冭 zY1ʸ AKRKH!zi@?S* G {bLuWִ5+$sml[\8:󎝆qHozڦ%8ö^Io}{7|[M̞)>hѽ>M/,$yr+w׷:nzCQz"tb Go?.~µiVKvc)(((+;4k&>Tv?ֺjg_jVJnb92\#SBgV7׾ qjck^]|ʅz=BPq$W,L3j QIPlo!Mɍ(4 lDh.ORI9wtX΄hNVKzއA{I-Uc23z~Y4G|,6}~Zԭ|k;ec8 Z\]_FYn3[spOd>2[hضяpmMeZJ{ך.yhl,Y$>f3qy^ޘ銛 %ѿ(qU /O8Df@z{P\":cFץ-m}MSI2> Jm? < z ?S:>뿂6Gj? 
kombu-3.0.7/docs/index.rst0000644000076500000000000000045712064115765016056 0ustar asksolwheel00000000000000Kombu Documentation ================================== Contents: .. toctree:: :maxdepth: 2 introduction userguide/index .. toctree:: :maxdepth: 1 faq reference/index changelog Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` kombu-3.0.7/docs/introduction.rst0000600000076500000240000002560112247127137017462 0ustar asksolstaff00000000000000.. _kombu-index: ======================================== kombu - Messaging library for Python ======================================== :Version: 3.0.7 `Kombu` is a messaging library for Python. The aim of `Kombu` is to make messaging in Python as easy as possible by providing an idiomatic high-level interface for the AMQ protocol, and also to provide proven and tested solutions to common messaging problems. `AMQP`_ is the Advanced Message Queuing Protocol, an open standard protocol for message orientation, queuing, routing, reliability and security, for which the `RabbitMQ`_ messaging server is the most popular implementation. Features ======== * Allows application authors to support several message server solutions by using pluggable transports. * AMQP transport using the `py-amqp`_ or `librabbitmq`_ client libraries. * High performance AMQP transport written in C - when using `librabbitmq`_ This is automatically enabled if librabbitmq is installed:: $ pip install librabbitmq * Virtual transports make it really easy to add support for non-AMQP transports. There is already built-in support for `Redis`_, `Beanstalk`_, `Amazon SQS`_, `CouchDB`_, `MongoDB`_, ZeroMQ, `ZooKeeper`_, `SoftLayer MQ`_ and `Pyro`_. * You can also use the SQLAlchemy and Django ORM transports to use a database as the broker. * In-memory transport for unit testing. * Supports automatic encoding, serialization and compression of message payloads. * Consistent exception handling across transports. * The ability to ensure that an operation is performed by gracefully handling connection and channel errors. * Several annoyances with `amqplib`_ have been fixed, like supporting timeouts and the ability to wait for events on more than one channel. * Projects already using `carrot`_ can easily be ported by using a compatibility layer. For an introduction to AMQP you should read the article `Rabbits and warrens`_, and the `Wikipedia article about AMQP`_. .. _`RabbitMQ`: http://www.rabbitmq.com/ .. _`AMQP`: http://amqp.org .. _`py-amqp`: http://pypi.python.org/pypi/amqp/ .. _`Redis`: http://code.google.com/p/redis/ .. _`Amazon SQS`: http://aws.amazon.com/sqs/ .. _`MongoDB`: http://www.mongodb.org/ .. _`CouchDB`: http://couchdb.apache.org/ .. _`Zookeeper`: https://zookeeper.apache.org/ .. _`Beanstalk`: http://kr.github.com/beanstalkd/ .. _`Rabbits and warrens`: http://blogs.digitar.com/jjww/2009/01/rabbits-and-warrens/ .. _`amqplib`: http://barryp.org/software/py-amqplib/ .. _`Wikipedia article about AMQP`: http://en.wikipedia.org/wiki/AMQP .. _`carrot`: http://pypi.python.org/pypi/carrot/ .. _`librabbitmq`: http://pypi.python.org/pypi/librabbitmq .. _`Pyro`: http://pythonhosted.org/Pyro .. _`SoftLayer MQ`: http://www.softlayer.com/services/additional/message-queue ..
_transport-comparison: Transport Comparison ==================== +---------------+----------+------------+------------+---------------+ | **Client** | **Type** | **Direct** | **Topic** | **Fanout** | +---------------+----------+------------+------------+---------------+ | *amqp* | Native | Yes | Yes | Yes | +---------------+----------+------------+------------+---------------+ | *redis* | Virtual | Yes | Yes | Yes (PUB/SUB) | +---------------+----------+------------+------------+---------------+ | *mongodb* | Virtual | Yes | Yes | Yes | +---------------+----------+------------+------------+---------------+ | *beanstalk* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *SQS* | Virtual | Yes | Yes [#f1]_ | Yes [#f2]_ | +---------------+----------+------------+------------+---------------+ | *couchdb* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *zookeeper* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *in-memory* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *django* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *sqlalchemy* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *SLMQ* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ .. [#f1] Declarations only kept in memory, so exchanges/queues must be declared by all clients that need them. .. [#f2] Fanout supported via storing routing tables in SimpleDB. Disabled by default, but can be enabled by using the ``supports_fanout`` transport option. Documentation ------------- Kombu uses Sphinx, and the latest documentation can be found here: http://kombu.readthedocs.org/ Quick overview -------------- :: from kombu import Connection, Exchange, Queue media_exchange = Exchange('media', 'direct', durable=True) video_queue = Queue('video', exchange=media_exchange, routing_key='video') def process_media(body, message): print(body) message.ack() # connections with Connection('amqp://guest:guest@localhost//') as conn: # produce producer = conn.Producer(serializer='json') producer.publish({'name': '/tmp/lolcat1.avi', 'size': 1301013}, exchange=media_exchange, routing_key='video', declare=[video_queue]) # the declare above makes sure the video queue is declared # so that the messages can be delivered. # It's a best practice in Kombu to have both publishers and # consumers declare the queue. You can also declare the # queue manually using: # video_queue(conn).declare() # consume with conn.Consumer(video_queue, callbacks=[process_media]) as consumer: # Process messages and handle events on all channels while True: conn.drain_events() # Consume from several queues on the same channel: video_queue = Queue('video', exchange=media_exchange, routing_key='video') image_queue = Queue('image', exchange=media_exchange, routing_key='image') with connection.Consumer([video_queue, image_queue], callbacks=[process_media]) as consumer: while True: connection.drain_events() Or handle channels manually:: with connection.channel() as channel: producer = Producer(channel, ...) consumer = Consumer(channel)
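For context, a slightly fuller sketch of manual channel handling - the exchange, queue and payload below are illustrative only, not part of the snippet above:

::

    from kombu import Connection, Consumer, Exchange, Producer, Queue

    media_exchange = Exchange('media', 'direct')
    video_queue = Queue('video', media_exchange, routing_key='video')

    def on_message(body, message):
        message.ack()

    with Connection('amqp://guest:guest@localhost//') as connection:
        with connection.channel() as channel:
            # producer and consumer are bound to the same channel,
            # which is closed (invalidating both) when the block exits.
            producer = Producer(channel, media_exchange, routing_key='video')
            producer.publish({'name': '/tmp/lolcat1.avi'}, declare=[video_queue])
            consumer = Consumer(channel, video_queue, callbacks=[on_message])
            consumer.consume()
            connection.drain_events(timeout=1)  # waits up to 1s for a message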
All objects can be used outside of with statements too, just remember to close the objects after use:: from kombu import Connection, Consumer, Producer connection = Connection() # ... connection.release() consumer = Consumer(channel_or_connection, ...) consumer.register_callback(my_callback) consumer.consume() # .... consumer.cancel() `Exchange` and `Queue` are simply declarations that can be pickled and used in configuration files etc. They also support operations, but to do so they need to be bound to a channel. Binding exchanges and queues to a connection will make it use that connection's default channel. :: >>> exchange = Exchange('tasks', 'direct') >>> connection = Connection() >>> bound_exchange = exchange(connection) >>> bound_exchange.delete() # the original exchange is not affected, and stays unbound. >>> exchange.delete() NotBoundError: Can't call delete on Exchange not bound to a channel. Installation ============ You can install `Kombu` either via the Python Package Index (PyPI) or from source. To install using `pip`,:: $ pip install kombu To install using `easy_install`,:: $ easy_install kombu If you have downloaded a source tarball you can install it by doing the following,:: $ python setup.py build # python setup.py install # as root Terminology =========== There are some concepts you should be familiar with before starting: * Producers Producers send messages to an exchange. * Exchanges Messages are sent to exchanges. Exchanges are named and can be configured to use one of several routing algorithms. The exchange routes the messages to consumers by matching the routing key in the message with the routing key the consumer provides when binding to the exchange. * Consumers Consumers declare a queue, bind it to an exchange and receive messages from it. * Queues Queues receive messages sent to exchanges. The queues are declared by consumers. * Routing keys Every message has a routing key. The interpretation of the routing key depends on the exchange type. There are four default exchange types defined by the AMQP standard, and vendors can define custom types (see your vendor's manual for details). These are the default exchange types defined by AMQP/0.8: * Direct exchange Matches if the routing key property of the message and the `routing_key` attribute of the consumer are identical. * Fan-out exchange Always matches, even if the binding does not have a routing key. * Topic exchange Matches the routing key property of the message by a primitive pattern matching scheme. The message routing key then consists of words separated by dots (`"."`, like domain names), and two special characters are available; star (`"*"`) and hash (`"#"`). The star matches any word, and the hash matches zero or more words. For example `"*.stock.#"` matches the routing keys `"usd.stock"` and `"eur.stock.db"` but not `"stock.nasdaq"`.
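To make the topic rules concrete, here is a small illustrative helper - plain Python, not part of Kombu's API - that matches a routing key against a binding pattern the way a broker would:

::

    def topic_match(pattern, routing_key):
        # '*' matches exactly one word, '#' matches zero or more words.
        def match(pwords, kwords):
            if not pwords:
                return not kwords
            head, rest = pwords[0], pwords[1:]
            if head == '#':
                # try consuming zero, one, two, ... of the remaining words
                return any(match(rest, kwords[i:])
                           for i in range(len(kwords) + 1))
            if not kwords:
                return False
            if head in ('*', kwords[0]):
                return match(rest, kwords[1:])
            return False
        return match(pattern.split('.'), routing_key.split('.'))

    assert topic_match('*.stock.#', 'usd.stock')
    assert topic_match('*.stock.#', 'eur.stock.db')
    assert not topic_match('*.stock.#', 'stock.nasdaq')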
Getting Help ============ Mailing list ------------ Join the `carrot-users`_ mailing list. .. _`carrot-users`: http://groups.google.com/group/carrot-users/ Bug tracker =========== If you have any suggestions, bug reports or annoyances please report them to our issue tracker at http://github.com/celery/kombu/issues/ Contributing ============ Development of `Kombu` happens at Github: http://github.com/celery/kombu You are highly encouraged to participate in the development. If you don't like Github (for some reason) you're welcome to send regular patches. License ======= This software is licensed under the `New BSD License`. See the `LICENSE` file in the top distribution directory for the full license text. .. image:: https://d2weczhvl823v0.cloudfront.net/celery/kombu/trend.png :alt: Bitdeli badge :target: https://bitdeli.com/free kombu-3.0.7/docs/Makefile0000644000076500000000000000447712064115765015653 0ustar asksolwheel00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d .build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html web pickle htmlhelp latex changes linkcheck help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview over all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" clean: -rm -rf .build/* html: mkdir -p .build/html .build/doctrees $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) .build/html @echo @echo "Build finished. The HTML pages are in .build/html." pickle: mkdir -p .build/pickle .build/doctrees $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) .build/pickle @echo @echo "Build finished; now you can process the pickle files." web: pickle json: mkdir -p .build/json .build/doctrees $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) .build/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: mkdir -p .build/htmlhelp .build/doctrees $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) .build/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in .build/htmlhelp." latex: mkdir -p .build/latex .build/doctrees $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) .build/latex @echo @echo "Build finished; the LaTeX files are in .build/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: mkdir -p .build/changes .build/doctrees $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) .build/changes @echo @echo "The overview file is in .build/changes." linkcheck: mkdir -p .build/linkcheck .build/doctrees $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) .build/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in .build/linkcheck/output.txt." kombu-3.0.7/docs/reference/0000755000076500000000000000000012247127370016143 5ustar asksolwheel00000000000000kombu-3.0.7/docs/reference/index.rst0000644000076500000000000000275412237554371020010 0ustar asksolwheel00000000000000=========================== API Reference =========================== :Release: |version| :Date: |today| ..
toctree:: :maxdepth: 2 kombu kombu.common kombu.mixins kombu.simple kombu.clocks kombu.compat kombu.pidbox kombu.exceptions kombu.log kombu.connection kombu.message kombu.compression kombu.pools kombu.abstract kombu.syn kombu.async kombu.async.hub kombu.async.semaphore kombu.async.timer kombu.transport kombu.transport.pyamqp kombu.transport.librabbitmq kombu.transport.memory kombu.transport.redis kombu.transport.zmq kombu.transport.beanstalk kombu.transport.mongodb kombu.transport.couchdb kombu.transport.zookeeper kombu.transport.filesystem kombu.transport.django kombu.transport.django.models kombu.transport.django.managers kombu.transport.django.management.commands.clean_kombu_messages kombu.transport.sqlalchemy kombu.transport.sqlalchemy.models kombu.transport.SQS kombu.transport.SLMQ kombu.transport.pyro kombu.transport.amqplib kombu.transport.base kombu.transport.virtual kombu.transport.virtual.exchange kombu.transport.virtual.scheduling kombu.serialization kombu.utils kombu.utils.eventio kombu.utils.limits kombu.utils.compat kombu.utils.debug kombu.utils.encoding kombu.utils.functional kombu.utils.url kombu.utils.text kombu.utils.amq_manager kombu.five kombu-3.0.7/docs/reference/kombu.abstract.rst0000644000076500000000000000026512064115765021621 0ustar asksolwheel00000000000000.. currentmodule:: kombu.abstract .. automodule:: kombu.abstract .. contents:: :local: .. autoclass:: MaybeChannelBound :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.async.hub.rst0000644000076500000000000000044412237554371021711 0ustar asksolwheel00000000000000========================================================== Event Loop Implementation - kombu.async.hub ========================================================== .. contents:: :local: .. currentmodule:: kombu.async.hub .. automodule:: kombu.async.hub :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.async.rst0000644000076500000000000000041112237554371021126 0ustar asksolwheel00000000000000========================================================== Event Loop - kombu.async ========================================================== .. contents:: :local: .. currentmodule:: kombu.async .. automodule:: kombu.async :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.async.semaphore.rst0000644000076500000000000000044712237554371023121 0ustar asksolwheel00000000000000========================================================== Semaphores - kombu.async.semaphore ========================================================== .. contents:: :local: .. currentmodule:: kombu.async.semaphore .. automodule:: kombu.async.semaphore :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.async.timer.rst0000644000076500000000000000042612237554371022253 0ustar asksolwheel00000000000000========================================================== Timer - kombu.async.timer ========================================================== .. contents:: :local: .. currentmodule:: kombu.async.timer .. automodule:: kombu.async.timer :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.clocks.rst0000644000076500000000000000043412064115765021272 0ustar asksolwheel00000000000000========================================================== Clocks and Synchronization - kombu.clocks ========================================================== .. contents:: :local: .. currentmodule:: kombu.clocks .. 
automodule:: kombu.clocks :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.common.rst0000644000076500000000000000042212064115765021301 0ustar asksolwheel00000000000000========================================================== Common Utilities - kombu.common ========================================================== .. contents:: :local: .. currentmodule:: kombu.common .. automodule:: kombu.common :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.compat.rst0000644000076500000000000000115012223041316021260 0ustar asksolwheel00000000000000.. currentmodule:: kombu.compat .. automodule:: kombu.compat .. contents:: :local: Publisher --------- Replace with :class:`kombu.Producer`. .. autoclass:: Publisher :members: :undoc-members: :inherited-members: Consumer -------- Replace with :class:`kombu.Consumer`. .. autoclass:: Consumer :members: :undoc-members: :inherited-members: ConsumerSet ----------- Replace with :class:`kombu.Consumer`. .. autoclass:: ConsumerSet :members: :undoc-members: :inherited-members: kombu-3.0.7/docs/reference/kombu.compression.rst0000644000076500000000000000056712064115765022364 0ustar asksolwheel00000000000000.. currentmodule:: kombu.compression .. automodule:: kombu.compression .. contents:: :local: Encoding/decoding ----------------- .. autofunction:: compress .. autofunction:: decompress Registry -------- .. autofunction:: encoders .. autofunction:: get_encoder .. autofunction:: get_decoder .. autofunction:: register kombu-3.0.7/docs/reference/kombu.connection.rst0000644000076500000000000000140112223041316022133 0ustar asksolwheel00000000000000 .. currentmodule:: kombu.connection .. automodule:: kombu.connection .. contents:: :local: Connection ---------- .. autoclass:: Connection :members: :undoc-members: Pools ----- .. seealso:: The shortcut methods :meth:`Connection.Pool` and :meth:`Connection.ChannelPool` are the recommended way to instantiate these classes. .. autoclass:: ConnectionPool .. autoattribute:: LimitExceeded .. automethod:: acquire .. automethod:: release .. automethod:: force_close_all .. autoclass:: ChannelPool .. autoattribute:: LimitExceeded .. automethod:: acquire .. automethod:: release .. automethod:: force_close_all kombu-3.0.7/docs/reference/kombu.exceptions.rst0000644000076500000000000000053512064115765022177 0ustar asksolwheel00000000000000.. currentmodule:: kombu.exceptions .. automodule:: kombu.exceptions .. contents:: :local: .. autoexception:: NotBoundError .. autoexception:: MessageStateError .. autoexception:: TimeoutError .. autoexception:: LimitExceeded .. autoexception:: ConnectionLimitExceeded .. autoexception:: ChannelLimitExceeded kombu-3.0.7/docs/reference/kombu.five.rst0000644000076500000000000000043012237554371020743 0ustar asksolwheel00000000000000========================================================== Python2 to Python3 utilities - kombu.five ========================================================== .. contents:: :local: .. currentmodule:: kombu.five .. automodule:: kombu.five :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.log.rst0000644000076500000000000000040012064115765020566 0ustar asksolwheel00000000000000========================================================== Logging - kombu.log ========================================================== .. contents:: :local: .. currentmodule:: kombu.log ..
automodule:: kombu.log :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.message.rst0000644000076500000000000000042412237554371021441 0ustar asksolwheel00000000000000========================================================== Message Objects - kombu.message ========================================================== .. contents:: :local: .. currentmodule:: kombu.message .. automodule:: kombu.message :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.mixins.rst0000644000076500000000000000041712064115765021324 0ustar asksolwheel00000000000000========================================================== Mixin Classes - kombu.mixins ========================================================== .. contents:: :local: .. currentmodule:: kombu.mixins .. automodule:: kombu.mixins :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.pidbox.rst0000644000076500000000000000425512064115765021306 0ustar asksolwheel00000000000000.. currentmodule:: kombu.pidbox .. automodule:: kombu.pidbox .. contents:: :local: Introduction ------------ Creating the application's Mailbox ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: python >>> mailbox = pidbox.Mailbox("celerybeat", type="direct") >>> @mailbox.handler >>> def reload_schedule(state, **kwargs): ... state["beat"].reload_schedule() >>> @mailbox.handler >>> def connection_info(state, **kwargs): ... return {"connection": state["connection"].info()} Example Node ~~~~~~~~~~~~ .. code-block:: python >>> connection = kombu.Connection() >>> state = {"beat": beat, "connection": connection} >>> consumer = mailbox(connection).Node(hostname).listen() >>> try: ... while True: ... connection.drain_events(timeout=1) ... finally: ... consumer.cancel() Example Client ~~~~~~~~~~~~~~ .. code-block:: python >>> mailbox.cast("reload_schedule") # cast is async. >>> info = mailbox.call("connection_info", timeout=1) Mailbox ------- .. autoclass:: Mailbox .. autoattribute:: namespace .. autoattribute:: connection .. autoattribute:: type .. autoattribute:: exchange .. autoattribute:: reply_exchange .. automethod:: Node .. automethod:: call .. automethod:: cast .. automethod:: abcast .. automethod:: multi_call .. automethod:: get_reply_queue .. automethod:: get_queue Node ---- .. autoclass:: Node .. autoattribute:: hostname .. autoattribute:: mailbox .. autoattribute:: handlers .. autoattribute:: state .. autoattribute:: channel .. automethod:: Consumer .. automethod:: handler .. automethod:: listen .. automethod:: dispatch .. automethod:: dispatch_from_message .. automethod:: handle_call .. automethod:: handle_cast .. automethod:: handle .. automethod:: handle_message .. automethod:: reply kombu-3.0.7/docs/reference/kombu.pools.rst0000644000076500000000000000041412064115765021146 0ustar asksolwheel00000000000000========================================================== General Pools - kombu.pools ========================================================== .. contents:: :local: .. currentmodule:: kombu.pools .. automodule:: kombu.pools :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.rst0000644000076500000000000001332712237554371020024 0ustar asksolwheel00000000000000.. currentmodule:: kombu .. contents:: :local: .. automodule:: kombu .. autofunction:: enable_insecure_serializers .. autofunction:: disable_insecure_serializers Connection ---------- .. autoclass:: Connection .. admonition:: Attributes .. autoattribute:: hostname .. autoattribute:: port .. autoattribute:: userid .. autoattribute:: password .. autoattribute:: virtual_host ..
autoattribute:: ssl .. autoattribute:: login_method .. autoattribute:: failover_strategy .. autoattribute:: connect_timeout .. autoattribute:: heartbeat .. autoattribute:: default_channel .. autoattribute:: connected .. autoattribute:: recoverable_connection_errors .. autoattribute:: recoverable_channel_errors .. autoattribute:: connection_errors .. autoattribute:: channel_errors .. autoattribute:: transport .. autoattribute:: connection .. autoattribute:: uri_prefix .. autoattribute:: declared_entities .. autoattribute:: cycle .. autoattribute:: host .. autoattribute:: manager .. autoattribute:: supports_heartbeats .. autoattribute:: is_evented .. admonition:: Methods .. automethod:: as_uri .. automethod:: connect .. automethod:: channel .. automethod:: drain_events .. automethod:: release .. automethod:: autoretry .. automethod:: ensure_connection .. automethod:: ensure .. automethod:: revive .. automethod:: create_transport .. automethod:: get_transport_cls .. automethod:: clone .. automethod:: info .. automethod:: switch .. automethod:: maybe_switch_next .. automethod:: heartbeat_check .. automethod:: maybe_close_channel .. automethod:: register_with_event_loop .. automethod:: close .. automethod:: _close .. automethod:: completes_cycle .. automethod:: get_manager .. automethod:: Producer .. automethod:: Consumer .. automethod:: Pool .. automethod:: ChannelPool .. automethod:: SimpleQueue .. automethod:: SimpleBuffer Exchange -------- Example creating an exchange declaration:: >>> news_exchange = Exchange('news', type='topic') For now `news_exchange` is just a declaration, you can't perform actions on it. It just describes the name and options for the exchange. The exchange can be bound or unbound. Bound means the exchange is associated with a channel and operations can be performed on it. To bind the exchange you call the exchange with the channel as argument:: >>> bound_exchange = news_exchange(channel) Now you can perform operations like :meth:`declare` or :meth:`delete`:: >>> bound_exchange.declare() >>> message = bound_exchange.Message('Cure for cancer found!') >>> bound_exchange.publish(message, routing_key='news.science') >>> bound_exchange.delete() .. autoclass:: Exchange :members: :undoc-members: .. automethod:: maybe_bind Queue ----- Example creating a queue using our exchange in the :class:`Exchange` example:: >>> science_news = Queue('science_news', ... exchange=news_exchange, ... routing_key='news.science') For now `science_news` is just a declaration, you can't perform actions on it. It just describes the name and options for the queue. The queue can be bound or unbound. Bound means the queue is associated with a channel and operations can be performed on it. To bind the queue you call the queue instance with the channel as an argument:: >>> bound_science_news = science_news(channel) Now you can perform operations like :meth:`declare` or :meth:`purge`: .. code-block:: python >>> bound_science_news.declare() >>> bound_science_news.purge() >>> bound_science_news.delete() .. autoclass:: Queue :members: :undoc-members: .. automethod:: maybe_bind Message Producer ---------------- .. autoclass:: Producer .. autoattribute:: channel .. autoattribute:: exchange .. autoattribute:: routing_key .. autoattribute:: serializer .. autoattribute:: compression .. autoattribute:: auto_declare .. autoattribute:: on_return .. autoattribute:: connection .. automethod:: declare .. automethod:: maybe_declare .. automethod:: publish .. automethod:: revive Message Consumer ---------------- .. 
autoclass:: Consumer .. autoattribute:: channel .. autoattribute:: queues .. autoattribute:: no_ack .. autoattribute:: auto_declare .. autoattribute:: callbacks .. autoattribute:: on_message .. autoattribute:: on_decode_error .. autoattribute:: connection .. automethod:: declare .. automethod:: register_callback .. automethod:: add_queue .. automethod:: add_queue_from_dict .. automethod:: consume .. automethod:: cancel .. automethod:: cancel_by_queue .. automethod:: consuming_from .. automethod:: purge .. automethod:: flow .. automethod:: qos .. automethod:: recover .. automethod:: receive .. automethod:: revive kombu-3.0.7/docs/reference/kombu.serialization.rst0000644000076500000000000000204512064115765022671 0ustar asksolwheel00000000000000.. currentmodule:: kombu.serialization .. automodule:: kombu.serialization .. contents:: :local: Overview -------- Centralized support for encoding/decoding of data structures. Contains json, pickle, msgpack, and yaml serializers. Optionally installs support for YAML if the `PyYAML`_ package is installed. Optionally installs support for `msgpack`_ if the `msgpack-python`_ package is installed. Exceptions ---------- .. autoexception:: SerializerNotInstalled Serialization ------------- .. autofunction:: encode .. autofunction:: decode .. autofunction:: raw_encode Registry -------- .. autofunction:: register .. autodata:: registry .. _`cjson`: http://pypi.python.org/pypi/python-cjson/ .. _`simplejson`: http://code.google.com/p/simplejson/ .. _`Python 2.6+`: http://docs.python.org/library/json.html .. _`PyYAML`: http://pyyaml.org/ .. _`msgpack`: http://msgpack.sourceforge.net/ .. _`msgpack-python`: http://pypi.python.org/pypi/msgpack-python/ kombu-3.0.7/docs/reference/kombu.simple.rst0000644000076500000000000000356512223041316021300 0ustar asksolwheel00000000000000.. currentmodule:: kombu.simple .. automodule:: kombu.simple .. contents:: :local: Persistent ---------- .. autoclass:: SimpleQueue .. attribute:: channel Current channel .. attribute:: producer :class:`~kombu.Producer` used to publish messages. .. attribute:: consumer :class:`~kombu.Consumer` used to receive messages. .. attribute:: no_ack flag to enable/disable acknowledgements. .. attribute:: queue :class:`~kombu.Queue` to consume from (if consuming). .. attribute:: queue_opts Additional options for the queue declaration. .. attribute:: exchange_opts Additional options for the exchange declaration. .. automethod:: get .. automethod:: get_nowait .. automethod:: put .. automethod:: clear .. automethod:: __len__ .. automethod:: qsize .. automethod:: close Buffer ------ .. autoclass:: SimpleBuffer .. attribute:: channel Current channel .. attribute:: producer :class:`~kombu.Producer` used to publish messages. .. attribute:: consumer :class:`~kombu.Consumer` used to receive messages. .. attribute:: no_ack flag to enable/disable acknowledgements. .. attribute:: queue :class:`~kombu.Queue` to consume from (if consuming). .. attribute:: queue_opts Additional options for the queue declaration. .. attribute:: exchange_opts Additional options for the exchange declaration. .. automethod:: get .. automethod:: get_nowait .. automethod:: put .. automethod:: clear .. automethod:: __len__ .. automethod:: qsize .. automethod:: close
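Example ------- For orientation, a minimal round trip with :class:`SimpleQueue` might look like this - a sketch assuming a broker running on localhost; the queue name is arbitrary:

.. code-block:: python

    from kombu import Connection

    with Connection('amqp://guest:guest@localhost//') as conn:
        queue = conn.SimpleQueue('simple_test')
        queue.put({'hello': 'world'}, serializer='json')
        message = queue.get(block=True, timeout=5)
        print(message.payload)  # {'hello': 'world'}
        message.ack()
        queue.close()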
kombu-3.0.7/docs/reference/kombu.syn.rst0000644000076500000000000000042012064115765020620 0ustar asksolwheel00000000000000========================================================== Async Utilities - kombu.syn ========================================================== .. contents:: :local: .. currentmodule:: kombu.syn .. automodule:: kombu.syn .. autofunction:: detect_environment kombu-3.0.7/docs/reference/kombu.transport.amqplib.rst0000644000076500000000000000103712064115765023474 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.amqplib .. automodule:: kombu.transport.amqplib .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Connection ---------- .. autoclass:: Connection :members: :undoc-members: :inherited-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: Message ------- .. autoclass:: Message :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.base.rst0000644000076500000000000000330312237554371022761 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.base .. automodule:: kombu.transport.base .. contents:: :local: Message ------- .. autoclass:: Message .. autoattribute:: payload .. autoattribute:: channel .. autoattribute:: delivery_tag .. autoattribute:: content_type .. autoattribute:: content_encoding .. autoattribute:: delivery_info .. autoattribute:: headers .. autoattribute:: properties .. autoattribute:: body .. autoattribute:: acknowledged .. automethod:: ack .. automethod:: reject .. automethod:: requeue .. automethod:: decode Transport --------- .. autoclass:: Transport .. autoattribute:: client .. autoattribute:: default_port .. attribute:: recoverable_connection_errors Optional list of connection related exceptions that can be recovered from, but where the connection must be closed and re-established first. If not defined then all :attr:`connection_errors` and :attr:`channel_errors` will be regarded as recoverable, but needing to close the connection first. .. attribute:: recoverable_channel_errors Optional list of channel related exceptions that can be automatically recovered from without re-establishing the connection. .. autoattribute:: connection_errors .. autoattribute:: channel_errors .. automethod:: establish_connection .. automethod:: close_connection .. automethod:: create_channel .. automethod:: close_channel .. automethod:: drain_events kombu-3.0.7/docs/reference/kombu.transport.beanstalk.rst0000644000076500000000000000047712064115765024020 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.beanstalk .. automodule:: kombu.transport.beanstalk .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.couchdb.rst0000644000076500000000000000060312064115765023454 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.couchdb .. automodule:: kombu.transport.couchdb .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: Functions --------- .. autofunction:: create_message_view kombu-3.0.7/docs/reference/kombu.transport.django.management.commands.clean_kombu_messages.rst0000644000076500000000000000061212064115765033367 0ustar asksolwheel00000000000000========================================================== Django Management - clean_kombu_messages ========================================================== .. contents:: :local: .. currentmodule:: kombu.transport.django.management.commands.clean_kombu_messages ..
automodule:: kombu.transport.django.management.commands.clean_kombu_messages :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.django.managers.rst0000644000076500000000000000051212064115765025102 0ustar asksolwheel00000000000000========================================================== Django Managers - kombu.transport.django.managers ========================================================== .. contents:: :local: .. currentmodule:: kombu.transport.django.managers .. automodule:: kombu.transport.django.managers :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.django.models.rst0000644000076500000000000000050212064115765024567 0ustar asksolwheel00000000000000========================================================== Django Models - kombu.transport.django.models ========================================================== .. contents:: :local: .. currentmodule:: kombu.transport.django.models .. automodule:: kombu.transport.django.models :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.django.rst0000644000076500000000000000064612064115765023316 0ustar asksolwheel00000000000000========================================= kombu.transport.django ========================================= .. currentmodule:: kombu.transport.django .. automodule:: kombu.transport.django .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.filesystem.rst0000644000076500000000000000050212064115765024227 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.filesystem .. automodule:: kombu.transport.filesystem .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.librabbitmq.rst0000644000076500000000000000104612064115765024337 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.librabbitmq .. automodule:: kombu.transport.librabbitmq .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Connection ---------- .. autoclass:: Connection :members: :undoc-members: :inherited-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: Message ------- .. autoclass:: Message :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.memory.rst0000644000076500000000000000047112064115765023360 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.memory .. automodule:: kombu.transport.memory .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.mongodb.rst0000644000076500000000000000047312064115765023477 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.mongodb .. automodule:: kombu.transport.mongodb .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.pyamqp.rst0000644000076500000000000000103512064115765023354 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.pyamqp .. automodule:: kombu.transport.pyamqp .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Connection ---------- .. 
autoclass:: Connection :members: :undoc-members: :inherited-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: Message ------- .. autoclass:: Message :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.pyro.rst0000644000076500000000000000046512237554371023046 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.pyro .. automodule:: kombu.transport.pyro .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.redis.rst0000644000076500000000000000046712064115765023163 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.redis .. automodule:: kombu.transport.redis .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.rst0000644000076500000000000000063212064115765022050 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport .. automodule:: kombu.transport .. contents:: :local: Data ---- .. data:: DEFAULT_TRANSPORT Default transport used when no transport specified. .. data:: TRANSPORT_ALIASES Mapping of transport aliases/class names. Functions --------- .. autofunction:: get_transport_cls .. autofunction:: resolve_transport kombu-3.0.7/docs/reference/kombu.transport.SLMQ.rst0000644000076500000000000000063112237554371022624 0ustar asksolwheel00000000000000====================================== kombu.transport.SLMQ ====================================== .. currentmodule:: kombu.transport.SLMQ .. automodule:: kombu.transport.SLMQ .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.sqlalchemy.models.rst0000644000076500000000000000077312237554371025503 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.sqlalchemy.models .. automodule:: kombu.transport.sqlalchemy.models .. contents:: :local: Models ------ .. autoclass:: Queue .. autoattribute:: Queue.id .. autoattribute:: Queue.name .. autoclass:: Message .. autoattribute:: Message.id .. autoattribute:: Message.visible .. autoattribute:: Message.sent_at .. autoattribute:: Message.payload .. autoattribute:: Message.version kombu-3.0.7/docs/reference/kombu.transport.sqlalchemy.rst0000644000076500000000000000065112064115765024212 0ustar asksolwheel00000000000000==================================== kombu.transport.sqlalchemy ==================================== .. currentmodule:: kombu.transport.sqlalchemy .. automodule:: kombu.transport.sqlalchemy .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.SQS.rst0000644000076500000000000000046312064115765022517 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.SQS .. automodule:: kombu.transport.SQS .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.virtual.exchange.rst0000644000076500000000000000103312064115765025312 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.virtual.exchange .. automodule:: kombu.transport.virtual.exchange .. 
contents:: :local: Direct ------ .. autoclass:: DirectExchange :members: :undoc-members: Topic ----- .. autoclass:: TopicExchange :members: :undoc-members: Fanout ------ .. autoclass:: FanoutExchange :members: :undoc-members: Interface --------- .. autoclass:: ExchangeType :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.virtual.rst0000644000076500000000000000405012064115765023533 0ustar asksolwheel00000000000000.. currentmodule:: kombu.transport.virtual .. automodule:: kombu.transport.virtual .. contents:: :local: Transports ---------- .. autoclass:: Transport .. autoattribute:: Channel .. autoattribute:: Cycle .. autoattribute:: polling_interval .. autoattribute:: default_port .. autoattribute:: state .. autoattribute:: cycle .. automethod:: establish_connection .. automethod:: close_connection .. automethod:: create_channel .. automethod:: close_channel .. automethod:: drain_events Channel ------- .. autoclass:: AbstractChannel :members: .. autoclass:: Channel .. autoattribute:: Message .. autoattribute:: state .. autoattribute:: qos .. autoattribute:: do_restore .. autoattribute:: exchange_types .. automethod:: exchange_declare .. automethod:: exchange_delete .. automethod:: queue_declare .. automethod:: queue_delete .. automethod:: queue_bind .. automethod:: queue_purge .. automethod:: basic_publish .. automethod:: basic_consume .. automethod:: basic_cancel .. automethod:: basic_get .. automethod:: basic_ack .. automethod:: basic_recover .. automethod:: basic_reject .. automethod:: basic_qos .. automethod:: get_table .. automethod:: typeof .. automethod:: drain_events .. automethod:: prepare_message .. automethod:: message_to_python .. automethod:: flow .. automethod:: close Message ------- .. autoclass:: Message :members: :undoc-members: :inherited-members: Quality Of Service ------------------ .. autoclass:: QoS :members: :undoc-members: :inherited-members: In-memory State --------------- .. autoclass:: BrokerState :members: :undoc-members: :inherited-members: kombu-3.0.7/docs/reference/kombu.transport.virtual.scheduling.rst0000644000076500000000000000024612064115765025662 0ustar asksolwheel00000000000000.. contents:: :local: .. currentmodule:: kombu.transport.virtual.scheduling .. automodule:: kombu.transport.virtual.scheduling :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.zmq.rst0000644000076500000000000000032412223041316022637 0ustar asksolwheel00000000000000===================== kombu.transport.zmq ===================== .. currentmodule:: kombu.transport.zmq .. automodule:: kombu.transport.zmq .. contents:: :local: :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.transport.zookeeper.rst0000644000076500000000000000062412223041316024036 0ustar asksolwheel00000000000000=========================== kombu.transport.zookeeper =========================== .. currentmodule:: kombu.transport.zookeeper .. automodule:: kombu.transport.zookeeper .. contents:: :local: Transport --------- .. autoclass:: Transport :members: :undoc-members: Channel ------- .. autoclass:: Channel :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.amq_manager.rst0000644000076500000000000000045712064115765023430 0ustar asksolwheel00000000000000==================================================== Generic RabbitMQ manager - kombu.utils.amq_manager ==================================================== .. contents:: :local: .. currentmodule:: kombu.utils.amq_manager .. 
automodule:: kombu.utils.amq_manager :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.compat.rst0000644000076500000000000000044512064115765022440 0ustar asksolwheel00000000000000========================================================== Compat. utilities - kombu.utils.compat ========================================================== .. contents:: :local: .. currentmodule:: kombu.utils.compat .. automodule:: kombu.utils.compat :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.debug.rst0000644000076500000000000000043212064115765022237 0ustar asksolwheel00000000000000========================================================== Debugging - kombu.utils.debug ========================================================== .. contents:: :local: .. currentmodule:: kombu.utils.debug .. automodule:: kombu.utils.debug :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.encoding.rst0000644000076500000000000000045112064115765022740 0ustar asksolwheel00000000000000========================================================== String Encoding - kombu.utils.encoding ========================================================== .. contents:: :local: .. currentmodule:: kombu.utils.encoding .. automodule:: kombu.utils.encoding :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.eventio.rst0000644000076500000000000000044212223041316022610 0ustar asksolwheel00000000000000========================================================== Evented I/O - kombu.utils.eventio ========================================================== .. contents:: :local: .. currentmodule:: kombu.utils.eventio .. automodule:: kombu.utils.eventio :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.functional.rst0000644000076500000000000000043512064115765023316 0ustar asksolwheel00000000000000========================================================== kombu.utils.functional ========================================================== .. contents:: :local: .. currentmodule:: kombu.utils.functional .. automodule:: kombu.utils.functional :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.limits.rst0000644000076500000000000000044112064115765022452 0ustar asksolwheel00000000000000========================================================== Rate limiting - kombu.utils.limits ========================================================== .. contents:: :local: .. currentmodule:: kombu.utils.limits .. automodule:: kombu.utils.limits :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.rst0000644000076500000000000000041012064115765021146 0ustar asksolwheel00000000000000========================================================== Utilities - kombu.utils ========================================================== .. contents:: :local: .. currentmodule:: kombu.utils .. automodule:: kombu.utils :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.text.rst0000644000076500000000000000043512234207745022137 0ustar asksolwheel00000000000000========================================================== Text utilities - kombu.utils.text ========================================================== .. contents:: :local: .. currentmodule:: kombu.utils.text .. automodule:: kombu.utils.text :members: :undoc-members: kombu-3.0.7/docs/reference/kombu.utils.url.rst0000644000076500000000000000036012064115765021753 0ustar asksolwheel00000000000000============================================== kombu.utils.url ============================================== .. contents:: :local: ..
currentmodule:: kombu.utils.url .. automodule:: kombu.utils.url :members: :undoc-members: kombu-3.0.7/docs/userguide/0000755000076500000000000000000012247127370016201 5ustar asksolwheel00000000000000kombu-3.0.7/docs/userguide/connections.rst0000644000076500000000000001450612237554371021267 0ustar asksolwheel00000000000000.. _guide-connections: ============================ Connections and transports ============================ .. _connection-basics: Basics ====== To send and receive messages you need a transport and a connection. There are several transports to choose from (amqp, librabbitmq, redis, in-memory, etc.), and you can even create your own. The default transport is amqp. Create a connection using the default transport:: >>> from kombu import Connection >>> connection = Connection('amqp://guest:guest@localhost:5672//') The connection will not be established yet, as the connection is established when needed. If you want to explicitly establish the connection you have to call the :meth:`~kombu.Connection.connect` method:: >>> connection.connect() You can also check whether the connection is connected:: >>> connection.connected True Connections must always be closed after use:: >>> connection.close() But best practice is to release the connection instead; this will release the resource if the connection is associated with a connection pool, or close the connection if not, and makes it easier to do the transition to connection pools later:: >>> connection.release() .. seealso:: :ref:`guide-pools` Of course, the connection can be used as a context manager, and you are encouraged to do so as it makes it harder to forget releasing open resources:: with Connection() as connection: # work with connection .. _connection-urls: URLs ==== Connection parameters can be provided as a URL in the format:: transport://userid:password@hostname:port/virtual_host All of these are valid URLs:: # Specifies using the amqp transport only, default values # are taken from the keyword arguments. amqp:// # Using Redis redis://localhost:6379/ # Using Redis over a Unix socket redis+socket:///tmp/redis.sock # Using virtual host '/foo' amqp://localhost//foo # Using virtual host 'foo' amqp://localhost/foo The query part of the URL can also be used to set options, e.g.:: amqp://localhost/myvhost?ssl=1 See :ref:`connection-options` for a list of supported options. A connection without options will use the default connection settings, which means connecting to localhost on the default port, with user name `guest`, password `guest` and virtual host "/". A connection without arguments is the same as:: >>> Connection('amqp://guest:guest@localhost:5672//') The default port is transport specific; for AMQP it is 5672. Other fields may also have a different meaning depending on the transport used. For example, the Redis transport uses the `virtual_host` argument as the redis database number. .. _connection-options: Keyword arguments ================= The :class:`~kombu.Connection` class supports additional keyword arguments; these are: :hostname: Default host name if not provided in the URL. :userid: Default user name if not provided in the URL. :password: Default password if not provided in the URL. :virtual_host: Default virtual host if not provided in the URL. :port: Default port if not provided in the URL. :transport: Default transport if not provided in the URL. Can be a string specifying the path to the class. (e.g. ``kombu.transport.pyamqp:Transport``), or one of the aliases: ``pyamqp``, ``librabbitmq``, ``redis``, ``memory``, and so on.
AMQP Transports =============== There are 3 transports available for AMQP use. 1. ``pyamqp`` uses the pure Python library ``amqp``, automatically installed with Kombu. 2. ``librabbitmq`` uses the high performance transport written in C. This requires the ``librabbitmq`` Python package to be installed, which automatically compiles the C library. 3. ``amqp`` tries to use ``librabbitmq`` but falls back to ``pyamqp``. For the highest performance you should install the ``librabbitmq`` package. To ensure that librabbitmq is used you can explicitly specify it in the transport URL, or use ``amqp`` to get the fallback behavior. Transport Comparison ==================== +---------------+----------+------------+------------+---------------+ | **Client** | **Type** | **Direct** | **Topic** | **Fanout** | +---------------+----------+------------+------------+---------------+ | *amqp* | Native | Yes | Yes | Yes | +---------------+----------+------------+------------+---------------+ | *redis* | Virtual | Yes | Yes | Yes (PUB/SUB) | +---------------+----------+------------+------------+---------------+ | *mongodb* | Virtual | Yes | Yes | Yes | +---------------+----------+------------+------------+---------------+ | *beanstalk* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *SQS* | Virtual | Yes | Yes [#f1]_ | Yes [#f2]_ | +---------------+----------+------------+------------+---------------+ | *couchdb* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *zookeeper* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *in-memory* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *django* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ | *sqlalchemy* | Virtual | Yes | Yes [#f1]_ | No | +---------------+----------+------------+------------+---------------+ .. [#f1] Declarations are only kept in memory, so exchanges/queues must be declared by all clients that need them. .. [#f2] Fanout is supported by storing routing tables in SimpleDB. Disabled by default, but can be enabled by using the ``supports_fanout`` transport option.
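For example, a minimal sketch of enabling the SimpleDB-backed fanout support named in the footnote above for the SQS transport (AWS credentials are assumed to be configured elsewhere):

.. code-block:: python

    from kombu import Connection

    # ``supports_fanout`` is the transport option from footnote [#f2]_.
    connection = Connection(transport='SQS',
                            transport_options={'supports_fanout': True})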
kombu-3.0.7/docs/userguide/consumers.rst0000644000076500000000000000521212237554371020755 0ustar asksolwheel00000000000000.. _guide-consumers: =========== Consumers =========== .. _consumer-basics: Basics ====== The :class:`Consumer` takes a connection (or channel) and a list of queues to consume from. Several consumers can be mixed to consume from different channels, as they all bind to the same connection, and ``drain_events`` will drain events from all channels on that connection. .. note:: Since version 3.0 Kombu will only accept json/binary or text messages by default; to allow deserialization of other formats you have to specify them in the ``accept`` argument:: Consumer(conn, accept=['json', 'pickle', 'msgpack', 'yaml']) Draining events from a single consumer: .. code-block:: python with Consumer(connection, queues, accept=['json']): connection.drain_events(timeout=1) Draining events from several consumers: .. code-block:: python from kombu.utils import nested with connection.channel() as channel1, connection.channel() as channel2: consumers = [Consumer(channel1, queues1, accept=['json']), Consumer(channel2, queues2, accept=['json'])] with nested(*consumers): connection.drain_events(timeout=1) Or using :class:`~kombu.mixins.ConsumerMixin`: .. code-block:: python from kombu.mixins import ConsumerMixin class C(ConsumerMixin): def __init__(self, connection): self.connection = connection def get_consumers(self, Consumer, channel): return [ Consumer(queues, callbacks=[self.on_message], accept=['json']), ] def on_message(self, body, message): print("RECEIVED MESSAGE: %r" % (body, )) message.ack() C(connection).run() and with multiple channels again: .. code-block:: python from kombu import Consumer from kombu.mixins import ConsumerMixin class C(ConsumerMixin): channel2 = None def __init__(self, connection): self.connection = connection def get_consumers(self, _, default_channel): self.channel2 = default_channel.connection.channel() return [Consumer(default_channel, queues1, callbacks=[self.on_message], accept=['json']), Consumer(self.channel2, queues2, callbacks=[self.on_special_message], accept=['json'])] def on_consumer_end(self, connection, default_channel): if self.channel2: self.channel2.close() C(connection).run() Reference ========= .. autoclass:: kombu.Consumer :noindex: :members: kombu-3.0.7/docs/userguide/examples.rst0000644000076500000000000000205712237554371020561 0ustar asksolwheel00000000000000.. _examples: ======================== Examples ======================== .. _hello-world-example: Hello World Example =================== The example below uses :ref:`guide-simple` to send a hello world message through the message broker (RabbitMQ) and print the received message. :file:`hello_publisher.py`: .. literalinclude:: ../../examples/hello_publisher.py :language: python :file:`hello_consumer.py`: .. literalinclude:: ../../examples/hello_consumer.py :language: python .. _task-queue-example: Task Queue Example ================== Very simple task queue using pickle, with primitive support for priorities using different queues. :file:`queues.py`: .. literalinclude:: ../../examples/simple_task_queue/queues.py :language: python :file:`worker.py`: .. literalinclude:: ../../examples/simple_task_queue/worker.py :language: python :file:`tasks.py`: .. literalinclude:: ../../examples/simple_task_queue/tasks.py :language: python :file:`client.py`: .. literalinclude:: ../../examples/simple_task_queue/client.py :language: python kombu-3.0.7/docs/userguide/index.rst0000644000076500000000000000033312064115765020043 0ustar asksolwheel00000000000000============ User Guide ============ :Release: |version| :Date: |today| .. toctree:: :maxdepth: 2 introduction connections producers consumers examples simple pools serialization kombu-3.0.7/docs/userguide/introduction.rst0000644000076500000000000000630112064115765021456 0ustar asksolwheel00000000000000.. _guide-intro: ============== Introduction ============== .. _intro-messaging: What is messaging?
================== In times long ago people didn't have email. They had the postal service, which with great courage would deliver mail from hand to hand all over the globe. Soldiers deployed at wars far away could only communicate with their families through the postal service, and posting a letter would mean that the recipient wouldn't actually receive the letter until weeks or months, sometimes years later. It's hard to imagine this today when people are expected to be available for phone calls every minute of the day. So humans need to communicate with each other; that shouldn't be news to anyone, but why would applications? One example is banks. When you transfer money from one bank to another, your bank sends a message to the banks' messaging central, which then records and coordinates the transaction. Banks need to send and receive millions and millions of messages every day, and losing a single message would mean either losing your money (bad) or the bank's money (very bad). Another example is the stock exchanges, which also need very high message throughput and have strict reliability requirements. Email is a great way for people to communicate. It is much faster than using the postal service, but still, using email as a means for programs to communicate would be like the soldier above, waiting for signs of life from his girlfriend back home. .. _messaging-scenarios: Messaging Scenarios =================== * Request/Reply The request/reply pattern works like the postal service example. A message is addressed to a single recipient, with a return address printed on the back. The recipient may or may not reply to the message by sending it back to the original sender. Request-Reply is achieved using *direct* exchanges. * Broadcast In a broadcast scenario a message is sent to all parties. This could be none, one or many recipients. Broadcast is achieved using *fanout* exchanges. * Publish/Subscribe In a publish/subscribe scenario producers publish messages to topics, and consumers subscribe to the topics they are interested in. If no consumers subscribe to the topic, then the message will not be delivered to anyone. If several consumers subscribe to the topic, then the message will be delivered to all of them. Pub-sub is achieved using *topic* exchanges. .. _messaging-reliability: Reliability =========== For some applications reliability is very important. Losing a message is a critical situation that must never happen. For other applications losing a message is fine, as they can recover in other ways, or the message is resent anyway as periodic updates. AMQP defines two built-in delivery modes: * persistent Messages are written to disk and survive a broker restart. * transient Messages may or may not be written to disk, as the broker sees fit to optimize memory contents. The messages will not survive a broker restart. Transient messaging is by far the fastest way to send and receive messages, so persistent messages come at a price, but for some applications this is a necessary cost. kombu-3.0.7/docs/userguide/pools.rst0000644000076500000000000001235212223041316020063 0ustar asksolwheel00000000000000.. _guide-pools: =============================== Connection and Producer Pools =============================== .. _default-pools: Default Pools ============= Kombu ships with two global pools: one connection pool, and one producer pool.
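A quick sketch of both at a glance (the sections below go through each in detail):

.. code-block:: python

    from kombu import Connection
    from kombu.pools import connections, producers

    connection = Connection('amqp://guest:guest@localhost:5672//')

    # Acquire a pooled connection, then a pooled producer; releasing
    # happens automatically when the ``with`` blocks exit.
    with connections[connection].acquire(block=True) as conn:
        pass  # use conn here
    with producers[connection].acquire(block=True) as producer:
        producer.publish({'hello': 'world'}, routing_key='hello')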
These are convenient, and the fact that they are global is usually not an issue, as connections should often be limited at the process level rather than per thread/application and so on. If you do need custom pools per thread, see :ref:`custom-pool-groups`. .. _default-connections: The connection pool group ------------------------- The connection pools are available as :attr:`kombu.pools.connections`. This is a pool group, which means you give it a connection instance, and you get a pool instance back. We have one pool per connection instance to support multiple connections in the same app. All connection instances with the same connection parameters will get the same pool:: >>> from kombu import Connection >>> from kombu.pools import connections >>> connections[Connection('redis://localhost:6379')] <kombu.connection.ConnectionPool object at 0x...> >>> connections[Connection('redis://localhost:6379')] <kombu.connection.ConnectionPool object at 0x...> Let's acquire and release a connection: .. code-block:: python from kombu import Connection from kombu.pools import connections connection = Connection('redis://localhost:6379') with connections[connection].acquire(block=True) as conn: print('Got connection: %r' % (connection.as_uri(), )) .. note:: The ``block=True`` here means that the acquire call will block until a connection is available in the pool. Note that this will block forever in case there is a deadlock in your code where a connection is not released. There is a ``timeout`` argument you can use to safeguard against this (see :meth:`kombu.connection.Resource.acquire`). If blocking is disabled and there aren't any connections left in the pool an :class:`kombu.exceptions.ConnectionLimitExceeded` exception will be raised. That's about it. If you need to connect to multiple brokers at once you can do that too: .. code-block:: python from kombu import Connection from kombu.pools import connections c1 = Connection('amqp://') c2 = Connection('redis://') with connections[c1].acquire(block=True) as conn1: with connections[c2].acquire(block=True) as conn2: # .... .. _default-producers: The producer pool group ----------------------- This is a pool group just like the connections, except that it manages :class:`~kombu.Producer` instances used to publish messages. Here is an example using the producer pool to publish a message to the ``news`` exchange: .. code-block:: python from kombu import Connection, Exchange from kombu.common import maybe_declare from kombu.pools import producers # The exchange we send our news articles to. news_exchange = Exchange('news') # The article we want to send article = {'title': 'No cellular coverage on the tube for 2012', 'ingress': 'yadda yadda yadda'} # The broker where our exchange is. connection = Connection('amqp://guest:guest@localhost:5672//') with producers[connection].acquire(block=True) as producer: # maybe_declare knows what entities have already been declared # so we don't have to do so multiple times in the same process. maybe_declare(news_exchange) producer.publish(article, routing_key='domestic', serializer='json', compression='zlib') .. _default-pool-limits: Setting pool limits ------------------- By default every connection instance has a limit of 200 connections. You can change this limit using :func:`kombu.pools.set_limit`.
You are able to grow the pool at runtime, but you can't shrink it, so it is best to set the limit as early as possible after your application starts:: >>> from kombu import pools >>> pools.set_limit(10) Resetting all pools ------------------- You can close all active connections and reset all pool groups by using the :func:`kombu.pools.reset` function. Note that this does not care whether anything is currently using these connections: it will simply pull the connections out from under them, so be very careful before you use it. Kombu will reset the pools if the process is forked, so that forked processes start with clean pool groups. .. _custom-pool-groups: Custom Pool Groups ================== To maintain your own pool groups you should create your own :class:`~kombu.pools.Connections` and :class:`kombu.pools.Producers` instances: .. code-block:: python from kombu import pools from kombu import Connection connections = pools.Connections(limit=100) producers = pools.Producers(limit=connections.limit) connection = Connection('amqp://guest:guest@localhost:5672//') with connections[connection].acquire(block=True): # ... If you want to use the global limit that can be set with :func:`~kombu.pools.set_limit` you can use a special value as the ``limit`` argument: .. code-block:: python from kombu import pools connections = pools.Connections(limit=pools.use_default_limit) kombu-3.0.7/docs/userguide/producers.rst0000644000076500000000000000035612223041316020732 0ustar asksolwheel00000000000000.. _guide-producers: =========== Producers =========== .. _producer-basics: Basics ====== Serialization ============= See :ref:`guide-serialization`. Reference ========= .. autoclass:: kombu.Producer :noindex: :members: kombu-3.0.7/docs/userguide/serialization.rst0000644000076500000000000001405512237554371021611 0ustar asksolwheel00000000000000.. _guide-serialization: =============== Serialization =============== .. _serializers: Serializers =========== By default every message is encoded using `JSON`_, so sending Python data structures like dictionaries and lists works. `YAML`_, `msgpack`_ and Python's built-in `pickle` module are also supported, and if needed you can register any custom serialization scheme you want to use. By default Kombu will only load JSON messages, so if you want to use other serialization formats you must explicitly enable them in your consumer by using the ``accept`` argument: .. code-block:: python Consumer(conn, [queue], accept=['json', 'pickle', 'msgpack']) The accept argument can also include MIME-types. .. _`JSON`: http://www.json.org/ .. _`YAML`: http://yaml.org/ .. _`msgpack`: http://msgpack.sourceforge.net/ Each option has its advantages and disadvantages. `json` -- JSON is supported in many programming languages, is now a standard part of Python (since 2.6), and is fairly fast to decode using modern Python libraries such as `cjson` or `simplejson`. The primary disadvantage to `JSON` is that it limits you to the following data types: strings, Unicode, floats, booleans, dictionaries, and lists. Decimals and dates are notably missing. Also, binary data will be transferred using Base64 encoding, which will cause the transferred data to be around 34% larger than an encoding which supports native binary types. However, if your data fits inside the above constraints and you need cross-language support, the default setting of `JSON` is probably your best choice.
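For instance, the missing support for dates is easy to demonstrate with the standard library alone:

.. code-block:: python

    import json
    from datetime import datetime

    json.dumps({'price': 19.99})           # works: floats are supported
    json.dumps({'when': datetime.now()})   # raises TypeError: not JSON serializable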
`pickle` -- If you have no desire to support any language other than Python, then using the `pickle` encoding will gain you the support of all built-in Python data types (except class instances), smaller messages when sending binary files, and a slight speedup over `JSON` processing. .. admonition:: Pickle and Security The pickle format is very convenient as it can serialize and deserialize almost any object, but this is also a concern for security. Carefully crafted pickle payloads can do almost anything a regular Python program can do, so if you let your consumer automatically decode pickled objects you must make sure to limit access to the broker so that untrusted parties do not have the ability to send messages! By default Kombu uses pickle protocol 2, but this can be changed using the :envvar:`PICKLE_PROTOCOL` environment variable or by changing the global :data:`kombu.serialization.pickle_protocol` flag. `yaml` -- YAML has many of the same characteristics as `json`, except that it natively supports more data types (including dates, recursive references, etc.) However, the Python libraries for YAML are a good bit slower than the libraries for JSON. If you need a more expressive set of data types and need to maintain cross-language compatibility, then `YAML` may be a better fit than the above. To instruct `Kombu` to use an alternate serialization method, use one of the following options. 1. Set the serialization option on a per-producer basis:: >>> producer = Producer(channel, ... exchange=exchange, ... serializer="yaml") 2. Set the serialization option per message:: >>> producer.publish(message, routing_key=rkey, ... serializer="pickle") Note that a `Consumer` does not need the serialization method specified: it can auto-detect the method, as the content type is sent as a message header. .. _sending-raw-data: Sending raw data without Serialization ====================================== In some cases, you don't need your message data to be serialized. If you pass in a plain string or Unicode object as your message, then `Kombu` will not waste cycles serializing/deserializing the data. You can optionally specify a `content_type` and `content_encoding` for the raw data:: >>> import os >>> with open(os.path.expanduser("~/my_picture.jpg"), "rb") as fh: ... producer.publish(fh.read(), content_type="image/jpeg", content_encoding="binary", routing_key=rkey) The `Message` object returned by the `Consumer` class will have a `content_type` and `content_encoding` attribute. .. _serialization-entrypoints: Creating extensions using Setuptools entry-points ================================================= A package can also register new serializers using Setuptools entry-points. The entry-point must provide the name of the serializer along with the path to a tuple providing the rest of the args: ``decoder_function, encoder_function, content_type, content_encoding``. An example entrypoint could be: .. code-block:: python from setuptools import setup setup( entry_points={ 'kombu.serializers': [ 'my_serializer = my_module.serializer:register_args' ] } ) Then the module ``my_module.serializer`` would look like: .. code-block:: python register_args = (my_decoder, my_encoder, 'application/x-mimetype', 'utf-8') When this package is installed the new 'my_serializer' serializer will be supported by Kombu.
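Once installed, the serializer can be selected by name just like the built-in ones (using the name registered in the example above):

.. code-block:: python

    producer.publish(message, routing_key=rkey, serializer='my_serializer')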
.. admonition:: Buffer Objects The decoder function of a custom serializer must support both strings and Python's old-style buffer objects. The Python pickle and json modules usually don't do this via their ``loads`` functions, but you can easily add support by making a wrapper around the ``load`` function that takes file objects instead of strings. Here's an example wrapping :func:`pickle.loads` in such a way: .. code-block:: python import pickle from kombu.serialization import BytesIO, register def loads(s): return pickle.load(BytesIO(s)) register('my_pickle', loads, pickle.dumps, content_type='application/x-pickle2', content_encoding='binary') kombu-3.0.7/docs/userguide/simple.rst0000644000076500000000000000752012237554371020234 0ustar asksolwheel00000000000000.. _guide-simple: ================== Simple Interface ================== .. contents:: :local: :mod:`kombu.simple` is a simple interface to AMQP queueing. It is only slightly different from the :class:`~Queue.Queue` class in the Python Standard Library, which makes it excellent for users with basic messaging needs. Instead of defining exchanges and queues, the simple classes only require two arguments: a connection channel and a name. The name is used as the queue, exchange and routing key. If the need arises, you can specify a :class:`~kombu.Queue` as the name argument instead. In addition, the :class:`~kombu.Connection` comes with shortcuts to create simple queues using the current connection: .. code-block:: python >>> queue = connection.SimpleQueue('myqueue') >>> # ... do something with queue >>> queue.close() This is equivalent to: .. code-block:: python >>> from kombu import SimpleQueue, SimpleBuffer >>> channel = connection.channel() >>> queue = SimpleQueue(channel, 'myqueue') >>> # ... do something with queue >>> channel.close() >>> queue.close() .. _simple-send-receive: Sending and receiving messages ============================== The simple interface defines two classes: :class:`~kombu.simple.SimpleQueue`, and :class:`~kombu.simple.SimpleBuffer`. The former is used for persistent messages, and the latter is used for transient, buffer-like queues. They both have the same interface, so you can use them interchangeably. Here is an example using the :class:`~kombu.simple.SimpleQueue` class to produce and consume logging messages: .. code-block:: python import socket import datetime from time import time from kombu import Connection class Logger(object): def __init__(self, connection, queue_name='log_queue', serializer='json', compression=None): self.queue = connection.SimpleQueue(queue_name) self.serializer = serializer self.compression = compression def log(self, message, level='INFO', context={}): self.queue.put({'message': message, 'level': level, 'context': context, 'hostname': socket.gethostname(), 'timestamp': time()}, serializer=self.serializer, compression=self.compression) def process(self, callback, n=1, timeout=1): for i in range(n): log_message = self.queue.get(block=True, timeout=timeout) entry = log_message.payload # deserialized data. callback(entry) log_message.ack() # remove message from queue def close(self): self.queue.close() if __name__ == '__main__': from contextlib import closing with Connection('amqp://guest:guest@localhost:5672//') as conn: with closing(Logger(conn)) as logger: # Send message logger.log('Error happened while encoding video', level='ERROR', context={'filename': 'cutekitten.mpg'}) # Consume and process message # This is the callback called when a log message is # received. def dump_entry(entry): date = datetime.datetime.fromtimestamp(entry['timestamp']) print('[%s %s %s] %s %r' % ( date, entry['hostname'], entry['level'], entry['message'], entry['context'])) # Process a single message using the callback above. logger.process(dump_entry, n=1)
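The same flow works with :class:`~kombu.simple.SimpleBuffer` when the messages can be transient; a minimal sketch (the queue name here is arbitrary):

.. code-block:: python

    from kombu import Connection

    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        # SimpleBuffer has the same interface as SimpleQueue,
        # but messages are transient rather than persistent.
        with conn.SimpleBuffer('simple_buffer') as buf:
            buf.put({'hello': 'world'})
            message = buf.get(block=True, timeout=1)
            print(message.payload)
            message.ack()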
kombu-3.0.7/examples/0000755000076500000000000000000012247127370015073 5ustar asksolwheel00000000000000kombu-3.0.7/examples/complete_receive.py0000644000076500000000000000351112237554371020763 0ustar asksolwheel00000000000000""" Example of simple consumer that waits for a single message, acknowledges it and exits. """ from kombu import Connection, Exchange, Queue, Consumer, eventloop from pprint import pformat #: By default messages sent to exchanges are persistent (delivery_mode=2), #: and queues and exchanges are durable. def pretty(obj): return pformat(obj, indent=4) #: This is the callback applied when a message is received. def handle_message(body, message): print('Received message: %r' % (body, )) print(' properties:\n%s' % (pretty(message.properties), )) print(' delivery_info:\n%s' % (pretty(message.delivery_info), )) message.ack() #: Create a connection and a channel. #: If hostname, userid, password and virtual_host are not specified #: the values below are the default, but listed here so it can #: be easily changed. with Connection('pyamqp://guest:guest@localhost:5672//') as connection: # The configuration of the message flow is as follows: # gateway_kombu_exchange -> internal_kombu_exchange -> kombu_demo queue gateway_exchange = Exchange('gateway_kombu_demo')(connection) exchange = Exchange('internal_kombu_demo')(connection) gateway_exchange.declare() exchange.declare() exchange.bind_to(gateway_exchange, routing_key='kombu_demo') queue = Queue('kombu_demo', exchange, routing_key='kombu_demo') #: Create consumer using our callback and queue. #: Second argument can also be a list to consume from #: any number of queues. with Consumer(connection, queue, callbacks=[handle_message]): #: Each iteration waits for a single event. Note that this #: event may not be a message, or a message that is to be #: delivered to the consumer's channel, but any event received #: on the connection. for _ in eventloop(connection): pass kombu-3.0.7/examples/complete_send.py0000644000076500000000000000220012237554371020264 0ustar asksolwheel00000000000000""" Example producer that sends a single message and exits. You can use `complete_receive.py` to receive the message sent. """ from kombu import Connection, Producer, Exchange, Queue #: By default messages sent to exchanges are persistent (delivery_mode=2), #: and queues and exchanges are durable. exchange = Exchange('kombu_demo', type='direct') queue = Queue('kombu_demo', exchange, routing_key='kombu_demo') with Connection('amqp://guest:guest@localhost:5672//') as connection: #: Producers are used to publish messages. #: A default exchange and routing key can also be specified #: as arguments to the Producer, but we'd rather specify this explicitly #: at the publish call. producer = Producer(connection) #: Publish the message using the json serializer (which is the default), #: and zlib compression. The kombu consumer will automatically detect #: encoding, serialization and compression used and decode accordingly.
producer.publish({'hello': 'world'}, exchange=exchange, routing_key='kombu_demo', serializer='json', compression='zlib') kombu-3.0.7/examples/hello_consumer.py0000644000076500000000000000043712237554371020473 0ustar asksolwheel00000000000000from kombu import Connection with Connection('amqp://guest:guest@localhost:5672//') as conn: simple_queue = conn.SimpleQueue('simple_queue') message = simple_queue.get(block=True, timeout=1) print("Received: %s" % message.payload) message.ack() simple_queue.close() kombu-3.0.7/examples/hello_publisher.py0000644000076500000000000000047312237554371020635 0ustar asksolwheel00000000000000from kombu import Connection import datetime with Connection('amqp://guest:guest@localhost:5672//') as conn: simple_queue = conn.SimpleQueue('simple_queue') message = 'hello world, sent at %s' % datetime.datetime.today() simple_queue.put(message) print('Sent: %s' % message) simple_queue.close() kombu-3.0.7/examples/simple_eventlet_receive.py0000644000076500000000000000224312237554371022353 0ustar asksolwheel00000000000000""" Example that waits for messages using the simple interface, and exits when the queue is empty. You can use `simple_send.py` (or `complete_send.py`) to send the messages. """ import eventlet from kombu import Connection eventlet.monkey_patch() def wait_many(timeout=1): #: Create connection #: If hostname, userid, password and virtual_host are not specified #: the values below are the default, but listed here so it can #: be easily changed. with Connection('amqp://guest:guest@localhost:5672//') as connection: #: SimpleQueue mimics the interface of the Python Queue module. #: First argument can either be a queue name or a kombu.Queue object. #: If a name, then the queue will be declared with the name as the #: queue name, exchange name and routing key. with connection.SimpleQueue('kombu_demo') as queue: while True: try: message = queue.get(block=False, timeout=timeout) except queue.Empty: break else: message.ack() print(message.payload) eventlet.spawn(wait_many).wait() kombu-3.0.7/examples/simple_eventlet_send.py0000644000076500000000000000217612237554371021667 0ustar asksolwheel00000000000000""" Example that sends messages using the simple interface and exits. You can use `simple_receive.py` (or `complete_receive.py`) to receive the messages sent. """ import eventlet from kombu import Connection eventlet.monkey_patch() def send_many(n): #: Create connection #: If hostname, userid, password and virtual_host are not specified #: the values below are the default, but listed here so it can #: be easily changed. with Connection('amqp://guest:guest@localhost:5672//') as connection: #: SimpleQueue mimics the interface of the Python Queue module. #: First argument can either be a queue name or a kombu.Queue object. #: If a name, then the queue will be declared with the name as the #: queue name, exchange name and routing key. with connection.SimpleQueue('kombu_demo') as queue: def send_message(i): queue.put({'hello': 'world%s' % (i, )}) pool = eventlet.GreenPool(10) for i in range(n): pool.spawn(send_message, i) pool.waitall() if __name__ == '__main__': send_many(10) kombu-3.0.7/examples/simple_receive.py0000644000076500000000000000160512237554371020446 0ustar asksolwheel00000000000000""" Example receiving a message using the SimpleQueue interface. """ from kombu import Connection #: Create connection #: If hostname, userid, password and virtual_host are not specified #: the values below are the default, but listed here so it can #: be easily changed.
with Connection('amqp://guest:guest@localhost:5672//') as conn: #: SimpleQueue mimics the interface of the Python Queue module. #: First argument can either be a queue name or a kombu.Queue object. #: If a name, then the queue will be declared with the name as the queue #: name, exchange name and routing key. with conn.SimpleQueue('kombu_demo') as queue: message = queue.get(block=True, timeout=10) message.ack() print(message.payload) #### #: If you don't use the with statement then you must always # remember to close objects after use: # queue.close() # connection.close() kombu-3.0.7/examples/simple_send.py0000644000076500000000000000170412237554371017755 0ustar asksolwheel00000000000000""" Example that sends a single message and exits using the simple interface. You can use `simple_receive.py` (or `complete_receive.py`) to receive the message sent. """ from kombu import Connection #: Create connection #: If hostname, userid, password and virtual_host are not specified #: the values below are the default, but listed here so it can #: be easily changed. with Connection('amqp://guest:guest@localhost:5672//') as conn: #: SimpleQueue mimics the interface of the Python Queue module. #: First argument can either be a queue name or a kombu.Queue object. #: If a name, then the queue will be declared with the name as the queue #: name, exchange name and routing key. with conn.SimpleQueue('kombu_demo') as queue: queue.put({'hello': 'world'}, serializer='json', compression='zlib') ##### # If you don't use the with statement, you must always # remember to close objects. # queue.close() # connection.close() kombu-3.0.7/examples/simple_task_queue/0000755000076500000000000000000012247127370020612 5ustar asksolwheel00000000000000kombu-3.0.7/examples/simple_task_queue/__init__.py0000644000076500000000000000000012064115765022713 0ustar asksolwheel00000000000000kombu-3.0.7/examples/simple_task_queue/client.py0000644000076500000000000000174412237554371022454 0ustar asksolwheel00000000000000from kombu.pools import producers from .queues import task_exchange priority_to_routing_key = {'high': 'hipri', 'mid': 'midpri', 'low': 'lopri'} def send_as_task(connection, fun, args=(), kwargs={}, priority='mid'): payload = {'fun': fun, 'args': args, 'kwargs': kwargs} routing_key = priority_to_routing_key[priority] with producers[connection].acquire(block=True) as producer: producer.publish(payload, serializer='pickle', compression='bzip2', exchange=task_exchange, declare=[task_exchange], routing_key=routing_key) if __name__ == '__main__': from kombu import Connection from .tasks import hello_task connection = Connection('amqp://guest:guest@localhost:5672//') send_as_task(connection, fun=hello_task, args=('Kombu', ), kwargs={}, priority='high') kombu-3.0.7/examples/simple_task_queue/queues.py0000644000076500000000000000043712064115765022501 0ustar asksolwheel00000000000000from kombu import Exchange, Queue task_exchange = Exchange('tasks', type='direct') task_queues = [Queue('hipri', task_exchange, routing_key='hipri'), Queue('midpri', task_exchange, routing_key='midpri'), Queue('lopri', task_exchange, routing_key='lopri')] kombu-3.0.7/examples/simple_task_queue/tasks.py0000644000076500000000000000007512213641062022303 0ustar asksolwheel00000000000000def hello_task(who="world"): print("Hello %s" % (who, )) kombu-3.0.7/examples/simple_task_queue/worker.py0000644000076500000000000000233312237554371022502 0ustar asksolwheel00000000000000from kombu.mixins import ConsumerMixin from kombu.log import get_logger from kombu.utils
import kwdict, reprcall from .queues import task_queues logger = get_logger(__name__) class Worker(ConsumerMixin): def __init__(self, connection): self.connection = connection def get_consumers(self, Consumer, channel): return [Consumer(queues=task_queues, accept=['pickle', 'json'], callbacks=[self.process_task])] def process_task(self, body, message): fun = body['fun'] args = body['args'] kwargs = body['kwargs'] logger.info('Got task: %s', reprcall(fun.__name__, args, kwargs)) try: fun(*args, **kwdict(kwargs)) except Exception as exc: logger.error('task raised exception: %r', exc) message.ack() if __name__ == '__main__': from kombu import Connection from kombu.utils.debug import setup_logging # setup root logger setup_logging(loglevel='INFO', loggers=['']) with Connection('amqp://guest:guest@localhost:5672//') as conn: try: worker = Worker(conn) worker.run() except KeyboardInterrupt: print('bye bye') kombu-3.0.7/extra/0000755000076500000000000000000012247127370014400 5ustar asksolwheel00000000000000kombu-3.0.7/extra/doc2ghpages0000755000076500000000000000042612064115765016520 0ustar asksolwheel00000000000000#!/bin/bash git checkout master (cd docs; rm -rf .build; make html; (cd .build/html; sphinx-to-github;)) git checkout gh-pages cp -r docs/.build/html/* . git commit . -m "Autogenerated documentation for github." git push origin gh-pages git checkout master kombu-3.0.7/extra/release/0000755000076500000000000000000012247127370016020 5ustar asksolwheel00000000000000kombu-3.0.7/extra/release/bump_version.py0000755000076500000000000001054112237554371021112 0ustar asksolwheel00000000000000#!/usr/bin/env python from __future__ import absolute_import import errno import os import re import sys import subprocess from contextlib import contextmanager from tempfile import NamedTemporaryFile rq = lambda s: s.strip("\"'") str_t = str if sys.version_info[0] >= 3 else basestring def cmd(*args): return subprocess.Popen(args, stdout=subprocess.PIPE).communicate()[0] @contextmanager def no_enoent(): try: yield except OSError as exc: if exc.errno != errno.ENOENT: raise class StringVersion(object): def decode(self, s): s = rq(s) text = "" major, minor, release = s.split(".") if not release.isdigit(): pos = release.index(re.split("\d+", release)[1][0]) release, text = release[:pos], release[pos:] return int(major), int(minor), int(release), text def encode(self, v): return ".".join(map(str, v[:3])) + v[3] to_str = StringVersion().encode from_str = StringVersion().decode class TupleVersion(object): def decode(self, s): v = list(map(rq, s.split(", "))) return (tuple(map(int, v[0:3])) + tuple(["".join(v[3:])])) def encode(self, v): v = list(v) def quote(lit): if isinstance(lit, str_t): return '"%s"' % (lit, ) return str(lit) if not v[-1]: v.pop() return ", ".join(map(quote, v)) class VersionFile(object): def __init__(self, filename): self.filename = filename self._kept = None def _as_orig(self, version): return self.wb % {"version": self.type.encode(version), "kept": self._kept} def write(self, version): pattern = self.regex with no_enoent(): with NamedTemporaryFile() as dest: with open(self.filename) as orig: for line in orig: if pattern.match(line): dest.write(self._as_orig(version)) else: dest.write(line) os.rename(dest.name, self.filename) def parse(self): pattern = self.regex gpos = 0 with open(self.filename) as fh: for line in fh: m = pattern.match(line) if m: if "?P<keep>" in pattern.pattern: self._kept, gpos = m.groupdict()["keep"], 1 return self.type.decode(m.groups()[gpos]) class PyVersion(VersionFile):
regex = re.compile(r'^VERSION\s*=\s*\((.+?)\)') wb = "VERSION = (%(version)s)\n" type = TupleVersion() class SphinxVersion(VersionFile): regex = re.compile(r'^:[Vv]ersion:\s*(.+?)$') wb = ':Version: %(version)s\n' type = StringVersion() class CPPVersion(VersionFile): regex = re.compile(r'^\#\s*define\s*(?P<keep>\w*)VERSION\s+(.+)') wb = '#define %(kept)sVERSION "%(version)s"\n' type = StringVersion() _filetype_to_type = {"py": PyVersion, "rst": SphinxVersion, "c": CPPVersion, "h": CPPVersion} def filetype_to_type(filename): _, _, suffix = filename.rpartition(".") return _filetype_to_type[suffix](filename) def bump(*files, **kwargs): version = kwargs.get("version") files = [filetype_to_type(f) for f in files] versions = [v.parse() for v in files] current = list(reversed(sorted(versions)))[0] # find highest if version: next = from_str(version) else: major, minor, release, text = current if text: raise Exception("Can't bump alpha releases") next = (major, minor, release + 1, text) print("Bump version from %s -> %s" % (to_str(current), to_str(next))) for v in files: print(" writing %r..." % (v.filename, )) v.write(next) print(cmd("git", "commit", "-m", "Bumps version to %s" % (to_str(next), ), *[f.filename for f in files])) print(cmd("git", "tag", "v%s" % (to_str(next), ))) def main(argv=sys.argv, version=None): if not len(argv) > 1: print("Usage: <distdir> [<docfile>] -- <version>") sys.exit(0) if "--" in argv: c = argv.index('--') version = argv[c + 1] argv = argv[:c] bump(*argv[1:], version=version) if __name__ == "__main__": main() kombu-3.0.7/extra/release/doc4allmods0000755000076500000000000000200412223041316020144 0ustar asksolwheel00000000000000#!/bin/bash PACKAGE="$1" SKIP_PACKAGES="$PACKAGE tests management urls" SKIP_FILES="kombu.entity.rst kombu.messaging.rst kombu.transport.django.migrations.rst kombu.transport.django.migrations.0001_initial.rst kombu.transport.django.management.rst kombu.transport.django.management.commands.rst" modules=$(find "$PACKAGE" -name "*.py") failed=0 for module in $modules; do dotted=$(echo $module | sed 's/\//\./g') name=${dotted%.__init__.py} name=${name%.py} rst=$name.rst skip=0 for skip_package in $SKIP_PACKAGES; do [ $(echo "$name" | cut -d. -f 2) == "$skip_package" ] && skip=1 done for skip_file in $SKIP_FILES; do [ "$skip_file" == "$rst" ] && skip=1 done if [ $skip -eq 0 ]; then if [ ! -f "docs/reference/$rst" ]; then if [ !
-f "docs/internals/reference/$rst" ]; then echo $rst :: FAIL failed=1 fi fi fi done exit $failed kombu-3.0.7/extra/release/flakeplus.py0000755000076500000000000000754312237554371020400 0ustar asksolwheel00000000000000#!/usr/bin/env python from __future__ import absolute_import import os import re import sys from collections import defaultdict from unipath import Path RE_COMMENT = r'^\s*\#' RE_NOQA = r'.+?\#\s+noqa+' RE_MULTILINE_COMMENT_O = r'^\s*(?:\'\'\'|""").+?(?:\'\'\'|""")' RE_MULTILINE_COMMENT_S = r'^\s*(?:\'\'\'|""")' RE_MULTILINE_COMMENT_E = r'(?:^|.+?)(?:\'\'\'|""")' RE_WITH = r'(?:^|\s+)with\s+' RE_WITH_IMPORT = r'''from\s+ __future__\s+ import\s+ with_statement''' RE_PRINT = r'''(?:^|\s+)print\((?:"|')(?:\W+?)?[A-Z0-9:]{2,}''' RE_ABS_IMPORT = r'''from\s+ __future__\s+ import\s+ absolute_import''' acc = defaultdict(lambda: {"abs": False, "print": False}) def compile(regex): return re.compile(regex, re.VERBOSE) class FlakePP(object): re_comment = compile(RE_COMMENT) re_ml_comment_o = compile(RE_MULTILINE_COMMENT_O) re_ml_comment_s = compile(RE_MULTILINE_COMMENT_S) re_ml_comment_e = compile(RE_MULTILINE_COMMENT_E) re_abs_import = compile(RE_ABS_IMPORT) re_print = compile(RE_PRINT) re_with_import = compile(RE_WITH_IMPORT) re_with = compile(RE_WITH) re_noqa = compile(RE_NOQA) map = {"abs": True, "print": False, "with": False, "with-used": False} def __init__(self, verbose=False): self.verbose = verbose self.steps = (("abs", self.re_abs_import), ("with", self.re_with_import), ("with-used", self.re_with), ("print", self.re_print)) def analyze_fh(self, fh): steps = self.steps filename = fh.name acc = dict(self.map) index = 0 errors = [0] def error(fmt, **kwargs): errors[0] += 1 self.announce(fmt, **dict(kwargs, filename=filename)) for index, line in enumerate(self.strip_comments(fh)): for key, pattern in steps: if pattern.match(line): acc[key] = True if index: if not acc["abs"]: error("%(filename)s: missing abs import") if acc["with-used"] and not acc["with"]: error("%(filename)s: missing with import") if acc["print"]: error("%(filename)s: left over print statement") return filename, errors[0], acc def analyze_file(self, filename): with open(filename) as fh: return self.analyze_fh(fh) def analyze_tree(self, dir): for dirpath, _, filenames in os.walk(dir): for path in (Path(dirpath, f) for f in filenames): if path.endswith(".py"): yield self.analyze_file(path) def analyze(self, *paths): for path in map(Path, paths): if path.isdir(): for res in self.analyze_tree(path): yield res else: yield self.analyze_file(path) def strip_comments(self, fh): re_comment = self.re_comment re_ml_comment_o = self.re_ml_comment_o re_ml_comment_s = self.re_ml_comment_s re_ml_comment_e = self.re_ml_comment_e re_noqa = self.re_noqa in_ml = False for line in fh.readlines(): if in_ml: if re_ml_comment_e.match(line): in_ml = False else: if re_noqa.match(line) or re_ml_comment_o.match(line): pass elif re_ml_comment_s.match(line): in_ml = True elif re_comment.match(line): pass else: yield line def announce(self, fmt, **kwargs): sys.stderr.write((fmt + "\n") % kwargs) def main(argv=sys.argv, exitcode=0): for _, errors, _ in FlakePP(verbose=True).analyze(*argv[1:]): if errors: exitcode = 1 return exitcode if __name__ == "__main__": sys.exit(main()) kombu-3.0.7/extra/release/jython-run-tests0000755000076500000000000000034512064115765021227 0ustar asksolwheel00000000000000#!/bin/bash base=${1:-.} nosetests --with-xunit \ --xunit-file="$base/nosetests.xml" # coverage doesn't with with jython echo "" > 
"$base/coverage.html" mkdir -p "$base/cover" touch "$base/cover/index.html" kombu-3.0.7/extra/release/removepyc.sh0000755000076500000000000000014012064115765020365 0ustar asksolwheel00000000000000#!/bin/bash (cd "${1:-.}"; find . -name "*.pyc" | xargs rm -- 2>/dev/null) || echo "ok" kombu-3.0.7/extra/release/verify-reference-index.sh0000755000076500000000000000065412064115765022733 0ustar asksolwheel00000000000000#!/bin/bash verify_index() { modules=$(grep "kombu." "$1" | \ perl -ple's/^\s*|\s*$//g;s{\.}{/}g;') retval=0 for module in $modules; do if [ ! -f "$module.py" ]; then if [ ! -f "$module/__init__.py" ]; then echo "Outdated reference: $module" retval=1 fi fi done return $retval } verify_index docs/reference/index.rst kombu-3.0.7/FAQ0000644000076500000000000000000012064115765017607 1kombu-3.0.7/docs/faq.rstustar asksolwheel00000000000000kombu-3.0.7/funtests/0000755000076500000000000000000012247127370015130 5ustar asksolwheel00000000000000kombu-3.0.7/funtests/__init__.py0000644000076500000000000000012412064115765017240 0ustar asksolwheel00000000000000import os import sys sys.path.insert(0, os.pardir) sys.path.insert(0, os.getcwd()) kombu-3.0.7/funtests/setup.cfg0000644000076500000000000000007412064115765016754 0ustar asksolwheel00000000000000[nosetests] verbosity = 1 detailed-errors = 1 where = tests kombu-3.0.7/funtests/setup.py0000644000076500000000000000313712234207745016647 0ustar asksolwheel00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- try: from setuptools import setup from setuptools.command.install import install except ImportError: from ez_setup import use_setuptools use_setuptools() from setuptools import setup # noqa from setuptools.command.install import install # noqa class no_install(install): def run(self, *args, **kwargs): import sys sys.stderr.write(""" ---------------------------------------------------- The Kombu functional test suite cannot be installed. 
---------------------------------------------------- But you can execute the tests by running the command: $ python setup.py test """) setup( name='kombu-funtests', version='DEV', description='Functional test suite for Kombu', author='Ask Solem', author_email='ask@celeryproject.org', url='http://github.com/celery/kombu', platforms=['any'], packages=[], data_files=[], zip_safe=False, cmdclass={'install': no_install}, test_suite='nose.collector', build_requires=[ 'nose', 'nose-cover3', 'unittest2', 'coverage>=3.0', 'simplejson', 'PyYAML', 'msgpack-python', 'pymongo', 'couchdb', 'kazoo', 'beanstalkc', 'kombu-sqlalchemy', 'django', 'django-kombu', ], classifiers=[ 'Operating System :: OS Independent', 'Programming Language :: Python', 'License :: OSI Approved :: BSD License', 'Intended Audience :: Developers', ], long_description='Do not install this package', ) kombu-3.0.7/funtests/tests/0000755000076500000000000000000012247127370016272 5ustar asksolwheel00000000000000kombu-3.0.7/funtests/tests/__init__.py0000644000076500000000000000022512234207745020403 0ustar asksolwheel00000000000000import os import sys sys.path.insert(0, os.path.join(os.getcwd(), os.pardir)) print(sys.path[0]) sys.path.insert(0, os.getcwd()) print(sys.path[0]) kombu-3.0.7/funtests/tests/test_amqp.py0000644000076500000000000000017412234207745020644 0ustar asksolwheel00000000000000from funtests import transport class test_pyamqp(transport.TransportCase): transport = 'pyamqp' prefix = 'pyamqp' kombu-3.0.7/funtests/tests/test_amqplib.py0000644000076500000000000000047112234207745021333 0ustar asksolwheel00000000000000from nose import SkipTest from funtests import transport class test_amqplib(transport.TransportCase): transport = 'amqplib' prefix = 'amqplib' def before_connect(self): try: import amqplib # noqa except ImportError: raise SkipTest('amqplib not installed') kombu-3.0.7/funtests/tests/test_beanstalk.py0000644000076500000000000000071212234207745021650 0ustar asksolwheel00000000000000from funtests import transport from nose import SkipTest class test_beanstalk(transport.TransportCase): transport = 'beanstalk' prefix = 'beanstalk' event_loop_max = 10 message_size_limit = 47662 def before_connect(self): try: import beanstalkc # noqa except ImportError: raise SkipTest('beanstalkc not installed') def after_connect(self, connection): connection.channel().client kombu-3.0.7/funtests/tests/test_couchdb.py0000644000076500000000000000064012234207745021313 0ustar asksolwheel00000000000000from nose import SkipTest from funtests import transport class test_couchdb(transport.TransportCase): transport = 'couchdb' prefix = 'couchdb' event_loop_max = 100 def before_connect(self): try: import couchdb # noqa except ImportError: raise SkipTest('couchdb not installed') def after_connect(self, connection): connection.channel().client kombu-3.0.7/funtests/tests/test_django.py0000644000076500000000000000213312237554371021150 0ustar asksolwheel00000000000000from nose import SkipTest from kombu.tests.case import redirect_stdouts from funtests import transport class test_django(transport.TransportCase): transport = 'django' prefix = 'django' event_loop_max = 10 def before_connect(self): @redirect_stdouts def setup_django(stdout, stderr): try: import django # noqa except ImportError: raise SkipTest('django not installed') from django.conf import settings if not settings.configured: settings.configure( DATABASE_ENGINE='sqlite3', DATABASE_NAME=':memory:', DATABASES={ 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': ':memory:', }, }, 
INSTALLED_APPS=('kombu.transport.django', ), ) from django.core.management import call_command call_command('syncdb') setup_django() kombu-3.0.7/funtests/tests/test_librabbitmq.py0000644000076500000000000000051512234207745022175 0ustar asksolwheel00000000000000from nose import SkipTest from funtests import transport class test_librabbitmq(transport.TransportCase): transport = 'librabbitmq' prefix = 'librabbitmq' def before_connect(self): try: import librabbitmq # noqa except ImportError: raise SkipTest('librabbitmq not installed') kombu-3.0.7/funtests/tests/test_mongodb.py0000644000076500000000000000464412237554371021344 0ustar asksolwheel00000000000000from nose import SkipTest from kombu import Consumer, Producer, Exchange, Queue from kombu.five import range from kombu.utils import nested from funtests import transport class test_mongodb(transport.TransportCase): transport = 'mongodb' prefix = 'mongodb' event_loop_max = 100 def before_connect(self): try: import pymongo # noqa except ImportError: raise SkipTest('pymongo not installed') def after_connect(self, connection): connection.channel().client # evaluate connection. self.c = self.connection # shortcut def test_fanout(self, name='test_mongodb_fanout'): if not self.verify_alive(): return c = self.connection self.e = Exchange(name, type='fanout') self.q = Queue(name, exchange=self.e, routing_key=name) self.q2 = Queue(name + '2', exchange=self.e, routing_key=name + '2') channel = c.default_channel producer = Producer(channel, self.e) consumer1 = Consumer(channel, self.q) consumer2 = Consumer(channel, self.q2) self.q2(channel).declare() for i in range(10): producer.publish({'foo': i}, routing_key=name) for i in range(10): producer.publish({'foo': i}, routing_key=name + '2') _received1 = [] _received2 = [] def callback1(message_data, message): _received1.append(message) message.ack() def callback2(message_data, message): _received2.append(message) message.ack() consumer1.register_callback(callback1) consumer2.register_callback(callback2) with nested(consumer1, consumer2): while 1: if len(_received1) + len(_received2) == 20: break c.drain_events(timeout=60) self.assertEqual(len(_received1) + len(_received2), 20) # queue.delete for i in range(10): producer.publish({'foo': i}, routing_key=name) self.assertTrue(self.q(channel).get()) self.q(channel).delete() self.q(channel).declare() self.assertIsNone(self.q(channel).get()) # queue.purge for i in range(10): producer.publish({'foo': i}, routing_key=name + '2') self.assertTrue(self.q2(channel).get()) self.q2(channel).purge() self.assertIsNone(self.q2(channel).get()) kombu-3.0.7/funtests/tests/test_pyamqp.py0000644000076500000000000000017412234207745021215 0ustar asksolwheel00000000000000from funtests import transport class test_pyamqp(transport.TransportCase): transport = 'pyamqp' prefix = 'pyamqp' kombu-3.0.7/funtests/tests/test_redis.py0000644000076500000000000000110512234207745021007 0ustar asksolwheel00000000000000from nose import SkipTest from funtests import transport class test_redis(transport.TransportCase): transport = 'redis' prefix = 'redis' def before_connect(self): try: import redis # noqa except ImportError: raise SkipTest('redis not installed') def after_connect(self, connection): client = connection.channel().client client.info() def test_cant_connect_raises_connection_error(self): conn = self.get_connection(port=65534) self.assertRaises(conn.connection_errors, conn.connect) kombu-3.0.7/funtests/tests/test_SLMQ.py0000644000076500000000000000167512237554371020474 0ustar 
asksolwheel00000000000000 from funtests import transport from nose import SkipTest import os class test_SLMQ(transport.TransportCase): transport = "SLMQ" prefix = "slmq" event_loop_max = 100 message_size_limit = 4192 reliable_purge = False suppress_disorder_warning = True # does not guarantee FIFO order, # even in simple cases. def before_connect(self): if "SLMQ_ACCOUNT" not in os.environ: raise SkipTest("Missing envvar SLMQ_ACCOUNT") if "SL_USERNAME" not in os.environ: raise SkipTest("Missing envvar SL_USERNAME") if "SL_API_KEY" not in os.environ: raise SkipTest("Missing envvar SL_API_KEY") if "SLMQ_HOST" not in os.environ: raise SkipTest("Missing envvar SLMQ_HOST") if "SLMQ_SECURE" not in os.environ: raise SkipTest("Missing envvar SLMQ_SECURE") def after_connect(self, connection): pass kombu-3.0.7/funtests/tests/test_sqla.py0000644000076500000000000000062212234207745020644 0ustar asksolwheel00000000000000from nose import SkipTest from funtests import transport class test_sqla(transport.TransportCase): transport = 'sqlalchemy' prefix = 'sqlalchemy' event_loop_max = 10 connection_options = {'hostname': 'sqla+sqlite://'} def before_connect(self): try: import sqlalchemy # noqa except ImportError: raise SkipTest('sqlalchemy not installed') kombu-3.0.7/funtests/tests/test_SQS.py0000644000076500000000000000153012234207745020351 0ustar asksolwheel00000000000000import os from nose import SkipTest from funtests import transport class test_SQS(transport.TransportCase): transport = 'SQS' prefix = 'sqs' event_loop_max = 100 message_size_limit = 4192 # SQS max body size / 2. reliable_purge = False suppress_disorder_warning = True # does not guarantee FIFO order, # even in simple cases. def before_connect(self): try: import boto # noqa except ImportError: raise SkipTest('boto not installed') if 'AWS_ACCESS_KEY_ID' not in os.environ: raise SkipTest('Missing envvar AWS_ACCESS_KEY_ID') if 'AWS_SECRET_ACCESS_KEY' not in os.environ: raise SkipTest('Missing envvar AWS_SECRET_ACCESS_KEY') def after_connect(self, connection): connection.channel().sqs kombu-3.0.7/funtests/tests/test_zookeeper.py0000644000076500000000000000064212234207745021711 0ustar asksolwheel00000000000000from nose import SkipTest from funtests import transport class test_zookeeper(transport.TransportCase): transport = 'zookeeper' prefix = 'zookeeper' event_loop_max = 100 def before_connect(self): try: import kazoo # noqa except ImportError: raise SkipTest('kazoo not installed') def after_connect(self, connection): connection.channel().client kombu-3.0.7/funtests/transport.py0000644000076500000000000002262412237554371017550 0ustar asksolwheel00000000000000from __future__ import absolute_import, print_function import random import socket import string import sys import time import unittest2 as unittest import warnings import weakref from nose import SkipTest from kombu import Connection from kombu import Exchange, Queue from kombu.five import range from kombu.tests.case import skip_if_quick if sys.version_info >= (2, 5): from hashlib import sha256 as _digest else: from sha import new as _digest # noqa def say(msg): print(msg, file=sys.stderr) def _nobuf(x): return [str(i) if isinstance(i, buffer) else i for i in x] def consumeN(conn, consumer, n=1, timeout=30): messages = [] def callback(message_data, message): messages.append(message_data) message.ack() prev, consumer.callbacks = consumer.callbacks, [callback] consumer.consume() seconds = 0 while True: try: conn.drain_events(timeout=1) except socket.timeout: seconds += 1 msg = 'Received 
%s/%s messages. %s seconds passed.' % ( len(messages), n, seconds) if seconds >= timeout: raise socket.timeout(msg) if seconds > 1: say(msg) if len(messages) >= n: break consumer.cancel() consumer.callback = prev return messages class TransportCase(unittest.TestCase): transport = None prefix = None sep = '.' userid = None password = None event_loop_max = 100 connection_options = {} suppress_disorder_warning = False reliable_purge = True connected = False skip_test_reason = None message_size_limit = None def before_connect(self): pass def after_connect(self, connection): pass def setUp(self): if self.transport: try: self.before_connect() except SkipTest as exc: self.skip_test_reason = str(exc) else: self.do_connect() self.exchange = Exchange(self.prefix, 'direct') self.queue = Queue(self.prefix, self.exchange, self.prefix) def purge(self, names): chan = self.connection.channel() total = 0 for queue in names: while 1: # ensure the queue is completely empty purged = chan.queue_purge(queue=queue) if not purged: break total += purged chan.close() return total def get_connection(self, **options): if self.userid: options.setdefault('userid', self.userid) if self.password: options.setdefault('password', self.password) return Connection(transport=self.transport, **options) def do_connect(self): self.connection = self.get_connection(**self.connection_options) try: self.connection.connect() self.after_connect(self.connection) except self.connection.connection_errors: self.skip_test_reason = '%s transport cannot connect' % ( self.transport, ) else: self.connected = True def verify_alive(self): if self.transport: if not self.connected: raise SkipTest(self.skip_test_reason) return True def purge_consumer(self, consumer): return self.purge([queue.name for queue in consumer.queues]) def test_produce__consume(self): if not self.verify_alive(): return chan1 = self.connection.channel() consumer = chan1.Consumer(self.queue) self.purge_consumer(consumer) producer = chan1.Producer(self.exchange) producer.publish({'foo': 'bar'}, routing_key=self.prefix) message = consumeN(self.connection, consumer) self.assertDictEqual(message[0], {'foo': 'bar'}) chan1.close() self.purge([self.queue.name]) def test_purge(self): if not self.verify_alive(): return chan1 = self.connection.channel() consumer = chan1.Consumer(self.queue) self.purge_consumer(consumer) producer = chan1.Producer(self.exchange) for i in range(10): producer.publish({'foo': 'bar'}, routing_key=self.prefix) if self.reliable_purge: self.assertEqual(consumer.purge(), 10) self.assertEqual(consumer.purge(), 0) else: purged = 0 while purged < 9: purged += self.purge_consumer(consumer) def _digest(self, data): return _digest(data).hexdigest() @skip_if_quick def test_produce__consume_large_messages( self, bytes=1048576, n=10, charset=string.punctuation + string.letters + string.digits): if not self.verify_alive(): return bytes = min(x for x in [bytes, self.message_size_limit] if x) messages = [''.join(random.choice(charset) for j in range(bytes)) + '--%s' % n for i in range(n)] digests = [] chan1 = self.connection.channel() consumer = chan1.Consumer(self.queue) self.purge_consumer(consumer) producer = chan1.Producer(self.exchange) for i, message in enumerate(messages): producer.publish({'text': message, 'i': i}, routing_key=self.prefix) digests.append(self._digest(message)) received = [(msg['i'], msg['text']) for msg in consumeN(self.connection, consumer, n)] self.assertEqual(len(received), n) ordering = [i for i, _ in received] if ordering != list(range(n)) and
not self.suppress_disorder_warning: warnings.warn( '%s did not deliver messages in FIFO order: %r' % ( self.transport, ordering)) for i, text in received: if text != messages[i]: raise AssertionError('%i: %r is not %r' % ( i, text[-100:], messages[i][-100:])) self.assertEqual(self._digest(text), digests[i]) chan1.close() self.purge([self.queue.name]) def P(self, rest): return '%s%s%s' % (self.prefix, self.sep, rest) def test_produce__consume_multiple(self): if not self.verify_alive(): return chan1 = self.connection.channel() producer = chan1.Producer(self.exchange) b1 = Queue(self.P('b1'), self.exchange, 'b1')(chan1) b2 = Queue(self.P('b2'), self.exchange, 'b2')(chan1) b3 = Queue(self.P('b3'), self.exchange, 'b3')(chan1) [q.declare() for q in (b1, b2, b3)] self.purge([b1.name, b2.name, b3.name]) producer.publish('b1', routing_key='b1') producer.publish('b2', routing_key='b2') producer.publish('b3', routing_key='b3') chan1.close() chan2 = self.connection.channel() consumer = chan2.Consumer([b1, b2, b3]) messages = consumeN(self.connection, consumer, 3) self.assertItemsEqual(_nobuf(messages), ['b1', 'b2', 'b3']) chan2.close() self.purge([self.P('b1'), self.P('b2'), self.P('b3')]) def test_timeout(self): if not self.verify_alive(): return chan = self.connection.channel() self.purge([self.queue.name]) consumer = chan.Consumer(self.queue) self.assertRaises( socket.timeout, self.connection.drain_events, timeout=0.3, ) consumer.cancel() chan.close() def test_basic_get(self): if not self.verify_alive(): return chan1 = self.connection.channel() producer = chan1.Producer(self.exchange) chan2 = self.connection.channel() queue = Queue(self.P('basic_get'), self.exchange, 'basic_get') queue = queue(chan2) queue.declare() producer.publish({'basic.get': 'this'}, routing_key='basic_get') chan1.close() for i in range(self.event_loop_max): m = queue.get() if m: break time.sleep(0.1) self.assertEqual(m.payload, {'basic.get': 'this'}) self.purge([queue.name]) chan2.close() def test_cyclic_reference_transport(self): if not self.verify_alive(): return def _createref(): conn = self.get_connection() conn.transport conn.close() return weakref.ref(conn) self.assertIsNone(_createref()()) def test_cyclic_reference_connection(self): if not self.verify_alive(): return def _createref(): conn = self.get_connection() conn.connect() conn.close() return weakref.ref(conn) self.assertIsNone(_createref()()) def test_cyclic_reference_channel(self): if not self.verify_alive(): return def _createref(): conn = self.get_connection() conn.connect() chanrefs = [] try: for i in range(100): channel = conn.channel() chanrefs.append(weakref.ref(channel)) channel.close() finally: conn.close() return chanrefs for chanref in _createref(): self.assertIsNone(chanref()) def tearDown(self): if self.transport and self.connected: self.connection.close() kombu-3.0.7/INSTALL0000644000076500000000000000060312064115765014307 0ustar asksolwheel00000000000000Installation ============ You can install ``kombu`` either via the Python Package Index (PyPI) or from source. 
To install using ``pip``:: $ pip install kombu To install using ``easy_install``:: $ easy_install kombu If you have downloaded a source tarball you can install it as follows:: $ python setup.py build # python setup.py install # as root kombu-3.0.7/kombu/0000755000076500000000000000000012247127370014372 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/__init__.py0000600000076500000240000000732412247127165016510 0ustar asksolstaff00000000000000"""Messaging library for Python""" from __future__ import absolute_import from collections import namedtuple version_info_t = namedtuple( 'version_info_t', ('major', 'minor', 'micro', 'releaselevel', 'serial'), ) VERSION = version_info_t(3, 0, 7, '', '') __version__ = '{0.major}.{0.minor}.{0.micro}{0.releaselevel}'.format(VERSION) __author__ = 'Ask Solem' __contact__ = 'ask@celeryproject.org' __homepage__ = 'http://kombu.readthedocs.org' __docformat__ = 'restructuredtext en' # -eof meta- import os import sys if sys.version_info < (2, 6): # pragma: no cover raise Exception('Kombu 3.0 requires Python 2.6 or later.') STATICA_HACK = True globals()['kcah_acitats'[::-1].upper()] = False if STATICA_HACK: # pragma: no cover # This is never executed, but tricks static analyzers (PyDev, PyCharm, # pylint, etc.) into knowing the types of these symbols, and what # they contain. from kombu.connection import Connection, BrokerConnection # noqa from kombu.entity import Exchange, Queue, binding # noqa from kombu.messaging import Consumer, Producer # noqa from kombu.pools import connections, producers # noqa from kombu.utils.url import parse_url # noqa from kombu.common import eventloop, uuid # noqa from kombu.serialization import ( # noqa enable_insecure_serializers, disable_insecure_serializers, ) # Lazy loading. # - See werkzeug/__init__.py for the rationale behind this.
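# A rough illustration of the effect (hypothetical interpreter session,
# not part of the source): attribute access on the package imports the
# submodule that actually defines the name, on first use only.
#
#     >>> import kombu
#     >>> kombu.Connection   # resolved via kombu.connection on first access
#     <class 'kombu.connection.Connection'>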
from types import ModuleType all_by_module = { 'kombu.connection': ['Connection', 'BrokerConnection'], 'kombu.entity': ['Exchange', 'Queue', 'binding'], 'kombu.messaging': ['Consumer', 'Producer'], 'kombu.pools': ['connections', 'producers'], 'kombu.utils.url': ['parse_url'], 'kombu.common': ['eventloop', 'uuid'], 'kombu.serialization': ['enable_insecure_serializers', 'disable_insecure_serializers'], } object_origins = {} for module, items in all_by_module.items(): for item in items: object_origins[item] = module class module(ModuleType): def __getattr__(self, name): if name in object_origins: module = __import__(object_origins[name], None, None, [name]) for extra_name in all_by_module[module.__name__]: setattr(self, extra_name, getattr(module, extra_name)) return getattr(module, name) return ModuleType.__getattribute__(self, name) def __dir__(self): result = list(new_module.__all__) result.extend(('__file__', '__path__', '__doc__', '__all__', '__docformat__', '__name__', 'VERSION', '__package__', '__version__', '__author__', '__contact__', '__homepage__')) return result # 2.5 does not define __package__ try: package = __package__ except NameError: # pragma: no cover package = 'kombu' # keep a reference to this module so that it's not garbage collected old_module = sys.modules[__name__] new_module = sys.modules[__name__] = module(__name__) new_module.__dict__.update({ '__file__': __file__, '__path__': __path__, '__doc__': __doc__, '__all__': tuple(object_origins), '__version__': __version__, '__author__': __author__, '__contact__': __contact__, '__homepage__': __homepage__, '__docformat__': __docformat__, '__package__': package, 'VERSION': VERSION}) if os.environ.get('KOMBU_LOG_DEBUG'): # pragma: no cover os.environ.update(KOMBU_LOG_CHANNEL='1', KOMBU_LOG_CONNECTION='1') from .utils import debug debug.setup_logging() kombu-3.0.7/kombu/abstract.py0000644000076500000000000000641112237554371016555 0ustar asksolwheel00000000000000""" kombu.abstract ============== Object utilities. """ from __future__ import absolute_import from copy import copy from .connection import maybe_channel from .exceptions import NotBoundError from .utils import ChannelPromise __all__ = ['Object', 'MaybeChannelBound'] def unpickle_dict(cls, kwargs): return cls(**kwargs) class Object(object): """Common base class supporting automatic kwargs->attributes handling, and cloning.""" attrs = () def __init__(self, *args, **kwargs): any = lambda v: v for name, type_ in self.attrs: value = kwargs.get(name) if value is not None: setattr(self, name, (type_ or any)(value)) else: try: getattr(self, name) except AttributeError: setattr(self, name, None) def as_dict(self, recurse=False): def f(obj, type): if recurse and isinstance(obj, Object): return obj.as_dict(recurse=True) return type(obj) if type else obj return dict( (attr, f(getattr(self, attr), type)) for attr, type in self.attrs ) def __reduce__(self): return unpickle_dict, (self.__class__, self.as_dict()) def __copy__(self): return self.__class__(**self.as_dict()) class MaybeChannelBound(Object): """Mixin for classes that can be bound to an AMQP channel.""" _channel = None _is_bound = False #: Defines whether maybe_declare can skip declaring this entity twice.
can_cache_declaration = False def __call__(self, channel): """`self(channel) -> self.bind(channel)`""" return self.bind(channel) def bind(self, channel): """Create copy of the instance that is bound to a channel.""" return copy(self).maybe_bind(channel) def maybe_bind(self, channel): """Bind instance to channel if not already bound.""" if not self.is_bound and channel: self._channel = maybe_channel(channel) self.when_bound() self._is_bound = True return self def revive(self, channel): """Revive channel after the connection has been re-established. Used by :meth:`~kombu.Connection.ensure`. """ if self.is_bound: self._channel = channel self.when_bound() def when_bound(self): """Callback called when the class is bound.""" pass def __repr__(self, item=''): item = item or type(self).__name__ if self.is_bound: return '<{0} bound to chan:{1}>'.format( item or type(self).__name__, self.channel.channel_id) return '<unbound {0}>'.format(item) @property def is_bound(self): """Flag set if the channel is bound.""" return self._is_bound and self._channel is not None @property def channel(self): """Current channel if the object is bound.""" channel = self._channel if channel is None: raise NotBoundError( "Can't call method on {0} not bound to a channel".format( type(self).__name__)) if isinstance(channel, ChannelPromise): channel = self._channel = channel() return channel kombu-3.0.7/kombu/async/0000755000076500000000000000000012247127370015507 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/async/__init__.py0000644000076500000000000000046312243671543017625 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- """ kombu.async =========== Event loop implementation. """ from __future__ import absolute_import from .hub import Hub, get_event_loop, set_event_loop from kombu.utils.eventio import READ, WRITE, ERR __all__ = ['READ', 'WRITE', 'ERR', 'Hub', 'get_event_loop', 'set_event_loop'] kombu-3.0.7/kombu/async/hub.py0000644000076500000000000002634012243671543016646 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- """ kombu.async.hub =============== Event loop implementation. """ from __future__ import absolute_import import errno from collections import deque from contextlib import contextmanager from time import sleep from types import GeneratorType as generator from amqp import promise from kombu.five import Empty, items, range from kombu.log import get_logger from kombu.utils import cached_property, fileno, reprcall from kombu.utils.compat import get_errno from kombu.utils.eventio import READ, WRITE, ERR, poll from .timer import Timer __all__ = ['Hub', 'get_event_loop', 'set_event_loop'] logger = get_logger(__name__) _current_loop = None class Stop(BaseException): """Stops the event loop.""" def _raise_stop_error(): raise Stop() @contextmanager def _dummy_context(*args, **kwargs): yield def get_event_loop(): return _current_loop def set_event_loop(loop): global _current_loop _current_loop = loop return loop def repr_flag(flag): return '{0}{1}{2}'.format('R' if flag & READ else '', 'W' if flag & WRITE else '', '!' if flag & ERR else '') def _rcb(obj): if obj is None: return '' if isinstance(obj, str): return obj if isinstance(obj, tuple): cb, args = obj return reprcall(cb.__name__, args=args) return obj.__name__ class Hub(object): """Event loop object. :keyword timer: Specify timer object. """ #: Flag set if reading from an fd will not block. READ = READ #: Flag set if writing to an fd will not block. WRITE = WRITE #: Flag set on error, and the fd should be read from asap.
ERR = ERR #: List of callbacks to be called when the loop is exiting, #: applied with the hub instance as sole argument. on_close = None def __init__(self, timer=None): self.timer = timer if timer is not None else Timer() self.readers = {} self.writers = {} self.on_tick = set() self.on_close = set() self._ready = deque() self._running = False self._loop = None # The eventloop (in celery.worker.loops) # will merge fds in this set and then instead of calling # the callback for each ready fd it will call the # :attr:`consolidate_callback` with the list of ready_fds # as an argument. This API is internal and is only # used by the multiprocessing pool to find inqueues # that are ready to write. self.consolidate = set() self.consolidate_callback = None self.propagate_errors = () self._create_poller() def reset(self): self.close() self._create_poller() def _create_poller(self): self.poller = poll() self._register_fd = self.poller.register self._unregister_fd = self.poller.unregister def _close_poller(self): if self.poller is not None: self.poller.close() def stop(self): self.call_soon(_raise_stop_error) def __repr__(self): return '<Hub@{0:#x}: R:{1} W:{2}>'.format( id(self), len(self.readers), len(self.writers), ) def fire_timers(self, min_delay=1, max_delay=10, max_timers=10, propagate=()): timer = self.timer delay = None if timer and timer._queue: for i in range(max_timers): delay, entry = next(self.scheduler) if entry is None: break try: entry() except propagate: raise except (MemoryError, AssertionError): raise except OSError as exc: if get_errno(exc) == errno.ENOMEM: raise logger.error('Error in timer: %r', exc, exc_info=1) except Exception as exc: logger.error('Error in timer: %r', exc, exc_info=1) return min(max(delay or 0, min_delay), max_delay) def add(self, fd, callback, flags, args=(), consolidate=False): try: self.poller.register(fd, flags) except ValueError: self._discard(fd) raise else: dest = self.readers if flags & READ else self.writers if consolidate: self.consolidate.add(fd) dest[fileno(fd)] = None else: dest[fileno(fd)] = callback, args def remove(self, fd): fd = fileno(fd) self._unregister(fd) self._discard(fd) def run_forever(self): self._running = True try: while 1: try: self.run_once() except Stop: break finally: self._running = False def run_once(self): try: next(self.loop) except StopIteration: self._loop = None def call_soon(self, callback, *args): handle = promise(callback, args) self._ready.append(handle) return handle def call_later(self, delay, callback, *args): return self.timer.call_after(delay, callback, args) def call_at(self, when, callback, *args): return self.timer.call_at(when, callback, args) def call_repeatedly(self, delay, callback, *args): return self.timer.call_repeatedly(delay, callback, args) def add_reader(self, fds, callback, *args): return self.add(fds, callback, READ | ERR, args) def add_writer(self, fds, callback, *args): return self.add(fds, callback, WRITE, args) def remove_reader(self, fd): writable = fd in self.writers on_write = self.writers.get(fd) try: self._unregister(fd) self._discard(fd) finally: if writable: cb, args = on_write self.add(fd, cb, WRITE, args) def remove_writer(self, fd): readable = fd in self.readers on_read = self.readers.get(fd) try: self._unregister(fd) self._discard(fd) finally: if readable: cb, args = on_read self.add(fd, cb, READ | ERR, args) def _unregister(self, fd): try: self.poller.unregister(fd) except (KeyError, OSError): pass def close(self, *args): self._close_poller() [self._unregister(fd) for fd in self.readers]
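# (Note: the reader fds are unregistered from the poller above before the
# callback tables are cleared below, so a closed or reset Hub cannot fire
# stale callbacks for file descriptors it no longer tracks.)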
self.readers.clear() [self._unregister(fd) for fd in self.writers] self.writers.clear() self.consolidate.clear() for callback in self.on_close: callback(self) def _discard(self, fd): fd = fileno(fd) self.readers.pop(fd, None) self.writers.pop(fd, None) self.consolidate.discard(fd) def create_loop(self, generator=generator, sleep=sleep, min=min, next=next, Empty=Empty, StopIteration=StopIteration, KeyError=KeyError, READ=READ, WRITE=WRITE, ERR=ERR): readers, writers = self.readers, self.writers poll = self.poller.poll fire_timers = self.fire_timers hub_remove = self.remove scheduled = self.timer._queue consolidate = self.consolidate consolidate_callback = self.consolidate_callback on_tick = self.on_tick todo = self._ready propagate = self.propagate_errors while 1: for tick_callback in on_tick: tick_callback() while todo: item = todo.popleft() if item: item() poll_timeout = fire_timers(propagate=propagate) if scheduled else 1 #print('[[[HUB]]]: %s' % (self.repr_active(), )) if readers or writers: to_consolidate = [] try: events = poll(poll_timeout) #print('[EVENTS]: %s' % (self.repr_events(events or []), )) except ValueError: # Issue 882 raise StopIteration() for fileno, event in events or (): if fileno in consolidate and \ writers.get(fileno) is None: to_consolidate.append(fileno) continue cb = cbargs = None try: if event & READ: cb, cbargs = readers[fileno] elif event & WRITE: cb, cbargs = writers[fileno] elif event & ERR: try: cb, cbargs = (readers.get(fileno) or writers.get(fileno)) except TypeError: pass except (KeyError, Empty): hub_remove(fileno) continue if cb is None: continue if isinstance(cb, generator): try: next(cb) except OSError as exc: if get_errno(exc) != errno.EBADF: raise hub_remove(fileno) except StopIteration: pass except Exception: hub_remove(fileno) raise else: try: cb(*cbargs) except Empty: pass if to_consolidate: consolidate_callback(to_consolidate) else: # no sockets yet, startup is probably not done. sleep(min(poll_timeout, 0.1)) yield def repr_active(self): return ', '.join(self._repr_readers() + self._repr_writers()) def repr_events(self, events): return ', '.join( '{0}({1})->{2}'.format( _rcb(self._callback_for(fd, fl, '(GONE)')), fd, repr_flag(fl), ) for fd, fl in events ) def _repr_readers(self): return ['({0}){1}->{2}'.format(fd, _rcb(cb), repr_flag(READ | ERR)) for fd, cb in items(self.readers)] def _repr_writers(self): return ['({0}){1}->{2}'.format(fd, _rcb(cb), repr_flag(WRITE)) for fd, cb in items(self.writers)] def _callback_for(self, fd, flag, *default): try: if flag & READ: return self.readers[fd] if flag & WRITE: if fd in self.consolidate: return self.consolidate_callback return self.writers[fd] except KeyError: if default: return default[0] raise @cached_property def scheduler(self): return iter(self.timer) @property def loop(self): if self._loop is None: self._loop = self.create_loop() return self._loop kombu-3.0.7/kombu/async/semaphore.py0000644000076500000000000000552712243671543020057 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- """ kombu.async.semaphore ===================== Semaphores and concurrency primitives. """ from __future__ import absolute_import from collections import deque __all__ = ['DummyLock', 'LaxBoundedSemaphore'] class LaxBoundedSemaphore(object): """Asynchronous Bounded Semaphore. Lax means that the value will stay within the specified range even if released more times than it was acquired. 
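A typical use is throttling how many messages are handled at once, releasing the semaphore when each message is done (an illustrative sketch only; ``process`` is a hypothetical user callback, not part of this module)::

    semaphore = LaxBoundedSemaphore(10)

    def on_message(body, message):
        # deferred until a slot is free when 10 handlers are active
        semaphore.acquire(handle, body, message)

    def handle(body, message):
        try:
            process(body)  # hypothetical user code
            message.ack()
        finally:
            semaphore.release()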
Example: >>> from __future__ import print_function >>> printf = print >>> x = LaxBoundedSemaphore(2) >>> x.acquire(printf, 'HELLO 1') HELLO 1 >>> x.acquire(printf, 'HELLO 2') HELLO 2 >>> x.acquire(printf, 'HELLO 3') >>> x._waiting # private, do not access directly [(printf, ('HELLO 3', ))] >>> x.release() HELLO 3 """ def __init__(self, value): self.initial_value = self.value = value self._waiting = deque() self._add_waiter = self._waiting.append self._pop_waiter = self._waiting.popleft def acquire(self, callback, *partial_args): """Acquire semaphore, applying ``callback`` if the resource is available. :param callback: The callback to apply. :param \*partial_args: partial arguments to callback. """ value = self.value if value <= 0: self._add_waiter((callback, partial_args)) return False else: self.value = max(value - 1, 0) callback(*partial_args) return True def release(self): """Release semaphore. If there are any waiters this will apply the first waiter that is waiting for the resource (FIFO order). """ self.value = min(self.value + 1, self.initial_value) try: waiter, args = self._pop_waiter() except IndexError: pass else: waiter(*args) def grow(self, n=1): """Change the size of the semaphore to accept more users.""" self.initial_value += n self.value += n [self.release() for _ in range(n)] def shrink(self, n=1): """Change the size of the semaphore to accept fewer users.""" self.initial_value = max(self.initial_value - n, 0) self.value = max(self.value - n, 0) def clear(self): """Reset the semaphore, which also wipes out any waiting callbacks.""" self._waiting.clear() self.value = self.initial_value def __repr__(self): return '<{0} at {1:#x} value:{2} waiting:{3}>'.format( self.__class__.__name__, id(self), self.value, len(self._waiting), ) class DummyLock(object): """Pretending to be a lock.""" def __enter__(self): return self def __exit__(self, *exc_info): pass kombu-3.0.7/kombu/async/timer.py0000644000076500000000000001444612243671543017204 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- """ kombu.async.timer ================= Timer scheduling Python callbacks.
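For example (a rough sketch, assuming Python 3 so that ``print`` is usable as a callback; in practice the scheduler is normally driven by the Hub event loop)::

    timer = Timer()
    timer.call_after(3.0, print, ('hello', ))
    delay, entry = next(iter(timer))
    # sleep for ``delay`` seconds if entry is None,
    # otherwise apply ``entry()`` and poll again.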
""" from __future__ import absolute_import import heapq import sys from collections import namedtuple from datetime import datetime from functools import wraps from time import time from weakref import proxy as weakrefproxy from kombu.five import monotonic from kombu.log import get_logger from kombu.utils.compat import timedelta_seconds try: from pytz import utc except ImportError: utc = None DEFAULT_MAX_INTERVAL = 2 EPOCH = datetime.utcfromtimestamp(0).replace(tzinfo=utc) IS_PYPY = hasattr(sys, 'pypy_version_info') logger = get_logger(__name__) __all__ = ['Entry', 'Timer', 'to_timestamp'] scheduled = namedtuple('scheduled', ('eta', 'priority', 'entry')) def to_timestamp(d, default_timezone=utc): if isinstance(d, datetime): if d.tzinfo is None: d = d.replace(tzinfo=default_timezone) return timedelta_seconds(d - EPOCH) return d class Entry(object): if not IS_PYPY: # pragma: no cover __slots__ = ( 'fun', 'args', 'kwargs', 'tref', 'cancelled', '_last_run', '__weakref__', ) def __init__(self, fun, args=None, kwargs=None): self.fun = fun self.args = args or [] self.kwargs = kwargs or {} self.tref = weakrefproxy(self) self._last_run = None self.cancelled = False def __call__(self): return self.fun(*self.args, **self.kwargs) def cancel(self): try: self.tref.cancelled = True except ReferenceError: # pragma: no cover pass def __repr__(self): return ' hash(other) def __eq__(self, other): return hash(self) == hash(other) def __ne__(self, other): return not self.__eq__(other) class Timer(object): """ETA scheduler.""" Entry = Entry on_error = None def __init__(self, max_interval=None, on_error=None, **kwargs): self.max_interval = float(max_interval or DEFAULT_MAX_INTERVAL) self.on_error = on_error or self.on_error self._queue = [] def __enter__(self): return self def __exit__(self, *exc_info): self.stop() def call_at(self, eta, fun, args=(), kwargs={}, priority=0): return self.enter_at(self.Entry(fun, args, kwargs), eta, priority) def call_after(self, secs, fun, args=(), kwargs={}, priority=0): return self.enter_after(secs, self.Entry(fun, args, kwargs), priority) def call_repeatedly(self, secs, fun, args=(), kwargs={}, priority=0): tref = self.Entry(fun, args, kwargs) @wraps(fun) def _reschedules(*args, **kwargs): last, now = tref._last_run, monotonic() lsince = (now - tref._last_run) if last else secs try: if lsince and lsince >= secs: tref._last_run = now return fun(*args, **kwargs) finally: if not tref.cancelled: last = tref._last_run next = secs - (now - last) if last else secs self.enter_after(next, tref, priority) tref.fun = _reschedules tref._last_run = None return self.enter_after(secs, tref, priority) def enter_at(self, entry, eta=None, priority=0, time=time): """Enter function into the scheduler. :param entry: Item to enter. :keyword eta: Scheduled time as a :class:`datetime.datetime` object. :keyword priority: Unused. 
""" if eta is None: eta = time() if isinstance(eta, datetime): try: eta = to_timestamp(eta) except Exception as exc: if not self.handle_error(exc): raise return return self._enter(eta, priority, entry) def enter_after(self, secs, entry, priority=0, time=time): return self.enter_at(entry, time() + secs, priority) def _enter(self, eta, priority, entry, push=heapq.heappush): push(self._queue, scheduled(eta, priority, entry)) return entry def apply_entry(self, entry): try: entry() except Exception as exc: if not self.handle_error(exc): logger.error('Error in timer: %r', exc, exc_info=True) def handle_error(self, exc_info): if self.on_error: self.on_error(exc_info) return True def stop(self): pass def __iter__(self, min=min, nowfun=time, pop=heapq.heappop, push=heapq.heappush): """This iterator yields a tuple of ``(entry, wait_seconds)``, where if entry is :const:`None` the caller should wait for ``wait_seconds`` until it polls the schedule again.""" max_interval = self.max_interval queue = self._queue while 1: if queue: eventA = queue[0] now, eta = nowfun(), eventA[0] if now < eta: yield min(eta - now, max_interval), None else: eventB = pop(queue) if eventB is eventA: entry = eventA[2] if not entry.cancelled: yield None, entry continue else: push(queue, eventB) else: yield None, None def clear(self): self._queue[:] = [] # atomic, without creating a new list. def cancel(self, tref): tref.cancel() def __len__(self): return len(self._queue) def __nonzero__(self): return True @property def queue(self, _pop=heapq.heappop): """Snapshot of underlying datastructure.""" events = list(self._queue) return [_pop(v) for v in [events] * len(events)] @property def schedule(self): return self kombu-3.0.7/kombu/clocks.py0000644000076500000000000001061312237554371016227 0ustar asksolwheel00000000000000""" kombu.clocks ============ Logical Clocks and Synchronization. """ from __future__ import absolute_import from threading import Lock from itertools import islice from operator import itemgetter from .five import zip __all__ = ['LamportClock', 'timetuple'] R_CLOCK = '_lamport(clock={0}, timestamp={1}, id={2} {3!r})' class timetuple(tuple): """Tuple of event clock information. Can be used as part of a heap to keep events ordered. :param clock: Event clock value. :param timestamp: Event UNIX timestamp value. :param id: Event host id (e.g. ``hostname:pid``). :param obj: Optional obj to associate with this event. """ __slots__ = () def __new__(cls, clock, timestamp, id, obj=None): return tuple.__new__(cls, (clock, timestamp, id, obj)) def __repr__(self): return R_CLOCK.format(*self) def __getnewargs__(self): return tuple(self) def __lt__(self, other): # 0: clock 1: timestamp 3: process id try: A, B = self[0], other[0] # uses logical clock value first if A and B: # use logical clock if available if A == B: # equal clocks use lower process id return self[2] < other[2] return A < B return self[1] < other[1] # ... or use timestamp except IndexError: return NotImplemented __gt__ = lambda self, other: other < self __le__ = lambda self, other: not other < self __ge__ = lambda self, other: not self < other clock = property(itemgetter(0)) timestamp = property(itemgetter(1)) id = property(itemgetter(2)) obj = property(itemgetter(3)) class LamportClock(object): """Lamport's logical clock. From Wikipedia: A Lamport logical clock is a monotonically incrementing software counter maintained in each process. 
It follows some simple rules: * A process increments its counter before each event in that process; * When a process sends a message, it includes its counter value with the message; * On receiving a message, the receiver process sets its counter to be greater than the maximum of its own value and the received value before it considers the message received. Conceptually, this logical clock can be thought of as a clock that only has meaning in relation to messages moving between processes. When a process receives a message, it resynchronizes its logical clock with the sender. .. seealso:: * `Lamport timestamps`_ * `Lamports distributed mutex`_ .. _`Lamport Timestamps`: http://en.wikipedia.org/wiki/Lamport_timestamps .. _`Lamports distributed mutex`: http://bit.ly/p99ybE *Usage* When sending a message use :meth:`forward` to increment the clock, when receiving a message use :meth:`adjust` to sync with the time stamp of the incoming message. """ #: The clock's current value. value = 0 def __init__(self, initial_value=0, Lock=Lock): self.value = initial_value self.mutex = Lock() def adjust(self, other): with self.mutex: self.value = max(self.value, other) + 1 return self.value def forward(self): with self.mutex: self.value += 1 return self.value def sort_heap(self, h): """List of tuples containing at least two elements, representing an event, where the first element is the event's scalar clock value, and the second element is the id of the process (usually ``"hostname:pid"``): ``sh([(clock, processid, ...?), (...)])`` The list must already be sorted, which is why we refer to it as a heap. The tuple will not be unpacked, so more than two elements can be present. Will return the latest event. """ if h[0][0] == h[1][0]: same = [] for PN in zip(h, islice(h, 1, None)): if PN[0][0] != PN[1][0]: break # Prev and Next's clocks differ same.append(PN[0]) # return first item sorted by process id return sorted(same, key=lambda event: event[1])[0] # clock values unique, return first item return h[0] def __str__(self): return str(self.value) def __repr__(self): return '<LamportClock: {0.value}>'.format(self) kombu-3.0.7/kombu/common.py0000644000076500000000000002663712243671543016244 0ustar asksolwheel00000000000000""" kombu.common ============ Common Utilities. """ from __future__ import absolute_import import os import socket import threading import uuid as _uuid from collections import deque from contextlib import contextmanager from functools import partial from itertools import count from .entity import Exchange, Queue from .exceptions import ChannelError from .five import range from .log import get_logger from .messaging import Consumer as _Consumer from .serialization import registry as serializers from .utils import uuid try: from _thread import get_ident except ImportError: # pragma: no cover try: # noqa from thread import get_ident # noqa except ImportError: # pragma: no cover from dummy_thread import get_ident # noqa __all__ = ['Broadcast', 'maybe_declare', 'uuid', 'itermessages', 'send_reply', 'collect_replies', 'insured', 'drain_consumer', 'eventloop'] #: Prefetch count can't exceed short. PREFETCH_COUNT_MAX = 0xFFFF logger = get_logger(__name__) _nodeid = _uuid.getnode() def generate_oid(node_id, process_id, thread_id, instance): ent = '%x-%x-%x-%x' % (node_id, process_id, thread_id, id(instance)) return str(_uuid.uuid3(_uuid.NAMESPACE_OID, ent)) def oid_from(instance): return generate_oid(_nodeid, os.getpid(), get_ident(), instance) class Broadcast(Queue): """Convenience class used to define broadcast queues.
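For example (a minimal sketch; the broker URL and exchange name are illustrative)::

    from kombu import Connection
    from kombu.common import Broadcast

    bcast = Broadcast('bcast_events')  # fanout exchange 'bcast_events'

    def handle(body, message):
        message.ack()

    with Connection('amqp://') as connection:
        with connection.Consumer([bcast], callbacks=[handle]):
            connection.drain_events(timeout=1)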
Every queue instance will have a unique name, and both the queue and exchange are configured with auto deletion. :keyword name: This is used as the name of the exchange. :keyword queue: By default a unique id is used for the queue name for every consumer. You can specify a custom queue name here. :keyword \*\*kwargs: See :class:`~kombu.Queue` for a list of additional keyword arguments supported. """ def __init__(self, name=None, queue=None, **kwargs): return super(Broadcast, self).__init__( name=queue or 'bcast.%s' % (uuid(), ), **dict({'alias': name, 'auto_delete': True, 'exchange': Exchange(name, type='fanout')}, **kwargs)) def declaration_cached(entity, channel): return entity in channel.connection.client.declared_entities def maybe_declare(entity, channel=None, retry=False, **retry_policy): if not entity.is_bound: assert channel entity = entity.bind(channel) if retry: return _imaybe_declare(entity, **retry_policy) return _maybe_declare(entity) def _maybe_declare(entity): channel = entity.channel if not channel.connection: raise ChannelError('channel disconnected') if entity.can_cache_declaration: declared = channel.connection.client.declared_entities ident = hash(entity) if ident not in declared: entity.declare() declared.add(ident) return True return False entity.declare() return True def _imaybe_declare(entity, **retry_policy): return entity.channel.connection.client.ensure( entity, _maybe_declare, **retry_policy)(entity) def drain_consumer(consumer, limit=1, timeout=None, callbacks=None): acc = deque() def on_message(body, message): acc.append((body, message)) consumer.callbacks = [on_message] + (callbacks or []) with consumer: for _ in eventloop(consumer.channel.connection.client, limit=limit, timeout=timeout, ignore_timeouts=True): try: yield acc.popleft() except IndexError: pass def itermessages(conn, channel, queue, limit=1, timeout=None, Consumer=_Consumer, callbacks=None, **kwargs): return drain_consumer(Consumer(channel, queues=[queue], **kwargs), limit=limit, timeout=timeout, callbacks=callbacks) def eventloop(conn, limit=None, timeout=None, ignore_timeouts=False): """Best practice generator wrapper around ``Connection.drain_events``. Able to drain events forever, with a limit, and optionally ignoring timeout errors (a timeout of 1 is often used in environments where the socket can get "stuck", and is a best practice for Kombu consumers). **Examples** ``eventloop`` is a generator:: from kombu.common import eventloop def run(connection): it = eventloop(connection, timeout=1, ignore_timeouts=True) next(it) # one event consumed, or timed out. for _ in eventloop(connection, timeout=1, ignore_timeouts=True): pass # loop forever. It also takes an optional limit parameter, and timeout errors are propagated by default:: for _ in eventloop(connection, limit=1, timeout=1): pass .. seealso:: :func:`itermessages`, which is an event loop bound to one or more consumers, that yields any messages received. """ for i in limit and range(limit) or count(): try: yield conn.drain_events(timeout=timeout) except socket.timeout: if timeout and not ignore_timeouts: # pragma: no cover raise except socket.error: # pragma: no cover pass def send_reply(exchange, req, msg, producer=None, retry=False, retry_policy=None, **props): """Send reply for request. :param exchange: Reply exchange :param req: Original request, a message with a ``reply_to`` property. :param producer: Producer instance :param retry: If true, retry publishing according to the ``retry_policy`` argument. :param retry_policy: Retry settings.
:param props: Extra properties """ producer.publish( msg, exchange=exchange, retry=retry, retry_policy=retry_policy, **dict({'routing_key': req.properties['reply_to'], 'correlation_id': req.properties.get('correlation_id'), 'serializer': serializers.type_to_name[req.content_type], 'content_encoding': req.content_encoding}, **props) ) def collect_replies(conn, channel, queue, *args, **kwargs): """Generator collecting replies from ``queue``""" no_ack = kwargs.setdefault('no_ack', True) received = False try: for body, message in itermessages(conn, channel, queue, *args, **kwargs): if not no_ack: message.ack() received = True yield body finally: if received: channel.after_reply_message_received(queue.name) def _ensure_errback(exc, interval): logger.error( 'Connection error: %r. Retry in %ss\n', exc, interval, exc_info=True, ) @contextmanager def _ignore_errors(conn): try: yield except conn.connection_errors + conn.channel_errors: pass def ignore_errors(conn, fun=None, *args, **kwargs): """Ignore connection and channel errors. The first argument must be a connection object, or any other object with ``connection_errors`` and ``channel_errors`` attributes. Can be used as a function: .. code-block:: python def example(connection): ignore_errors(connection, consumer.channel.close) or as a context manager: .. code-block:: python def example(connection): with ignore_errors(connection): consumer.channel.close() .. note:: Connection and channel errors should be properly handled, and not ignored. Using this function is only acceptable in a cleanup phase, like when a connection is lost or at shutdown. """ if fun: with _ignore_errors(conn): return fun(*args, **kwargs) return _ignore_errors(conn) def revive_connection(connection, channel, on_revive=None): if on_revive: on_revive(channel) def insured(pool, fun, args, kwargs, errback=None, on_revive=None, **opts): """Ensures function performing broker commands completes despite intermittent connection failures.""" errback = errback or _ensure_errback with pool.acquire(block=True) as conn: conn.ensure_connection(errback=errback) # we cache the channel for subsequent calls, this has to be # reset on revival. channel = conn.default_channel revive = partial(revive_connection, conn, on_revive=on_revive) insured = conn.autoretry(fun, channel, errback=errback, on_revive=revive, **opts) retval, _ = insured(*args, **dict(kwargs, connection=conn)) return retval class QoS(object): """Thread safe increment/decrement of a channel's prefetch_count. :param callback: Function used to set new prefetch count, e.g. ``consumer.qos`` or ``channel.basic_qos``. Will be called with a single ``prefetch_count`` keyword argument. :param initial_value: Initial prefetch count value. **Example usage** .. code-block:: python >>> from kombu import Consumer, Connection >>> connection = Connection('amqp://') >>> consumer = Consumer(connection) >>> qos = QoS(consumer.qos, initial_value=2) >>> qos.update() # set initial >>> qos.value 2 >>> def in_some_thread(): ... qos.increment_eventually() >>> def in_some_other_thread(): ... qos.decrement_eventually() >>> while 1: ... if qos.prev != qos.value: ... qos.update() # prefetch changed so update. It can be used with any function supporting a ``prefetch_count`` keyword argument:: >>> channel = connection.channel() >>> QoS(channel.basic_qos, 10) >>> def set_qos(prefetch_count): ...
print('prefetch count now: %r' % (prefetch_count, )) >>> QoS(set_qos, 10) """ prev = None def __init__(self, callback, initial_value): self.callback = callback self._mutex = threading.RLock() self.value = initial_value or 0 def increment_eventually(self, n=1): """Increment the value, but do not update the channel's QoS. The MainThread will be responsible for calling :meth:`update` when necessary. """ with self._mutex: if self.value: self.value = self.value + max(n, 0) return self.value def decrement_eventually(self, n=1): """Decrement the value, but do not update the channel's QoS. The MainThread will be responsible for calling :meth:`update` when necessary. """ with self._mutex: if self.value: self.value -= n if self.value < 1: self.value = 1 return self.value def set(self, pcount): """Set channel prefetch_count setting.""" if pcount != self.prev: new_value = pcount if pcount > PREFETCH_COUNT_MAX: logger.warn('QoS: Disabled: prefetch_count exceeds %r', PREFETCH_COUNT_MAX) new_value = 0 logger.debug('basic.qos: prefetch_count->%s', new_value) self.callback(prefetch_count=new_value) self.prev = pcount return pcount def update(self): """Update prefetch count with current value.""" with self._mutex: return self.set(self.value) kombu-3.0.7/kombu/compat.py0000644000076500000000000001464312237554371016233 0ustar asksolwheel00000000000000""" kombu.compat ============ Carrot compatible interface for :class:`Publisher` and :class:`Producer`. See http://packages.python.org/pypi/carrot for documentation. """ from __future__ import absolute_import from itertools import count from . import messaging from .entity import Exchange, Queue from .five import items __all__ = ['Publisher', 'Consumer'] # XXX compat attribute entry_to_queue = Queue.from_dict def _iterconsume(connection, consumer, no_ack=False, limit=None): consumer.consume(no_ack=no_ack) for iteration in count(0): # for infinity if limit and iteration >= limit: raise StopIteration yield connection.drain_events() class Publisher(messaging.Producer): exchange = '' exchange_type = 'direct' routing_key = '' durable = True auto_delete = False _closed = False def __init__(self, connection, exchange=None, routing_key=None, exchange_type=None, durable=None, auto_delete=None, channel=None, **kwargs): if channel: connection = channel self.exchange = exchange or self.exchange self.exchange_type = exchange_type or self.exchange_type self.routing_key = routing_key or self.routing_key if auto_delete is not None: self.auto_delete = auto_delete if durable is not None: self.durable = durable if not isinstance(self.exchange, Exchange): self.exchange = Exchange(name=self.exchange, type=self.exchange_type, routing_key=self.routing_key, auto_delete=self.auto_delete, durable=self.durable) super(Publisher, self).__init__(connection, self.exchange, **kwargs) def send(self, *args, **kwargs): return self.publish(*args, **kwargs) def close(self): super(Publisher, self).close() self._closed = True def __enter__(self): return self def __exit__(self, *exc_info): self.close() @property def backend(self): return self.channel class Consumer(messaging.Consumer): queue = '' exchange = '' routing_key = '' exchange_type = 'direct' durable = True exclusive = False auto_delete = False _closed = False def __init__(self, connection, queue=None, exchange=None, routing_key=None, exchange_type=None, durable=None, exclusive=None, auto_delete=None, **kwargs): self.backend = connection.channel() if durable is not None: self.durable = durable if exclusive is not None:
self.exclusive = exclusive if auto_delete is not None: self.auto_delete = auto_delete self.queue = queue or self.queue self.exchange = exchange or self.exchange self.exchange_type = exchange_type or self.exchange_type self.routing_key = routing_key or self.routing_key exchange = Exchange(self.exchange, type=self.exchange_type, routing_key=self.routing_key, auto_delete=self.auto_delete, durable=self.durable) queue = Queue(self.queue, exchange=exchange, routing_key=self.routing_key, durable=self.durable, exclusive=self.exclusive, auto_delete=self.auto_delete) super(Consumer, self).__init__(self.backend, queue, **kwargs) def revive(self, channel): self.backend = channel super(Consumer, self).revive(channel) def close(self): self.cancel() self.backend.close() self._closed = True def __enter__(self): return self def __exit__(self, *exc_info): self.close() def __iter__(self): return self.iterqueue(infinite=True) def fetch(self, no_ack=None, enable_callbacks=False): if no_ack is None: no_ack = self.no_ack message = self.queues[0].get(no_ack) if message: if enable_callbacks: self.receive(message.payload, message) return message def process_next(self): raise NotImplementedError('Use fetch(enable_callbacks=True)') def discard_all(self, filterfunc=None): if filterfunc is not None: raise NotImplementedError( 'discard_all does not implement filters') return self.purge() def iterconsume(self, limit=None, no_ack=None): return _iterconsume(self.connection, self, no_ack, limit) def wait(self, limit=None): it = self.iterconsume(limit) return list(it) def iterqueue(self, limit=None, infinite=False): for items_since_start in count(): # for infinity item = self.fetch() if (not infinite and item is None) or \ (limit and items_since_start >= limit): raise StopIteration yield item class ConsumerSet(messaging.Consumer): def __init__(self, connection, from_dict=None, consumers=None, channel=None, **kwargs): if channel: self._provided_channel = True self.backend = channel else: self._provided_channel = False self.backend = connection.channel() queues = [] if consumers: for consumer in consumers: queues.extend(consumer.queues) if from_dict: for queue_name, queue_options in items(from_dict): queues.append(Queue.from_dict(queue_name, **queue_options)) super(ConsumerSet, self).__init__(self.backend, queues, **kwargs) def iterconsume(self, limit=None, no_ack=False): return _iterconsume(self.connection, self, no_ack, limit) def discard_all(self): return self.purge() def add_consumer_from_dict(self, queue, **options): return self.add_queue_from_dict(queue, **options) def add_consumer(self, consumer): for queue in consumer.queues: self.add_queue(queue) def revive(self, channel): self.backend = channel super(ConsumerSet, self).revive(channel) def close(self): self.cancel() if not self._provided_channel: self.channel.close() kombu-3.0.7/kombu/compression.py0000644000076500000000000000373612237554371017322 0ustar asksolwheel00000000000000""" kombu.compression ================= Compression utilities. """ from __future__ import absolute_import from kombu.utils.encoding import ensure_bytes, bytes_to_str import zlib _aliases = {} _encoders = {} _decoders = {} __all__ = ['register', 'encoders', 'get_encoder', 'get_decoder', 'compress', 'decompress'] def register(encoder, decoder, content_type, aliases=[]): """Register new compression method. :param encoder: Function used to compress text. :param decoder: Function used to decompress previously compressed text. :param content_type: The mime type this compression method identifies as. 
:param aliases: A list of names to associate with this compression method. """ _encoders[content_type] = encoder _decoders[content_type] = decoder _aliases.update((alias, content_type) for alias in aliases) def encoders(): """Return a list of available compression methods.""" return list(_encoders) def get_encoder(t): """Get encoder by alias name.""" t = _aliases.get(t, t) return _encoders[t], t def get_decoder(t): """Get decoder by alias name.""" return _decoders[_aliases.get(t, t)] def compress(body, content_type): """Compress text. :param body: The text to compress. :param content_type: mime-type of compression method to use. """ encoder, content_type = get_encoder(content_type) return encoder(ensure_bytes(body)), content_type def decompress(body, content_type): """Decompress compressed text. :param body: Previously compressed text to uncompress. :param content_type: mime-type of compression method used. """ return bytes_to_str(get_decoder(content_type)(body)) register(zlib.compress, zlib.decompress, 'application/x-gzip', aliases=['gzip', 'zlib']) try: import bz2 except ImportError: pass # Jython? else: register(bz2.compress, bz2.decompress, 'application/x-bz2', aliases=['bzip2', 'bzip']) kombu-3.0.7/kombu/connection.py0000644000076500000000000011345712243752157017121 0ustar asksolwheel00000000000000""" kombu.connection ================ Broker connection and pools. """ from __future__ import absolute_import import os import socket from contextlib import contextmanager from functools import partial from itertools import count, cycle from operator import itemgetter try: from urllib.parse import quote except ImportError: # Py2 from urllib import quote # noqa # jython breaks on relative import for .exceptions for some reason # (Issue #112) from kombu import exceptions from .five import Empty, range, string_t, text_t, LifoQueue as _LifoQueue from .log import get_logger from .transport import get_transport_cls, supports_librabbitmq from .utils import cached_property, retry_over_time, shufflecycle from .utils.compat import OrderedDict from .utils.functional import lazy from .utils.url import parse_url, urlparse __all__ = ['Connection', 'ConnectionPool', 'ChannelPool'] RESOLVE_ALIASES = {'pyamqp': 'amqp', 'librabbitmq': 'amqp'} _LOG_CONNECTION = os.environ.get('KOMBU_LOG_CONNECTION', False) _LOG_CHANNEL = os.environ.get('KOMBU_LOG_CHANNEL', False) logger = get_logger(__name__) roundrobin_failover = cycle failover_strategies = { 'round-robin': roundrobin_failover, 'shuffle': shufflecycle, } class Connection(object): """A connection to the broker. :param URL: Broker URL, or a list of URLs, e.g. .. code-block:: python Connection('amqp://guest:guest@localhost:5672//') Connection('amqp://foo;amqp://bar', failover_strategy='round-robin') Connection('redis://', transport_options={ 'visibility_timeout': 3000, }) import ssl Connection('amqp://', login_method='EXTERNAL', ssl={ 'ca_certs': '/etc/pki/tls/certs/something.crt', 'keyfile': '/etc/something/system.key', 'certfile': '/etc/something/system.cert', 'cert_reqs': ssl.CERT_REQUIRED, }) .. admonition:: SSL compatibility SSL currently only works with the py-amqp & amqplib transports. For other transports you can use stunnel. :keyword hostname: Default host name/address if not provided in the URL. :keyword userid: Default user name if not provided in the URL. :keyword password: Default password if not provided in the URL. :keyword virtual_host: Default virtual host if not provided in the URL. :keyword port: Default port if not provided in the URL. 
:keyword ssl: Use SSL to connect to the server. Default is ``False``. May not be supported by the specified transport. :keyword transport: Default transport if not specified in the URL. :keyword connect_timeout: Timeout in seconds for connecting to the server. May not be supported by the specified transport. :keyword transport_options: A dict of additional connection arguments to pass to alternate kombu channel implementations. Consult the transport documentation for available options. :keyword heartbeat: Heartbeat interval in int/float seconds. Note that if heartbeats are enabled then the :meth:`heartbeat_check` method must be called at an interval twice the frequency of the heartbeat: e.g. if the heartbeat is 10, then the heartbeats must be checked every 5 seconds (the rate can also be controlled by the ``rate`` argument to :meth:`heartbeat_check``). .. note:: The connection is established lazily when needed. If you need the connection to be established, then force it by calling :meth:`connect`:: >>> conn = Connection('amqp://') >>> conn.connect() and always remember to close the connection:: >>> conn.release() """ port = None virtual_host = '/' connect_timeout = 5 _closed = None _connection = None _default_channel = None _transport = None _logger = False uri_prefix = None #: The cache of declared entities is per connection, #: in case the server loses data. declared_entities = None #: Iterator returning the next broker URL to try in the event #: of connection failure (initialized by :attr:`failover_strategy`). cycle = None #: Additional transport specific options, #: passed on to the transport instance. transport_options = None #: Strategy used to select new hosts when reconnecting after connection #: failure. One of "round-robin", "shuffle" or any custom iterator #: constantly yielding new URLs to try. failover_strategy = 'round-robin' #: Heartbeat value, currently only supported by the py-amqp transport. heartbeat = None hostname = userid = password = ssl = login_method = None def __init__(self, hostname='localhost', userid=None, password=None, virtual_host=None, port=None, insist=False, ssl=False, transport=None, connect_timeout=5, transport_options=None, login_method=None, uri_prefix=None, heartbeat=0, failover_strategy='round-robin', alternates=None, **kwargs): alt = [] if alternates is None else alternates # have to spell the args out, just to get nice docstrings :( params = self._initial_params = { 'hostname': hostname, 'userid': userid, 'password': password, 'virtual_host': virtual_host, 'port': port, 'insist': insist, 'ssl': ssl, 'transport': transport, 'connect_timeout': connect_timeout, 'login_method': login_method, 'heartbeat': heartbeat } if hostname and not isinstance(hostname, string_t): alt.extend(hostname) hostname = alt[0] if hostname and '://' in hostname: if ';' in hostname: alt.extend(hostname.split(';')) hostname = alt[0] if '+' in hostname[:hostname.index('://')]: # e.g. sqla+mysql://root:masterkey@localhost/ params['transport'], params['hostname'] = \ hostname.split('+', 1) transport = self.uri_prefix = params['transport'] else: transport = transport or urlparse(hostname).scheme if get_transport_cls(transport).can_parse_url: # set the transport so that the default is not used. 
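# (For URL-parsing transports, e.g. 'redis://...', only the scheme is
# used to select the transport here; the full URL is left in hostname
# and parsed later by the transport itself.)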
params['transport'] = transport else: # we must parse the URL params.update(parse_url(hostname)) self._init_params(**params) # fallback hosts self.alt = alt self.failover_strategy = failover_strategies.get( failover_strategy or 'round-robin') or failover_strategy if self.alt: self.cycle = self.failover_strategy(self.alt) next(self.cycle) # skip first entry if transport_options is None: transport_options = {} self.transport_options = transport_options if _LOG_CONNECTION: # pragma: no cover self._logger = True if uri_prefix: self.uri_prefix = uri_prefix self.declared_entities = set() def switch(self, url): """Switch connection parameters to use a new URL (does not reconnect)""" self.close() self._closed = False self._init_params(**dict(self._initial_params, **parse_url(url))) def maybe_switch_next(self): """Switch to next URL given by the current failover strategy (if any).""" if self.cycle: self.switch(next(self.cycle)) def _init_params(self, hostname, userid, password, virtual_host, port, insist, ssl, transport, connect_timeout, login_method, heartbeat): transport = transport or 'amqp' if transport == 'amqp' and supports_librabbitmq(): transport = 'librabbitmq' self.hostname = hostname self.userid = userid self.password = password self.login_method = login_method self.virtual_host = virtual_host or self.virtual_host self.port = port or self.port self.insist = insist self.connect_timeout = connect_timeout self.ssl = ssl self.transport_cls = transport self.heartbeat = heartbeat and float(heartbeat) def register_with_event_loop(self, loop): self.transport.register_with_event_loop(self.connection, loop) def _debug(self, msg, *args, **kwargs): if self._logger: # pragma: no cover fmt = '[Kombu connection:0x{id:x}] {msg}' logger.debug(fmt.format(id=id(self), msg=text_t(msg)), *args, **kwargs) def connect(self): """Establish connection to server immediately.""" self._closed = False return self.connection def channel(self): """Create and return a new channel.""" self._debug('create channel') chan = self.transport.create_channel(self.connection) if _LOG_CHANNEL: # pragma: no cover from .utils.debug import Logwrapped return Logwrapped(chan, 'kombu.channel', '[Kombu channel:{0.channel_id}] ') return chan def heartbeat_check(self, rate=2): """Verify that heartbeats are sent and received. If the current transport does not support heartbeats then this is a noop operation. :keyword rate: Rate is how often the tick is called compared to the actual heartbeat value. E.g. if the heartbeat is set to 3 seconds, and the tick is called every 3 / 2 seconds, then the rate is 2. """ return self.transport.heartbeat_check(self.connection, rate=rate) def drain_events(self, **kwargs): """Wait for a single event from the server. :keyword timeout: Timeout in seconds before we give up. :raises :exc:`socket.timeout`: if the timeout is exceeded. """ return self.transport.drain_events(self.connection, **kwargs) def maybe_close_channel(self, channel): """Close given channel, but ignore connection and channel errors.""" try: channel.close() except (self.connection_errors + self.channel_errors): pass def _do_close_self(self): # Close only connection and channel(s), but not transport. 
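# (The declared-entities cache is dropped here as well: after a
# reconnect the broker may have lost these queues/exchanges, so they
# must be declared again on the new connection.)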
self.declared_entities.clear() if self._default_channel: self.maybe_close_channel(self._default_channel) if self._connection: try: self.transport.close_connection(self._connection) except self.connection_errors + (AttributeError, socket.error): pass self._connection = None def _close(self): """Really close connection, even if part of a connection pool.""" self._do_close_self() if self._transport: self._transport.client = None self._transport = None self._debug('closed') self._closed = True def collect(self, socket_timeout=None): # amqp requires communication to close, we don't need that just # to clear out references, Transport._collect can also be implemented # by other transports that want fast after fork try: gc_transport = self._transport._collect except AttributeError: _timeo = socket.getdefaulttimeout() socket.setdefaulttimeout(socket_timeout) try: self._close() except socket.timeout: pass finally: socket.setdefaulttimeout(_timeo) else: gc_transport(self._connection) if self._transport: self._transport.client = None self._transport = None self.declared_entities.clear() self._connection = None def release(self): """Close the connection (if open).""" self._close() close = release def ensure_connection(self, errback=None, max_retries=None, interval_start=2, interval_step=2, interval_max=30, callback=None): """Ensure we have a connection to the server. If not retry establishing the connection with the settings specified. :keyword errback: Optional callback called each time the connection can't be established. Arguments provided are the exception raised and the interval that will be slept ``(exc, interval)``. :keyword max_retries: Maximum number of times to retry. If this limit is exceeded the connection error will be re-raised. :keyword interval_start: The number of seconds we start sleeping for. :keyword interval_step: How many seconds added to the interval for each retry. :keyword interval_max: Maximum number of seconds to sleep between each retry. :keyword callback: Optional callback that is called for every internal iteration (1 s) """ def on_error(exc, intervals, retries, interval=0): round = self.completes_cycle(retries) if round: interval = next(intervals) if errback: errback(exc, interval) self.maybe_switch_next() # select next host return interval if round else 0 retry_over_time(self.connect, self.recoverable_connection_errors, (), {}, on_error, max_retries, interval_start, interval_step, interval_max, callback) return self def completes_cycle(self, retries): """Return true if the cycle is complete after number of `retries`.""" return not (retries + 1) % len(self.alt) if self.alt else True def revive(self, new_channel): """Revive connection after connection re-established.""" if self._default_channel: self.maybe_close_channel(self._default_channel) self._default_channel = None def _default_ensure_callback(self, exc, interval): logger.error("Ensure: Operation error: %r. Retry in %ss", exc, interval, exc_info=True) def ensure(self, obj, fun, errback=None, max_retries=None, interval_start=1, interval_step=1, interval_max=1, on_revive=None): """Ensure operation completes, regardless of any channel/connection errors occurring. Will retry by establishing the connection, and reapplying the function. :param fun: Method to apply. :keyword errback: Optional callback called each time the connection can't be established. Arguments provided are the exception raised and the interval that will be slept ``(exc, interval)``. :keyword max_retries: Maximum number of times to retry. 
If this limit is exceeded the connection error will be re-raised. :keyword interval_start: The number of seconds we start sleeping for. :keyword interval_step: How many seconds added to the interval for each retry. :keyword interval_max: Maximum number of seconds to sleep between each retry. **Example** This is an example ensuring a publish operation:: >>> from kombu import Connection, Producer >>> conn = Connection('amqp://') >>> producer = Producer(conn) >>> def errback(exc, interval): ... logger.error('Error: %r', exc, exc_info=1) ... logger.info('Retry in %s seconds.', interval) >>> publish = conn.ensure(producer, producer.publish, ... errback=errback, max_retries=3) >>> publish({'hello': 'world'}, routing_key='dest') """ def _ensured(*args, **kwargs): got_connection = 0 conn_errors = self.recoverable_connection_errors chan_errors = self.recoverable_channel_errors has_modern_errors = hasattr( self.transport, 'recoverable_connection_errors', ) for retries in count(0): # for infinity try: return fun(*args, **kwargs) except conn_errors as exc: if got_connection and not has_modern_errors: # transport can not distinguish between # recoverable/irrecoverable errors, so we propagate # the error if it persists after a new connection was # successfully established. raise if max_retries is not None and retries > max_retries: raise self._debug('ensure connection error: %r', exc, exc_info=1) self._connection = None self._do_close_self() errback and errback(exc, 0) remaining_retries = None if max_retries is not None: remaining_retries = max(max_retries - retries, 1) self.ensure_connection(errback, remaining_retries, interval_start, interval_step, interval_max) new_channel = self.channel() self.revive(new_channel) obj.revive(new_channel) if on_revive: on_revive(new_channel) got_connection += 1 except chan_errors as exc: if max_retries is not None and retries > max_retries: raise self._debug('ensure channel error: %r', exc, exc_info=1) errback and errback(exc, 0) _ensured.__name__ = "%s(ensured)" % fun.__name__ _ensured.__doc__ = fun.__doc__ _ensured.__module__ = fun.__module__ return _ensured def autoretry(self, fun, channel=None, **ensure_options): """Decorator for functions supporting a ``channel`` keyword argument. The resulting callable will retry calling the function if it raises connection or channel related errors. The return value will be a tuple of ``(retval, last_created_channel)``. If a ``channel`` is not provided, then one will be automatically acquired (remember to close it afterwards). See :meth:`ensure` for the full list of supported keyword arguments. 
Example usage:: channel = connection.channel() try: ret, channel = connection.autoretry(publish_messages, channel) finally: channel.close() """ channels = [channel] create_channel = self.channel class Revival(object): __name__ = fun.__name__ __module__ = fun.__module__ __doc__ = fun.__doc__ def revive(self, channel): channels[0] = channel def __call__(self, *args, **kwargs): if channels[0] is None: self.revive(create_channel()) return fun(*args, channel=channels[0], **kwargs), channels[0] revive = Revival() return self.ensure(revive, revive, **ensure_options) def create_transport(self): return self.get_transport_cls()(client=self) def get_transport_cls(self): """Get the currently used transport class.""" transport_cls = self.transport_cls if not transport_cls or isinstance(transport_cls, string_t): transport_cls = get_transport_cls(transport_cls) return transport_cls def clone(self, **kwargs): """Create a copy of the connection with the same connection settings.""" return self.__class__(**dict(self._info(resolve=False), **kwargs)) def _info(self, resolve=True): transport_cls = self.transport_cls if resolve: transport_cls = RESOLVE_ALIASES.get(transport_cls, transport_cls) D = self.transport.default_connection_params hostname = self.hostname or D.get('hostname') if self.uri_prefix: hostname = '%s+%s' % (self.uri_prefix, hostname) info = ( ('hostname', hostname), ('userid', self.userid or D.get('userid')), ('password', self.password or D.get('password')), ('virtual_host', self.virtual_host or D.get('virtual_host')), ('port', self.port or D.get('port')), ('insist', self.insist), ('ssl', self.ssl), ('transport', transport_cls), ('connect_timeout', self.connect_timeout), ('transport_options', self.transport_options), ('login_method', self.login_method or D.get('login_method')), ('uri_prefix', self.uri_prefix), ('heartbeat', self.heartbeat), ('alternates', self.alt), ) return info def info(self): """Get connection info.""" return OrderedDict(self._info()) def __eqhash__(self): return hash('%s|%s|%s|%s|%s|%s' % ( self.transport_cls, self.hostname, self.userid, self.password, self.virtual_host, self.port)) def as_uri(self, include_password=False, mask=''): """Convert connection parameters to URL form.""" hostname = self.hostname or 'localhost' if self.transport.can_parse_url: if self.uri_prefix: return '%s+%s' % (self.uri_prefix, hostname) return self.hostname quoteS = partial(quote, safe='') # strict quote fields = self.info() port, userid, password, transport = itemgetter( 'port', 'userid', 'password', 'transport' )(fields) url = '%s://' % transport if userid or password: if userid: url += quoteS(userid) if password: if include_password: url += ':' + quoteS(password) else: url += ':' + mask if mask else '' url += '@' url += quoteS(fields['hostname']) if port: url += ':%s' % (port, ) url += '/' + quote(fields['virtual_host']) if self.uri_prefix: return '%s+%s' % (self.uri_prefix, url) return url def Pool(self, limit=None, preload=None): """Pool of connections. See :class:`ConnectionPool`. :keyword limit: Maximum number of active connections. Default is no limit. :keyword preload: Number of connections to preload when the pool is created. Default is 0. 
*Example usage*:: >>> connection = Connection('amqp://') >>> pool = connection.Pool(2) >>> c1 = pool.acquire() >>> c2 = pool.acquire() >>> c3 = pool.acquire() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "kombu/connection.py", line 354, in acquire raise ConnectionLimitExceeded(self.limit) kombu.exceptions.ConnectionLimitExceeded: 2 >>> c1.release() >>> c3 = pool.acquire() """ return ConnectionPool(self, limit, preload) def ChannelPool(self, limit=None, preload=None): """Pool of channels. See :class:`ChannelPool`. :keyword limit: Maximum number of active channels. Default is no limit. :keyword preload: Number of channels to preload when the pool is created. Default is 0. *Example usage*:: >>> connection = Connection('amqp://') >>> pool = connection.ChannelPool(2) >>> c1 = pool.acquire() >>> c2 = pool.acquire() >>> c3 = pool.acquire() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "kombu/connection.py", line 354, in acquire raise ChannelLimitExceeded(self.limit) kombu.connection.ChannelLimitExceeded: 2 >>> c1.release() >>> c3 = pool.acquire() """ return ChannelPool(self, limit, preload) def Producer(self, channel=None, *args, **kwargs): """Create new :class:`kombu.Producer` instance using this connection.""" from .messaging import Producer return Producer(channel or self, *args, **kwargs) def Consumer(self, queues=None, channel=None, *args, **kwargs): """Create new :class:`kombu.Consumer` instance using this connection.""" from .messaging import Consumer return Consumer(channel or self, queues, *args, **kwargs) def SimpleQueue(self, name, no_ack=None, queue_opts=None, exchange_opts=None, channel=None, **kwargs): """Create new :class:`~kombu.simple.SimpleQueue`, using a channel from this connection. If ``name`` is a string, a queue and exchange will be automatically created using that name as the name of the queue and exchange, and it will also be used as the default routing key. :param name: Name of the queue, or a :class:`~kombu.Queue` instance. :keyword no_ack: Disable acknowledgements. Default is :const:`False`. :keyword queue_opts: Additional keyword arguments passed to the constructor of the automatically created :class:`~kombu.Queue`. :keyword exchange_opts: Additional keyword arguments passed to the constructor of the automatically created :class:`~kombu.Exchange`. :keyword channel: Custom channel to use. If not specified the connection default channel is used. """ from .simple import SimpleQueue return SimpleQueue(channel or self, name, no_ack, queue_opts, exchange_opts, **kwargs) def SimpleBuffer(self, name, no_ack=None, queue_opts=None, exchange_opts=None, channel=None, **kwargs): """Create new :class:`~kombu.simple.SimpleQueue` using a channel from this connection. Same as :meth:`SimpleQueue`, but configured with buffering semantics: the resulting queue and exchange will not be durable, auto-delete will be enabled, messages will be transient (not persistent), and acknowledgements are disabled (``no_ack``).
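*Example usage* (a sketch; the buffer name ``test_buffer`` is arbitrary)::

    >>> connection = Connection('amqp://')
    >>> buf = connection.SimpleBuffer('test_buffer')
    >>> buf.put({'hello': 'world'})
    >>> buf.get(block=True, timeout=1).payload
    {'hello': 'world'}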
""" from .simple import SimpleBuffer return SimpleBuffer(channel or self, name, no_ack, queue_opts, exchange_opts, **kwargs) def _establish_connection(self): self._debug('establishing connection...') conn = self.transport.establish_connection() self._debug('connection established: %r', conn) return conn def __repr__(self): """``x.__repr__() <==> repr(x)``""" return ''.format(self.as_uri(), id(self)) def __copy__(self): """``x.__copy__() <==> copy(x)``""" return self.clone() def __reduce__(self): return self.__class__, tuple(self.info().values()), None def __enter__(self): return self def __exit__(self, *args): self.release() @property def connected(self): """Return true if the connection has been established.""" return (not self._closed and self._connection is not None and self.transport.verify_connection(self._connection)) @property def connection(self): """The underlying connection object. .. warning:: This instance is transport specific, so do not depend on the interface of this object. """ if not self._closed: if not self.connected: self.declared_entities.clear() self._default_channel = None self._connection = self._establish_connection() self._closed = False return self._connection @property def default_channel(self): """Default channel, created upon access and closed when the connection is closed. Can be used for automatic channel handling when you only need one channel, and also it is the channel implicitly used if a connection is passed instead of a channel, to functions that require a channel. """ # make sure we're still connected, and if not refresh. self.connection if self._default_channel is None: self._default_channel = self.channel() return self._default_channel @property def host(self): """The host as a host name/port pair separated by colon.""" return ':'.join([self.hostname, str(self.port)]) @property def transport(self): if self._transport is None: self._transport = self.create_transport() return self._transport @cached_property def manager(self): """Experimental manager that can be used to manage/monitor the broker instance. Not available for all transports.""" return self.transport.manager def get_manager(self, *args, **kwargs): return self.transport.get_manager(*args, **kwargs) @cached_property def recoverable_connection_errors(self): """List of connection related exceptions that can be recovered from, but where the connection must be closed and re-established first.""" try: return self.transport.recoverable_connection_errors except AttributeError: # There were no such classification before, # and all errors were assumed to be recoverable, # so this is a fallback for transports that do # not support the new recoverable/irrecoverable classes. 
return self.connection_errors + self.channel_errors @cached_property def recoverable_channel_errors(self): """List of channel related exceptions that can be automatically recovered from without re-establishing the connection.""" try: return self.transport.recoverable_channel_errors except AttributeError: return () @cached_property def connection_errors(self): """List of exceptions that may be raised by the connection.""" return self.transport.connection_errors @cached_property def channel_errors(self): """List of exceptions that may be raised by the channel.""" return self.transport.channel_errors @property def supports_heartbeats(self): return self.transport.supports_heartbeats @property def is_evented(self): return self.transport.supports_ev BrokerConnection = Connection class Resource(object): LimitExceeded = exceptions.LimitExceeded def __init__(self, limit=None, preload=None): self.limit = limit self.preload = preload or 0 self._resource = _LifoQueue() self._dirty = set() self.setup() def setup(self): raise NotImplementedError('subclass responsibility') def _add_when_empty(self): if self.limit and len(self._dirty) >= self.limit: raise self.LimitExceeded(self.limit) # All taken, put new on the queue and # try get again, this way the first in line # will get the resource. self._resource.put_nowait(self.new()) def acquire(self, block=False, timeout=None): """Acquire resource. :keyword block: If the limit is exceeded, block until there is an available item. :keyword timeout: Timeout to wait if ``block`` is true. Default is :const:`None` (forever). :raises LimitExceeded: if block is false and the limit has been exceeded. """ if self.limit: while 1: try: R = self._resource.get(block=block, timeout=timeout) except Empty: self._add_when_empty() else: try: R = self.prepare(R) except BaseException: if isinstance(R, lazy): # no evaluated yet, just put it back self._resource.put_nowait(R) else: # evaluted so must try to release/close first. self.release(R) raise self._dirty.add(R) break else: R = self.prepare(self.new()) def release(): """Release resource so it can be used by another thread. The caller is responsible for discarding the object, and to never use the resource again. A new resource must be acquired if so needed. """ self.release(R) R.release = release return R def prepare(self, resource): return resource def close_resource(self, resource): resource.close() def release_resource(self, resource): pass def replace(self, resource): """Replace resource with a new instance. This can be used in case of defective resources.""" if self.limit: self._dirty.discard(resource) self.close_resource(resource) def release(self, resource): if self.limit: self._dirty.discard(resource) self._resource.put_nowait(resource) self.release_resource(resource) else: self.close_resource(resource) def collect_resource(self, resource): pass def force_close_all(self): """Close and remove all resources in the pool (also those in use). Can be used to close resources from parent processes after fork (e.g. sockets/connections). """ dirty = self._dirty resource = self._resource while 1: # - acquired try: dres = dirty.pop() except KeyError: break try: self.collect_resource(dres) except AttributeError: # Issue #78 pass while 1: # - available # deque supports '.clear', but lists do not, so for that # reason we use pop here, so that the underlying object can # be any object supporting '.pop' and '.append'. 
try: res = resource.queue.pop() except IndexError: break try: self.collect_resource(res) except AttributeError: pass # Issue #78 if os.environ.get('KOMBU_DEBUG_POOL'): # pragma: no cover _orig_acquire = acquire _orig_release = release _next_resource_id = 0 def acquire(self, *args, **kwargs): # noqa import traceback id = self._next_resource_id = self._next_resource_id + 1 print('+{0} ACQUIRE {1}'.format(id, self.__class__.__name__)) r = self._orig_acquire(*args, **kwargs) r._resource_id = id print('-{0} ACQUIRE {1}'.format(id, self.__class__.__name__)) if not hasattr(r, 'acquired_by'): r.acquired_by = [] r.acquired_by.append(traceback.format_stack()) return r def release(self, resource): # noqa id = resource._resource_id print('+{0} RELEASE {1}'.format(id, self.__class__.__name__)) r = self._orig_release(resource) print('-{0} RELEASE {1}'.format(id, self.__class__.__name__)) self._next_resource_id -= 1 return r class ConnectionPool(Resource): LimitExceeded = exceptions.ConnectionLimitExceeded def __init__(self, connection, limit=None, preload=None): self.connection = connection super(ConnectionPool, self).__init__(limit=limit, preload=preload) def new(self): return self.connection.clone() def release_resource(self, resource): try: resource._debug('released') except AttributeError: pass def close_resource(self, resource): resource._close() def collect_resource(self, resource, socket_timeout=0.1): return resource.collect(socket_timeout) @contextmanager def acquire_channel(self, block=False): with self.acquire(block=block) as connection: yield connection, connection.default_channel def setup(self): if self.limit: for i in range(self.limit): if i < self.preload: conn = self.new() conn.connect() else: conn = lazy(self.new) self._resource.put_nowait(conn) def prepare(self, resource): if callable(resource): resource = resource() resource._debug('acquired') return resource class ChannelPool(Resource): LimitExceeded = exceptions.ChannelLimitExceeded def __init__(self, connection, limit=None, preload=None): self.connection = connection super(ChannelPool, self).__init__(limit=limit, preload=preload) def new(self): return lazy(self.connection.channel) def setup(self): channel = self.new() if self.limit: for i in range(self.limit): self._resource.put_nowait( i < self.preload and channel() or lazy(channel)) def prepare(self, channel): if callable(channel): channel = channel() return channel def maybe_channel(channel): """Return the default channel if argument is a connection instance, otherwise just return the channel given.""" if isinstance(channel, Connection): return channel.default_channel return channel def is_connection(obj): return isinstance(obj, Connection) kombu-3.0.7/kombu/entity.py0000644000076500000000000006326112237554371016274 0ustar asksolwheel00000000000000""" kombu.entity ================ Exchange and Queue declarations. """ from __future__ import absolute_import from .abstract import MaybeChannelBound from .exceptions import ContentDisallowed from .serialization import prepare_accept_content TRANSIENT_DELIVERY_MODE = 1 PERSISTENT_DELIVERY_MODE = 2 DELIVERY_MODES = {'transient': TRANSIENT_DELIVERY_MODE, 'persistent': PERSISTENT_DELIVERY_MODE} __all__ = ['Exchange', 'Queue', 'binding'] def pretty_bindings(bindings): return '[%s]' % (', '.join(map(str, bindings))) class Exchange(MaybeChannelBound): """An Exchange declaration. :keyword name: See :attr:`name`. :keyword type: See :attr:`type`. :keyword channel: See :attr:`channel`. :keyword durable: See :attr:`durable`. 
:keyword auto_delete: See :attr:`auto_delete`. :keyword delivery_mode: See :attr:`delivery_mode`. :keyword arguments: See :attr:`arguments`. .. attribute:: name Name of the exchange. Default is no name (the default exchange). .. attribute:: type *This description of AMQP exchange types was shamelessly stolen from the blog post `AMQP in 10 minutes: Part 4`_ by Rajith Attapattu. Reading this article is recommended if you're new to amqp.* "AMQP defines four default exchange types (routing algorithms) that covers most of the common messaging use cases. An AMQP broker can also define additional exchange types, so see your broker manual for more information about available exchange types. * `direct` (*default*) Direct match between the routing key in the message, and the routing criteria used when a queue is bound to this exchange. * `topic` Wildcard match between the routing key and the routing pattern specified in the exchange/queue binding. The routing key is treated as zero or more words delimited by `"."` and supports special wildcard characters. `"*"` matches a single word and `"#"` matches zero or more words. * `fanout` Queues are bound to this exchange with no arguments. Hence any message sent to this exchange will be forwarded to all queues bound to this exchange. * `headers` Queues are bound to this exchange with a table of arguments containing headers and values (optional). A special argument named "x-match" determines the matching algorithm, where `"all"` implies an `AND` (all pairs must match) and `"any"` implies `OR` (at least one pair must match). :attr:`arguments` is used to specify the arguments. .. _`AMQP in 10 minutes: Part 4`: http://bit.ly/amqp-exchange-types .. attribute:: channel The channel the exchange is bound to (if bound). .. attribute:: durable Durable exchanges remain active when a server restarts. Non-durable exchanges (transient exchanges) are purged when a server restarts. Default is :const:`True`. .. attribute:: auto_delete If set, the exchange is deleted when all queues have finished using it. Default is :const:`False`. .. attribute:: delivery_mode The default delivery mode used for messages. The value is an integer, or alias string. * 1 or `"transient"` The message is transient. Which means it is stored in memory only, and is lost if the server dies or restarts. * 2 or "persistent" (*default*) The message is persistent. Which means the message is stored both in-memory, and on disk, and therefore preserved if the server dies or restarts. The default value is 2 (persistent). .. attribute:: arguments Additional arguments to specify when the exchange is declared. """ TRANSIENT_DELIVERY_MODE = TRANSIENT_DELIVERY_MODE PERSISTENT_DELIVERY_MODE = PERSISTENT_DELIVERY_MODE name = '' type = 'direct' durable = True auto_delete = False passive = False delivery_mode = PERSISTENT_DELIVERY_MODE attrs = ( ('name', None), ('type', None), ('arguments', None), ('durable', bool), ('passive', bool), ('auto_delete', bool), ('delivery_mode', lambda m: DELIVERY_MODES.get(m) or m), ) def __init__(self, name='', type='', channel=None, **kwargs): super(Exchange, self).__init__(**kwargs) self.name = name or self.name self.type = type or self.type self.maybe_bind(channel) def __hash__(self): return hash('E|%s' % (self.name, )) def declare(self, nowait=False, passive=None): """Declare the exchange. Creates the exchange on the broker. :keyword nowait: If set the server will not respond, and a response will not be waited for. Default is :const:`False`. 
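*Example* (a sketch; the exchange name ``news`` is arbitrary)::

    >>> from kombu import Connection, Exchange
    >>> conn = Connection('amqp://')
    >>> news = Exchange('news', type='topic')(conn.default_channel)
    >>> news.declare()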
""" passive = self.passive if passive is None else passive if self.name: return self.channel.exchange_declare( exchange=self.name, type=self.type, durable=self.durable, auto_delete=self.auto_delete, arguments=self.arguments, nowait=nowait, passive=passive, ) def bind_to(self, exchange='', routing_key='', arguments=None, nowait=False, **kwargs): """Binds the exchange to another exchange. :keyword nowait: If set the server will not respond, and the call will not block waiting for a response. Default is :const:`False`. """ if isinstance(exchange, Exchange): exchange = exchange.name return self.channel.exchange_bind(destination=self.name, source=exchange, routing_key=routing_key, nowait=nowait, arguments=arguments) def unbind_from(self, source='', routing_key='', nowait=False, arguments=None): """Delete previously created exchange binding from the server.""" if isinstance(source, Exchange): source = source.name return self.channel.exchange_unbind(destination=self.name, source=source, routing_key=routing_key, nowait=nowait, arguments=arguments) def Message(self, body, delivery_mode=None, priority=None, content_type=None, content_encoding=None, properties=None, headers=None): """Create message instance to be sent with :meth:`publish`. :param body: Message body. :keyword delivery_mode: Set custom delivery mode. Defaults to :attr:`delivery_mode`. :keyword priority: Message priority, 0 to 9. (currently not supported by RabbitMQ). :keyword content_type: The messages content_type. If content_type is set, no serialization occurs as it is assumed this is either a binary object, or you've done your own serialization. Leave blank if using built-in serialization as our library properly sets content_type. :keyword content_encoding: The character set in which this object is encoded. Use "binary" if sending in raw binary objects. Leave blank if using built-in serialization as our library properly sets content_encoding. :keyword properties: Message properties. :keyword headers: Message headers. """ properties = {} if properties is None else properties dm = delivery_mode or self.delivery_mode properties['delivery_mode'] = \ DELIVERY_MODES[dm] if (dm != 2 and dm != 1) else dm return self.channel.prepare_message(body, properties=properties, priority=priority, content_type=content_type, content_encoding=content_encoding, headers=headers) def publish(self, message, routing_key=None, mandatory=False, immediate=False, exchange=None): """Publish message. :param message: :meth:`Message` instance to publish. :param routing_key: Routing key. :param mandatory: Currently not supported. :param immediate: Currently not supported. """ exchange = exchange or self.name return self.channel.basic_publish(message, exchange=exchange, routing_key=routing_key, mandatory=mandatory, immediate=immediate) def delete(self, if_unused=False, nowait=False): """Delete the exchange declaration on server. :keyword if_unused: Delete only if the exchange has no bindings. Default is :const:`False`. :keyword nowait: If set the server will not respond, and a response will not be waited for. Default is :const:`False`. 
""" return self.channel.exchange_delete(exchange=self.name, if_unused=if_unused, nowait=nowait) def binding(self, routing_key='', arguments=None, unbind_arguments=None): return binding(self, routing_key, arguments, unbind_arguments) def __eq__(self, other): if isinstance(other, Exchange): return (self.name == other.name and self.type == other.type and self.arguments == other.arguments and self.durable == other.durable and self.auto_delete == other.auto_delete and self.delivery_mode == other.delivery_mode) return NotImplemented def __ne__(self, other): return not self.__eq__(other) def __repr__(self): return super(Exchange, self).__repr__(str(self)) def __str__(self): return 'Exchange %s(%s)' % (self.name or repr(''), self.type) @property def can_cache_declaration(self): return self.durable and not self.auto_delete class binding(object): """Represents a queue or exchange binding. :keyword exchange: Exchange to bind to. :keyword routing_key: Routing key used as binding key. :keyword arguments: Arguments for bind operation. :keyword unbind_arguments: Arguments for unbind operation. """ def __init__(self, exchange=None, routing_key='', arguments=None, unbind_arguments=None): self.exchange = exchange self.routing_key = routing_key self.arguments = arguments self.unbind_arguments = unbind_arguments def declare(self, channel, nowait=False): """Declare destination exchange.""" if self.exchange and self.exchange.name: ex = self.exchange(channel) ex.declare(nowait=nowait) def bind(self, entity, nowait=False): """Bind entity to this binding.""" entity.bind_to(exchange=self.exchange, routing_key=self.routing_key, arguments=self.arguments, nowait=nowait) def unbind(self, entity, nowait=False): """Unbind entity from this binding.""" entity.unbind_from(self.exchange, routing_key=self.routing_key, arguments=self.unbind_arguments, nowait=nowait) def __repr__(self): return '' % (self, ) def __str__(self): return '%s->%s' % (self.exchange.name, self.routing_key) class Queue(MaybeChannelBound): """A Queue declaration. :keyword name: See :attr:`name`. :keyword exchange: See :attr:`exchange`. :keyword routing_key: See :attr:`routing_key`. :keyword channel: See :attr:`channel`. :keyword durable: See :attr:`durable`. :keyword exclusive: See :attr:`exclusive`. :keyword auto_delete: See :attr:`auto_delete`. :keyword queue_arguments: See :attr:`queue_arguments`. :keyword binding_arguments: See :attr:`binding_arguments`. :keyword on_declared: See :attr:`on_declared` .. attribute:: name Name of the queue. Default is no name (default queue destination). .. attribute:: exchange The :class:`Exchange` the queue binds to. .. attribute:: routing_key The routing key (if any), also called *binding key*. The interpretation of the routing key depends on the :attr:`Exchange.type`. * direct exchange Matches if the routing key property of the message and the :attr:`routing_key` attribute are identical. * fanout exchange Always matches, even if the binding does not have a key. * topic exchange Matches the routing key property of the message by a primitive pattern matching scheme. The message routing key then consists of words separated by dots (`"."`, like domain names), and two special characters are available; star (`"*"`) and hash (`"#"`). The star matches any word, and the hash matches zero or more words. For example `"*.stock.#"` matches the routing keys `"usd.stock"` and `"eur.stock.db"` but not `"stock.nasdaq"`. .. attribute:: channel The channel the Queue is bound to (if bound). .. 
attribute:: durable Durable queues remain active when a server restarts. Non-durable queues (transient queues) are purged if/when a server restarts. Note that durable queues do not necessarily hold persistent messages, although it does not make sense to send persistent messages to a transient queue. Default is :const:`True`. .. attribute:: exclusive Exclusive queues may only be consumed from by the current connection. Setting the 'exclusive' flag always implies 'auto-delete'. Default is :const:`False`. .. attribute:: auto_delete If set, the queue is deleted when all consumers have finished using it. Last consumer can be cancelled either explicitly or because its channel is closed. If there was no consumer ever on the queue, it won't be deleted. .. attribute:: queue_arguments Additional arguments used when declaring the queue. .. attribute:: binding_arguments Additional arguments used when binding the queue. .. attribute:: alias Unused in Kombu, but applications can take advantage of this. For example to give alternate names to queues with automatically generated queue names. .. attribute:: on_declared Optional callback to be applied when the queue has been declared (the ``queue_declare`` operation is complete). This must be a function with a signature that accepts at least 3 positional arguments: ``(name, messages, consumers)``. """ ContentDisallowed = ContentDisallowed name = '' exchange = Exchange('') routing_key = '' durable = True exclusive = False auto_delete = False no_ack = False attrs = ( ('name', None), ('exchange', None), ('routing_key', None), ('queue_arguments', None), ('binding_arguments', None), ('durable', bool), ('exclusive', bool), ('auto_delete', bool), ('no_ack', None), ('alias', None), ('bindings', list), ) def __init__(self, name='', exchange=None, routing_key='', channel=None, bindings=None, on_declared=None, **kwargs): super(Queue, self).__init__(**kwargs) self.name = name or self.name self.exchange = exchange or self.exchange self.routing_key = routing_key or self.routing_key self.bindings = set(bindings or []) self.on_declared = on_declared # allows Queue('name', [binding(...), binding(...), ...]) if isinstance(exchange, (list, tuple, set)): self.bindings |= set(exchange) if self.bindings: self.exchange = None # exclusive implies auto-delete. if self.exclusive: self.auto_delete = True self.maybe_bind(channel) def bind(self, channel): on_declared = self.on_declared bound = super(Queue, self).bind(channel) bound.on_declared = on_declared return bound def __hash__(self): return hash('Q|%s' % (self.name, )) def when_bound(self): if self.exchange: self.exchange = self.exchange(self.channel) def declare(self, nowait=False): """Declares the queue, the exchange and binds the queue to the exchange.""" # - declare main binding. if self.exchange: self.exchange.declare(nowait) self.queue_declare(nowait, passive=False) if self.exchange and self.exchange.name: self.queue_bind(nowait) # - declare extra/multi-bindings. for B in self.bindings: B.declare(self.channel) B.bind(self, nowait=nowait) return self.name def queue_declare(self, nowait=False, passive=False): """Declare queue on the server. :keyword nowait: Do not wait for a reply. :keyword passive: If set, the server will not create the queue. The client can use this to check whether a queue exists without modifying the server state. 
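For example, a passive declare can be used to check whether a queue exists (a sketch; ``queue`` is assumed to be bound to a channel)::

    >>> try:
    ...     queue.queue_declare(passive=True)
    ... except Exception:  # transport-specific channel error
    ...     print('queue does not exist')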
""" ret = self.channel.queue_declare(queue=self.name, passive=passive, durable=self.durable, exclusive=self.exclusive, auto_delete=self.auto_delete, arguments=self.queue_arguments, nowait=nowait) if not self.name: self.name = ret[0] if self.on_declared: self.on_declared(*ret) return ret def queue_bind(self, nowait=False): """Create the queue binding on the server.""" return self.bind_to(self.exchange, self.routing_key, self.binding_arguments, nowait=nowait) def bind_to(self, exchange='', routing_key='', arguments=None, nowait=False): if isinstance(exchange, Exchange): exchange = exchange.name return self.channel.queue_bind(queue=self.name, exchange=exchange, routing_key=routing_key, arguments=arguments, nowait=nowait) def get(self, no_ack=None, accept=None): """Poll the server for a new message. Must return the message if a message was available, or :const:`None` otherwise. :keyword no_ack: If enabled the broker will automatically ack messages. :keyword accept: Custom list of accepted content types. This method provides direct access to the messages in a queue using a synchronous dialogue, designed for specific types of applications where synchronous functionality is more important than performance. """ no_ack = self.no_ack if no_ack is None else no_ack message = self.channel.basic_get(queue=self.name, no_ack=no_ack) if message is not None: m2p = getattr(self.channel, 'message_to_python', None) if m2p: message = m2p(message) message.accept = prepare_accept_content(accept) return message def purge(self, nowait=False): """Remove all ready messages from the queue.""" return self.channel.queue_purge(queue=self.name, nowait=nowait) or 0 def consume(self, consumer_tag='', callback=None, no_ack=None, nowait=False): """Start a queue consumer. Consumers last as long as the channel they were created on, or until the client cancels them. :keyword consumer_tag: Unique identifier for the consumer. The consumer tag is local to a connection, so two clients can use the same consumer tags. If this field is empty the server will generate a unique tag. :keyword no_ack: If enabled the broker will automatically ack messages. :keyword nowait: Do not wait for a reply. :keyword callback: callback called for each delivered message """ if no_ack is None: no_ack = self.no_ack return self.channel.basic_consume(queue=self.name, no_ack=no_ack, consumer_tag=consumer_tag or '', callback=callback, nowait=nowait) def cancel(self, consumer_tag): """Cancel a consumer by consumer tag.""" return self.channel.basic_cancel(consumer_tag) def delete(self, if_unused=False, if_empty=False, nowait=False): """Delete the queue. :keyword if_unused: If set, the server will only delete the queue if it has no consumers. A channel error will be raised if the queue has consumers. :keyword if_empty: If set, the server will only delete the queue if it is empty. If it is not empty a channel error will be raised. :keyword nowait: Do not wait for a reply. 
""" return self.channel.queue_delete(queue=self.name, if_unused=if_unused, if_empty=if_empty, nowait=nowait) def queue_unbind(self, arguments=None, nowait=False): return self.unbind_from(self.exchange, self.routing_key, arguments, nowait) def unbind_from(self, exchange='', routing_key='', arguments=None, nowait=False): """Unbind queue by deleting the binding from the server.""" return self.channel.queue_unbind(queue=self.name, exchange=exchange.name, routing_key=routing_key, arguments=arguments, nowait=nowait) def __eq__(self, other): if isinstance(other, Queue): return (self.name == other.name and self.exchange == other.exchange and self.routing_key == other.routing_key and self.queue_arguments == other.queue_arguments and self.binding_arguments == other.binding_arguments and self.durable == other.durable and self.exclusive == other.exclusive and self.auto_delete == other.auto_delete) return NotImplemented def __ne__(self, other): return not self.__eq__(other) def __repr__(self): s = super(Queue, self).__repr__ if self.bindings: return s('Queue {0.name!r} -> {bindings}'.format( self, bindings=pretty_bindings(self.bindings), )) return s( 'Queue {0.name!r} -> {0.exchange!r} -> {0.routing_key}'.format( self)) @property def can_cache_declaration(self): return self.durable and not self.auto_delete @classmethod def from_dict(self, queue, **options): binding_key = options.get('binding_key') or options.get('routing_key') e_durable = options.get('exchange_durable') if e_durable is None: e_durable = options.get('durable') e_auto_delete = options.get('exchange_auto_delete') if e_auto_delete is None: e_auto_delete = options.get('auto_delete') q_durable = options.get('queue_durable') if q_durable is None: q_durable = options.get('durable') q_auto_delete = options.get('queue_auto_delete') if q_auto_delete is None: q_auto_delete = options.get('auto_delete') e_arguments = options.get('exchange_arguments') q_arguments = options.get('queue_arguments') b_arguments = options.get('binding_arguments') bindings = options.get('bindings') exchange = Exchange(options.get('exchange'), type=options.get('exchange_type'), delivery_mode=options.get('delivery_mode'), routing_key=options.get('routing_key'), durable=e_durable, auto_delete=e_auto_delete, arguments=e_arguments) return Queue(queue, exchange=exchange, routing_key=binding_key, durable=q_durable, exclusive=options.get('exclusive'), auto_delete=q_auto_delete, no_ack=options.get('no_ack'), queue_arguments=q_arguments, binding_arguments=b_arguments, bindings=bindings) kombu-3.0.7/kombu/exceptions.py0000644000076500000000000000302112237554371017125 0ustar asksolwheel00000000000000""" kombu.exceptions ================ Exceptions. 
""" from __future__ import absolute_import import socket from amqp import ChannelError, ConnectionError, ResourceError __all__ = ['NotBoundError', 'MessageStateError', 'TimeoutError', 'LimitExceeded', 'ConnectionLimitExceeded', 'ChannelLimitExceeded', 'ConnectionError', 'ChannelError', 'VersionMismatch', 'SerializerNotInstalled', 'ResourceError'] TimeoutError = socket.timeout class KombuError(Exception): """Common subclass for all Kombu exceptions.""" class NotBoundError(KombuError): """Trying to call channel dependent method on unbound entity.""" pass class MessageStateError(KombuError): """The message has already been acknowledged.""" pass class LimitExceeded(KombuError): """Limit exceeded.""" pass class ConnectionLimitExceeded(LimitExceeded): """Maximum number of simultaneous connections exceeded.""" pass class ChannelLimitExceeded(LimitExceeded): """Maximum number of simultaneous channels exceeded.""" pass class VersionMismatch(KombuError): pass class SerializerNotInstalled(KombuError): """Support for the requested serialization type is not installed""" pass class ContentDisallowed(SerializerNotInstalled): """Consumer does not allow this content-type.""" pass class InconsistencyError(ConnectionError): """Data or environment has been found to be inconsistent, depending on the cause it may be possible to retry the operation.""" pass kombu-3.0.7/kombu/five.py0000644000076500000000000001325212243671543015702 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- """ celery.five ~~~~~~~~~~~ Compatibility implementations of features only available in newer Python versions. """ from __future__ import absolute_import ############## py3k ######################################################### import sys PY3 = sys.version_info[0] == 3 try: reload = reload # noqa except NameError: # pragma: no cover from imp import reload # noqa try: from collections import UserList # noqa except ImportError: # pragma: no cover from UserList import UserList # noqa try: from collections import UserDict # noqa except ImportError: # pragma: no cover from UserDict import UserDict # noqa try: bytes_t = bytes except NameError: # pragma: no cover bytes_t = str # noqa ############## time.monotonic ################################################ if sys.version_info < (3, 3): import platform SYSTEM = platform.system() if SYSTEM == 'Darwin': import ctypes from ctypes.util import find_library libSystem = ctypes.CDLL('libSystem.dylib') CoreServices = ctypes.CDLL(find_library('CoreServices'), use_errno=True) mach_absolute_time = libSystem.mach_absolute_time mach_absolute_time.restype = ctypes.c_uint64 absolute_to_nanoseconds = CoreServices.AbsoluteToNanoseconds absolute_to_nanoseconds.restype = ctypes.c_uint64 absolute_to_nanoseconds.argtypes = [ctypes.c_uint64] def _monotonic(): return absolute_to_nanoseconds(mach_absolute_time()) * 1e-9 elif SYSTEM == 'Linux': # from stackoverflow: # questions/1205722/how-do-i-get-monotonic-time-durations-in-python import ctypes import os CLOCK_MONOTONIC = 1 # see class timespec(ctypes.Structure): _fields_ = [ ('tv_sec', ctypes.c_long), ('tv_nsec', ctypes.c_long), ] librt = ctypes.CDLL('librt.so.1', use_errno=True) clock_gettime = librt.clock_gettime clock_gettime.argtypes = [ ctypes.c_int, ctypes.POINTER(timespec), ] def _monotonic(): # noqa t = timespec() if clock_gettime(CLOCK_MONOTONIC, ctypes.pointer(t)) != 0: errno_ = ctypes.get_errno() raise OSError(errno_, os.strerror(errno_)) return t.tv_sec + t.tv_nsec * 1e-9 else: from time import time as _monotonic try: from time import 
monotonic except ImportError: monotonic = _monotonic # noqa ############## Py3 <-> Py2 ################################################### if PY3: # pragma: no cover import builtins from queue import Queue, Empty, Full, LifoQueue from itertools import zip_longest from io import StringIO, BytesIO map = map zip = zip string = str string_t = str long_t = int text_t = str range = range int_types = (int, ) module_name_t = str open_fqdn = 'builtins.open' def items(d): return d.items() def keys(d): return d.keys() def values(d): return d.values() def nextfun(it): return it.__next__ exec_ = getattr(builtins, 'exec') def reraise(tp, value, tb=None): if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value class WhateverIO(StringIO): def write(self, data): if isinstance(data, bytes): data = data.encode() StringIO.write(self, data) else: import __builtin__ as builtins # noqa from Queue import Queue, Empty, Full, LifoQueue # noqa from itertools import ( # noqa imap as map, izip as zip, izip_longest as zip_longest, ) try: from cStringIO import StringIO # noqa except ImportError: # pragma: no cover from StringIO import StringIO # noqa string = unicode # noqa string_t = basestring # noqa text_t = unicode long_t = long # noqa range = xrange int_types = (int, long) module_name_t = str open_fqdn = '__builtin__.open' def items(d): # noqa return d.iteritems() def keys(d): # noqa return d.iterkeys() def values(d): # noqa return d.itervalues() def nextfun(it): # noqa return it.next def exec_(code, globs=None, locs=None): # pragma: no cover """Execute code in a namespace.""" if globs is None: frame = sys._getframe(1) globs = frame.f_globals if locs is None: locs = frame.f_locals del frame elif locs is None: locs = globs exec("""exec code in globs, locs""") exec_("""def reraise(tp, value, tb=None): raise tp, value, tb""") BytesIO = WhateverIO = StringIO # noqa def with_metaclass(Type, skip_attrs=set(['__dict__', '__weakref__'])): """Class decorator to set metaclass. Works with both Python 3 and Python 3 and it does not add an extra class in the lookup order like ``six.with_metaclass`` does (that is -- it copies the original class instead of using inheritance). 
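Example (a minimal sketch; ``Meta`` stands in for any metaclass)::

    >>> class Meta(type):
    ...     pass
    >>> @with_metaclass(Meta)
    ... class X(object):
    ...     pass
    >>> type(X) is Meta
    True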
""" def _clone_with_metaclass(Class): attrs = dict((key, value) for key, value in items(vars(Class)) if key not in skip_attrs) return Type(Class.__name__, Class.__bases__, attrs) return _clone_with_metaclass kombu-3.0.7/kombu/log.py0000644000076500000000000000764012237554371015540 0ustar asksolwheel00000000000000from __future__ import absolute_import import os import logging import sys from logging.handlers import WatchedFileHandler from .five import string_t from .utils import cached_property from .utils.encoding import safe_repr, safe_str from .utils.functional import maybe_evaluate __all__ = ['LogMixin', 'LOG_LEVELS', 'get_loglevel', 'setup_logging'] LOG_LEVELS = dict(logging._levelNames) LOG_LEVELS['FATAL'] = logging.FATAL LOG_LEVELS[logging.FATAL] = 'FATAL' DISABLE_TRACEBACKS = os.environ.get('DISABLE_TRACEBACKS') class NullHandler(logging.Handler): def emit(self, record): pass def get_logger(logger): if isinstance(logger, string_t): logger = logging.getLogger(logger) if not logger.handlers: logger.addHandler(NullHandler()) return logger def get_loglevel(level): if isinstance(level, string_t): return LOG_LEVELS[level] return level def naive_format_parts(fmt): parts = fmt.split('%') for i, e in enumerate(parts[1:]): yield None if not e or not parts[i - 1] else e[0] def safeify_format(fmt, args, filters={'s': safe_str, 'r': safe_repr}): for index, type in enumerate(naive_format_parts(fmt)): filt = filters.get(type) yield filt(args[index]) if filt else args[index] class LogMixin(object): def debug(self, *args, **kwargs): return self.log(logging.DEBUG, *args, **kwargs) def info(self, *args, **kwargs): return self.log(logging.INFO, *args, **kwargs) def warn(self, *args, **kwargs): return self.log(logging.WARN, *args, **kwargs) def error(self, *args, **kwargs): return self._error(logging.ERROR, *args, **kwargs) def critical(self, *args, **kwargs): return self._error(logging.CRITICAL, *args, **kwargs) def _error(self, severity, *args, **kwargs): kwargs.setdefault('exc_info', True) if DISABLE_TRACEBACKS: kwargs.pop('exc_info', None) return self.log(severity, *args, **kwargs) def annotate(self, text): return '%s - %s' % (self.logger_name, text) def log(self, severity, *args, **kwargs): if self.logger.isEnabledFor(severity): log = self.logger.log if len(args) > 1 and isinstance(args[0], string_t): expand = [maybe_evaluate(arg) for arg in args[1:]] return log(severity, self.annotate(args[0].replace('%r', '%s')), *list(safeify_format(args[0], expand)), **kwargs) else: return self.logger.log( severity, self.annotate(' '.join(map(safe_str, args))), **kwargs) def get_logger(self): return get_logger(self.logger_name) def is_enabled_for(self, level): return self.logger.isEnabledFor(self.get_loglevel(level)) def get_loglevel(self, level): if not isinstance(level, int): return LOG_LEVELS[level] return level @cached_property def logger(self): return self.get_logger() @property def logger_name(self): return self.__class__.__name__ class Log(LogMixin): def __init__(self, name, logger=None): self._logger_name = name self._logger = logger def get_logger(self): if self._logger: return self._logger return LogMixin.get_logger(self) @property def logger_name(self): return self._logger_name def setup_logging(loglevel=None, logfile=None): logger = logging.getLogger() loglevel = get_loglevel(loglevel or 'ERROR') logfile = logfile if logfile else sys.__stderr__ if not logger.handlers: if hasattr(logfile, 'write'): handler = logging.StreamHandler(logfile) else: handler = WatchedFileHandler(logfile) 
logger.addHandler(handler) logger.setLevel(loglevel) return logger kombu-3.0.7/kombu/message.py0000644000076500000000000001070012237554371016372 0ustar asksolwheel00000000000000""" kombu.transport.message ======================= Message class. """ from __future__ import absolute_import from .compression import decompress from .exceptions import MessageStateError from .five import text_t from .serialization import loads ACK_STATES = frozenset(['ACK', 'REJECTED', 'REQUEUED']) class Message(object): """Base class for received messages.""" __slots__ = ('_state', 'channel', 'delivery_tag', 'content_type', 'content_encoding', 'delivery_info', 'headers', 'properties', 'body', '_decoded_cache', 'accept', '__dict__') MessageStateError = MessageStateError def __init__(self, channel, body=None, delivery_tag=None, content_type=None, content_encoding=None, delivery_info={}, properties=None, headers=None, postencode=None, accept=None, **kwargs): self.channel = channel self.delivery_tag = delivery_tag self.content_type = content_type self.content_encoding = content_encoding self.delivery_info = delivery_info self.headers = headers or {} self.properties = properties or {} self._decoded_cache = None self._state = 'RECEIVED' self.accept = accept try: body = decompress(body, self.headers['compression']) except KeyError: pass if postencode and isinstance(body, text_t): body = body.encode(postencode) self.body = body def ack(self): """Acknowledge this message as being processed., This will remove the message from the queue. :raises MessageStateError: If the message has already been acknowledged/requeued/rejected. """ if self.channel.no_ack_consumers is not None: try: consumer_tag = self.delivery_info['consumer_tag'] except KeyError: pass else: if consumer_tag in self.channel.no_ack_consumers: return if self.acknowledged: raise self.MessageStateError( 'Message already acknowledged with state: {0._state}'.format( self)) self.channel.basic_ack(self.delivery_tag) self._state = 'ACK' def ack_log_error(self, logger, errors): try: self.ack() except errors as exc: logger.critical("Couldn't ack %r, reason:%r", self.delivery_tag, exc, exc_info=True) def reject_log_error(self, logger, errors, requeue=False): try: self.reject(requeue=requeue) except errors as exc: logger.critical("Couldn't reject %r, reason: %r", self.delivery_tag, exc, exc_info=True) def reject(self, requeue=False): """Reject this message. The message will be discarded by the server. :raises MessageStateError: If the message has already been acknowledged/requeued/rejected. """ if self.acknowledged: raise self.MessageStateError( 'Message already acknowledged with state: {0._state}'.format( self)) self.channel.basic_reject(self.delivery_tag, requeue=requeue) self._state = 'REJECTED' def requeue(self): """Reject this message and put it back on the queue. You must not use this method as a means of selecting messages to process. :raises MessageStateError: If the message has already been acknowledged/requeued/rejected. 
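A typical consumer callback (a sketch; ``process`` and ``TemporaryError`` are application-defined)::

    def callback(body, message):
        try:
            process(body)
        except TemporaryError:
            message.requeue()
        else:
            message.ack()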
""" if self.acknowledged: raise self.MessageStateError( 'Message already acknowledged with state: {0._state}'.format( self)) self.channel.basic_reject(self.delivery_tag, requeue=True) self._state = 'REQUEUED' def decode(self): """Deserialize the message body, returning the original python structure sent by the publisher.""" return loads(self.body, self.content_type, self.content_encoding, accept=self.accept) @property def acknowledged(self): """Set to true if the message has been acknowledged.""" return self._state in ACK_STATES @property def payload(self): """The decoded message body.""" if not self._decoded_cache: self._decoded_cache = self.decode() return self._decoded_cache kombu-3.0.7/kombu/messaging.py0000644000076500000000000005173212237554371016735 0ustar asksolwheel00000000000000""" kombu.messaging =============== Sending and receiving messages. """ from __future__ import absolute_import from itertools import count from .compression import compress from .connection import maybe_channel, is_connection from .entity import Exchange, Queue, DELIVERY_MODES from .exceptions import ContentDisallowed from .five import int_types, text_t, values from .serialization import dumps, prepare_accept_content from .utils import ChannelPromise, maybe_list __all__ = ['Exchange', 'Queue', 'Producer', 'Consumer'] class Producer(object): """Message Producer. :param channel: Connection or channel. :keyword exchange: Optional default exchange. :keyword routing_key: Optional default routing key. :keyword serializer: Default serializer. Default is `"json"`. :keyword compression: Default compression method. Default is no compression. :keyword auto_declare: Automatically declare the default exchange at instantiation. Default is :const:`True`. :keyword on_return: Callback to call for undeliverable messages, when the `mandatory` or `immediate` arguments to :meth:`publish` is used. This callback needs the following signature: `(exception, exchange, routing_key, message)`. Note that the producer needs to drain events to use this feature. """ #: Default exchange exchange = None #: Default routing key. routing_key = '' #: Default serializer to use. Default is JSON. serializer = None #: Default compression method. Disabled by default. compression = None #: By default the exchange is declared at instantiation. #: If you want to declare manually then you can set this #: to :const:`False`. auto_declare = True #: Basic return callback. on_return = None #: Set if channel argument was a Connection instance (using #: default_channel). __connection__ = None def __init__(self, channel, exchange=None, routing_key=None, serializer=None, auto_declare=None, compression=None, on_return=None): self._channel = channel self.exchange = exchange self.routing_key = routing_key or self.routing_key self.serializer = serializer or self.serializer self.compression = compression or self.compression self.on_return = on_return or self.on_return self._channel_promise = None if self.exchange is None: self.exchange = Exchange('') if auto_declare is not None: self.auto_declare = auto_declare if self._channel: self.revive(self._channel) def __repr__(self): return ''.format(self) def __reduce__(self): return self.__class__, self.__reduce_args__() def __reduce_args__(self): return (None, self.exchange, self.routing_key, self.serializer, self.auto_declare, self.compression) def declare(self): """Declare the exchange. This happens automatically at instantiation if :attr:`auto_declare` is enabled. 
""" if self.exchange.name: self.exchange.declare() def maybe_declare(self, entity, retry=False, **retry_policy): """Declare the exchange if it hasn't already been declared during this session.""" if entity: from .common import maybe_declare return maybe_declare(entity, self.channel, retry, **retry_policy) def publish(self, body, routing_key=None, delivery_mode=None, mandatory=False, immediate=False, priority=0, content_type=None, content_encoding=None, serializer=None, headers=None, compression=None, exchange=None, retry=False, retry_policy=None, declare=[], **properties): """Publish message to the specified exchange. :param body: Message body. :keyword routing_key: Message routing key. :keyword delivery_mode: See :attr:`delivery_mode`. :keyword mandatory: Currently not supported. :keyword immediate: Currently not supported. :keyword priority: Message priority. A number between 0 and 9. :keyword content_type: Content type. Default is auto-detect. :keyword content_encoding: Content encoding. Default is auto-detect. :keyword serializer: Serializer to use. Default is auto-detect. :keyword compression: Compression method to use. Default is none. :keyword headers: Mapping of arbitrary headers to pass along with the message body. :keyword exchange: Override the exchange. Note that this exchange must have been declared. :keyword declare: Optional list of required entities that must have been declared before publishing the message. The entities will be declared using :func:`~kombu.common.maybe_declare`. :keyword retry: Retry publishing, or declaring entities if the connection is lost. :keyword retry_policy: Retry configuration, this is the keywords supported by :meth:`~kombu.Connection.ensure`. :keyword \*\*properties: Additional message properties, see AMQP spec. 
""" headers = {} if headers is None else headers retry_policy = {} if retry_policy is None else retry_policy routing_key = self.routing_key if routing_key is None else routing_key compression = self.compression if compression is None else compression exchange = exchange or self.exchange if isinstance(exchange, Exchange): delivery_mode = delivery_mode or exchange.delivery_mode exchange = exchange.name else: delivery_mode = delivery_mode or self.exchange.delivery_mode if not isinstance(delivery_mode, int_types): delivery_mode = DELIVERY_MODES[delivery_mode] properties['delivery_mode'] = delivery_mode body, content_type, content_encoding = self._prepare( body, serializer, content_type, content_encoding, compression, headers) publish = self._publish if retry: publish = self.connection.ensure(self, publish, **retry_policy) return publish(body, priority, content_type, content_encoding, headers, properties, routing_key, mandatory, immediate, exchange, declare) def _publish(self, body, priority, content_type, content_encoding, headers, properties, routing_key, mandatory, immediate, exchange, declare): channel = self.channel message = channel.prepare_message( body, priority, content_type, content_encoding, headers, properties, ) if declare: maybe_declare = self.maybe_declare [maybe_declare(entity) for entity in declare] return channel.basic_publish( message, exchange=exchange, routing_key=routing_key, mandatory=mandatory, immediate=immediate, ) def _get_channel(self): channel = self._channel if isinstance(channel, ChannelPromise): channel = self._channel = channel() self.exchange.revive(channel) if self.on_return: channel.events['basic_return'].add(self.on_return) return channel def _set_channel(self, channel): self._channel = channel channel = property(_get_channel, _set_channel) def revive(self, channel): """Revive the producer after connection loss.""" if is_connection(channel): connection = channel self.__connection__ = connection channel = ChannelPromise(lambda: connection.default_channel) if isinstance(channel, ChannelPromise): self._channel = channel self.exchange = self.exchange(channel) else: # Channel already concrete self._channel = channel if self.on_return: self._channel.events['basic_return'].add(self.on_return) self.exchange = self.exchange(channel) if self.auto_declare: # auto_decare is not recommended as this will force # evaluation of the channel. self.declare() def __enter__(self): return self def __exit__(self, *exc_info): self.release() def release(self): pass close = release def _prepare(self, body, serializer=None, content_type=None, content_encoding=None, compression=None, headers=None): # No content_type? Then we're serializing the data internally. if not content_type: serializer = serializer or self.serializer (content_type, content_encoding, body) = dumps(body, serializer=serializer) else: # If the programmer doesn't want us to serialize, # make sure content_encoding is set. if isinstance(body, text_t): if not content_encoding: content_encoding = 'utf-8' body = body.encode(content_encoding) # If they passed in a string, we can't know anything # about it. So assume it's binary data. elif not content_encoding: content_encoding = 'binary' if compression: body, headers['compression'] = compress(body, compression) return body, content_type, content_encoding @property def connection(self): try: return self.__connection__ or self.channel.connection.client except AttributeError: pass class Consumer(object): """Message consumer. :param channel: see :attr:`channel`. 
:param queues: see :attr:`queues`. :keyword no_ack: see :attr:`no_ack`. :keyword auto_declare: see :attr:`auto_declare`. :keyword callbacks: see :attr:`callbacks`. :keyword on_message: see :attr:`on_message`. :keyword on_decode_error: see :attr:`on_decode_error`. """ ContentDisallowed = ContentDisallowed #: The connection/channel to use for this consumer. channel = None #: A single :class:`~kombu.Queue`, or a list of queues to #: consume from. queues = None #: Flag for automatic message acknowledgment. #: If enabled, the messages are automatically acknowledged by the #: broker. This can increase performance but means that you #: have no control over when the message is removed. #: #: Disabled by default. no_ack = None #: By default all entities will be declared at instantiation; if you #: want to handle this manually you can set this to :const:`False`. auto_declare = True #: List of callbacks called in order when a message is received. #: #: The signature of the callbacks must take two arguments: #: `(body, message)`, which is the decoded message body and #: the `Message` instance (a subclass of #: :class:`~kombu.transport.base.Message`). callbacks = None #: Optional function called whenever a message is received. #: #: When defined, this function will be called instead of the #: :meth:`receive` method, and :attr:`callbacks` will be disabled. #: #: So this can be used as an alternative to :attr:`callbacks` when #: you don't want the body to be automatically decoded. #: Note that the message will still be decompressed if the message #: has the ``compression`` header set. #: #: The signature of the callback must take a single argument, #: which is the raw message object (a subclass of #: :class:`~kombu.transport.base.Message`). #: #: Also note that the ``message.body`` attribute, which is the raw #: contents of the message body, may in some cases be a read-only #: :class:`buffer` object. on_message = None #: Callback called when a message can't be decoded. #: #: The signature of the callback must take two arguments: `(message, #: exc)`, which is the message that can't be decoded and the exception #: that occurred while trying to decode it. on_decode_error = None #: List of accepted content-types. #: #: An exception will be raised if the consumer receives #: a message with an untrusted content type. #: By default all content-types are accepted, but not if #: :func:`kombu.serialization.disable_insecure_serializers` was called, #: in which case only json is allowed. accept = None _tags = count(1) # global def __init__(self, channel, queues=None, no_ack=None, auto_declare=None, callbacks=None, on_decode_error=None, on_message=None, accept=None): self.channel = channel self.queues = self.queues or [] if queues is None else queues self.no_ack = self.no_ack if no_ack is None else no_ack self.callbacks = (self.callbacks or [] if callbacks is None else callbacks) self.on_message = on_message self._active_tags = {} if auto_declare is not None: self.auto_declare = auto_declare if on_decode_error is not None: self.on_decode_error = on_decode_error self.accept = prepare_accept_content(accept) if self.channel: self.revive(self.channel) def revive(self, channel): """Revive consumer after connection loss.""" self._active_tags.clear() channel = self.channel = maybe_channel(channel) self.queues = [queue(self.channel) for queue in maybe_list(self.queues)] for queue in self.queues: queue.revive(channel) if self.auto_declare: self.declare() def declare(self): """Declare queues, exchanges and bindings.
This is done automatically at instantiation if :attr:`auto_declare` is set. """ for queue in self.queues: queue.declare() def register_callback(self, callback): """Register a new callback to be called when a message is received. The signature of the callback needs to accept two arguments: `(body, message)`, which is the decoded message body and the `Message` instance (a subclass of :class:`~kombu.transport.base.Message`). """ self.callbacks.append(callback) def __enter__(self): self.consume() return self def __exit__(self, *exc_info): try: self.cancel() except Exception: pass def add_queue(self, queue): """Add a queue to the list of queues to consume from. This will not start consuming from the queue; for that you will have to call :meth:`consume` after. """ queue = queue(self.channel) if self.auto_declare: queue.declare() self.queues.append(queue) return queue def add_queue_from_dict(self, queue, **options): """This method is deprecated. Instead please use:: consumer.add_queue(Queue.from_dict(d)) """ return self.add_queue(Queue.from_dict(queue, **options)) def consume(self, no_ack=None): """Start consuming messages. Can be called multiple times, but note that while it will consume from new queues added since the last call, it will not cancel consuming from removed queues (use :meth:`cancel_by_queue`). :param no_ack: See :attr:`no_ack`. """ if self.queues: no_ack = self.no_ack if no_ack is None else no_ack H, T = self.queues[:-1], self.queues[-1] for queue in H: self._basic_consume(queue, no_ack=no_ack, nowait=True) self._basic_consume(T, no_ack=no_ack, nowait=False) def cancel(self): """End all active queue consumers. This does not affect already delivered messages, but it does mean the server will not send any more messages for this consumer. """ cancel = self.channel.basic_cancel for tag in values(self._active_tags): cancel(tag) self._active_tags.clear() close = cancel def cancel_by_queue(self, queue): """Cancel consumer by queue name.""" try: tag = self._active_tags.pop(queue) except KeyError: pass else: self.queues[:] = [q for q in self.queues if q.name != queue] self.channel.basic_cancel(tag) def consuming_from(self, queue): """Return :const:`True` if the consumer is currently consuming from `queue`.""" name = queue if isinstance(queue, Queue): name = queue.name return name in self._active_tags def purge(self): """Purge messages from all queues. .. warning:: This will *delete all ready messages*; there is no undo operation. """ return sum(queue.purge() for queue in self.queues) def flow(self, active): """Enable/disable flow from peer. This is a simple flow-control mechanism that a peer can use to avoid overflowing its queues or otherwise finding itself receiving more messages than it can process. The peer that receives a request to stop sending content will finish sending the current content (if any), and then wait until flow is reactivated. """ self.channel.flow(active) def qos(self, prefetch_size=0, prefetch_count=0, apply_global=False): """Specify quality of service. The client can request that messages should be sent in advance so that when the client finishes processing a message, the following message is already held locally, rather than needing to be sent down the channel. Prefetching gives a performance improvement. The prefetch window is ignored if the :attr:`no_ack` option is set. :param prefetch_size: Specify the prefetch window in octets.
The server will send a message in advance if it is equal to or smaller in size than the available prefetch size (and also falls within other prefetch limits). May be set to zero, meaning "no specific limit", although other prefetch limits may still apply. :param prefetch_count: Specify the prefetch window in terms of whole messages. :param apply_global: Apply new settings globally on all channels. Currently not supported by RabbitMQ. """ return self.channel.basic_qos(prefetch_size, prefetch_count, apply_global) def recover(self, requeue=False): """Redeliver unacknowledged messages. Asks the broker to redeliver all unacknowledged messages on the specified channel. :keyword requeue: By default the messages will be redelivered to the original recipient. With `requeue` set to true, the server will attempt to requeue the message, potentially then delivering it to an alternative subscriber. """ return self.channel.basic_recover(requeue=requeue) def receive(self, body, message): """Method called when a message is received. This dispatches to the registered :attr:`callbacks`. :param body: The decoded message body. :param message: The `Message` instance. :raises NotImplementedError: If no consumer callbacks have been registered. """ callbacks = self.callbacks if not callbacks: raise NotImplementedError('Consumer does not have any callbacks') [callback(body, message) for callback in callbacks] def _basic_consume(self, queue, consumer_tag=None, no_ack=no_ack, nowait=True): tag = self._active_tags.get(queue.name) if tag is None: tag = self._add_tag(queue, consumer_tag) queue.consume(tag, self._receive_callback, no_ack=no_ack, nowait=nowait) return tag def _add_tag(self, queue, consumer_tag=None): tag = consumer_tag or str(next(self._tags)) self._active_tags[queue.name] = tag return tag def _receive_callback(self, message): accept = self.accept on_m, channel, decoded = self.on_message, self.channel, None try: m2p = getattr(channel, 'message_to_python', None) if m2p: message = m2p(message) if accept is not None: message.accept = accept decoded = None if on_m else message.decode() except Exception as exc: if not self.on_decode_error: raise self.on_decode_error(message, exc) else: return on_m(message) if on_m else self.receive(decoded, message) def __repr__(self): return '<Consumer: {0.queues}>'.format(self) @property def connection(self): try: return self.channel.connection.client except AttributeError: pass kombu-3.0.7/kombu/mixins.py0000644000076500000000000001746712237554371016276 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- """ kombu.mixins ============ Useful mixin classes. """ from __future__ import absolute_import import socket from contextlib import contextmanager from functools import partial from itertools import count from time import sleep from .common import ignore_errors from .five import range from .messaging import Consumer from .log import get_logger from .utils import cached_property, nested from .utils.encoding import safe_repr from .utils.limits import TokenBucket __all__ = ['ConsumerMixin'] logger = get_logger(__name__) debug, info, warn, error = logger.debug, logger.info, logger.warn, logger.error class ConsumerMixin(object): """Convenience mixin for implementing consumer programs. It can be used outside of threads, with threads, or with greenthreads (eventlet/gevent). The basic class would need a :attr:`connection` attribute which must be a :class:`~kombu.Connection` instance, and define a :meth:`get_consumers` method that returns a list of :class:`kombu.Consumer` instances to use.
Supporting multiple consumers is important so that multiple channels can be used for different QoS requirements. **Example**: .. code-block:: python class Worker(ConsumerMixin): task_queue = Queue('tasks', Exchange('tasks'), 'tasks') def __init__(self, connection): self.connection = connection def get_consumers(self, Consumer, channel): return [Consumer(queues=[self.task_queue], callbacks=[self.on_task])] def on_task(self, body, message): print('Got task: {0!r}'.format(body)) message.ack() **Additional handler methods**: * :meth:`extra_context` Optional extra context manager that will be entered after the connection and consumers have been set up. Takes arguments ``(connection, channel)``. * :meth:`on_connection_error` Handler called if the connection is lost or is unavailable. Takes arguments ``(exc, interval)``, where interval is the time in seconds until the next connection attempt. The default handler will log the exception. * :meth:`on_connection_revived` Handler called as soon as the connection is re-established after connection failure. Takes no arguments. * :meth:`on_consume_ready` Handler called when the consumer is ready to accept messages. Takes arguments ``(connection, channel, consumers)``. Also keyword arguments to ``consume`` are forwarded to this handler. * :meth:`on_consume_end` Handler called after the consumers are cancelled. Takes arguments ``(connection, channel)``. * :meth:`on_iteration` Handler called for every iteration while draining events. Takes no arguments. * :meth:`on_decode_error` Handler called if a consumer was unable to decode the body of a message. Takes arguments ``(message, exc)`` where message is the original message object. The default handler will log the error and acknowledge the message, so if you override it, make sure to call super, or perform these steps yourself. """ #: maximum number of retries trying to re-establish the connection, #: if the connection is lost/unavailable. connect_max_retries = None #: When this is set to true, the consumer should stop consuming #: and return, so that it can be joined if it is the implementation #: of a thread. should_stop = False def get_consumers(self, Consumer, channel): raise NotImplementedError('Subclass responsibility') def on_connection_revived(self): pass def on_consume_ready(self, connection, channel, consumers, **kwargs): pass def on_consume_end(self, connection, channel): pass def on_iteration(self): pass def on_decode_error(self, message, exc): error("Can't decode message body: %r (type:%r encoding:%r raw:%r)", exc, message.content_type, message.content_encoding, safe_repr(message.body)) message.ack() def on_connection_error(self, exc, interval): warn('Broker connection error: %r. ' 'Trying again in %s seconds.', exc, interval) @contextmanager def extra_context(self, connection, channel): yield def run(self, _tokens=1): restart_limit = self.restart_limit errors = (self.connection.connection_errors + self.connection.channel_errors) while not self.should_stop: try: if restart_limit.can_consume(_tokens): for _ in self.consume(limit=None): # pragma: no cover pass else: sleep(restart_limit.expected_time(_tokens)) except errors: warn('Connection to broker lost.
' 'Trying to re-establish the connection...') @contextmanager def consumer_context(self, **kwargs): with self.Consumer() as (connection, channel, consumers): with self.extra_context(connection, channel): self.on_consume_ready(connection, channel, consumers, **kwargs) yield connection, channel, consumers def consume(self, limit=None, timeout=None, safety_interval=1, **kwargs): elapsed = 0 with self.consumer_context(**kwargs) as (conn, channel, consumers): for i in limit and range(limit) or count(): if self.should_stop: break self.on_iteration() try: conn.drain_events(timeout=safety_interval) except socket.timeout: elapsed += safety_interval if timeout and elapsed >= timeout: raise except socket.error: if not self.should_stop: raise else: yield elapsed = 0 debug('consume exiting') def maybe_conn_error(self, fun): """Use :func:`kombu.common.ignore_errors` instead.""" return ignore_errors(self, fun) def create_connection(self): return self.connection.clone() @contextmanager def establish_connection(self): with self.create_connection() as conn: conn.ensure_connection(self.on_connection_error, self.connect_max_retries) yield conn @contextmanager def Consumer(self): with self.establish_connection() as conn: self.on_connection_revived() info('Connected to %s', conn.as_uri()) channel = conn.default_channel cls = partial(Consumer, channel, on_decode_error=self.on_decode_error) with self._consume_from(*self.get_consumers(cls, channel)) as c: yield conn, channel, c debug('Consumers cancelled') self.on_consume_end(conn, channel) debug('Connection closed') def _consume_from(self, *consumers): return nested(*consumers) @cached_property def restart_limit(self): # the AttributeError that can be caught from amqplib # poses problems for the too-often-restarts protection # in Connection.ensure_connection return TokenBucket(1) @cached_property def connection_errors(self): return self.connection.connection_errors @cached_property def channel_errors(self): return self.connection.channel_errors kombu-3.0.7/kombu/pidbox.py0000644000076500000000000002754212237554371016247 0ustar asksolwheel00000000000000""" kombu.pidbox ============ Generic process mailbox. """ from __future__ import absolute_import import socket import warnings from collections import defaultdict, deque from copy import copy from itertools import count from threading import local from time import time from . import Exchange, Queue, Consumer, Producer from .clocks import LamportClock from .common import maybe_declare, oid_from from .exceptions import InconsistencyError from .five import range from .log import get_logger from .utils import cached_property, kwdict, uuid, reprcall REPLY_QUEUE_EXPIRES = 10 W_PIDBOX_IN_USE = """\ A node named {node.hostname} is already using this process mailbox! Maybe you forgot to shut down the other node or did not do so properly? Or if you meant to start multiple nodes on the same host, please make sure you give each node a unique node name! """ __all__ = ['Node', 'Mailbox'] logger = get_logger(__name__) debug, error = logger.debug, logger.error class Node(object): #: hostname of the node. hostname = None #: the :class:`Mailbox` this is a node for. mailbox = None #: map of method names to handlers. handlers = None #: current context (passed on to handlers). state = None #: current channel.
channel = None def __init__(self, hostname, state=None, channel=None, handlers=None, mailbox=None): self.channel = channel self.mailbox = mailbox self.hostname = hostname self.state = state self.adjust_clock = self.mailbox.clock.adjust if handlers is None: handlers = {} self.handlers = handlers def Consumer(self, channel=None, no_ack=True, accept=None, **options): queue = self.mailbox.get_queue(self.hostname) def verify_exclusive(name, messages, consumers): if consumers: warnings.warn(W_PIDBOX_IN_USE.format(node=self)) queue.on_declared = verify_exclusive return Consumer( channel or self.channel, [queue], no_ack=no_ack, accept=self.mailbox.accept if accept is None else accept, **options ) def handler(self, fun): self.handlers[fun.__name__] = fun return fun def listen(self, channel=None, callback=None): consumer = self.Consumer(channel=channel, callbacks=[callback or self.handle_message]) consumer.consume() return consumer def dispatch(self, method, arguments=None, reply_to=None, ticket=None, **kwargs): arguments = arguments or {} debug('pidbox received method %s [reply_to:%s ticket:%s]', reprcall(method, (), kwargs=arguments), reply_to, ticket) handle = reply_to and self.handle_call or self.handle_cast try: reply = handle(method, kwdict(arguments)) except SystemExit: raise except Exception as exc: error('pidbox command error: %r', exc, exc_info=1) reply = {'error': repr(exc)} if reply_to: self.reply({self.hostname: reply}, exchange=reply_to['exchange'], routing_key=reply_to['routing_key'], ticket=ticket) return reply def handle(self, method, arguments={}): return self.handlers[method](self.state, **arguments) def handle_call(self, method, arguments): return self.handle(method, arguments) def handle_cast(self, method, arguments): return self.handle(method, arguments) def handle_message(self, body, message=None): destination = body.get('destination') if message: self.adjust_clock(message.headers.get('clock') or 0) if not destination or self.hostname in destination: return self.dispatch(**kwdict(body)) dispatch_from_message = handle_message def reply(self, data, exchange, routing_key, ticket, **kwargs): self.mailbox._publish_reply(data, exchange, routing_key, ticket, channel=self.channel) class Mailbox(object): node_cls = Node exchange_fmt = '%s.pidbox' reply_exchange_fmt = 'reply.%s.pidbox' #: Name of application. namespace = None #: Connection (if bound). connection = None #: Exchange type (usually direct, or fanout for broadcast). type = 'direct' #: mailbox exchange (init by constructor). exchange = None #: exchange to send replies to. reply_exchange = None #: Only accepts json messages by default. 
accept = ['json'] def __init__(self, namespace, type='direct', connection=None, clock=None, accept=None): self.namespace = namespace self.connection = connection self.type = type self.clock = LamportClock() if clock is None else clock self.exchange = self._get_exchange(self.namespace, self.type) self.reply_exchange = self._get_reply_exchange(self.namespace) self._tls = local() self.unclaimed = defaultdict(deque) self.accept = self.accept if accept is None else accept def __call__(self, connection): bound = copy(self) bound.connection = connection return bound def Node(self, hostname=None, state=None, channel=None, handlers=None): hostname = hostname or socket.gethostname() return self.node_cls(hostname, state, channel, handlers, mailbox=self) def call(self, destination, command, kwargs={}, timeout=None, callback=None, channel=None): return self._broadcast(command, kwargs, destination, reply=True, timeout=timeout, callback=callback, channel=channel) def cast(self, destination, command, kwargs={}): return self._broadcast(command, kwargs, destination, reply=False) def abcast(self, command, kwargs={}): return self._broadcast(command, kwargs, reply=False) def multi_call(self, command, kwargs={}, timeout=1, limit=None, callback=None, channel=None): return self._broadcast(command, kwargs, reply=True, timeout=timeout, limit=limit, callback=callback, channel=channel) def get_reply_queue(self): oid = self.oid return Queue('%s.%s' % (oid, self.reply_exchange.name), exchange=self.reply_exchange, routing_key=oid, durable=False, auto_delete=True, queue_arguments={ 'x-expires': int(REPLY_QUEUE_EXPIRES * 1000), }) @cached_property def reply_queue(self): return self.get_reply_queue() def get_queue(self, hostname): return Queue('%s.%s.pidbox' % (hostname, self.namespace), exchange=self.exchange, durable=False, auto_delete=True) def _publish_reply(self, reply, exchange, routing_key, ticket, channel=None, **opts): chan = channel or self.connection.default_channel exchange = Exchange(exchange, exchange_type='direct', delivery_mode='transient', durable=False) producer = Producer(chan, auto_declare=False) try: producer.publish( reply, exchange=exchange, routing_key=routing_key, declare=[exchange], headers={ 'ticket': ticket, 'clock': self.clock.forward(), }, **opts ) except InconsistencyError: pass # queue probably deleted and no one is expecting a reply. 
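# A rough sketch of how these pieces fit together (the node name and
# command below are illustrative only, not part of the API):
#
#     mailbox = Mailbox('myapp')(connection)  # bind to a connection
#     replies = mailbox.call(['worker1.example.com'], 'ping', timeout=1)
#
# call() goes through _broadcast() -> _publish(); when a reply is
# expected, _publish() declares the reply queue and stamps the message
# with a ticket, which _collect() later matches against the 'ticket'
# header of incoming replies.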
def _publish(self, type, arguments, destination=None, reply_ticket=None, channel=None, timeout=None): message = {'method': type, 'arguments': arguments, 'destination': destination} chan = channel or self.connection.default_channel exchange = self.exchange if reply_ticket: maybe_declare(self.reply_queue(chan)) message.update(ticket=reply_ticket, reply_to={'exchange': self.reply_exchange.name, 'routing_key': self.oid}) producer = Producer(chan, auto_declare=False) producer.publish( message, exchange=exchange.name, declare=[exchange], headers={'clock': self.clock.forward(), 'expires': time() + timeout if timeout else 0}, ) def _broadcast(self, command, arguments=None, destination=None, reply=False, timeout=1, limit=None, callback=None, channel=None): if destination is not None and \ not isinstance(destination, (list, tuple)): raise ValueError( 'destination must be a list/tuple, not {0}'.format( type(destination))) arguments = arguments or {} reply_ticket = reply and uuid() or None chan = channel or self.connection.default_channel # Set reply limit to number of destinations (if specified) if limit is None and destination: limit = destination and len(destination) or None self._publish(command, arguments, destination=destination, reply_ticket=reply_ticket, channel=chan, timeout=timeout) if reply_ticket: return self._collect(reply_ticket, limit=limit, timeout=timeout, callback=callback, channel=chan) def _collect(self, ticket, limit=None, timeout=1, callback=None, channel=None, accept=None): if accept is None: accept = self.accept chan = channel or self.connection.default_channel queue = self.reply_queue consumer = Consumer(chan, [queue], accept=accept, no_ack=True) responses = [] unclaimed = self.unclaimed adjust_clock = self.clock.adjust try: return unclaimed.pop(ticket) except KeyError: pass def on_message(body, message): # ticket header added in kombu 2.5 header = message.headers.get adjust_clock(header('clock') or 0) expires = header('expires') if expires and time() > expires: return this_id = header('ticket', ticket) if this_id == ticket: if callback: callback(body) responses.append(body) else: unclaimed[this_id].append(body) consumer.register_callback(on_message) try: with consumer: for i in limit and range(limit) or count(): try: self.connection.drain_events(timeout=timeout) except socket.timeout: break return responses finally: chan.after_reply_message_received(queue.name) def _get_exchange(self, namespace, type): return Exchange(self.exchange_fmt % namespace, type=type, durable=False, delivery_mode='transient') def _get_reply_exchange(self, namespace): return Exchange(self.reply_exchange_fmt % namespace, type='direct', durable=False, delivery_mode='transient') @cached_property def oid(self): try: return self._tls.OID except AttributeError: oid = self._tls.OID = oid_from(self) return oid kombu-3.0.7/kombu/pools.py0000644000076500000000000000734412243752136016110 0ustar asksolwheel00000000000000""" kombu.pools =========== Public resource pools.
""" from __future__ import absolute_import import os from itertools import chain from .connection import Resource from .five import range, values from .messaging import Producer from .utils import EqualityDict from .utils.functional import lazy __all__ = ['ProducerPool', 'PoolGroup', 'register_group', 'connections', 'producers', 'get_limit', 'set_limit', 'reset'] _limit = [200] _used = [False] _groups = [] use_global_limit = object() disable_limit_protection = os.environ.get('KOMBU_DISABLE_LIMIT_PROTECTION') class ProducerPool(Resource): Producer = Producer def __init__(self, connections, *args, **kwargs): self.connections = connections self.Producer = kwargs.pop('Producer', None) or self.Producer super(ProducerPool, self).__init__(*args, **kwargs) def _acquire_connection(self): return self.connections.acquire(block=True) def create_producer(self): conn = self._acquire_connection() try: return self.Producer(conn) except BaseException: conn.release() raise def new(self): return lazy(self.create_producer) def setup(self): if self.limit: for _ in range(self.limit): self._resource.put_nowait(self.new()) def close_resource(self, resource): pass def prepare(self, p): if callable(p): p = p() if p._channel is None: conn = self._acquire_connection() try: p.revive(conn) except BaseException: conn.release() raise return p def release(self, resource): if resource.__connection__: resource.__connection__.release() resource.channel = None super(ProducerPool, self).release(resource) class PoolGroup(EqualityDict): def __init__(self, limit=None): self.limit = limit def create(self, resource, limit): raise NotImplementedError('PoolGroups must define ``create``') def __missing__(self, resource): limit = self.limit if limit is use_global_limit: limit = get_limit() if not _used[0]: _used[0] = True k = self[resource] = self.create(resource, limit) return k def register_group(group): _groups.append(group) return group class Connections(PoolGroup): def create(self, connection, limit): return connection.Pool(limit=limit) connections = register_group(Connections(limit=use_global_limit)) class Producers(PoolGroup): def create(self, connection, limit): return ProducerPool(connections[connection], limit=limit) producers = register_group(Producers(limit=use_global_limit)) def _all_pools(): return chain(*[(values(g) if g else iter([])) for g in _groups]) def get_limit(): return _limit[0] def set_limit(limit, force=False, reset_after=False): limit = limit or 0 glimit = _limit[0] or 0 if limit < glimit: if not disable_limit_protection and (_used[0] and not force): raise RuntimeError("Can't lower limit after pool in use.") reset_after = True if limit != glimit: _limit[0] = limit for pool in _all_pools(): pool.limit = limit if reset_after: reset() return limit def reset(*args, **kwargs): for pool in _all_pools(): try: pool.force_close_all() except Exception: pass for group in _groups: group.clear() _used[0] = False try: from multiprocessing.util import register_after_fork register_after_fork(connections, reset) except ImportError: # pragma: no cover pass kombu-3.0.7/kombu/serialization.py0000644000076500000000000003326312237554371017634 0ustar asksolwheel00000000000000""" kombu.serialization =================== Serialization utilities. 
""" from __future__ import absolute_import import codecs import os import sys import pickle as pypickle try: import cPickle as cpickle except ImportError: # pragma: no cover cpickle = None # noqa from collections import namedtuple from .exceptions import SerializerNotInstalled, ContentDisallowed from .five import BytesIO, text_t from .utils import entrypoints from .utils.encoding import str_to_bytes, bytes_t __all__ = ['pickle', 'loads', 'dumps', 'register', 'unregister'] SKIP_DECODE = frozenset(['binary', 'ascii-8bit']) if sys.platform.startswith('java'): # pragma: no cover def _decode(t, coding): return codecs.getdecoder(coding)(t)[0] else: _decode = codecs.decode pickle = cpickle or pypickle pickle_load = pickle.load #: Kombu requires Python 2.5 or later so we use protocol 2 by default. #: There's a new protocol (3) but this is only supported by Python 3. pickle_protocol = int(os.environ.get('PICKLE_PROTOCOL', 2)) codec = namedtuple('codec', ('content_type', 'content_encoding', 'encoder')) def pickle_loads(s, load=pickle_load): # used to support buffer objects return load(BytesIO(s)) def parenthesize_alias(first, second): return '%s (%s)' % (first, second) if first else second class SerializerRegistry(object): """The registry keeps track of serialization methods.""" def __init__(self): self._encoders = {} self._decoders = {} self._default_encode = None self._default_content_type = None self._default_content_encoding = None self._disabled_content_types = set() self.type_to_name = {} self.name_to_type = {} def register(self, name, encoder, decoder, content_type, content_encoding='utf-8'): if encoder: self._encoders[name] = codec( content_type, content_encoding, encoder, ) if decoder: self._decoders[content_type] = decoder self.type_to_name[content_type] = name self.name_to_type[name] = content_type def enable(self, name): if '/' not in name: name = self.name_to_type[name] self._disabled_content_types.discard(name) def disable(self, name): if '/' not in name: name = self.name_to_type[name] self._disabled_content_types.add(name) def unregister(self, name): try: content_type = self.name_to_type[name] self._decoders.pop(content_type, None) self._encoders.pop(name, None) self.type_to_name.pop(content_type, None) self.name_to_type.pop(name, None) except KeyError: raise SerializerNotInstalled( 'No encoder/decoder installed for {0}'.format(name)) def _set_default_serializer(self, name): """ Set the default serialization method used by this library. :param name: The name of the registered serialization method. For example, `json` (default), `pickle`, `yaml`, `msgpack`, or any custom methods registered using :meth:`register`. :raises SerializerNotInstalled: If the serialization method requested is not available. """ try: (self._default_content_type, self._default_content_encoding, self._default_encode) = self._encoders[name] except KeyError: raise SerializerNotInstalled( 'No encoder installed for {0}'.format(name)) def dumps(self, data, serializer=None): if serializer == 'raw': return raw_encode(data) if serializer and not self._encoders.get(serializer): raise SerializerNotInstalled( 'No encoder installed for {0}'.format(serializer)) # If a raw string was sent, assume binary encoding # (it's likely either ASCII or a raw binary file, and a character # set of 'binary' will encompass both, even if not ideal. 
if not serializer and isinstance(data, bytes_t): # In Python 3+, this would be "bytes"; allow binary data to be # sent as a message without getting encoder errors return 'application/data', 'binary', data # For Unicode objects, force it into a string if not serializer and isinstance(data, text_t): payload = data.encode('utf-8') return 'text/plain', 'utf-8', payload if serializer: content_type, content_encoding, encoder = \ self._encoders[serializer] else: encoder = self._default_encode content_type = self._default_content_type content_encoding = self._default_content_encoding payload = encoder(data) return content_type, content_encoding, payload encode = dumps # XXX compat def loads(self, data, content_type, content_encoding, accept=None, force=False): if accept is not None: if content_type not in accept: raise self._for_untrusted_content(content_type, 'untrusted') else: if content_type in self._disabled_content_types and not force: raise self._for_untrusted_content(content_type, 'disabled') content_type = content_type or 'application/data' content_encoding = (content_encoding or 'utf-8').lower() if data: decode = self._decoders.get(content_type) if decode: return decode(data) if content_encoding not in SKIP_DECODE and \ not isinstance(data, text_t): return _decode(data, content_encoding) return data decode = loads # XXX compat def _for_untrusted_content(self, ctype, why): return ContentDisallowed( 'Refusing to deserialize {0} content of type {1}'.format( why, parenthesize_alias(self.type_to_name.get(ctype, ctype), ctype), ), ) #: Global registry of serializers/deserializers. registry = SerializerRegistry() """ .. function:: dumps(data, serializer=default_serializer) Serialize a data structure into a string suitable for sending as an AMQP message body. :param data: The message data to send. Can be a list, dictionary or a string. :keyword serializer: An optional string representing the serialization method you want the data marshalled into. (For example, `json`, `raw`, or `pickle`). If :const:`None` (default), then json will be used, unless `data` is a :class:`str` or :class:`unicode` object. In this latter case, no serialization occurs as it would be unnecessary. Note that if `serializer` is specified, then that serialization method will be used even if a :class:`str` or :class:`unicode` object is passed in. :returns: A three-item tuple containing the content type (e.g., `application/json`), content encoding (e.g., `utf-8`), and a string containing the serialized data. :raises SerializerNotInstalled: If the serialization method requested is not available. """ dumps = encode = registry.encode # XXX encode is a compat alias """ .. function:: loads(data, content_type, content_encoding): Deserialize a data stream as serialized using `dumps` based on `content_type`. :param data: The message data to deserialize. :param content_type: The content-type of the data. (e.g., `application/json`). :param content_encoding: The content-encoding of the data. (e.g., `utf-8`, `binary`, or `us-ascii`). :returns: The unserialized data. """ loads = decode = registry.decode # XXX decode is a compat alias """ .. function:: register(name, encoder, decoder, content_type, content_encoding='utf-8'): Register a new encoder/decoder. :param name: A convenience name for the serialization method. :param encoder: A method that will be passed a python data structure and should return a string representing the serialized data. If :const:`None`, then only a decoder will be registered. Encoding will not be possible.
:param decoder: A method that will be passed a string representing serialized data and should return a python data structure. If :const:`None`, then only an encoder will be registered. Decoding will not be possible. :param content_type: The mime-type describing the serialized structure. :param content_encoding: The content encoding (character set) that the `decoder` method will be returning. Will usually be `utf-8`, `us-ascii`, or `binary`. """ register = registry.register """ .. function:: unregister(name): Unregister registered encoder/decoder. :param name: Registered serialization method name. """ unregister = registry.unregister def raw_encode(data): """Special case serializer.""" content_type = 'application/data' payload = data if isinstance(payload, text_t): content_encoding = 'utf-8' payload = payload.encode(content_encoding) else: content_encoding = 'binary' return content_type, content_encoding, payload def register_json(): """Register an encoder/decoder for JSON serialization.""" from anyjson import loads as json_loads, dumps as json_dumps def _loads(obj): if isinstance(obj, bytes_t): obj = obj.decode() return json_loads(obj) registry.register('json', json_dumps, _loads, content_type='application/json', content_encoding='utf-8') def register_yaml(): """Register an encoder/decoder for YAML serialization. It is slower than JSON, but allows for more data types to be serialized. Useful if you need to send data such as dates.""" try: import yaml registry.register('yaml', yaml.safe_dump, yaml.safe_load, content_type='application/x-yaml', content_encoding='utf-8') except ImportError: def not_available(*args, **kwargs): """In case a client receives a yaml message, but yaml isn't installed.""" raise SerializerNotInstalled( 'No decoder installed for YAML. Install the PyYAML library') registry.register('yaml', None, not_available, 'application/x-yaml') if sys.version_info[0] == 3: # pragma: no cover def unpickle(s): return pickle_loads(str_to_bytes(s)) else: unpickle = pickle_loads # noqa def register_pickle(): """The fastest serialization method, but restricts you to Python clients.""" def pickle_dumps(obj, dumper=pickle.dumps): return dumper(obj, protocol=pickle_protocol) registry.register('pickle', pickle_dumps, unpickle, content_type='application/x-python-serialize', content_encoding='binary') def register_msgpack(): """See http://msgpack.sourceforge.net/""" try: try: from msgpack import packb as pack, unpackb unpack = lambda s: unpackb(s, encoding='utf-8') except ImportError: # msgpack < 0.2.0 and Python 2.5 from msgpack import packs as pack, unpacks as unpack # noqa registry.register( 'msgpack', pack, unpack, content_type='application/x-msgpack', content_encoding='binary') except (ImportError, ValueError): def not_available(*args, **kwargs): """In case a client receives a msgpack message, but msgpack isn't installed.""" raise SerializerNotInstalled( 'No decoder installed for msgpack. ' 'Please install the msgpack library') registry.register('msgpack', None, not_available, 'application/x-msgpack') # Register the base serialization methods.
register_json() register_pickle() register_yaml() register_msgpack() # Default serializer is 'json' registry._set_default_serializer('json') _setupfuns = { 'json': register_json, 'pickle': register_pickle, 'yaml': register_yaml, 'msgpack': register_msgpack, 'application/json': register_json, 'application/x-yaml': register_yaml, 'application/x-python-serialize': register_pickle, 'application/x-msgpack': register_msgpack, } def enable_insecure_serializers(choices=['pickle', 'yaml', 'msgpack']): """Enable serializers that are considered to be unsafe. Will enable ``pickle``, ``yaml`` and ``msgpack`` by default, but you can also specify a list of serializers (by name or content type) to enable. """ for choice in choices: try: registry.enable(choice) except KeyError: pass def disable_insecure_serializers(allowed=['json']): """Disable untrusted serializers. Will disable all serializers except ``json``, or you can specify a list of deserializers to allow. .. note:: Producers will still be able to serialize data in these formats, but consumers will not accept incoming data using the untrusted content types. """ for name in registry._decoders: registry.disable(name) if allowed is not None: for name in allowed: registry.enable(name) # Insecure serializers are disabled by default since v3.0 disable_insecure_serializers() # Load entrypoints from installed extensions for ep, args in entrypoints('kombu.serializers'): # pragma: no cover register(ep.name, *args) def prepare_accept_content(l, name_to_type=registry.name_to_type): if l is not None: return set(n if '/' in n else name_to_type[n] for n in l) return l kombu-3.0.7/kombu/simple.py0000644000076500000000000000767312237554371016246 0ustar asksolwheel00000000000000""" kombu.simple ============ Simple interface. """ from __future__ import absolute_import import socket from collections import deque from . import entity from .
import messaging from .connection import maybe_channel from .five import Empty, monotonic __all__ = ['SimpleQueue', 'SimpleBuffer'] class SimpleBase(object): Empty = Empty _consuming = False def __enter__(self): return self def __exit__(self, *exc_info): self.close() def __init__(self, channel, producer, consumer, no_ack=False): self.channel = maybe_channel(channel) self.producer = producer self.consumer = consumer self.no_ack = no_ack self.queue = self.consumer.queues[0] self.buffer = deque() self.consumer.register_callback(self._receive) def get(self, block=True, timeout=None): if not block: return self.get_nowait() self._consume() elapsed = 0.0 remaining = timeout while True: time_start = monotonic() if self.buffer: return self.buffer.pop() try: self.channel.connection.client.drain_events( timeout=timeout and remaining) except socket.timeout: raise self.Empty() elapsed += monotonic() - time_start remaining = timeout and timeout - elapsed or None def get_nowait(self): m = self.queue.get(no_ack=self.no_ack) if not m: raise self.Empty() return m def put(self, message, serializer=None, headers=None, compression=None, routing_key=None, **kwargs): self.producer.publish(message, serializer=serializer, routing_key=routing_key, headers=headers, compression=compression, **kwargs) def clear(self): return self.consumer.purge() def qsize(self): _, size, _ = self.queue.queue_declare(passive=True) return size def close(self): self.consumer.cancel() def _receive(self, message_data, message): self.buffer.append(message) def _consume(self): if not self._consuming: self.consumer.consume(no_ack=self.no_ack) self._consuming = True def __len__(self): """`len(self) -> self.qsize()`""" return self.qsize() def __bool__(self): return True __nonzero__ = __bool__ class SimpleQueue(SimpleBase): no_ack = False queue_opts = {} exchange_opts = {'type': 'direct'} def __init__(self, channel, name, no_ack=None, queue_opts=None, exchange_opts=None, serializer=None, compression=None, **kwargs): queue = name queue_opts = dict(self.queue_opts, **queue_opts or {}) exchange_opts = dict(self.exchange_opts, **exchange_opts or {}) if no_ack is None: no_ack = self.no_ack if not isinstance(queue, entity.Queue): exchange = entity.Exchange(name, **exchange_opts) queue = entity.Queue(name, exchange, name, **queue_opts) else: name = queue.name exchange = queue.exchange producer = messaging.Producer(channel, exchange, serializer=serializer, routing_key=name, compression=compression) consumer = messaging.Consumer(channel, queue) super(SimpleQueue, self).__init__(channel, producer, consumer, no_ack, **kwargs) class SimpleBuffer(SimpleQueue): no_ack = True queue_opts = dict(durable=False, auto_delete=True) exchange_opts = dict(durable=False, delivery_mode='transient', auto_delete=True) kombu-3.0.7/kombu/syn.py0000644000076500000000000000174412232230637015556 0ustar asksolwheel00000000000000""" kombu.syn ========= """ from __future__ import absolute_import import sys __all__ = ['detect_environment'] _environment = None def blocking(fun, *args, **kwargs): return fun(*args, **kwargs) def select_blocking_method(type): pass def _detect_environment(): ## -eventlet- if 'eventlet' in sys.modules: try: from eventlet.patcher import is_monkey_patched as is_eventlet import socket if is_eventlet(socket): return 'eventlet' except ImportError: pass # -gevent- if 'gevent' in sys.modules: try: from gevent import socket as _gsocket import socket if socket.socket is _gsocket.socket: return 'gevent' except ImportError: pass return 'default' def 
detect_environment(): global _environment if _environment is None: _environment = _detect_environment() return _environment kombu-3.0.7/kombu/tests/0000755000076500000000000000000012247127370015534 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/tests/__init__.py0000644000076500000000000000512612234207745017652 0ustar asksolwheel00000000000000from __future__ import absolute_import import anyjson import atexit import os import sys from kombu.exceptions import VersionMismatch # avoid json implementation inconsistencies. try: import json # noqa anyjson.force_implementation('json') except ImportError: anyjson.force_implementation('simplejson') def teardown(): # Workaround for multiprocessing bug where logging # is attempted after globals have already been collected at shutdown. cancelled = set() try: import multiprocessing.util cancelled.add(multiprocessing.util._exit_function) except (AttributeError, ImportError): pass try: atexit._exithandlers[:] = [ e for e in atexit._exithandlers if e[0] not in cancelled ] except AttributeError: # pragma: no cover pass # Py3 missing _exithandlers def find_distribution_modules(name=__name__, file=__file__): current_dist_depth = len(name.split('.')) - 1 current_dist = os.path.join(os.path.dirname(file), *([os.pardir] * current_dist_depth)) abs = os.path.abspath(current_dist) dist_name = os.path.basename(abs) for dirpath, dirnames, filenames in os.walk(abs): package = (dist_name + dirpath[len(abs):]).replace('/', '.') if '__init__.py' in filenames: yield package for filename in filenames: if filename.endswith('.py') and filename != '__init__.py': yield '.'.join([package, filename])[:-3] def import_all_modules(name=__name__, file=__file__, skip=[]): for module in find_distribution_modules(name, file): if module not in skip: print('preimporting %r for coverage...' % (module, )) try: __import__(module) except (ImportError, VersionMismatch, AttributeError): pass def is_in_coverage(): return (os.environ.get('COVER_ALL_MODULES') or '--with-coverage3' in sys.argv) def setup_django_env(): try: from django.conf import settings except ImportError: return if not settings.configured: settings.configure( DATABASES={ 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': ':memory:', }, }, DATABASE_ENGINE='sqlite3', DATABASE_NAME=':memory:', INSTALLED_APPS=('kombu.transport.django', ), ) def setup(): # so coverage sees all our modules.
setup_django_env() if is_in_coverage(): import_all_modules() kombu-3.0.7/kombu/tests/async/0000755000076500000000000000000012247127370016651 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/tests/async/__init__.py0000644000076500000000000000000012237554371020754 0ustar asksolwheel00000000000000kombu-3.0.7/kombu/tests/async/test_hub.py0000644000076500000000000000135712237554371021052 0ustar asksolwheel00000000000000from __future__ import absolute_import from kombu.async import hub as _hub from kombu.async.hub import Hub, get_event_loop, set_event_loop from kombu.tests.case import Case class test_Utils(Case): def setUp(self): self._prev_loop = get_event_loop() def tearDown(self): set_event_loop(self._prev_loop) def test_get_set_event_loop(self): set_event_loop(None) self.assertIsNone(_hub._current_loop) self.assertIsNone(get_event_loop()) hub = Hub() set_event_loop(hub) self.assertIs(_hub._current_loop, hub) self.assertIs(get_event_loop(), hub) class test_Hub(Case): def setUp(self): self.hub = Hub() def tearDown(self): self.hub.close() kombu-3.0.7/kombu/tests/case.py0000644000076500000000000001147112237554371017031 0ustar asksolwheel00000000000000from __future__ import absolute_import import os import sys import types from functools import wraps import mock from nose import SkipTest from kombu.five import builtins, string_t, StringIO from kombu.utils.encoding import ensure_bytes try: import unittest unittest.skip except AttributeError: import unittest2 as unittest # noqa PY3 = sys.version_info[0] == 3 patch = mock.patch call = mock.call class Case(unittest.TestCase): def assertItemsEqual(self, a, b, *args, **kwargs): return self.assertEqual(sorted(a), sorted(b), *args, **kwargs) assertSameElements = assertItemsEqual class Mock(mock.Mock): def __init__(self, *args, **kwargs): attrs = kwargs.pop('attrs', None) or {} super(Mock, self).__init__(*args, **kwargs) for attr_name, attr_value in attrs.items(): setattr(self, attr_name, attr_value) class _ContextMock(Mock): """Dummy class implementing __enter__ and __exit__ as the with statement requires these to be implemented in the class, not just the instance.""" def __enter__(self): pass def __exit__(self, *exc_info): pass def ContextMock(*args, **kwargs): obj = _ContextMock(*args, **kwargs) obj.attach_mock(Mock(), '__enter__') obj.attach_mock(Mock(), '__exit__') obj.__enter__.return_value = obj # if __exit__ returns a value the exception is ignored, # so it must return None here. obj.__exit__.return_value = None return obj class MockPool(object): def __init__(self, value=None): self.value = value or ContextMock() def acquire(self, **kwargs): return self.value def redirect_stdouts(fun): @wraps(fun) def _inner(*args, **kwargs): sys.stdout = StringIO() sys.stderr = StringIO() try: return fun(*args, **dict(kwargs, stdout=sys.stdout, stderr=sys.stderr)) finally: sys.stdout = sys.__stdout__ sys.stderr = sys.__stderr__ return _inner def module_exists(*modules): def _inner(fun): @wraps(fun) def __inner(*args, **kwargs): gen = [] for module in modules: if isinstance(module, string_t): if not PY3: module = ensure_bytes(module) module = types.ModuleType(module) gen.append(module) sys.modules[module.__name__] = module name = module.__name__ if '.'
in name: parent, _, attr = name.rpartition('.') setattr(sys.modules[parent], attr, module) try: return fun(*args, **kwargs) finally: for module in gen: sys.modules.pop(module.__name__, None) return __inner return _inner # Taken from # http://bitbucket.org/runeh/snippets/src/tip/missing_modules.py def mask_modules(*modnames): def _inner(fun): @wraps(fun) def __inner(*args, **kwargs): realimport = builtins.__import__ def myimp(name, *args, **kwargs): if name in modnames: raise ImportError('No module named %s' % name) else: return realimport(name, *args, **kwargs) builtins.__import__ = myimp try: return fun(*args, **kwargs) finally: builtins.__import__ = realimport return __inner return _inner def skip_if_environ(env_var_name): def _wrap_test(fun): @wraps(fun) def _skips_if_environ(*args, **kwargs): if os.environ.get(env_var_name): raise SkipTest('SKIP %s: %s set\n' % ( fun.__name__, env_var_name)) return fun(*args, **kwargs) return _skips_if_environ return _wrap_test def skip_if_module(module): def _wrap_test(fun): @wraps(fun) def _skip_if_module(*args, **kwargs): try: __import__(module) raise SkipTest('SKIP %s: %s available\n' % ( fun.__name__, module)) except ImportError: pass return fun(*args, **kwargs) return _skip_if_module return _wrap_test def skip_if_not_module(module, import_errors=(ImportError, )): def _wrap_test(fun): @wraps(fun) def _skip_if_not_module(*args, **kwargs): try: __import__(module) except import_errors: raise SkipTest('SKIP %s: %s not available\n' % ( fun.__name__, module)) return fun(*args, **kwargs) return _skip_if_not_module return _wrap_test def skip_if_quick(fun): return skip_if_environ('QUICKTEST')(fun) kombu-3.0.7/kombu/tests/mocks.py0000644000076500000000000001021612237554371017226 0ustar asksolwheel00000000000000from __future__ import absolute_import from itertools import count import anyjson from kombu.transport import base class Message(base.Message): def __init__(self, *args, **kwargs): self.throw_decode_error = kwargs.get('throw_decode_error', False) super(Message, self).__init__(*args, **kwargs) def decode(self): if self.throw_decode_error: raise ValueError("can't decode message") return super(Message, self).decode() class Channel(base.StdChannel): open = True throw_decode_error = False _ids = count(1) def __init__(self, connection): self.connection = connection self.called = [] self.deliveries = count(1) self.to_deliver = [] self.events = {'basic_return': set()} self.channel_id = next(self._ids) def _called(self, name): self.called.append(name) def __contains__(self, key): return key in self.called def exchange_declare(self, *args, **kwargs): self._called('exchange_declare') def prepare_message(self, body, priority=0, content_type=None, content_encoding=None, headers=None, properties={}): self._called('prepare_message') return dict(body=body, headers=headers, properties=properties, priority=priority, content_type=content_type, content_encoding=content_encoding) def basic_publish(self, message, exchange='', routing_key='', mandatory=False, immediate=False, **kwargs): self._called('basic_publish') return message, exchange, routing_key def exchange_delete(self, *args, **kwargs): self._called('exchange_delete') def queue_declare(self, *args, **kwargs): self._called('queue_declare') def queue_bind(self, *args, **kwargs): self._called('queue_bind') def queue_unbind(self, *args, **kwargs): self._called('queue_unbind') def queue_delete(self, queue, if_unused=False, if_empty=False, **kwargs): self._called('queue_delete') def basic_get(self, *args, **kwargs):
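# Record the call, then hand back a pre-loaded test message from the
# to_deliver backlog, if any; an exhausted backlog means "no message".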
self._called('basic_get') try: return self.to_deliver.pop() except IndexError: pass def queue_purge(self, *args, **kwargs): self._called('queue_purge') def basic_consume(self, *args, **kwargs): self._called('basic_consume') def basic_cancel(self, *args, **kwargs): self._called('basic_cancel') def basic_ack(self, *args, **kwargs): self._called('basic_ack') def basic_recover(self, requeue=False): self._called('basic_recover') def exchange_bind(self, *args, **kwargs): self._called('exchange_bind') def exchange_unbind(self, *args, **kwargs): self._called('exchange_unbind') def close(self): self._called('close') def message_to_python(self, message, *args, **kwargs): self._called('message_to_python') return Message(self, body=anyjson.dumps(message), delivery_tag=next(self.deliveries), throw_decode_error=self.throw_decode_error, content_type='application/json', content_encoding='utf-8') def flow(self, active): self._called('flow') def basic_reject(self, delivery_tag, requeue=False): if requeue: return self._called('basic_reject:requeue') return self._called('basic_reject') def basic_qos(self, prefetch_size=0, prefetch_count=0, apply_global=False): self._called('basic_qos') class Connection(object): connected = True def __init__(self, client): self.client = client def channel(self): return Channel(self) class Transport(base.Transport): def establish_connection(self): return Connection(self.client) def create_channel(self, connection): return connection.channel() def drain_events(self, connection, **kwargs): return 'event' def close_connection(self, connection): connection.connected = False kombu-3.0.7/kombu/tests/test_clocks.py0000644000076500000000000000514412241157622020424 0ustar asksolwheel00000000000000from __future__ import absolute_import import pickle from heapq import heappush from time import time from kombu.clocks import LamportClock, timetuple from .case import Mock, Case class test_LamportClock(Case): def test_clocks(self): c1 = LamportClock() c2 = LamportClock() c1.forward() c2.forward() c1.forward() c1.forward() c2.adjust(c1.value) self.assertEqual(c2.value, c1.value + 1) self.assertTrue(repr(c1)) c2_val = c2.value c2.forward() c2.forward() c2.adjust(c1.value) self.assertEqual(c2.value, c2_val + 2 + 1) c1.adjust(c2.value) self.assertEqual(c1.value, c2.value + 1) def test_sort(self): c = LamportClock() pid1 = 'a.example.com:312' pid2 = 'b.example.com:311' events = [] m1 = (c.forward(), pid1) heappush(events, m1) m2 = (c.forward(), pid2) heappush(events, m2) m3 = (c.forward(), pid1) heappush(events, m3) m4 = (30, pid1) heappush(events, m4) m5 = (30, pid2) heappush(events, m5) self.assertEqual(str(c), str(c.value)) self.assertEqual(c.sort_heap(events), m1) self.assertEqual(c.sort_heap([m4, m5]), m4) self.assertEqual(c.sort_heap([m4, m5, m1]), m4) class test_timetuple(Case): def test_repr(self): x = timetuple(133, time(), 'id', Mock()) self.assertTrue(repr(x)) def test_pickleable(self): x = timetuple(133, time(), 'id', 'obj') self.assertEqual(pickle.loads(pickle.dumps(x)), tuple(x)) def test_order(self): t1 = time() t2 = time() + 300 # windows clock not reliable a = timetuple(133, t1, 'A', 'obj') b = timetuple(140, t1, 'A', 'obj') self.assertTrue(a.__getnewargs__()) self.assertEqual(a.clock, 133) self.assertEqual(a.timestamp, t1) self.assertEqual(a.id, 'A') self.assertEqual(a.obj, 'obj') self.assertTrue( a <= b, ) self.assertTrue( b >= a, ) self.assertEqual( timetuple(134, time(), 'A', 'obj').__lt__(tuple()), NotImplemented, ) self.assertGreater( timetuple(134, t2, 'A', 'obj'), 
timetuple(133, t1, 'A', 'obj'), ) self.assertGreater( timetuple(134, t1, 'B', 'obj'), timetuple(134, t1, 'A', 'obj'), ) self.assertGreater( timetuple(None, t2, 'B', 'obj'), timetuple(None, t1, 'A', 'obj'), ) kombu-3.0.7/kombu/tests/test_common.py0000644000076500000000000003266712237554371020457 0ustar asksolwheel00000000000000from __future__ import absolute_import import socket from kombu import common from kombu.common import ( Broadcast, maybe_declare, send_reply, collect_replies, declaration_cached, ignore_errors, QoS, PREFETCH_COUNT_MAX, ) from kombu.exceptions import ChannelError from .case import Case, ContextMock, Mock, MockPool, patch class test_ignore_errors(Case): def test_ignored(self): connection = Mock() connection.channel_errors = (KeyError, ) connection.connection_errors = (KeyError, ) with ignore_errors(connection): raise KeyError() def raising(): raise KeyError() ignore_errors(connection, raising) connection.channel_errors = connection.connection_errors = \ () with self.assertRaises(KeyError): with ignore_errors(connection): raise KeyError() class test_declaration_cached(Case): def test_when_cached(self): chan = Mock() chan.connection.client.declared_entities = ['foo'] self.assertTrue(declaration_cached('foo', chan)) def test_when_not_cached(self): chan = Mock() chan.connection.client.declared_entities = ['bar'] self.assertFalse(declaration_cached('foo', chan)) class test_Broadcast(Case): def test_arguments(self): q = Broadcast(name='test_Broadcast') self.assertTrue(q.name.startswith('bcast.')) self.assertEqual(q.alias, 'test_Broadcast') self.assertTrue(q.auto_delete) self.assertEqual(q.exchange.name, 'test_Broadcast') self.assertEqual(q.exchange.type, 'fanout') q = Broadcast('test_Broadcast', 'explicit_queue_name') self.assertEqual(q.name, 'explicit_queue_name') self.assertEqual(q.exchange.name, 'test_Broadcast') class test_maybe_declare(Case): def test_cacheable(self): channel = Mock() client = channel.connection.client = Mock() client.declared_entities = set() entity = Mock() entity.can_cache_declaration = True entity.auto_delete = False entity.is_bound = True entity.channel = channel maybe_declare(entity, channel) self.assertEqual(entity.declare.call_count, 1) self.assertIn( hash(entity), channel.connection.client.declared_entities, ) maybe_declare(entity, channel) self.assertEqual(entity.declare.call_count, 1) entity.channel.connection = None with self.assertRaises(ChannelError): maybe_declare(entity) def test_binds_entities(self): channel = Mock() channel.connection.client.declared_entities = set() entity = Mock() entity.can_cache_declaration = True entity.is_bound = False entity.bind.return_value = entity entity.bind.return_value.channel = channel maybe_declare(entity, channel) entity.bind.assert_called_with(channel) def test_with_retry(self): channel = Mock() entity = Mock() entity.can_cache_declaration = True entity.is_bound = True entity.channel = channel maybe_declare(entity, channel, retry=True) self.assertTrue(channel.connection.client.ensure.call_count) class test_replies(Case): def test_send_reply(self): req = Mock() req.content_type = 'application/json' req.content_encoding = 'binary' req.properties = {'reply_to': 'hello', 'correlation_id': 'world'} channel = Mock() exchange = Mock() exchange.is_bound = True exchange.channel = channel producer = Mock() producer.channel = channel producer.channel.connection.client.declared_entities = set() send_reply(exchange, req, {'hello': 'world'}, producer) self.assertTrue(producer.publish.call_count) args = 
kombu-3.0.7/kombu/tests/test_common.py

from __future__ import absolute_import

import socket

from kombu import common
from kombu.common import (
    Broadcast, maybe_declare,
    send_reply, collect_replies,
    declaration_cached, ignore_errors,
    QoS, PREFETCH_COUNT_MAX,
)
from kombu.exceptions import ChannelError

from .case import Case, ContextMock, Mock, MockPool, patch


class test_ignore_errors(Case):

    def test_ignored(self):
        connection = Mock()
        connection.channel_errors = (KeyError, )
        connection.connection_errors = (KeyError, )

        with ignore_errors(connection):
            raise KeyError()

        def raising():
            raise KeyError()

        ignore_errors(connection, raising)

        connection.channel_errors = connection.connection_errors = ()

        with self.assertRaises(KeyError):
            with ignore_errors(connection):
                raise KeyError()


class test_declaration_cached(Case):

    def test_when_cached(self):
        chan = Mock()
        chan.connection.client.declared_entities = ['foo']
        self.assertTrue(declaration_cached('foo', chan))

    def test_when_not_cached(self):
        chan = Mock()
        chan.connection.client.declared_entities = ['bar']
        self.assertFalse(declaration_cached('foo', chan))


class test_Broadcast(Case):

    def test_arguments(self):
        q = Broadcast(name='test_Broadcast')
        self.assertTrue(q.name.startswith('bcast.'))
        self.assertEqual(q.alias, 'test_Broadcast')
        self.assertTrue(q.auto_delete)
        self.assertEqual(q.exchange.name, 'test_Broadcast')
        self.assertEqual(q.exchange.type, 'fanout')

        q = Broadcast('test_Broadcast', 'explicit_queue_name')
        self.assertEqual(q.name, 'explicit_queue_name')
        self.assertEqual(q.exchange.name, 'test_Broadcast')


class test_maybe_declare(Case):

    def test_cacheable(self):
        channel = Mock()
        client = channel.connection.client = Mock()
        client.declared_entities = set()
        entity = Mock()
        entity.can_cache_declaration = True
        entity.auto_delete = False
        entity.is_bound = True
        entity.channel = channel

        maybe_declare(entity, channel)
        self.assertEqual(entity.declare.call_count, 1)
        self.assertIn(
            hash(entity), channel.connection.client.declared_entities,
        )

        maybe_declare(entity, channel)
        self.assertEqual(entity.declare.call_count, 1)

        entity.channel.connection = None
        with self.assertRaises(ChannelError):
            maybe_declare(entity)

    def test_binds_entities(self):
        channel = Mock()
        channel.connection.client.declared_entities = set()
        entity = Mock()
        entity.can_cache_declaration = True
        entity.is_bound = False
        entity.bind.return_value = entity
        entity.bind.return_value.channel = channel

        maybe_declare(entity, channel)
        entity.bind.assert_called_with(channel)

    def test_with_retry(self):
        channel = Mock()
        entity = Mock()
        entity.can_cache_declaration = True
        entity.is_bound = True
        entity.channel = channel

        maybe_declare(entity, channel, retry=True)
        self.assertTrue(channel.connection.client.ensure.call_count)
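What ``maybe_declare`` gives you, as exercised above, is declare-once semantics: the entity's hash is remembered on the connection, so repeated calls become no-ops. A hedged usage sketch (memory transport and queue name assumed)::

    from kombu import Connection, Queue
    from kombu.common import maybe_declare

    with Connection('memory://') as conn:
        channel = conn.channel()
        queue = Queue('example')
        maybe_declare(queue, channel)   # declares the queue
        maybe_declare(queue, channel)   # cached -- nothing is redeclared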
class test_replies(Case):

    def test_send_reply(self):
        req = Mock()
        req.content_type = 'application/json'
        req.content_encoding = 'binary'
        req.properties = {'reply_to': 'hello',
                          'correlation_id': 'world'}
        channel = Mock()
        exchange = Mock()
        exchange.is_bound = True
        exchange.channel = channel
        producer = Mock()
        producer.channel = channel
        producer.channel.connection.client.declared_entities = set()
        send_reply(exchange, req, {'hello': 'world'}, producer)

        self.assertTrue(producer.publish.call_count)
        args = producer.publish.call_args
        self.assertDictEqual(args[0][0], {'hello': 'world'})
        self.assertDictEqual(args[1], {'exchange': exchange,
                                       'routing_key': 'hello',
                                       'correlation_id': 'world',
                                       'serializer': 'json',
                                       'retry': False,
                                       'retry_policy': None,
                                       'content_encoding': 'binary'})

    @patch('kombu.common.itermessages')
    def test_collect_replies_with_ack(self, itermessages):
        conn, channel, queue = Mock(), Mock(), Mock()
        body, message = Mock(), Mock()
        itermessages.return_value = [(body, message)]
        it = collect_replies(conn, channel, queue, no_ack=False)
        m = next(it)
        self.assertIs(m, body)
        itermessages.assert_called_with(conn, channel, queue, no_ack=False)
        message.ack.assert_called_with()

        with self.assertRaises(StopIteration):
            next(it)
        channel.after_reply_message_received.assert_called_with(queue.name)

    @patch('kombu.common.itermessages')
    def test_collect_replies_no_ack(self, itermessages):
        conn, channel, queue = Mock(), Mock(), Mock()
        body, message = Mock(), Mock()
        itermessages.return_value = [(body, message)]
        it = collect_replies(conn, channel, queue)
        m = next(it)
        self.assertIs(m, body)
        itermessages.assert_called_with(conn, channel, queue, no_ack=True)
        self.assertFalse(message.ack.called)

    @patch('kombu.common.itermessages')
    def test_collect_replies_no_replies(self, itermessages):
        conn, channel, queue = Mock(), Mock(), Mock()
        itermessages.return_value = []
        it = collect_replies(conn, channel, queue)
        with self.assertRaises(StopIteration):
            next(it)
        self.assertFalse(channel.after_reply_message_received.called)


class test_insured(Case):

    @patch('kombu.common.logger')
    def test_ensure_errback(self, logger):
        common._ensure_errback('foo', 30)
        self.assertTrue(logger.error.called)

    def test_revive_connection(self):
        on_revive = Mock()
        channel = Mock()
        common.revive_connection(Mock(), channel, on_revive)
        on_revive.assert_called_with(channel)

        common.revive_connection(Mock(), channel, None)

    def get_insured_mocks(self, insured_returns=('works', 'ignored')):
        conn = ContextMock()
        pool = MockPool(conn)
        fun = Mock()
        insured = conn.autoretry.return_value = Mock()
        insured.return_value = insured_returns
        return conn, pool, fun, insured

    def test_insured(self):
        conn, pool, fun, insured = self.get_insured_mocks()

        ret = common.insured(pool, fun, (2, 2), {'foo': 'bar'})
        self.assertEqual(ret, 'works')
        conn.ensure_connection.assert_called_with(
            errback=common._ensure_errback,
        )

        self.assertTrue(insured.called)
        i_args, i_kwargs = insured.call_args
        self.assertTupleEqual(i_args, (2, 2))
        self.assertDictEqual(i_kwargs, {'foo': 'bar', 'connection': conn})

        self.assertTrue(conn.autoretry.called)
        ar_args, ar_kwargs = conn.autoretry.call_args
        self.assertTupleEqual(ar_args, (fun, conn.default_channel))
        self.assertTrue(ar_kwargs.get('on_revive'))
        self.assertTrue(ar_kwargs.get('errback'))

    def test_insured_custom_errback(self):
        conn, pool, fun, insured = self.get_insured_mocks()

        custom_errback = Mock()
        common.insured(pool, fun, (2, 2), {'foo': 'bar'},
                       errback=custom_errback)
        conn.ensure_connection.assert_called_with(errback=custom_errback)


class MockConsumer(object):
    consumers = set()

    def __init__(self, channel, queues=None, callbacks=None, **kwargs):
        self.channel = channel
        self.queues = queues
        self.callbacks = callbacks

    def __enter__(self):
        self.consumers.add(self)
        return self

    def __exit__(self, *exc_info):
        self.consumers.discard(self)


class test_itermessages(Case):

    class MockConnection(object):
        should_raise_timeout = False

        def drain_events(self, **kwargs):
            if self.should_raise_timeout:
                raise socket.timeout()
            for consumer in MockConsumer.consumers:
                for callback in consumer.callbacks:
                    callback('body', 'message')

    def test_default(self):
        conn = self.MockConnection()
        channel = Mock()
        channel.connection.client = conn
        it = common.itermessages(conn, channel, 'q', limit=1,
                                 Consumer=MockConsumer)

        ret = next(it)
        self.assertTupleEqual(ret, ('body', 'message'))

        with self.assertRaises(StopIteration):
            next(it)

    def test_when_raises_socket_timeout(self):
        conn = self.MockConnection()
        conn.should_raise_timeout = True
        channel = Mock()
        channel.connection.client = conn
        it = common.itermessages(conn, channel, 'q', limit=1,
                                 Consumer=MockConsumer)

        with self.assertRaises(StopIteration):
            next(it)

    @patch('kombu.common.deque')
    def test_when_raises_IndexError(self, deque):
        deque_instance = deque.return_value = Mock()
        deque_instance.popleft.side_effect = IndexError()
        conn = self.MockConnection()
        channel = Mock()
        it = common.itermessages(conn, channel, 'q', limit=1,
                                 Consumer=MockConsumer)

        with self.assertRaises(StopIteration):
            next(it)
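``itermessages``, as tested above, is a pull-style helper: it consumes from a queue, buffers ``(body, message)`` pairs in a deque, and stops after ``limit`` items or on ``socket.timeout``. Assuming a live connection and channel are already at hand::

    from kombu.common import itermessages

    for body, message in itermessages(conn, conn.default_channel,
                                      'example', limit=1):
        message.ack()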
class test_QoS(Case):

    class _QoS(QoS):

        def __init__(self, value):
            self.value = value
            QoS.__init__(self, None, value)

        def set(self, value):
            return value

    def test_qos_exceeds_16bit(self):
        with patch('kombu.common.logger') as logger:
            callback = Mock()
            qos = QoS(callback, 10)
            qos.prev = 100
            # cannot use 2 ** 32 because of a bug on OSX Py2.5:
            # https://jira.mongodb.org/browse/PYTHON-389
            qos.set(4294967296)
            self.assertTrue(logger.warn.called)
            callback.assert_called_with(prefetch_count=0)

    def test_qos_increment_decrement(self):
        qos = self._QoS(10)
        self.assertEqual(qos.increment_eventually(), 11)
        self.assertEqual(qos.increment_eventually(3), 14)
        self.assertEqual(qos.increment_eventually(-30), 14)
        self.assertEqual(qos.decrement_eventually(7), 7)
        self.assertEqual(qos.decrement_eventually(), 6)

    def test_qos_disabled_increment_decrement(self):
        qos = self._QoS(0)
        self.assertEqual(qos.increment_eventually(), 0)
        self.assertEqual(qos.increment_eventually(3), 0)
        self.assertEqual(qos.increment_eventually(-30), 0)
        self.assertEqual(qos.decrement_eventually(7), 0)
        self.assertEqual(qos.decrement_eventually(), 0)
        self.assertEqual(qos.decrement_eventually(10), 0)

    def test_qos_thread_safe(self):
        qos = self._QoS(10)

        def add():
            for i in range(1000):
                qos.increment_eventually()

        def sub():
            for i in range(1000):
                qos.decrement_eventually()

        def threaded(funs):
            from threading import Thread
            threads = [Thread(target=fun) for fun in funs]
            for thread in threads:
                thread.start()
            for thread in threads:
                thread.join()

        threaded([add, add])
        self.assertEqual(qos.value, 2010)

        qos.value = 1000
        threaded([add, sub])  # n = 2
        self.assertEqual(qos.value, 1000)

    def test_exceeds_short(self):
        qos = QoS(Mock(), PREFETCH_COUNT_MAX - 1)
        qos.update()
        self.assertEqual(qos.value, PREFETCH_COUNT_MAX - 1)
        qos.increment_eventually()
        self.assertEqual(qos.value, PREFETCH_COUNT_MAX)
        qos.increment_eventually()
        self.assertEqual(qos.value, PREFETCH_COUNT_MAX + 1)
        qos.decrement_eventually()
        self.assertEqual(qos.value, PREFETCH_COUNT_MAX)
        qos.decrement_eventually()
        self.assertEqual(qos.value, PREFETCH_COUNT_MAX - 1)

    def test_consumer_increment_decrement(self):
        mconsumer = Mock()
        qos = QoS(mconsumer.qos, 10)
        qos.update()
        self.assertEqual(qos.value, 10)
        mconsumer.qos.assert_called_with(prefetch_count=10)
        qos.decrement_eventually()
        qos.update()
        self.assertEqual(qos.value, 9)
        mconsumer.qos.assert_called_with(prefetch_count=9)
        qos.decrement_eventually()
        self.assertEqual(qos.value, 8)
        mconsumer.qos.assert_called_with(prefetch_count=9)
        self.assertIn({'prefetch_count': 9}, mconsumer.qos.call_args)

        # Does not decrement 0 value
        qos.value = 0
        qos.decrement_eventually()
        self.assertEqual(qos.value, 0)
        qos.increment_eventually()
        self.assertEqual(qos.value, 0)

    def test_consumer_decrement_eventually(self):
        mconsumer = Mock()
        qos = QoS(mconsumer.qos, 10)
        qos.decrement_eventually()
        self.assertEqual(qos.value, 9)
        qos.value = 0
        qos.decrement_eventually()
        self.assertEqual(qos.value, 0)

    def test_set(self):
        mconsumer = Mock()
        qos = QoS(mconsumer.qos, 10)
        qos.set(12)
        self.assertEqual(qos.prev, 12)
        qos.set(qos.prev)
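The ``QoS`` behaviour pinned down above -- thread-safe pending increments, a floor of zero, and the 16-bit ``PREFETCH_COUNT_MAX`` spill-over -- looks roughly like this in use (a sketch; ``consumer`` is assumed to be a bound ``kombu.Consumer``)::

    from kombu.common import QoS

    qos = QoS(consumer.qos, 10)   # callback + initial prefetch count
    qos.decrement_eventually()    # safe to call from any thread
    qos.update()                  # applies: consumer.qos(prefetch_count=9)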
kombu-3.0.7/kombu/tests/test_compat.py

from __future__ import absolute_import

from kombu import Connection, Exchange, Queue
from kombu import compat

from .case import Case, Mock, patch
from .mocks import Transport, Channel


class test_misc(Case):

    def test_iterconsume(self):

        class MyConnection(object):
            drained = 0

            def drain_events(self, *args, **kwargs):
                self.drained += 1
                return self.drained

        class Consumer(object):
            active = False

            def consume(self, *args, **kwargs):
                self.active = True

        conn = MyConnection()
        consumer = Consumer()
        it = compat._iterconsume(conn, consumer)
        self.assertEqual(next(it), 1)
        self.assertTrue(consumer.active)

        it2 = compat._iterconsume(conn, consumer, limit=10)
        self.assertEqual(list(it2), [2, 3, 4, 5, 6, 7, 8, 9, 10, 11])

    def test_Queue_from_dict(self):
        defs = {'binding_key': 'foo.#',
                'exchange': 'fooex',
                'exchange_type': 'topic',
                'durable': True,
                'auto_delete': False}

        q1 = Queue.from_dict('foo', **dict(defs))
        self.assertEqual(q1.name, 'foo')
        self.assertEqual(q1.routing_key, 'foo.#')
        self.assertEqual(q1.exchange.name, 'fooex')
        self.assertEqual(q1.exchange.type, 'topic')
        self.assertTrue(q1.durable)
        self.assertTrue(q1.exchange.durable)
        self.assertFalse(q1.auto_delete)
        self.assertFalse(q1.exchange.auto_delete)

        q2 = Queue.from_dict('foo', **dict(defs, exchange_durable=False))
        self.assertTrue(q2.durable)
        self.assertFalse(q2.exchange.durable)

        q3 = Queue.from_dict('foo', **dict(defs, exchange_auto_delete=True))
        self.assertFalse(q3.auto_delete)
        self.assertTrue(q3.exchange.auto_delete)

        q4 = Queue.from_dict('foo', **dict(defs, queue_durable=False))
        self.assertFalse(q4.durable)
        self.assertTrue(q4.exchange.durable)

        q5 = Queue.from_dict('foo', **dict(defs, queue_auto_delete=True))
        self.assertTrue(q5.auto_delete)
        self.assertFalse(q5.exchange.auto_delete)

        self.assertEqual(Queue.from_dict('foo', **dict(defs)),
                         Queue.from_dict('foo', **dict(defs)))


class test_Publisher(Case):

    def setUp(self):
        self.connection = Connection(transport=Transport)

    def test_constructor(self):
        pub = compat.Publisher(self.connection,
                               exchange='test_Publisher_constructor',
                               routing_key='rkey')
        self.assertIsInstance(pub.backend, Channel)
        self.assertEqual(pub.exchange.name, 'test_Publisher_constructor')
        self.assertTrue(pub.exchange.durable)
        self.assertFalse(pub.exchange.auto_delete)
        self.assertEqual(pub.exchange.type, 'direct')

        pub2 = compat.Publisher(self.connection,
                                exchange='test_Publisher_constructor2',
                                routing_key='rkey',
                                auto_delete=True,
                                durable=False)
        self.assertTrue(pub2.exchange.auto_delete)
        self.assertFalse(pub2.exchange.durable)

        explicit = Exchange('test_Publisher_constructor_explicit',
                            type='topic')
        pub3 = compat.Publisher(self.connection, exchange=explicit)
        self.assertEqual(pub3.exchange, explicit)

        compat.Publisher(self.connection,
                         exchange='test_Publisher_constructor3',
                         channel=self.connection.default_channel)

    def test_send(self):
        pub = compat.Publisher(self.connection,
                               exchange='test_Publisher_send',
                               routing_key='rkey')
        pub.send({'foo': 'bar'})
        self.assertIn('basic_publish', pub.backend)
        pub.close()

    def test__enter__exit__(self):
        pub = compat.Publisher(self.connection,
                               exchange='test_Publisher_send',
                               routing_key='rkey')
        x = pub.__enter__()
        self.assertIs(x, pub)
        x.__exit__()
        self.assertTrue(pub._closed)
class test_Consumer(Case):

    def setUp(self):
        self.connection = Connection(transport=Transport)

    @patch('kombu.compat._iterconsume')
    def test_iterconsume_calls__iterconsume(self, it, n='test_iterconsume'):
        c = compat.Consumer(self.connection, queue=n, exchange=n)
        c.iterconsume(limit=10, no_ack=True)
        it.assert_called_with(c.connection, c, True, 10)

    def test_constructor(self, n='test_Consumer_constructor'):
        c = compat.Consumer(self.connection, queue=n, exchange=n,
                            routing_key='rkey')
        self.assertIsInstance(c.backend, Channel)
        q = c.queues[0]
        self.assertTrue(q.durable)
        self.assertTrue(q.exchange.durable)
        self.assertFalse(q.auto_delete)
        self.assertFalse(q.exchange.auto_delete)
        self.assertEqual(q.name, n)
        self.assertEqual(q.exchange.name, n)

        c2 = compat.Consumer(self.connection, queue=n + '2',
                             exchange=n + '2',
                             routing_key='rkey', durable=False,
                             auto_delete=True, exclusive=True)
        q2 = c2.queues[0]
        self.assertFalse(q2.durable)
        self.assertFalse(q2.exchange.durable)
        self.assertTrue(q2.auto_delete)
        self.assertTrue(q2.exchange.auto_delete)

    def test__enter__exit__(self, n='test__enter__exit__'):
        c = compat.Consumer(self.connection, queue=n, exchange=n,
                            routing_key='rkey')
        x = c.__enter__()
        self.assertIs(x, c)
        x.__exit__()
        self.assertTrue(c._closed)

    def test_revive(self, n='test_revive'):
        c = compat.Consumer(self.connection, queue=n, exchange=n)

        with self.connection.channel() as c2:
            c.revive(c2)
            self.assertIs(c.backend, c2)

    def test__iter__(self, n='test__iter__'):
        c = compat.Consumer(self.connection, queue=n, exchange=n)
        c.iterqueue = Mock()

        c.__iter__()
        c.iterqueue.assert_called_with(infinite=True)

    def test_iter(self, n='test_iterqueue'):
        c = compat.Consumer(self.connection, queue=n, exchange=n,
                            routing_key='rkey')
        c.close()

    def test_process_next(self, n='test_process_next'):
        c = compat.Consumer(self.connection, queue=n, exchange=n,
                            routing_key='rkey')
        with self.assertRaises(NotImplementedError):
            c.process_next()
        c.close()

    def test_iterconsume(self, n='test_iterconsume'):
        c = compat.Consumer(self.connection, queue=n, exchange=n,
                            routing_key='rkey')
        c.close()

    def test_discard_all(self, n='test_discard_all'):
        c = compat.Consumer(self.connection, queue=n, exchange=n,
                            routing_key='rkey')
        c.discard_all()
        self.assertIn('queue_purge', c.backend)

    def test_fetch(self, n='test_fetch'):
        c = compat.Consumer(self.connection, queue=n, exchange=n,
                            routing_key='rkey')
        self.assertIsNone(c.fetch())
        self.assertIsNone(c.fetch(no_ack=True))
        self.assertIn('basic_get', c.backend)

        callback_called = [False]

        def receive(payload, message):
            callback_called[0] = True

        c.backend.to_deliver.append('42')
        payload = c.fetch().payload
        self.assertEqual(payload, '42')
        c.backend.to_deliver.append('46')
        c.register_callback(receive)
        self.assertEqual(c.fetch(enable_callbacks=True).payload, '46')
        self.assertTrue(callback_called[0])

    def test_discard_all_filterfunc_not_supported(self, n='xjf21j21'):
        c = compat.Consumer(self.connection, queue=n, exchange=n,
                            routing_key='rkey')
        with self.assertRaises(NotImplementedError):
            c.discard_all(filterfunc=lambda x: x)
        c.close()

    def test_wait(self, n='test_wait'):

        class C(compat.Consumer):

            def iterconsume(self, limit=None):
                for i in range(limit):
                    yield i

        c = C(self.connection, queue=n, exchange=n, routing_key='rkey')
        self.assertEqual(c.wait(10), list(range(10)))
        c.close()

    def test_iterqueue(self, n='test_iterqueue'):
        i = [0]

        class C(compat.Consumer):

            def fetch(self, limit=None):
                z = i[0]
                i[0] += 1
                return z

        c = C(self.connection, queue=n, exchange=n, routing_key='rkey')
        self.assertEqual(list(c.iterqueue(limit=10)), list(range(10)))
        c.close()


class test_ConsumerSet(Case):

    def setUp(self):
        self.connection = Connection(transport=Transport)

    def test_providing_channel(self):
        chan = Mock(name='channel')
        cs = compat.ConsumerSet(self.connection, channel=chan)
        self.assertTrue(cs._provided_channel)
        self.assertIs(cs.backend, chan)

        cs.cancel = Mock(name='cancel')
        cs.close()
        self.assertFalse(chan.close.called)

    @patch('kombu.compat._iterconsume')
    def test_iterconsume(self, _iterconsume, n='test_iterconsume'):
        c = compat.Consumer(self.connection, queue=n, exchange=n)
        cs = compat.ConsumerSet(self.connection, consumers=[c])
        cs.iterconsume(limit=10, no_ack=True)
        _iterconsume.assert_called_with(c.connection, cs, True, 10)

    def test_revive(self, n='test_revive'):
        c = compat.Consumer(self.connection, queue=n, exchange=n)
        cs = compat.ConsumerSet(self.connection, consumers=[c])

        with self.connection.channel() as c2:
            cs.revive(c2)
            self.assertIs(cs.backend, c2)

    def test_constructor(self, prefix='0daf8h21'):
        dcon = {'%s.xyx' % prefix: {'exchange': '%s.xyx' % prefix,
                                    'routing_key': 'xyx'},
                '%s.xyz' % prefix: {'exchange': '%s.xyz' % prefix,
                                    'routing_key': 'xyz'}}
        consumers = [compat.Consumer(self.connection, queue=prefix + str(i),
                                     exchange=prefix + str(i))
                     for i in range(3)]
        c = compat.ConsumerSet(self.connection, consumers=consumers)
        c2 = compat.ConsumerSet(self.connection, from_dict=dcon)

        self.assertEqual(len(c.queues), 3)
        self.assertEqual(len(c2.queues), 2)

        c.add_consumer(compat.Consumer(self.connection,
                                       queue=prefix + 'xaxxxa',
                                       exchange=prefix + 'xaxxxa'))
        self.assertEqual(len(c.queues), 4)
        for cq in c.queues:
            self.assertIs(cq.channel, c.channel)

        c2.add_consumer_from_dict({
            '%s.xxx' % prefix: {
                'exchange': '%s.xxx' % prefix,
                'routing_key': 'xxx',
            },
        })
        self.assertEqual(len(c2.queues), 3)
        for c2q in c2.queues:
            self.assertIs(c2q.channel, c2.channel)

        c.discard_all()
        self.assertEqual(c.channel.called.count('queue_purge'), 4)

        c.consume()

        c.close()
        c2.close()
        self.assertIn('basic_cancel', c.channel)
        self.assertIn('close', c.channel)
        self.assertIn('close', c2.channel)
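``kombu.compat`` keeps the old carrot-era ``Publisher``/``Consumer`` classes working on top of the current API, which is what the tests above drive against the mock transport. Typical legacy usage, as a sketch::

    from kombu import Connection
    from kombu import compat

    with Connection('memory://') as connection:
        pub = compat.Publisher(connection, exchange='ex', routing_key='rkey')
        pub.send({'hello': 'world'})
        pub.close()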
kombu-3.0.7/kombu/tests/test_compression.py

from __future__ import absolute_import

import sys

from kombu import compression

from .case import Case, SkipTest, mask_modules


class test_compression(Case):

    def setUp(self):
        try:
            import bz2  # noqa
        except ImportError:
            self.has_bzip2 = False
        else:
            self.has_bzip2 = True

    @mask_modules('bz2')
    def test_no_bz2(self):
        c = sys.modules.pop('kombu.compression')
        try:
            import kombu.compression
            self.assertFalse(hasattr(kombu.compression, 'bz2'))
        finally:
            if c is not None:
                sys.modules['kombu.compression'] = c

    def test_encoders(self):
        encoders = compression.encoders()
        self.assertIn('application/x-gzip', encoders)
        if self.has_bzip2:
            self.assertIn('application/x-bz2', encoders)

    def test_compress__decompress__zlib(self):
        text = 'The Quick Brown Fox Jumps Over The Lazy Dog'
        c, ctype = compression.compress(text, 'zlib')
        self.assertNotEqual(text, c)
        d = compression.decompress(c, ctype)
        self.assertEqual(d, text)

    def test_compress__decompress__bzip2(self):
        if not self.has_bzip2:
            raise SkipTest('bzip2 not available')
        text = 'The Brown Quick Fox Over The Lazy Dog Jumps'
        c, ctype = compression.compress(text, 'bzip2')
        self.assertNotEqual(text, c)
        d = compression.decompress(c, ctype)
        self.assertEqual(d, text)
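``compress`` returns both the payload and the content-type ``decompress`` needs to pick the right decoder, which is what the round-trip tests above rely on. A small sketch (bytes payload assumed for portability)::

    from kombu import compression

    data, ctype = compression.compress(b'payload', 'zlib')
    # ctype is the registered content type, e.g. 'application/x-gzip'
    assert compression.decompress(data, ctype) == b'payload'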
kombu-3.0.7/kombu/tests/test_connection.py

from __future__ import absolute_import

import pickle
import socket

from copy import copy

from kombu import Connection, Consumer, Producer, parse_url
from kombu.connection import Resource
from kombu.five import items, range

from .case import Case, Mock, SkipTest, patch, skip_if_not_module
from .mocks import Transport


class test_connection_utils(Case):

    def setUp(self):
        self.url = 'amqp://user:pass@localhost:5672/my/vhost'
        self.nopass = 'amqp://user@localhost:5672/my/vhost'
        self.expected = {
            'transport': 'amqp',
            'userid': 'user',
            'password': 'pass',
            'hostname': 'localhost',
            'port': 5672,
            'virtual_host': 'my/vhost',
        }

    def test_parse_url(self):
        result = parse_url(self.url)
        self.assertDictEqual(result, self.expected)

    def test_parse_url_mongodb(self):
        result = parse_url('mongodb://example.com/')
        self.assertEqual(result['hostname'], 'example.com/')

    def test_parse_generated_as_uri(self):
        conn = Connection(self.url)
        info = conn.info()
        for k, v in self.expected.items():
            self.assertEqual(info[k], v)
        # by default almost the same -- no password
        self.assertEqual(conn.as_uri(), self.nopass)
        self.assertEqual(conn.as_uri(include_password=True), self.url)

    def test_as_uri_when_prefix(self):
        conn = Connection('memory://')
        conn.uri_prefix = 'foo'
        self.assertTrue(conn.as_uri().startswith('foo+memory://'))

    @skip_if_not_module('pymongo')
    def test_as_uri_when_mongodb(self):
        x = Connection('mongodb://localhost')
        self.assertTrue(x.as_uri())

    def test_bogus_scheme(self):
        with self.assertRaises(KeyError):
            Connection('bogus://localhost:7421').transport

    def assert_info(self, conn, **fields):
        info = conn.info()
        for field, expected in items(fields):
            self.assertEqual(info[field], expected)

    def test_rabbitmq_example_urls(self):
        # see Appendix A of http://www.rabbitmq.com/uri-spec.html
        self.assert_info(
            Connection('amqp://user:pass@host:10000/vhost'),
            userid='user', password='pass', hostname='host',
            port=10000, virtual_host='vhost',
        )
        self.assert_info(
            Connection('amqp://user%61:%61pass@ho%61st:10000/v%2fhost'),
            userid='usera', password='apass', hostname='hoast',
            port=10000, virtual_host='v/host',
        )
        self.assert_info(
            Connection('amqp://'),
            userid='guest', password='guest', hostname='localhost',
            port=5672, virtual_host='/',
        )
        self.assert_info(
            Connection('amqp://:@/'),
            userid='guest', password='guest', hostname='localhost',
            port=5672, virtual_host='/',
        )
        self.assert_info(
            Connection('amqp://user@/'),
            userid='user', password='guest', hostname='localhost',
            port=5672, virtual_host='/',
        )
        self.assert_info(
            Connection('amqp://user:pass@/'),
            userid='user', password='pass', hostname='localhost',
            port=5672, virtual_host='/',
        )
        self.assert_info(
            Connection('amqp://host'),
            userid='guest', password='guest', hostname='host',
            port=5672, virtual_host='/',
        )
        self.assert_info(
            Connection('amqp://:10000'),
            userid='guest', password='guest', hostname='localhost',
            port=10000, virtual_host='/',
        )
        self.assert_info(
            Connection('amqp:///vhost'),
            userid='guest', password='guest', hostname='localhost',
            port=5672, virtual_host='vhost',
        )
        self.assert_info(
            Connection('amqp://host/'),
            userid='guest', password='guest', hostname='host',
            port=5672, virtual_host='/',
        )
        self.assert_info(
            Connection('amqp://host/%2f'),
            userid='guest', password='guest', hostname='host',
            port=5672, virtual_host='/',
        )

    def test_url_IPV6(self):
        raise SkipTest("urllib can't parse ipv6 urls")
        self.assert_info(
            Connection('amqp://[::1]'),
            userid='guest', password='guest', hostname='[::1]',
            port=5672, virtual_host='/',
        )
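The URL matrix above mirrors Appendix A of the AMQP URI spec; in short::

    from kombu import Connection, parse_url

    parse_url('amqp://user:pass@localhost:5672/vhost')
    # -> {'transport': 'amqp', 'userid': 'user', 'password': 'pass',
    #     'hostname': 'localhost', 'port': 5672, 'virtual_host': 'vhost'}

    conn = Connection('amqp://user:pass@localhost:5672/vhost')
    conn.as_uri()                        # omits the password by default
    conn.as_uri(include_password=True)   # round-trips the original URL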
class test_Connection(Case):

    def setUp(self):
        self.conn = Connection(port=5672, transport=Transport)

    def test_establish_connection(self):
        conn = self.conn
        conn.connect()
        self.assertTrue(conn.connection.connected)
        self.assertEqual(conn.host, 'localhost:5672')
        channel = conn.channel()
        self.assertTrue(channel.open)
        self.assertEqual(conn.drain_events(), 'event')
        _connection = conn.connection
        conn.close()
        self.assertFalse(_connection.connected)
        self.assertIsInstance(conn.transport, Transport)

    def test_multiple_urls(self):
        conn1 = Connection('amqp://foo;amqp://bar')
        self.assertEqual(conn1.hostname, 'foo')
        self.assertListEqual(conn1.alt, ['amqp://foo', 'amqp://bar'])

        conn2 = Connection(['amqp://foo', 'amqp://bar'])
        self.assertEqual(conn2.hostname, 'foo')
        self.assertListEqual(conn2.alt, ['amqp://foo', 'amqp://bar'])

    def test_collect(self):
        connection = Connection('memory://')
        trans = connection._transport = Mock(name='transport')
        _collect = trans._collect = Mock(name='transport._collect')
        _close = connection._close = Mock(name='connection._close')
        connection.declared_entities = Mock(name='decl_entities')
        uconn = connection._connection = Mock(name='_connection')
        connection.collect()

        self.assertFalse(_close.called)
        _collect.assert_called_with(uconn)
        connection.declared_entities.clear.assert_called_with()
        self.assertIsNone(trans.client)
        self.assertIsNone(connection._transport)
        self.assertIsNone(connection._connection)

    def test_collect_no_transport(self):
        connection = Connection('memory://')
        connection._transport = None
        connection._close = Mock()
        connection.collect()
        connection._close.assert_called_with()

        connection._close.side_effect = socket.timeout()
        connection.collect()

    def test_collect_transport_gone(self):
        connection = Connection('memory://')
        uconn = connection._connection = Mock(name='conn._conn')
        trans = connection._transport = Mock(name='transport')
        collect = trans._collect = Mock(name='transport._collect')

        def se(conn):
            connection._transport = None
        collect.side_effect = se

        connection.collect()
        collect.assert_called_with(uconn)
        self.assertIsNone(connection._transport)

    def test_uri_passthrough(self):
        transport = Mock(name='transport')
        with patch('kombu.connection.get_transport_cls') as gtc:
            gtc.return_value = transport
            transport.can_parse_url = True
            with patch('kombu.connection.parse_url') as parse_url:
                c = Connection('foo+mysql://some_host')
                self.assertEqual(c.transport_cls, 'foo')
                self.assertFalse(parse_url.called)
                self.assertEqual(c.hostname, 'mysql://some_host')
                self.assertTrue(c.as_uri().startswith('foo+'))
            with patch('kombu.connection.parse_url') as parse_url:
                c = Connection('mysql://some_host', transport='foo')
                self.assertEqual(c.transport_cls, 'foo')
                self.assertFalse(parse_url.called)
                self.assertEqual(c.hostname, 'mysql://some_host')
        c = Connection('pyamqp+sqlite://some_host')
        self.assertTrue(c.as_uri().startswith('pyamqp+'))

    def test_default_ensure_callback(self):
        with patch('kombu.connection.logger') as logger:
            c = Connection(transport=Mock)
            c._default_ensure_callback(KeyError(), 3)
            self.assertTrue(logger.error.called)

    def test_ensure_connection_on_error(self):
        c = Connection('amqp://A;amqp://B')
        with patch('kombu.connection.retry_over_time') as rot:
            c.ensure_connection()
            self.assertTrue(rot.called)

            args = rot.call_args[0]
            cb = args[4]
            intervals = iter([1, 2, 3, 4, 5])
            self.assertEqual(cb(KeyError(), intervals, 0), 0)
            self.assertEqual(cb(KeyError(), intervals, 1), 1)
            self.assertEqual(cb(KeyError(), intervals, 2), 0)
            self.assertEqual(cb(KeyError(), intervals, 3), 2)
            self.assertEqual(cb(KeyError(), intervals, 4), 0)
            self.assertEqual(cb(KeyError(), intervals, 5), 3)
            self.assertEqual(cb(KeyError(), intervals, 6), 0)
            self.assertEqual(cb(KeyError(), intervals, 7), 4)

            errback = Mock()
            c.ensure_connection(errback=errback)
            args = rot.call_args[0]
            cb = args[4]
            self.assertEqual(cb(KeyError(), intervals, 0), 0)
            self.assertTrue(errback.called)
    def test_supports_heartbeats(self):
        c = Connection(transport=Mock)
        c.transport.supports_heartbeats = False
        self.assertFalse(c.supports_heartbeats)

    def test_is_evented(self):
        c = Connection(transport=Mock)
        c.transport.supports_ev = False
        self.assertFalse(c.is_evented)

    def test_register_with_event_loop(self):
        c = Connection(transport=Mock)
        loop = Mock(name='loop')
        c.register_with_event_loop(loop)
        c.transport.register_with_event_loop.assert_called_with(
            c.connection, loop,
        )

    def test_manager(self):
        c = Connection(transport=Mock)
        self.assertIs(c.manager, c.transport.manager)

    def test_copy(self):
        c = Connection('amqp://example.com')
        self.assertEqual(copy(c).info(), c.info())

    def test_copy_multiples(self):
        c = Connection('amqp://A.example.com;amqp://B.example.com')
        self.assertTrue(c.alt)
        d = copy(c)
        self.assertEqual(d.alt, c.alt)

    def test_switch(self):
        c = Connection('amqp://foo')
        c._closed = True
        c.switch('redis://example.com//3')
        self.assertFalse(c._closed)
        self.assertEqual(c.hostname, 'example.com')
        self.assertEqual(c.transport_cls, 'redis')
        self.assertEqual(c.virtual_host, '/3')

    def test_maybe_switch_next(self):
        c = Connection('amqp://foo;redis://example.com//3')
        c.maybe_switch_next()
        self.assertFalse(c._closed)
        self.assertEqual(c.hostname, 'example.com')
        self.assertEqual(c.transport_cls, 'redis')
        self.assertEqual(c.virtual_host, '/3')

    def test_maybe_switch_next_no_cycle(self):
        c = Connection('amqp://foo')
        c.maybe_switch_next()
        self.assertFalse(c._closed)
        self.assertEqual(c.hostname, 'foo')
        self.assertIn(c.transport_cls, ('librabbitmq', 'pyamqp', 'amqp'))

    def test_heartbeat_check(self):
        c = Connection(transport=Transport)
        c.transport.heartbeat_check = Mock()
        c.heartbeat_check(3)
        c.transport.heartbeat_check.assert_called_with(c.connection, rate=3)

    def test_completes_cycle_no_cycle(self):
        c = Connection('amqp://')
        self.assertTrue(c.completes_cycle(0))
        self.assertTrue(c.completes_cycle(1))

    def test_completes_cycle(self):
        c = Connection('amqp://a;amqp://b;amqp://c')
        self.assertFalse(c.completes_cycle(0))
        self.assertFalse(c.completes_cycle(1))
        self.assertTrue(c.completes_cycle(2))

    def test__enter____exit__(self):
        conn = self.conn
        context = conn.__enter__()
        self.assertIs(context, conn)
        conn.connect()
        self.assertTrue(conn.connection.connected)
        conn.__exit__()
        self.assertIsNone(conn.connection)
        conn.close()  # again

    def test_close_survives_connerror(self):

        class _CustomError(Exception):
            pass

        class MyTransport(Transport):
            connection_errors = (_CustomError, )

            def close_connection(self, connection):
                raise _CustomError('foo')

        conn = Connection(transport=MyTransport)
        conn.connect()
        conn.close()
        self.assertTrue(conn._closed)

    def test_close_when_default_channel(self):
        conn = self.conn
        conn._default_channel = Mock()
        conn._close()
        conn._default_channel.close.assert_called_with()

    def test_close_when_default_channel_close_raises(self):

        class Conn(Connection):

            @property
            def connection_errors(self):
                return (KeyError, )

        conn = Conn('memory://')
        conn._default_channel = Mock()
        conn._default_channel.close.side_effect = KeyError()

        conn._close()
        conn._default_channel.close.assert_called_with()

    def test_revive_when_default_channel(self):
        conn = self.conn
        defchan = conn._default_channel = Mock()
        conn.revive(Mock())
        defchan.close.assert_called_with()
        self.assertIsNone(conn._default_channel)

    def test_ensure_connection(self):
        self.assertTrue(self.conn.ensure_connection())

    def test_ensure_success(self):
        def publish():
            return 'foobar'

        ensured = self.conn.ensure(None, publish)
        self.assertEqual(ensured(), 'foobar')

    def test_ensure_failure(self):
        class _CustomError(Exception):
            pass

        def publish():
            raise _CustomError('bar')

        ensured = self.conn.ensure(None, publish)
        with self.assertRaises(_CustomError):
            ensured()

    def test_ensure_connection_failure(self):
        class _ConnectionError(Exception):
            pass

        def publish():
            raise _ConnectionError('failed connection')

        self.conn.transport.connection_errors = (_ConnectionError,)
        ensured = self.conn.ensure(self.conn, publish)
        with self.assertRaises(_ConnectionError):
            ensured()

    def test_autoretry(self):
        myfun = Mock()
        myfun.__name__ = 'test_autoretry'

        self.conn.transport.connection_errors = (KeyError, )

        def on_call(*args, **kwargs):
            myfun.side_effect = None
            raise KeyError('foo')

        myfun.side_effect = on_call
        insured = self.conn.autoretry(myfun)
        insured()

        self.assertTrue(myfun.called)

    def test_SimpleQueue(self):
        conn = self.conn
        q = conn.SimpleQueue('foo')
        self.assertIs(q.channel, conn.default_channel)
        chan = conn.channel()
        q2 = conn.SimpleQueue('foo', channel=chan)
        self.assertIs(q2.channel, chan)

    def test_SimpleBuffer(self):
        conn = self.conn
        q = conn.SimpleBuffer('foo')
        self.assertIs(q.channel, conn.default_channel)
        chan = conn.channel()
        q2 = conn.SimpleBuffer('foo', channel=chan)
        self.assertIs(q2.channel, chan)

    def test_Producer(self):
        conn = self.conn
        self.assertIsInstance(conn.Producer(), Producer)
        self.assertIsInstance(conn.Producer(conn.default_channel), Producer)

    def test_Consumer(self):
        conn = self.conn
        self.assertIsInstance(conn.Consumer(queues=[]), Consumer)
        self.assertIsInstance(conn.Consumer(queues=[],
                              channel=conn.default_channel), Consumer)

    def test__repr__(self):
        self.assertTrue(repr(self.conn))

    def test__reduce__(self):
        x = pickle.loads(pickle.dumps(self.conn))
        self.assertDictEqual(x.info(), self.conn.info())

    def test_channel_errors(self):

        class MyTransport(Transport):
            channel_errors = (KeyError, ValueError)

        conn = Connection(transport=MyTransport)
        self.assertTupleEqual(conn.channel_errors, (KeyError, ValueError))

    def test_connection_errors(self):

        class MyTransport(Transport):
            connection_errors = (KeyError, ValueError)

        conn = Connection(transport=MyTransport)
        self.assertTupleEqual(conn.connection_errors, (KeyError, ValueError))


class test_Connection_with_transport_options(Case):

    transport_options = {'pool_recycler': 3600, 'echo': True}

    def setUp(self):
        self.conn = Connection(port=5672, transport=Transport,
                               transport_options=self.transport_options)

    def test_establish_connection(self):
        conn = self.conn
        self.assertEqual(conn.transport_options, self.transport_options)


class xResource(Resource):

    def setup(self):
        pass


class ResourceCase(Case):
    abstract = True

    def create_resource(self, limit, preload):
        raise NotImplementedError('subclass responsibility')

    def assertState(self, P, avail, dirty):
        self.assertEqual(P._resource.qsize(), avail)
        self.assertEqual(len(P._dirty), dirty)

    def test_setup(self):
        if self.abstract:
            with self.assertRaises(NotImplementedError):
                Resource()

    def test_acquire__release(self):
        if self.abstract:
            return
        P = self.create_resource(10, 0)
        self.assertState(P, 10, 0)
        chans = [P.acquire() for _ in range(10)]
        self.assertState(P, 0, 10)
        with self.assertRaises(P.LimitExceeded):
            P.acquire()
        chans.pop().release()
        self.assertState(P, 1, 9)
        [chan.release() for chan in chans]
        self.assertState(P, 10, 0)

    def test_acquire_prepare_raises(self):
        if self.abstract:
            return
        P = self.create_resource(10, 0)

        self.assertEqual(len(P._resource.queue), 10)
        P.prepare = Mock()
        P.prepare.side_effect = IOError()
        with self.assertRaises(IOError):
            P.acquire(block=True)
        self.assertEqual(len(P._resource.queue), 10)

    def test_acquire_no_limit(self):
        if self.abstract:
            return
        P = self.create_resource(None, 0)
        P.acquire().release()

    def test_replace_when_limit(self):
        if self.abstract:
            return
        P = self.create_resource(10, 0)
        r = P.acquire()
        P._dirty = Mock()
        P.close_resource = Mock()

        P.replace(r)
        P._dirty.discard.assert_called_with(r)
        P.close_resource.assert_called_with(r)

    def test_replace_no_limit(self):
        if self.abstract:
            return
        P = self.create_resource(None, 0)
        r = P.acquire()
        P._dirty = Mock()
        P.close_resource = Mock()

        P.replace(r)
        self.assertFalse(P._dirty.discard.called)
        P.close_resource.assert_called_with(r)

    def test_interface_prepare(self):
        if not self.abstract:
            return
        x = xResource()
        self.assertEqual(x.prepare(10), 10)

    def test_force_close_all_handles_AttributeError(self):
        if self.abstract:
            return
        P = self.create_resource(10, 10)
        cr = P.collect_resource = Mock()
        cr.side_effect = AttributeError('x')

        P.acquire()
        self.assertTrue(P._dirty)

        P.force_close_all()

    def test_force_close_all_no_mutex(self):
        if self.abstract:
            return
        P = self.create_resource(10, 10)
        P.close_resource = Mock()

        m = P._resource = Mock()
        m.mutex = None
        m.queue.pop.side_effect = IndexError

        P.force_close_all()

    def test_add_when_empty(self):
        if self.abstract:
            return
        P = self.create_resource(None, None)
        P._resource.queue[:] = []
        self.assertFalse(P._resource.queue)
        P._add_when_empty()
        self.assertTrue(P._resource.queue)
class test_ConnectionPool(ResourceCase):
    abstract = False

    def create_resource(self, limit, preload):
        return Connection(port=5672, transport=Transport).Pool(limit, preload)

    def test_setup(self):
        P = self.create_resource(10, 2)
        q = P._resource.queue
        self.assertIsNotNone(q[0]._connection)
        self.assertIsNotNone(q[1]._connection)
        self.assertIsNone(q[2]()._connection)

    def test_acquire_raises_evaluated(self):
        P = self.create_resource(1, 0)
        # evaluate the connection first
        r = P.acquire()
        r.release()
        P.prepare = Mock()
        P.prepare.side_effect = MemoryError()
        P.release = Mock()
        with self.assertRaises(MemoryError):
            with P.acquire():
                assert False
        P.release.assert_called_with(r)

    def test_release_no__debug(self):
        P = self.create_resource(10, 2)
        R = Mock()
        R._debug.side_effect = AttributeError()
        P.release_resource(R)

    def test_setup_no_limit(self):
        P = self.create_resource(None, None)
        self.assertFalse(P._resource.queue)
        self.assertIsNone(P.limit)

    def test_prepare_not_callable(self):
        P = self.create_resource(None, None)
        conn = Connection('memory://')
        self.assertIs(P.prepare(conn), conn)

    def test_acquire_channel(self):
        P = self.create_resource(10, 0)
        with P.acquire_channel() as (conn, channel):
            self.assertIs(channel, conn.default_channel)
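The pool semantics asserted above, in plain form: ``acquire`` hands out connections until ``limit`` is reached and then raises ``LimitExceeded``; ``release`` puts them back. A small sketch::

    from kombu import Connection

    pool = Connection('memory://').Pool(2)
    c1, c2 = pool.acquire(), pool.acquire()
    try:
        pool.acquire()          # a third acquire exceeds the limit
    except pool.LimitExceeded:
        pass
    c1.release()
    c2.release()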
class test_ChannelPool(ResourceCase):
    abstract = False

    def create_resource(self, limit, preload):
        return Connection(port=5672, transport=Transport) \
            .ChannelPool(limit, preload)

    def test_setup(self):
        P = self.create_resource(10, 2)
        q = P._resource.queue
        self.assertTrue(q[0].basic_consume)
        self.assertTrue(q[1].basic_consume)
        with self.assertRaises(AttributeError):
            getattr(q[2], 'basic_consume')

    def test_setup_no_limit(self):
        P = self.create_resource(None, None)
        self.assertFalse(P._resource.queue)
        self.assertIsNone(P.limit)

    def test_prepare_not_callable(self):
        P = self.create_resource(10, 0)
        conn = Connection('memory://')
        chan = conn.default_channel
        self.assertIs(P.prepare(chan), chan)

kombu-3.0.7/kombu/tests/test_entities.py

from __future__ import absolute_import

import pickle

from kombu import Connection, Exchange, Producer, Queue, binding
from kombu.exceptions import NotBoundError

from .case import Case, Mock, call
from .mocks import Transport


def get_conn():
    return Connection(transport=Transport)


class test_binding(Case):

    def test_constructor(self):
        x = binding(
            Exchange('foo'), 'rkey',
            arguments={'barg': 'bval'},
            unbind_arguments={'uarg': 'uval'},
        )
        self.assertEqual(x.exchange, Exchange('foo'))
        self.assertEqual(x.routing_key, 'rkey')
        self.assertDictEqual(x.arguments, {'barg': 'bval'})
        self.assertDictEqual(x.unbind_arguments, {'uarg': 'uval'})

    def test_declare(self):
        chan = get_conn().channel()
        x = binding(Exchange('foo'), 'rkey')
        x.declare(chan)
        self.assertIn('exchange_declare', chan)

    def test_declare_no_exchange(self):
        chan = get_conn().channel()
        x = binding()
        x.declare(chan)
        self.assertNotIn('exchange_declare', chan)

    def test_bind(self):
        chan = get_conn().channel()
        x = binding(Exchange('foo'))
        x.bind(Exchange('bar')(chan))
        self.assertIn('exchange_bind', chan)

    def test_unbind(self):
        chan = get_conn().channel()
        x = binding(Exchange('foo'))
        x.unbind(Exchange('bar')(chan))
        self.assertIn('exchange_unbind', chan)

    def test_repr(self):
        b = binding(Exchange('foo'), 'rkey')
        self.assertIn('foo', repr(b))
        self.assertIn('rkey', repr(b))


class test_Exchange(Case):

    def test_bound(self):
        exchange = Exchange('foo', 'direct')
        self.assertFalse(exchange.is_bound)
        self.assertIn('= 1: self.c.should_stop = True counter[0] += 1 return counter

        self.c.should_stop = False
        consume.side_effect = se
        self.c.run()
        self.assertTrue(sleep.called)

    def test_run_raises(self):
        conn = ContextMock(name='connection')
        self.c.connection = conn
        conn.connection_errors = (KeyError, )
        conn.channel_errors = ()
        consume = self.c.consume = Mock(name='c.consume')

        with patch('kombu.mixins.warn') as warn:

            def se_raises(*args, **kwargs):
                self.c.should_stop = True
                raise KeyError('foo')

            self.c.should_stop = False
            consume.side_effect = se_raises
            self.c.run()
        self.assertTrue(warn.called)

kombu-3.0.7/kombu/tests/test_pidbox.py

from __future__ import absolute_import

import socket
import warnings

from kombu import Connection
from kombu import pidbox
from kombu.exceptions import ContentDisallowed, InconsistencyError
from kombu.utils import uuid

from .case import Case, Mock, patch


class test_Mailbox(Case):

    def _handler(self, state):
        return self.stats['var']

    def setUp(self):

        class Mailbox(pidbox.Mailbox):

            def _collect(self, *args, **kwargs):
                return 'COLLECTED'

        self.mailbox = Mailbox('test_pidbox')
        self.connection = Connection(transport='memory')
        self.state = {'var': 1}
        self.handlers = {'mymethod': self._handler}
        self.bound = self.mailbox(self.connection)
        self.default_chan = self.connection.channel()
        self.node = self.bound.Node(
            'test_pidbox', state=self.state, handlers=self.handlers,
            channel=self.default_chan,
        )
    def test_publish_reply_ignores_InconsistencyError(self):
        mailbox = pidbox.Mailbox('test_reply__collect')(self.connection)
        with patch('kombu.pidbox.Producer') as Producer:
            producer = Producer.return_value = Mock(name='producer')
            producer.publish.side_effect = InconsistencyError()
            mailbox._publish_reply(
                {'foo': 'bar'}, mailbox.reply_exchange, mailbox.oid, 'foo',
            )
            self.assertTrue(producer.publish.called)

    def test_reply__collect(self):
        mailbox = pidbox.Mailbox('test_reply__collect')(self.connection)
        exchange = mailbox.reply_exchange.name
        channel = self.connection.channel()
        mailbox.reply_queue(channel).declare()

        ticket = uuid()
        mailbox._publish_reply({'foo': 'bar'}, exchange, mailbox.oid, ticket)
        _callback_called = [False]

        def callback(body):
            _callback_called[0] = True

        reply = mailbox._collect(ticket, limit=1,
                                 callback=callback, channel=channel)
        self.assertEqual(reply, [{'foo': 'bar'}])
        self.assertTrue(_callback_called[0])

        ticket = uuid()
        mailbox._publish_reply({'biz': 'boz'}, exchange, mailbox.oid, ticket)
        reply = mailbox._collect(ticket, limit=1, channel=channel)
        self.assertEqual(reply, [{'biz': 'boz'}])

        mailbox._publish_reply({'foo': 'BAM'}, exchange, mailbox.oid, 'doom',
                               serializer='pickle')
        with self.assertRaises(ContentDisallowed):
            reply = mailbox._collect('doom', limit=1, channel=channel)

        mailbox._publish_reply(
            {'foo': 'BAMBAM'}, exchange, mailbox.oid, 'doom',
            serializer='pickle',
        )
        reply = mailbox._collect('doom', limit=1, channel=channel,
                                 accept=['pickle'])
        self.assertEqual(reply[0]['foo'], 'BAMBAM')

        de = mailbox.connection.drain_events = Mock()
        de.side_effect = socket.timeout
        mailbox._collect(ticket, limit=1, channel=channel)

    def test_constructor(self):
        self.assertIsNone(self.mailbox.connection)
        self.assertTrue(self.mailbox.exchange.name)
        self.assertTrue(self.mailbox.reply_exchange.name)

    def test_bound(self):
        bound = self.mailbox(self.connection)
        self.assertIs(bound.connection, self.connection)

    def test_Node(self):
        self.assertTrue(self.node.hostname)
        self.assertTrue(self.node.state)
        self.assertIs(self.node.mailbox, self.bound)
        self.assertTrue(self.handlers)

        # No initial handlers
        node2 = self.bound.Node('test_pidbox2', state=self.state)
        self.assertDictEqual(node2.handlers, {})

    def test_Node_consumer(self):
        consumer1 = self.node.Consumer()
        self.assertIs(consumer1.channel, self.default_chan)
        self.assertTrue(consumer1.no_ack)

        chan2 = self.connection.channel()
        consumer2 = self.node.Consumer(channel=chan2, no_ack=False)
        self.assertIs(consumer2.channel, chan2)
        self.assertFalse(consumer2.no_ack)

    def test_Node_consumer_multiple_listeners(self):
        warnings.resetwarnings()
        consumer = self.node.Consumer()
        q = consumer.queues[0]
        with warnings.catch_warnings(record=True) as log:
            q.on_declared('foo', 1, 1)
            self.assertTrue(log)
            self.assertIn('already using this', log[0].message.args[0])

        with warnings.catch_warnings(record=True) as log:
            q.on_declared('foo', 1, 0)
            self.assertFalse(log)

    def test_handler(self):
        node = self.bound.Node('test_handler', state=self.state)

        @node.handler
        def my_handler_name(state):
            return 42

        self.assertIn('my_handler_name', node.handlers)

    def test_dispatch(self):
        node = self.bound.Node('test_dispatch', state=self.state)

        @node.handler
        def my_handler_name(state, x=None, y=None):
            return x + y

        self.assertEqual(node.dispatch('my_handler_name',
                                       arguments={'x': 10, 'y': 10}), 20)

    def test_dispatch_raising_SystemExit(self):
        node = self.bound.Node('test_dispatch_raising_SystemExit',
                               state=self.state)

        @node.handler
        def my_handler_name(state):
            raise SystemExit

        with self.assertRaises(SystemExit):
            node.dispatch('my_handler_name')
    def test_dispatch_raising(self):
        node = self.bound.Node('test_dispatch_raising', state=self.state)

        @node.handler
        def my_handler_name(state):
            raise KeyError('foo')

        res = node.dispatch('my_handler_name')
        self.assertIn('error', res)
        self.assertIn('KeyError', res['error'])

    def test_dispatch_replies(self):
        _replied = [False]

        def reply(data, **options):
            _replied[0] = True

        node = self.bound.Node('test_dispatch', state=self.state)
        node.reply = reply

        @node.handler
        def my_handler_name(state, x=None, y=None):
            return x + y

        node.dispatch('my_handler_name',
                      arguments={'x': 10, 'y': 10},
                      reply_to={'exchange': 'foo', 'routing_key': 'bar'})
        self.assertTrue(_replied[0])

    def test_reply(self):
        _replied = [(None, None, None)]

        def publish_reply(data, exchange, routing_key, ticket, **kwargs):
            _replied[0] = (data, exchange, routing_key, ticket)

        mailbox = self.mailbox(self.connection)
        mailbox._publish_reply = publish_reply
        node = mailbox.Node('test_reply')

        @node.handler
        def my_handler_name(state):
            return 42

        node.dispatch('my_handler_name',
                      reply_to={'exchange': 'exchange',
                                'routing_key': 'rkey'},
                      ticket='TICKET')
        data, exchange, routing_key, ticket = _replied[0]
        self.assertEqual(data, {'test_reply': 42})
        self.assertEqual(exchange, 'exchange')
        self.assertEqual(routing_key, 'rkey')
        self.assertEqual(ticket, 'TICKET')

    def test_handle_message(self):
        node = self.bound.Node('test_dispatch_from_message')

        @node.handler
        def my_handler_name(state, x=None, y=None):
            return x * y

        body = {'method': 'my_handler_name',
                'arguments': {'x': 64, 'y': 64}}

        self.assertEqual(node.handle_message(body, None), 64 * 64)

        # message not for me should not be processed.
        body['destination'] = ['some_other_node']
        self.assertIsNone(node.handle_message(body, None))

    def test_handle_message_adjusts_clock(self):
        node = self.bound.Node('test_adjusts_clock')

        @node.handler
        def my_handler_name(state):
            return 10

        body = {'method': 'my_handler_name',
                'arguments': {}}
        message = Mock(name='message')
        message.headers = {'clock': 313}
        node.adjust_clock = Mock(name='adjust_clock')
        res = node.handle_message(body, message)
        node.adjust_clock.assert_called_with(313)
        self.assertEqual(res, 10)

    def test_listen(self):
        consumer = self.node.listen()
        self.assertEqual(consumer.callbacks[0],
                         self.node.handle_message)
        self.assertEqual(consumer.channel, self.default_chan)

    def test_cast(self):
        self.bound.cast(['somenode'], 'mymethod')
        consumer = self.node.Consumer()
        self.assertIsCast(self.get_next(consumer))

    def test_abcast(self):
        self.bound.abcast('mymethod')
        consumer = self.node.Consumer()
        self.assertIsCast(self.get_next(consumer))

    def test_call_destination_must_be_sequence(self):
        with self.assertRaises(ValueError):
            self.bound.call('some_node', 'mymethod')

    def test_call(self):
        self.assertEqual(
            self.bound.call(['some_node'], 'mymethod'),
            'COLLECTED',
        )
        consumer = self.node.Consumer()
        self.assertIsCall(self.get_next(consumer))

    def test_multi_call(self):
        self.assertEqual(self.bound.multi_call('mymethod'), 'COLLECTED')
        consumer = self.node.Consumer()
        self.assertIsCall(self.get_next(consumer))

    def get_next(self, consumer):
        m = consumer.queues[0].get()
        if m:
            return m.payload

    def assertIsCast(self, message):
        self.assertTrue(message['method'])

    def assertIsCall(self, message):
        self.assertTrue(message['method'])
        self.assertTrue(message['reply_to'])
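``pidbox`` is the broadcast RPC layer these tests exercise: a ``Mailbox`` bound to a connection creates named ``Node``s whose handlers can be addressed via ``cast``/``call``. A hedged sketch (memory transport and node name assumed)::

    from kombu import Connection
    from kombu import pidbox

    mailbox = pidbox.Mailbox('example')(Connection('memory://'))
    node = mailbox.Node('worker1', state={})

    @node.handler
    def ping(state):
        return 'pong'

    mailbox.abcast('ping')   # broadcast to all nodes
    # a running node would pick this up via node.listen() + handle_message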
kombu-3.0.7/kombu/tests/test_pools.py

from __future__ import absolute_import

from kombu import Connection, Producer
from kombu import pools
from kombu.connection import ConnectionPool
from kombu.utils import eqhash

from .case import Case, Mock


class test_ProducerPool(Case):
    Pool = pools.ProducerPool

    class MyPool(pools.ProducerPool):

        def __init__(self, *args, **kwargs):
            self.instance = Mock()
            pools.ProducerPool.__init__(self, *args, **kwargs)

        def Producer(self, connection):
            return self.instance

    def setUp(self):
        self.connections = Mock()
        self.pool = self.Pool(self.connections, limit=10)

    def test_close_resource(self):
        self.pool.close_resource(Mock(name='resource'))

    def test_releases_connection_when_Producer_raises(self):
        self.pool.Producer = Mock()
        self.pool.Producer.side_effect = IOError()
        acq = self.pool._acquire_connection = Mock()
        conn = acq.return_value = Mock()
        with self.assertRaises(IOError):
            self.pool.create_producer()
        conn.release.assert_called_with()

    def test_prepare_release_connection_on_error(self):
        pp = Mock()
        p = pp.return_value = Mock()
        p.revive.side_effect = IOError()
        acq = self.pool._acquire_connection = Mock()
        conn = acq.return_value = Mock()
        p._channel = None
        with self.assertRaises(IOError):
            self.pool.prepare(pp)
        conn.release.assert_called_with()

    def test_release_releases_connection(self):
        p = Mock()
        p.__connection__ = Mock()
        self.pool.release(p)
        p.__connection__.release.assert_called_with()
        p.__connection__ = None
        self.pool.release(p)

    def test_init(self):
        self.assertIs(self.pool.connections, self.connections)

    def test_Producer(self):
        self.assertIsInstance(self.pool.Producer(Mock()), Producer)

    def test_acquire_connection(self):
        self.pool._acquire_connection()
        self.connections.acquire.assert_called_with(block=True)

    def test_new(self):
        promise = self.pool.new()
        producer = promise()
        self.assertIsInstance(producer, Producer)
        self.connections.acquire.assert_called_with(block=True)

    def test_setup_unlimited(self):
        pool = self.Pool(self.connections, limit=None)
        pool.setup()
        self.assertFalse(pool._resource.queue)

    def test_setup(self):
        self.assertEqual(len(self.pool._resource.queue), self.pool.limit)

        first = self.pool._resource.get_nowait()
        producer = first()
        self.assertIsInstance(producer, Producer)

    def test_prepare(self):
        connection = self.connections.acquire.return_value = Mock()
        pool = self.MyPool(self.connections, limit=10)
        pool.instance._channel = None
        first = pool._resource.get_nowait()
        producer = pool.prepare(first)
        self.assertTrue(self.connections.acquire.called)
        producer.revive.assert_called_with(connection)

    def test_prepare_channel_already_created(self):
        self.connections.acquire.return_value = Mock()
        pool = self.MyPool(self.connections, limit=10)
        pool.instance._channel = Mock()
        first = pool._resource.get_nowait()
        self.connections.acquire.reset()
        producer = pool.prepare(first)
        self.assertFalse(producer.revive.called)

    def test_prepare_not_callable(self):
        x = Producer(Mock)
        self.pool.prepare(x)

    def test_release(self):
        p = Mock()
        p.channel = Mock()
        p.__connection__ = Mock()
        self.pool.release(p)
        p.__connection__.release.assert_called_with()
        self.assertIsNone(p.channel)
class test_PoolGroup(Case):
    Group = pools.PoolGroup

    class MyGroup(pools.PoolGroup):

        def create(self, resource, limit):
            return resource, limit

    def test_interface_create(self):
        g = self.Group()
        with self.assertRaises(NotImplementedError):
            g.create(Mock(), 10)

    def test_getitem_using_global_limit(self):
        pools._used[0] = False
        g = self.MyGroup(limit=pools.use_global_limit)
        res = g['foo']
        self.assertTupleEqual(res, ('foo', pools.get_limit()))
        self.assertTrue(pools._used[0])

    def test_getitem_using_custom_limit(self):
        pools._used[0] = True
        g = self.MyGroup(limit=102456)
        res = g['foo']
        self.assertTupleEqual(res, ('foo', 102456))

    def test_delitem(self):
        g = self.MyGroup()
        g['foo']
        del(g['foo'])
        self.assertNotIn('foo', g)

    def test_Connections(self):
        conn = Connection('memory://')
        p = pools.connections[conn]
        self.assertTrue(p)
        self.assertIsInstance(p, ConnectionPool)
        self.assertIs(p.connection, conn)
        self.assertEqual(p.limit, pools.get_limit())

    def test_Producers(self):
        conn = Connection('memory://')
        p = pools.producers[conn]
        self.assertTrue(p)
        self.assertIsInstance(p, pools.ProducerPool)
        self.assertIs(p.connections, pools.connections[conn])
        self.assertEqual(p.limit, p.connections.limit)
        self.assertEqual(p.limit, pools.get_limit())

    def test_all_groups(self):
        conn = Connection('memory://')
        pools.connections[conn]

        self.assertTrue(list(pools._all_pools()))

    def test_reset(self):
        pools.reset()

        class MyGroup(dict):
            clear_called = False

            def clear(self):
                self.clear_called = True

        p1 = pools.connections['foo'] = Mock()
        g1 = MyGroup()
        pools._groups.append(g1)

        pools.reset()
        p1.force_close_all.assert_called_with()
        self.assertTrue(g1.clear_called)

        p1 = pools.connections['foo'] = Mock()
        p1.force_close_all.side_effect = KeyError()
        pools.reset()

    def test_set_limit(self):
        pools.reset()
        pools.set_limit(34576)
        limit = pools.get_limit()
        self.assertEqual(limit, 34576)

        pools.connections[Connection('memory://')]

        pools.set_limit(limit + 1)
        self.assertEqual(pools.get_limit(), limit + 1)
        limit = pools.get_limit()
        with self.assertRaises(RuntimeError):
            pools.set_limit(limit - 1)
        pools.set_limit(limit - 1, force=True)
        self.assertEqual(pools.get_limit(), limit - 1)

        pools.set_limit(pools.get_limit())


class test_fun_PoolGroup(Case):

    def test_connections_behavior(self):
        c1u = 'memory://localhost:123'
        c2u = 'memory://localhost:124'
        c1 = Connection(c1u)
        c2 = Connection(c2u)
        c3 = Connection(c1u)

        assert eqhash(c1) != eqhash(c2)
        assert eqhash(c1) == eqhash(c3)

        p1 = pools.connections[c1]
        p2 = pools.connections[c2]
        p3 = pools.connections[c3]

        self.assertIsNot(p1, p2)
        self.assertIs(p1, p3)

        r1 = p1.acquire()
        self.assertTrue(p1._dirty)
        self.assertTrue(p3._dirty)
        self.assertFalse(p2._dirty)
        r1.release()
        self.assertFalse(p1._dirty)
        self.assertFalse(p3._dirty)
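The global pool groups tested above key pools by connection hash (``eqhash``), so equivalent URLs share a pool. The common pattern::

    from kombu import Connection
    from kombu.pools import producers

    conn = Connection('memory://')
    with producers[conn].acquire(block=True) as producer:
        producer.publish({'hello': 'world'}, routing_key='example')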
kombu-3.0.7/kombu/tests/test_serialization.py

#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import unicode_literals

import sys

from base64 import b64decode

from kombu.exceptions import ContentDisallowed
from kombu.five import text_t, bytes_t
from kombu.serialization import (
    registry, register, SerializerNotInstalled,
    raw_encode, register_yaml, register_msgpack,
    dumps, loads, pickle, pickle_protocol,
    unregister, register_pickle,
    enable_insecure_serializers,
    disable_insecure_serializers,
)
from kombu.utils.encoding import str_to_bytes

from .case import Case, call, mask_modules, patch, skip_if_not_module

# For content_encoding tests
unicode_string = 'abcdé\u8463'
unicode_string_as_utf8 = unicode_string.encode('utf-8')
latin_string = 'abcdé'
latin_string_as_latin1 = latin_string.encode('latin-1')
latin_string_as_utf8 = latin_string.encode('utf-8')


# For serialization tests
py_data = {
    'string': 'The quick brown fox jumps over the lazy dog',
    'int': 10,
    'float': 3.14159265,
    'unicode': 'Thé quick brown fox jumps over thé lazy dog',
    'list': ['george', 'jerry', 'elaine', 'cosmo'],
}

# JSON serialization tests
json_data = """\
{"int": 10, "float": 3.1415926500000002, \
"list": ["george", "jerry", "elaine", "cosmo"], \
"string": "The quick brown fox jumps over the lazy \
dog", "unicode": "Th\\u00e9 quick brown fox jumps over \
th\\u00e9 lazy dog"}\
"""

# Pickle serialization tests
pickle_data = pickle.dumps(py_data, protocol=pickle_protocol)

# YAML serialization tests
yaml_data = """\
float: 3.1415926500000002
int: 10
list: [george, jerry, elaine, cosmo]
string: The quick brown fox jumps over the lazy dog
unicode: "Th\\xE9 quick brown fox jumps over th\\xE9 lazy dog"
"""


msgpack_py_data = dict(py_data)
# Unicode chars are lost in transmit :(
msgpack_py_data['unicode'] = 'Th quick brown fox jumps over th lazy dog'
msgpack_data = b64decode(str_to_bytes("""\
haNpbnQKpWZsb2F0y0AJIftTyNTxpGxpc3SUpmdlb3JnZaVqZXJyeaZlbGFpbmWlY29zbW+mc3Rya\
W5n2gArVGhlIHF1aWNrIGJyb3duIGZveCBqdW1wcyBvdmVyIHRoZSBsYXp5IGRvZ6d1bmljb2Rl2g\
ApVGggcXVpY2sgYnJvd24gZm94IGp1bXBzIG92ZXIgdGggbGF6eSBkb2c=\
"""))


def say(m):
    sys.stderr.write('%s\n' % (m, ))


registry.register('testS', lambda s: s, lambda s: 'decoded',
                  'application/testS', 'utf-8')


class test_Serialization(Case):

    def test_disable(self):
        disabled = registry._disabled_content_types
        try:
            registry.disable('testS')
            self.assertIn('application/testS', disabled)
            disabled.clear()

            registry.disable('application/testS')
            self.assertIn('application/testS', disabled)
        finally:
            disabled.clear()

    def test_enable(self):
        registry._disabled_content_types.add('application/json')
        registry.enable('json')
        self.assertNotIn('application/json',
                         registry._disabled_content_types)
        registry._disabled_content_types.add('application/json')
        registry.enable('application/json')
        self.assertNotIn('application/json',
                         registry._disabled_content_types)

    def test_loads_when_disabled(self):
        disabled = registry._disabled_content_types
        try:
            registry.disable('testS')

            with self.assertRaises(SerializerNotInstalled):
                loads('xxd', 'application/testS', 'utf-8', force=False)

            ret = loads('xxd', 'application/testS', 'utf-8', force=True)
            self.assertEqual(ret, 'decoded')
        finally:
            disabled.clear()

    def test_loads_when_data_is_None(self):
        loads(None, 'application/testS', 'utf-8')

    def test_content_type_decoding(self):
        self.assertEqual(
            unicode_string,
            loads(unicode_string_as_utf8,
                  content_type='plain/text', content_encoding='utf-8'),
        )
        self.assertEqual(
            latin_string,
            loads(latin_string_as_latin1,
                  content_type='application/data',
                  content_encoding='latin-1'),
        )

    def test_content_type_binary(self):
        self.assertIsInstance(
            loads(unicode_string_as_utf8,
                  content_type='application/data',
                  content_encoding='binary'),
            bytes_t,
        )

        self.assertEqual(
            unicode_string_as_utf8,
            loads(unicode_string_as_utf8,
                  content_type='application/data',
                  content_encoding='binary'),
        )

    def test_content_type_encoding(self):
        # Using the 'raw' serializer
        self.assertEqual(
            unicode_string_as_utf8,
            dumps(unicode_string, serializer='raw')[-1],
        )
        self.assertEqual(
            latin_string_as_utf8,
            dumps(latin_string, serializer='raw')[-1],
        )
        # And again w/o a specific serializer to check the
        # code where we force unicode objects into a string.
        self.assertEqual(
            unicode_string_as_utf8,
            dumps(unicode_string)[-1],
        )
        self.assertEqual(
            latin_string_as_utf8,
            dumps(latin_string)[-1],
        )

    def test_enable_insecure_serializers(self):
        with patch('kombu.serialization.registry') as registry:
            enable_insecure_serializers()
            registry.assert_has_calls([
                call.enable('pickle'), call.enable('yaml'),
                call.enable('msgpack'),
            ])
            registry.enable.side_effect = KeyError()
            enable_insecure_serializers()

        with patch('kombu.serialization.registry') as registry:
            enable_insecure_serializers(['msgpack'])
            registry.assert_has_calls([call.enable('msgpack')])

    def test_disable_insecure_serializers(self):
        with patch('kombu.serialization.registry') as registry:
            registry._decoders = ['pickle', 'yaml', 'doomsday']
            disable_insecure_serializers(allowed=['doomsday'])
            registry.disable.assert_has_calls([call('pickle'), call('yaml')])
            registry.enable.assert_has_calls([call('doomsday')])

            disable_insecure_serializers(allowed=None)
            registry.disable.assert_has_calls([
                call('pickle'), call('yaml'), call('doomsday')
            ])

    def test_json_loads(self):
        self.assertEqual(
            py_data,
            loads(json_data,
                  content_type='application/json',
                  content_encoding='utf-8'),
        )

    def test_json_dumps(self):
        self.assertEqual(
            loads(
                dumps(py_data, serializer='json')[-1],
                content_type='application/json',
                content_encoding='utf-8',
            ),
            loads(
                json_data,
                content_type='application/json',
                content_encoding='utf-8',
            ),
        )

    @skip_if_not_module('msgpack', (ImportError, ValueError))
    def test_msgpack_loads(self):
        register_msgpack()
        res = loads(msgpack_data,
                    content_type='application/x-msgpack',
                    content_encoding='binary')
        if sys.version_info[0] < 3:
            for k, v in res.items():
                if isinstance(v, text_t):
                    res[k] = v.encode()
                if isinstance(v, (list, tuple)):
                    res[k] = [i.encode() for i in v]
        self.assertEqual(
            msgpack_py_data,
            res,
        )

    @skip_if_not_module('msgpack', (ImportError, ValueError))
    def test_msgpack_dumps(self):
        register_msgpack()
        self.assertEqual(
            loads(
                dumps(msgpack_py_data, serializer='msgpack')[-1],
                content_type='application/x-msgpack',
                content_encoding='binary',
            ),
            loads(
                msgpack_data,
                content_type='application/x-msgpack',
                content_encoding='binary',
            ),
        )

    @skip_if_not_module('yaml')
    def test_yaml_loads(self):
        register_yaml()
        self.assertEqual(
            py_data,
            loads(yaml_data,
                  content_type='application/x-yaml',
                  content_encoding='utf-8'),
        )

    @skip_if_not_module('yaml')
    def test_yaml_dumps(self):
        register_yaml()
        self.assertEqual(
            loads(
                dumps(py_data, serializer='yaml')[-1],
                content_type='application/x-yaml',
                content_encoding='utf-8',
            ),
            loads(
                yaml_data,
                content_type='application/x-yaml',
                content_encoding='utf-8',
            ),
        )

    def test_pickle_loads(self):
        self.assertEqual(
            py_data,
            loads(pickle_data,
                  content_type='application/x-python-serialize',
                  content_encoding='binary'),
        )

    def test_pickle_dumps(self):
        self.assertEqual(
            pickle.loads(pickle_data),
            pickle.loads(dumps(py_data, serializer='pickle')[-1]),
        )

    def test_register(self):
        register(None, None, None, None)

    def test_unregister(self):
        with self.assertRaises(SerializerNotInstalled):
            unregister('nonexisting')
        dumps('foo', serializer='pickle')
        unregister('pickle')
        with self.assertRaises(SerializerNotInstalled):
            dumps('foo', serializer='pickle')
        register_pickle()

    def test_set_default_serializer_missing(self):
        with self.assertRaises(SerializerNotInstalled):
            registry._set_default_serializer('nonexisting')

    def test_dumps_missing(self):
        with self.assertRaises(SerializerNotInstalled):
            dumps('foo', serializer='nonexisting')
self.assertEqual(ctyp, 'application/data') self.assertEqual(cenc, 'binary') def test_loads__not_accepted(self): with self.assertRaises(ContentDisallowed): loads('tainted', 'application/x-evil', 'binary', accept=[]) with self.assertRaises(ContentDisallowed): loads('tainted', 'application/x-evil', 'binary', accept=['application/x-json']) self.assertTrue( loads('tainted', 'application/x-doomsday', 'binary', accept=['application/x-doomsday']) ) def test_raw_encode(self): self.assertTupleEqual( raw_encode('foo'.encode('utf-8')), ('application/data', 'binary', 'foo'.encode('utf-8')), ) @mask_modules('yaml') def test_register_yaml__no_yaml(self): register_yaml() with self.assertRaises(SerializerNotInstalled): loads('foo', 'application/x-yaml', 'utf-8') @mask_modules('msgpack') def test_register_msgpack__no_msgpack(self): register_msgpack() with self.assertRaises(SerializerNotInstalled): loads('foo', 'application/x-msgpack', 'utf-8') kombu-3.0.7/kombu/tests/test_simple.py0000644000076500000000000000727412237554371020454 0ustar asksolwheel00000000000000from __future__ import absolute_import from kombu import Connection, Exchange, Queue from .case import Case, Mock class SimpleBase(Case): abstract = True def Queue(self, name, *args, **kwargs): q = name if not isinstance(q, Queue): q = self.__class__.__name__ if name: q = '%s.%s' % (q, name) return self._Queue(q, *args, **kwargs) def _Queue(self, *args, **kwargs): raise NotImplementedError() def setUp(self): if not self.abstract: self.connection = Connection(transport='memory') with self.connection.channel() as channel: channel.exchange_declare('amq.direct') self.q = self.Queue(None, no_ack=True) def tearDown(self): if not self.abstract: self.q.close() self.connection.close() def test_produce__consume(self): if self.abstract: return q = self.Queue('test_produce__consume', no_ack=True) q.put({'hello': 'Simple'}) self.assertEqual(q.get(timeout=1).payload, {'hello': 'Simple'}) with self.assertRaises(q.Empty): q.get(timeout=0.1) def test_produce__basic_get(self): if self.abstract: return q = self.Queue('test_produce__basic_get', no_ack=True) q.put({'hello': 'SimpleSync'}) self.assertEqual(q.get_nowait().payload, {'hello': 'SimpleSync'}) with self.assertRaises(q.Empty): q.get_nowait() q.put({'hello': 'SimpleSync'}) self.assertEqual(q.get(block=False).payload, {'hello': 'SimpleSync'}) with self.assertRaises(q.Empty): q.get(block=False) def test_clear(self): if self.abstract: return q = self.Queue('test_clear', no_ack=True) for i in range(10): q.put({'hello': 'SimplePurge%d' % (i, )}) self.assertEqual(q.clear(), 10) def test_enter_exit(self): if self.abstract: return q = self.Queue('test_enter_exit') q.close = Mock() self.assertIs(q.__enter__(), q) q.__exit__() q.close.assert_called_with() def test_qsize(self): if self.abstract: return q = self.Queue('test_clear', no_ack=True) for i in range(10): q.put({'hello': 'SimplePurge%d' % (i, )}) self.assertEqual(q.qsize(), 10) self.assertEqual(len(q), 10) def test_autoclose(self): if self.abstract: return channel = self.connection.channel() q = self.Queue('test_autoclose', no_ack=True, channel=channel) q.close() def test_custom_Queue(self): if self.abstract: return n = self.__class__.__name__ exchange = Exchange('%s-test.custom.Queue' % (n, )) queue = Queue('%s-test.custom.Queue' % (n, ), exchange, 'my.routing.key') q = self.Queue(queue) self.assertEqual(q.consumer.queues[0], queue) q.close() def test_bool(self): if self.abstract: return q = self.Queue('test_nonzero') self.assertTrue(q) class 
test_SimpleQueue(SimpleBase): abstract = False def _Queue(self, *args, **kwargs): return self.connection.SimpleQueue(*args, **kwargs) def test_is_ack(self): q = self.Queue('test_is_no_ack') self.assertFalse(q.no_ack) class test_SimpleBuffer(SimpleBase): abstract = False def Queue(self, *args, **kwargs): return self.connection.SimpleBuffer(*args, **kwargs) def test_is_no_ack(self): q = self.Queue('test_is_no_ack') self.assertTrue(q.no_ack) kombu-3.0.7/kombu/tests/test_syn.py0000644000076500000000000000364512237554371017772 0ustar asksolwheel00000000000000from __future__ import absolute_import import socket import sys import types from kombu import syn from kombu.tests.case import Case, patch, module_exists class test_syn(Case): def test_compat(self): self.assertEqual(syn.blocking(lambda: 10), 10) syn.select_blocking_method('foo') def test_detect_environment(self): try: syn._environment = None X = syn.detect_environment() self.assertEqual(syn._environment, X) Y = syn.detect_environment() self.assertEqual(Y, X) finally: syn._environment = None @module_exists('eventlet', 'eventlet.patcher') def test_detect_environment_eventlet(self): with patch('eventlet.patcher.is_monkey_patched', create=True) as m: self.assertTrue(sys.modules['eventlet']) m.return_value = True env = syn._detect_environment() m.assert_called_with(socket) self.assertEqual(env, 'eventlet') @module_exists('gevent') def test_detect_environment_gevent(self): with patch('gevent.socket', create=True) as m: prev, socket.socket = socket.socket, m.socket self.assertTrue(sys.modules['gevent']) env = syn._detect_environment() self.assertEqual(env, 'gevent') def test_detect_environment_no_eventlet_or_gevent(self): try: sys.modules['eventlet'] = types.ModuleType('eventlet') sys.modules['eventlet.patcher'] = types.ModuleType('eventlet') self.assertEqual(syn._detect_environment(), 'default') finally: sys.modules.pop('eventlet', None) syn._detect_environment() try: sys.modules['gevent'] = types.ModuleType('gevent') self.assertEqual(syn._detect_environment(), 'default') finally: sys.modules.pop('gevent', None) syn._detect_environment() kombu-3.0.7/kombu/tests/transport/0000755000076500000000000000000012247127370017570 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/tests/transport/__init__.py0000644000076500000000000000000012075774634021701 0ustar asksolwheel00000000000000kombu-3.0.7/kombu/tests/transport/test_amqplib.py0000644000076500000000000001114012237554371022627 0ustar asksolwheel00000000000000from __future__ import absolute_import import sys from kombu import Connection from kombu.tests.case import Case, SkipTest, Mock, mask_modules class MockConnection(dict): def __setattr__(self, key, value): self[key] = value try: __import__('amqplib') except ImportError: amqplib = Channel = None else: from kombu.transport import amqplib class Channel(amqplib.Channel): wait_returns = [] def _x_open(self, *args, **kwargs): pass def wait(self, *args, **kwargs): return self.wait_returns def _send_method(self, *args, **kwargs): pass class amqplibCase(Case): def setUp(self): if amqplib is None: raise SkipTest('amqplib not installed') self.setup() def setup(self): pass class test_Channel(amqplibCase): def setup(self): self.conn = Mock() self.conn.channels = {} self.channel = Channel(self.conn, 0) def test_init(self): self.assertFalse(self.channel.no_ack_consumers) def test_prepare_message(self): self.assertTrue(self.channel.prepare_message( 'foobar', 10, 'application/data', 'utf-8', properties={}, )) def test_message_to_python(self): message = Mock() 
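# Illustrative sketch (not part of the original test suite): the
# SimpleQueue/SimpleBuffer tests above reduce to this usage pattern.
# SimpleQueue consumes with acknowledgements enabled (no_ack=False), while
# SimpleBuffer disables them.  Assumes the in-memory transport; 'sketch' is
# a hypothetical queue name.
from kombu import Connection

with Connection('memory://') as conn:
    queue = conn.SimpleQueue('sketch')
    queue.put({'hello': 'world'})
    message = queue.get(timeout=1)
    assert message.payload == {'hello': 'world'}
    message.ack()
    queue.close()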
message.headers = {} message.properties = {} self.assertTrue(self.channel.message_to_python(message)) def test_close_resolves_connection_cycle(self): self.assertIsNotNone(self.channel.connection) self.channel.close() self.assertIsNone(self.channel.connection) def test_basic_consume_registers_ack_status(self): self.channel.wait_returns = 'my-consumer-tag' self.channel.basic_consume('foo', no_ack=True) self.assertIn('my-consumer-tag', self.channel.no_ack_consumers) self.channel.wait_returns = 'other-consumer-tag' self.channel.basic_consume('bar', no_ack=False) self.assertNotIn('other-consumer-tag', self.channel.no_ack_consumers) self.channel.basic_cancel('my-consumer-tag') self.assertNotIn('my-consumer-tag', self.channel.no_ack_consumers) class test_Transport(amqplibCase): def setup(self): self.connection = Connection('amqplib://') self.transport = self.connection.transport def test_create_channel(self): connection = Mock() self.transport.create_channel(connection) connection.channel.assert_called_with() def test_drain_events(self): connection = Mock() self.transport.drain_events(connection, timeout=10.0) connection.drain_events.assert_called_with(timeout=10.0) def test_dnspython_localhost_resolve_bug(self): class Conn(object): def __init__(self, **kwargs): vars(self).update(kwargs) self.transport.Connection = Conn self.transport.client.hostname = 'localhost' conn1 = self.transport.establish_connection() self.assertEqual(conn1.host, '127.0.0.1:5672') self.transport.client.hostname = 'example.com' conn2 = self.transport.establish_connection() self.assertEqual(conn2.host, 'example.com:5672') def test_close_connection(self): connection = Mock() connection.client = Mock() self.transport.close_connection(connection) self.assertIsNone(connection.client) connection.close.assert_called_with() def test_verify_connection(self): connection = Mock() connection.channels = None self.assertFalse(self.transport.verify_connection(connection)) connection.channels = {1: 1, 2: 2} self.assertTrue(self.transport.verify_connection(connection)) @mask_modules('ssl') def test_import_no_ssl(self): pm = sys.modules.pop('kombu.transport.amqplib') try: from kombu.transport.amqplib import SSLError self.assertEqual(SSLError.__module__, 'kombu.transport.amqplib') finally: if pm is not None: sys.modules['kombu.transport.amqplib'] = pm class test_amqplib(amqplibCase): def test_default_port(self): class Transport(amqplib.Transport): Connection = MockConnection c = Connection(port=None, transport=Transport).connect() self.assertEqual(c['host'], '127.0.0.1:%s' % (Transport.default_port, )) def test_custom_port(self): class Transport(amqplib.Transport): Connection = MockConnection c = Connection(port=1337, transport=Transport).connect() self.assertEqual(c['host'], '127.0.0.1:1337') kombu-3.0.7/kombu/tests/transport/test_base.py0000644000076500000000000001141312237554371022117 0ustar asksolwheel00000000000000from __future__ import absolute_import from kombu import Connection, Consumer, Exchange, Producer, Queue from kombu.five import text_t from kombu.message import Message from kombu.transport.base import StdChannel, Transport, Management from kombu.tests.case import Case, Mock class test_StdChannel(Case): def setUp(self): self.conn = Connection('memory://') self.channel = self.conn.channel() self.channel.queues.clear() self.conn.connection.state.clear() def test_Consumer(self): q = Queue('foo', Exchange('foo')) print(self.channel.queues) cons = self.channel.Consumer(q) self.assertIsInstance(cons, Consumer) 
self.assertIs(cons.channel, self.channel) def test_Producer(self): prod = self.channel.Producer() self.assertIsInstance(prod, Producer) self.assertIs(prod.channel, self.channel) def test_interface_get_bindings(self): with self.assertRaises(NotImplementedError): StdChannel().get_bindings() def test_interface_after_reply_message_received(self): self.assertIsNone( StdChannel().after_reply_message_received(Queue('foo')), ) class test_Message(Case): def setUp(self): self.conn = Connection('memory://') self.channel = self.conn.channel() self.message = Message(self.channel, delivery_tag=313) def test_postencode(self): with self.assertRaises(LookupError): Message(self.channel, text_t('FOO'), postencode='ccyzz') def test_ack_respects_no_ack_consumers(self): self.channel.no_ack_consumers = set(['abc']) self.message.delivery_info['consumer_tag'] = 'abc' ack = self.channel.basic_ack = Mock() self.message.ack() self.assertNotEqual(self.message._state, 'ACK') self.assertFalse(ack.called) def test_ack_missing_consumer_tag(self): self.channel.no_ack_consumers = set(['abc']) self.message.delivery_info = {} ack = self.channel.basic_ack = Mock() self.message.ack() ack.assert_called_with(self.message.delivery_tag) def test_ack_not_no_ack(self): self.channel.no_ack_consumers = set() self.message.delivery_info['consumer_tag'] = 'abc' ack = self.channel.basic_ack = Mock() self.message.ack() ack.assert_called_with(self.message.delivery_tag) def test_ack_log_error_when_no_error(self): ack = self.message.ack = Mock() self.message.ack_log_error(Mock(), KeyError) ack.assert_called_with() def test_ack_log_error_when_error(self): ack = self.message.ack = Mock() ack.side_effect = KeyError('foo') logger = Mock() self.message.ack_log_error(logger, KeyError) ack.assert_called_with() self.assertTrue(logger.critical.called) self.assertIn("Couldn't ack", logger.critical.call_args[0][0]) def test_reject_log_error_when_no_error(self): reject = self.message.reject = Mock() self.message.reject_log_error(Mock(), KeyError, requeue=True) reject.assert_called_with(requeue=True) def test_reject_log_error_when_error(self): reject = self.message.reject = Mock() reject.side_effect = KeyError('foo') logger = Mock() self.message.reject_log_error(logger, KeyError) reject.assert_called_with(requeue=False) self.assertTrue(logger.critical.called) self.assertIn("Couldn't reject", logger.critical.call_args[0][0]) class test_interface(Case): def test_establish_connection(self): with self.assertRaises(NotImplementedError): Transport(None).establish_connection() def test_close_connection(self): with self.assertRaises(NotImplementedError): Transport(None).close_connection(None) def test_create_channel(self): with self.assertRaises(NotImplementedError): Transport(None).create_channel(None) def test_close_channel(self): with self.assertRaises(NotImplementedError): Transport(None).close_channel(None) def test_drain_events(self): with self.assertRaises(NotImplementedError): Transport(None).drain_events(None) def test_heartbeat_check(self): Transport(None).heartbeat_check(Mock(name='connection')) def test_driver_version(self): self.assertTrue(Transport(None).driver_version()) def test_register_with_event_loop(self): Transport(None).register_with_event_loop(Mock(name='loop')) def test_manager(self): self.assertTrue(Transport(None).manager) class test_Management(Case): def test_get_bindings(self): m = Management(Mock(name='transport')) with self.assertRaises(NotImplementedError): m.get_bindings() 
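# Illustrative sketch (not part of the original test suite): what
# test_Message above verifies, as plain usage.  ack_log_error() acks the
# message, and if acking raises one of the given exception types the error
# is logged as critical instead of propagating (reject_log_error() is the
# analogous helper for reject).  Assumes the in-memory transport; names
# like 'sketch' are hypothetical.
import logging

from kombu import Connection, Exchange, Queue, Producer

logger = logging.getLogger(__name__)

with Connection('memory://') as conn:
    channel = conn.channel()
    exchange = Exchange('sketch', type='direct')
    bound = Queue('sketch', exchange, routing_key='sketch')(channel)
    bound.declare()
    Producer(channel, exchange, routing_key='sketch').publish({'n': 1})
    message = bound.get(no_ack=False)
    message.ack_log_error(logger, (Exception, ))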
kombu-3.0.7/kombu/tests/transport/test_filesystem.py0000644000076500000000000001030012241157622023354 0ustar asksolwheel00000000000000from __future__ import absolute_import import sys import tempfile from kombu import Connection, Exchange, Queue, Consumer, Producer from kombu.tests.case import Case, SkipTest class test_FilesystemTransport(Case): def setUp(self): if sys.platform == 'win32': raise SkipTest('Needs win32con module') try: data_folder_in = tempfile.mkdtemp() data_folder_out = tempfile.mkdtemp() except Exception: raise SkipTest('filesystem transport: cannot create tempfiles') self.c = Connection(transport='filesystem', transport_options={ 'data_folder_in': data_folder_in, 'data_folder_out': data_folder_out, }) self.p = Connection(transport='filesystem', transport_options={ 'data_folder_in': data_folder_out, 'data_folder_out': data_folder_in, }) self.e = Exchange('test_transport_filesystem') self.q = Queue('test_transport_filesystem', exchange=self.e, routing_key='test_transport_filesystem') self.q2 = Queue('test_transport_filesystem2', exchange=self.e, routing_key='test_transport_filesystem2') def test_produce_consume_noack(self): producer = Producer(self.p.channel(), self.e) consumer = Consumer(self.c.channel(), self.q, no_ack=True) for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_filesystem') _received = [] def callback(message_data, message): _received.append(message) consumer.register_callback(callback) consumer.consume() while 1: if len(_received) == 10: break self.c.drain_events() self.assertEqual(len(_received), 10) def test_produce_consume(self): producer_channel = self.p.channel() consumer_channel = self.c.channel() producer = Producer(producer_channel, self.e) consumer1 = Consumer(consumer_channel, self.q) consumer2 = Consumer(consumer_channel, self.q2) self.q2(consumer_channel).declare() for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_filesystem') for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_filesystem2') _received1 = [] _received2 = [] def callback1(message_data, message): _received1.append(message) message.ack() def callback2(message_data, message): _received2.append(message) message.ack() consumer1.register_callback(callback1) consumer2.register_callback(callback2) consumer1.consume() consumer2.consume() while 1: if len(_received1) + len(_received2) == 20: break self.c.drain_events() self.assertEqual(len(_received1) + len(_received2), 20) # compression producer.publish({'compressed': True}, routing_key='test_transport_filesystem', compression='zlib') m = self.q(consumer_channel).get() self.assertDictEqual(m.payload, {'compressed': True}) # queue.delete for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_filesystem') self.assertTrue(self.q(consumer_channel).get()) self.q(consumer_channel).delete() self.q(consumer_channel).declare() self.assertIsNone(self.q(consumer_channel).get()) # queue.purge for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_filesystem2') self.assertTrue(self.q2(consumer_channel).get()) self.q2(consumer_channel).purge() self.assertIsNone(self.q2(consumer_channel).get()) kombu-3.0.7/kombu/tests/transport/test_librabbitmq.py0000644000076500000000000001142212237554371023475 0ustar asksolwheel00000000000000from __future__ import absolute_import try: import librabbitmq except ImportError: librabbitmq = None # noqa else: from kombu.transport import librabbitmq # noqa from kombu.tests.case import Case, Mock, 
SkipTest, patch class lrmqCase(Case): def setUp(self): if librabbitmq is None: raise SkipTest('librabbitmq is not installed') class test_Message(lrmqCase): def test_init(self): chan = Mock(name='channel') message = librabbitmq.Message( chan, {'prop': 42}, {'delivery_tag': 337}, 'body', ) self.assertEqual(message.body, 'body') self.assertEqual(message.delivery_tag, 337) self.assertEqual(message.properties['prop'], 42) class test_Channel(lrmqCase): def test_prepare_message(self): conn = Mock(name='connection') chan = librabbitmq.Channel(conn, 1) self.assertTrue(chan) body = 'the quick brown fox...' properties = {'name': 'Elaine M.'} body2, props2 = chan.prepare_message( body, properties=properties, priority=999, content_type='ctype', content_encoding='cenc', headers={'H': 2}, ) self.assertEqual(props2['name'], 'Elaine M.') self.assertEqual(props2['priority'], 999) self.assertEqual(props2['content_type'], 'ctype') self.assertEqual(props2['content_encoding'], 'cenc') self.assertEqual(props2['headers'], {'H': 2}) self.assertEqual(body2, body) body3, props3 = chan.prepare_message(body, priority=777) self.assertEqual(props3['priority'], 777) self.assertEqual(body3, body) class test_Transport(lrmqCase): def setUp(self): super(test_Transport, self).setUp() self.client = Mock(name='client') self.T = librabbitmq.Transport(self.client) def test_driver_version(self): self.assertTrue(self.T.driver_version()) def test_create_channel(self): conn = Mock(name='connection') chan = self.T.create_channel(conn) self.assertTrue(chan) conn.channel.assert_called_with() def test_drain_events(self): conn = Mock(name='connection') self.T.drain_events(conn, timeout=1.33) conn.drain_events.assert_called_with(timeout=1.33) def test_establish_connection_SSL_not_supported(self): self.client.ssl = True with self.assertRaises(NotImplementedError): self.T.establish_connection() def test_establish_connection(self): self.T.Connection = Mock(name='Connection') self.T.client.ssl = False self.T.client.port = None self.T.client.transport_options = {} conn = self.T.establish_connection() self.assertEqual( self.T.client.port, self.T.default_connection_params['port'], ) self.assertEqual(conn.client, self.T.client) self.assertEqual(self.T.client.drain_events, conn.drain_events) def test_collect__no_conn(self): self.T.client.drain_events = 1234 self.T._collect(None) self.assertIsNone(self.client.drain_events) self.assertIsNone(self.T.client) def test_collect__with_conn(self): self.T.client.drain_events = 1234 conn = Mock(name='connection') chans = conn.channels = {1: Mock(name='chan1'), 2: Mock(name='chan2')} conn.callbacks = {'foo': Mock(name='cb1'), 'bar': Mock(name='cb2')} for i, chan in enumerate(conn.channels.values()): chan.connection = i with patch('os.close') as close: self.T._collect(conn) close.assert_called_with(conn.fileno()) self.assertFalse(conn.channels) self.assertFalse(conn.callbacks) for chan in chans.values(): self.assertIsNone(chan.connection) self.assertIsNone(self.client.drain_events) self.assertIsNone(self.T.client) with patch('os.close') as close: self.T.client = self.client close.side_effect = OSError() self.T._collect(conn) close.assert_called_with(conn.fileno()) def test_register_with_event_loop(self): conn = Mock(name='conn') loop = Mock(name='loop') self.T.register_with_event_loop(conn, loop) loop.add_reader.assert_called_with( conn.fileno(), self.T.on_readable, conn, loop, ) def test_verify_connection(self): conn = Mock(name='connection') conn.connected = True 
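# Illustrative note (not part of the original test suite): as
# test_establish_connection_SSL_not_supported above shows, the librabbitmq
# transport raises NotImplementedError when SSL is requested; one hedged
# workaround is to fall back to the pure-Python py-amqp transport.  The
# broker URL and credentials below are placeholders.
from kombu import Connection

ssl_conn = Connection('amqp://guest:guest@localhost//',
                      ssl=True, transport='pyamqp')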
self.assertTrue(self.T.verify_connection(conn)) def test_close_connection(self): conn = Mock(name='connection') self.client.drain_events = 1234 self.T.close_connection(conn) self.assertIsNone(self.client.drain_events) conn.close.assert_called_with() kombu-3.0.7/kombu/tests/transport/test_memory.py0000644000076500000000000001146612237554371022525 0ustar asksolwheel00000000000000from __future__ import absolute_import import socket from kombu import Connection, Exchange, Queue, Consumer, Producer from kombu.tests.case import Case class test_MemoryTransport(Case): def setUp(self): self.c = Connection(transport='memory') self.e = Exchange('test_transport_memory') self.q = Queue('test_transport_memory', exchange=self.e, routing_key='test_transport_memory') self.q2 = Queue('test_transport_memory2', exchange=self.e, routing_key='test_transport_memory2') self.fanout = Exchange('test_transport_memory_fanout', type='fanout') self.q3 = Queue('test_transport_memory_fanout1', exchange=self.fanout) self.q4 = Queue('test_transport_memory_fanout2', exchange=self.fanout) def test_driver_version(self): self.assertTrue(self.c.transport.driver_version()) def test_produce_consume_noack(self): channel = self.c.channel() producer = Producer(channel, self.e) consumer = Consumer(channel, self.q, no_ack=True) for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_memory') _received = [] def callback(message_data, message): _received.append(message) consumer.register_callback(callback) consumer.consume() while 1: if len(_received) == 10: break self.c.drain_events() self.assertEqual(len(_received), 10) def test_produce_consume_fanout(self): producer = self.c.Producer() consumer = self.c.Consumer([self.q3, self.q4]) producer.publish( {'hello': 'world'}, declare=consumer.queues, exchange=self.fanout, ) self.assertEqual(self.q3(self.c).get().payload, {'hello': 'world'}) self.assertEqual(self.q4(self.c).get().payload, {'hello': 'world'}) self.assertIsNone(self.q3(self.c).get()) self.assertIsNone(self.q4(self.c).get()) def test_produce_consume(self): channel = self.c.channel() producer = Producer(channel, self.e) consumer1 = Consumer(channel, self.q) consumer2 = Consumer(channel, self.q2) self.q2(channel).declare() for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_memory') for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_memory2') _received1 = [] _received2 = [] def callback1(message_data, message): _received1.append(message) message.ack() def callback2(message_data, message): _received2.append(message) message.ack() consumer1.register_callback(callback1) consumer2.register_callback(callback2) consumer1.consume() consumer2.consume() while 1: if len(_received1) + len(_received2) == 20: break self.c.drain_events() self.assertEqual(len(_received1) + len(_received2), 20) # compression producer.publish({'compressed': True}, routing_key='test_transport_memory', compression='zlib') m = self.q(channel).get() self.assertDictEqual(m.payload, {'compressed': True}) # queue.delete for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_memory') self.assertTrue(self.q(channel).get()) self.q(channel).delete() self.q(channel).declare() self.assertIsNone(self.q(channel).get()) # queue.purge for i in range(10): producer.publish({'foo': i}, routing_key='test_transport_memory2') self.assertTrue(self.q2(channel).get()) self.q2(channel).purge() self.assertIsNone(self.q2(channel).get()) def test_drain_events(self): with 
self.assertRaises(socket.timeout): self.c.drain_events(timeout=0.1) c1 = self.c.channel() c2 = self.c.channel() with self.assertRaises(socket.timeout): self.c.drain_events(timeout=0.1) del(c1) # so pyflakes doesn't complain. del(c2) def test_drain_events_unregistered_queue(self): c1 = self.c.channel() class Cycle(object): def get(self, timeout=None): return ('foo', 'foo'), c1 self.c.transport.cycle = Cycle() with self.assertRaises(KeyError): self.c.drain_events() def test_queue_for(self): chan = self.c.channel() chan.queues.clear() x = chan._queue_for('foo') self.assertTrue(x) self.assertIs(chan._queue_for('foo'), x) kombu-3.0.7/kombu/tests/transport/test_mongodb.py0000644000076500000000000000634612237554371022633 0ustar asksolwheel00000000000000from __future__ import absolute_import from kombu import Connection from kombu.tests.case import Case, SkipTest, skip_if_not_module class MockConnection(dict): def __setattr__(self, key, value): self[key] = value class test_mongodb(Case): @skip_if_not_module('pymongo') def test_url_parser(self): from kombu.transport import mongodb from pymongo.errors import ConfigurationError raise SkipTest( 'Test is functional: it actually connects to mongod') class Transport(mongodb.Transport): Connection = MockConnection url = 'mongodb://' c = Connection(url, transport=Transport).connect() client = c.channels[0].client self.assertEquals(client.name, 'kombu_default') self.assertEquals(client.connection.host, '127.0.0.1') url = 'mongodb://localhost' c = Connection(url, transport=Transport).connect() client = c.channels[0].client self.assertEquals(client.name, 'kombu_default') url = 'mongodb://localhost/dbname' c = Connection(url, transport=Transport).connect() client = c.channels[0].client self.assertEquals(client.name, 'dbname') url = 'mongodb://localhost,localhost:29017/dbname' c = Connection(url, transport=Transport).connect() client = c.channels[0].client nodes = client.connection.nodes # If there's just 1 node it is because we're connecting to a single # server instead of a replica set / mongos.
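# Illustrative summary (not part of the original test suite): the URL forms
# the MongoDB transport accepts, as exercised by test_url_parser:
#   mongodb://                       -> database 'kombu_default' on 127.0.0.1
#   mongodb://localhost/dbname       -> database 'dbname'
#   mongodb://user:pass@host/dbname  -> authenticates against 'dbname'
#   mongodb://host1,host2:29017/db   -> multi-host (replica set) node list
from kombu import Connection

conn = Connection('mongodb://localhost/sketchdb')  # lazy: connects on first
# use; 'sketchdb' is a hypothetical database name.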
if len(nodes) == 2: self.assertTrue(('localhost', 29017) in nodes) self.assertEquals(client.name, 'dbname') # Passing options breaks kombu's _init_params method # url = 'mongodb://localhost,localhost2:29017/dbname?safe=true' # c = Connection(url, transport=Transport).connect() # client = c.channels[0].client url = 'mongodb://localhost:27017,localhost2:29017/dbname' c = Connection(url, transport=Transport).connect() client = c.channels[0].client # Log in to the admin db since there's no db specified url = "mongodb://adminusername:adminpassword@localhost" c = Connection(url, transport=Transport).connect() client = c.channels[0].client self.assertEquals(client.name, "kombu_default") # Let's make sure that using the admin db doesn't break anything # when no user is specified url = "mongodb://localhost" c = Connection(url, transport=Transport).connect() client = c.channels[0].client # Assuming there's user 'username' with password 'password' # configured in mongodb url = "mongodb://username:password@localhost/dbname" c = Connection(url, transport=Transport).connect() client = c.channels[0].client # Assuming there's no user 'nousername' with password 'nopassword' # configured in mongodb url = "mongodb://nousername:nopassword@localhost/dbname" c = Connection(url, transport=Transport).connect() # Needed, otherwise the error would be raised before # the assertRaises is called def get_client(): c.channels[0].client self.assertRaises(ConfigurationError, get_client) kombu-3.0.7/kombu/tests/transport/test_pyamqp.py0000644000076500000000000001247312237554371022513 0ustar asksolwheel00000000000000from __future__ import absolute_import import sys from itertools import count try: import amqp # noqa except ImportError: pyamqp = None # noqa else: from kombu.transport import pyamqp from kombu import Connection from kombu.five import nextfun from kombu.tests.case import Case, Mock, SkipTest, mask_modules, patch class MockConnection(dict): def __setattr__(self, key, value): self[key] = value class test_Channel(Case): def setUp(self): if pyamqp is None: raise SkipTest('py-amqp not installed') class Channel(pyamqp.Channel): wait_returns = [] def _x_open(self, *args, **kwargs): pass def wait(self, *args, **kwargs): return self.wait_returns def _send_method(self, *args, **kwargs): pass self.conn = Mock() self.conn._get_free_channel_id.side_effect = nextfun(count(0)) self.conn.channels = {} self.channel = Channel(self.conn, 0) def test_init(self): self.assertFalse(self.channel.no_ack_consumers) def test_prepare_message(self): self.assertTrue(self.channel.prepare_message( 'foobar', 10, 'application/data', 'utf-8', properties={}, )) def test_message_to_python(self): message = Mock() message.headers = {} message.properties = {} self.assertTrue(self.channel.message_to_python(message)) def test_close_resolves_connection_cycle(self): self.assertIsNotNone(self.channel.connection) self.channel.close() self.assertIsNone(self.channel.connection) def test_basic_consume_registers_ack_status(self): self.channel.wait_returns = 'my-consumer-tag' self.channel.basic_consume('foo', no_ack=True) self.assertIn('my-consumer-tag', self.channel.no_ack_consumers) self.channel.wait_returns = 'other-consumer-tag' self.channel.basic_consume('bar', no_ack=False) self.assertNotIn('other-consumer-tag', self.channel.no_ack_consumers) class test_Transport(Case): def setUp(self): if pyamqp is None: raise SkipTest('py-amqp not installed')
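# Illustrative sketch (not part of the original test suite):
# test_default_port/test_custom_port further down reduce to this behaviour:
# when the broker URL names no port, the transport's default_port (5672 for
# AMQP) is applied at connect time, and an explicit port always wins.  The
# URL and credentials below are placeholders.
from kombu import Connection

conn = Connection('pyamqp://guest:guest@localhost:1337//')  # explicit port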
self.connection = Connection('pyamqp://') self.transport = self.connection.transport def test_create_channel(self): connection = Mock() self.transport.create_channel(connection) connection.channel.assert_called_with() def test_driver_version(self): self.assertTrue(self.transport.driver_version()) def test_drain_events(self): connection = Mock() self.transport.drain_events(connection, timeout=10.0) connection.drain_events.assert_called_with(timeout=10.0) def test_dnspython_localhost_resolve_bug(self): class Conn(object): def __init__(self, **kwargs): vars(self).update(kwargs) self.transport.Connection = Conn self.transport.client.hostname = 'localhost' conn1 = self.transport.establish_connection() self.assertEqual(conn1.host, '127.0.0.1:5672') self.transport.client.hostname = 'example.com' conn2 = self.transport.establish_connection() self.assertEqual(conn2.host, 'example.com:5672') def test_close_connection(self): connection = Mock() connection.client = Mock() self.transport.close_connection(connection) self.assertIsNone(connection.client) connection.close.assert_called_with() @mask_modules('ssl') def test_import_no_ssl(self): pm = sys.modules.pop('amqp.connection') try: from amqp.connection import SSLError self.assertEqual(SSLError.__module__, 'amqp.connection') finally: if pm is not None: sys.modules['amqp.connection'] = pm class test_pyamqp(Case): def setUp(self): if pyamqp is None: raise SkipTest('py-amqp not installed') def test_default_port(self): class Transport(pyamqp.Transport): Connection = MockConnection c = Connection(port=None, transport=Transport).connect() self.assertEqual(c['host'], '127.0.0.1:%s' % (Transport.default_port, )) def test_custom_port(self): class Transport(pyamqp.Transport): Connection = MockConnection c = Connection(port=1337, transport=Transport).connect() self.assertEqual(c['host'], '127.0.0.1:1337') def test_register_with_event_loop(self): t = pyamqp.Transport(Mock()) conn = Mock(name='conn') loop = Mock(name='loop') t.register_with_event_loop(conn, loop) loop.add_reader.assert_called_with( conn.sock, t.on_readable, conn, loop, ) def test_heartbeat_check(self): t = pyamqp.Transport(Mock()) conn = Mock() t.heartbeat_check(conn, rate=4.331) conn.heartbeat_tick.assert_called_with(rate=4.331) def test_get_manager(self): with patch('kombu.transport.pyamqp.get_manager') as get_manager: t = pyamqp.Transport(Mock()) t.get_manager(1, kw=2) get_manager.assert_called_with(t.client, 1, kw=2) kombu-3.0.7/kombu/tests/transport/test_redis.py0000644000076500000000000011262212243671543022315 0ustar asksolwheel00000000000000from __future__ import absolute_import import socket import types from anyjson import dumps from collections import defaultdict from itertools import count from kombu import Connection, Exchange, Queue, Consumer, Producer from kombu.exceptions import InconsistencyError, VersionMismatch from kombu.five import Empty, Queue as _Queue from kombu.transport import virtual from kombu.utils import eventio # patch poll from kombu.tests.case import ( Case, Mock, call, module_exists, skip_if_not_module, patch, ) class _poll(eventio._select): def register(self, fd, flags): if flags & eventio.READ: self._rfd.add(fd) def poll(self, timeout): events = [] for fd in self._rfd: if fd.data: events.append((fd.fileno(), eventio.READ)) return events eventio.poll = _poll from kombu.transport import redis # must import after poller patch class ResponseError(Exception): pass class Client(object): queues = {} sets = defaultdict(set) hashes = defaultdict(dict) shard_hint = None 
def __init__(self, db=None, port=None, connection_pool=None, **kwargs): self._called = [] self._connection = None self.bgsave_raises_ResponseError = False self.connection = self._sconnection(self) def bgsave(self): self._called.append('BGSAVE') if self.bgsave_raises_ResponseError: raise ResponseError() def delete(self, key): self.queues.pop(key, None) def exists(self, key): return key in self.queues or key in self.sets def hset(self, key, k, v): self.hashes[key][k] = v def hget(self, key, k): return self.hashes[key].get(k) def hdel(self, key, k): self.hashes[key].pop(k, None) def sadd(self, key, member, *args): self.sets[key].add(member) zadd = sadd def smembers(self, key): return self.sets.get(key, set()) def srem(self, key, *args): self.sets.pop(key, None) zrem = srem def llen(self, key): try: return self.queues[key].qsize() except KeyError: return 0 def lpush(self, key, value): self.queues[key].put_nowait(value) def parse_response(self, connection, type, **options): cmd, queues = self.connection._sock.data.pop() assert cmd == type self.connection._sock.data = [] if type == 'BRPOP': item = self.brpop(queues, 0.001) if item: return item raise Empty() def brpop(self, keys, timeout=None): key = keys[0] try: item = self.queues[key].get(timeout=timeout) except Empty: pass else: return key, item def rpop(self, key): try: return self.queues[key].get_nowait() except KeyError: pass def __contains__(self, k): return k in self._called def pipeline(self): return Pipeline(self) def encode(self, value): return str(value) def _new_queue(self, key): self.queues[key] = _Queue() class _sconnection(object): disconnected = False class _socket(object): blocking = True filenos = count(30) def __init__(self, *args): self._fileno = next(self.filenos) self.data = [] def fileno(self): return self._fileno def setblocking(self, blocking): self.blocking = blocking def __init__(self, client): self.client = client self._sock = self._socket() def disconnect(self): self.disconnected = True def send_command(self, cmd, *args): self._sock.data.append((cmd, args)) def info(self): return {'foo': 1} def pubsub(self, *args, **kwargs): connection = self.connection class ConnectionPool(object): def get_connection(self, *args, **kwargs): return connection self.connection_pool = ConnectionPool() return self class Pipeline(object): def __init__(self, client): self.client = client self.stack = [] def __getattr__(self, key): if key not in self.__dict__: def _add(*args, **kwargs): self.stack.append((getattr(self.client, key), args, kwargs)) return self return _add return self.__dict__[key] def execute(self): stack = list(self.stack) self.stack[:] = [] return [fun(*args, **kwargs) for fun, args, kwargs in stack] class Channel(redis.Channel): def _get_client(self): return Client def _get_pool(self): return Mock() def _get_response_error(self): return ResponseError def _new_queue(self, queue, **kwargs): self.client._new_queue(queue) def pipeline(self): return Pipeline(Client()) class Transport(redis.Transport): Channel = Channel def _get_errors(self): return ((KeyError, ), (IndexError, )) class test_Channel(Case): def setUp(self): self.connection = Connection(transport=Transport) self.channel = self.connection.channel() def test_disable_ack_emulation(self): conn = Connection(transport=Transport, transport_options={ 'ack_emulation': False, }) chan = conn.channel() self.assertFalse(chan.ack_emulation) self.assertEqual(chan.QoS, virtual.QoS) def test_redis_info_raises(self): pool = Mock(name='pool') pool_at_init = [pool] client = 
Mock(name='client') class XChannel(Channel): def __init__(self, *args, **kwargs): self._pool = pool_at_init[0] super(XChannel, self).__init__(*args, **kwargs) def _get_client(self): return lambda *_, **__: client class XTransport(Transport): Channel = XChannel conn = Connection(transport=XTransport) client.info.side_effect = RuntimeError() with self.assertRaises(RuntimeError): conn.channel() pool.disconnect.assert_called_with() pool.disconnect.reset_mock() pool_at_init = [None] with self.assertRaises(RuntimeError): conn.channel() self.assertFalse(pool.disconnect.called) def test_after_fork(self): self.channel._pool = None self.channel._after_fork() self.channel._pool = Mock(name='pool') self.channel._after_fork() self.channel._pool.disconnect.assert_called_with() def test_next_delivery_tag(self): self.assertNotEqual( self.channel._next_delivery_tag(), self.channel._next_delivery_tag(), ) def test_do_restore_message(self): client = Mock(name='client') pl1 = {'body': 'BODY'} spl1 = dumps(pl1) lookup = self.channel._lookup = Mock(name='_lookup') lookup.return_value = ['george', 'elaine'] self.channel._do_restore_message( pl1, 'ex', 'rkey', client, ) client.rpush.assert_has_calls([ call('george', spl1), call('elaine', spl1), ]) pl2 = {'body': 'BODY2', 'headers': {'x-funny': 1}} headers_after = dict(pl2['headers'], redelivered=True) spl2 = dumps(dict(pl2, headers=headers_after)) self.channel._do_restore_message( pl2, 'ex', 'rkey', client, ) client.rpush.assert_has_calls([ call('george', spl2), call('elaine', spl2), ]) client.rpush.side_effect = KeyError() with patch('kombu.transport.redis.crit') as crit: self.channel._do_restore_message( pl2, 'ex', 'rkey', client, ) self.assertTrue(crit.called) def test_restore(self): message = Mock(name='message') with patch('kombu.transport.redis.loads') as loads: loads.return_value = 'M', 'EX', 'RK' client = self.channel.client = Mock(name='client') restore = self.channel._do_restore_message = Mock( name='_do_restore_message', ) pipe = Mock(name='pipe') client.pipeline.return_value = pipe pipe_hget = Mock(name='pipe.hget') pipe.hget.return_value = pipe_hget pipe_hget_hdel = Mock(name='pipe.hget.hdel') pipe_hget.hdel.return_value = pipe_hget_hdel result = Mock(name='result') pipe_hget_hdel.execute.return_value = None, None self.channel._restore(message) client.pipeline.assert_called_with() unacked_key = self.channel.unacked_key self.assertFalse(loads.called) tag = message.delivery_tag pipe.hget.assert_called_with(unacked_key, tag) pipe_hget.hdel.assert_called_with(unacked_key, tag) pipe_hget_hdel.execute.assert_called_with() pipe_hget_hdel.execute.return_value = result, None self.channel._restore(message) loads.assert_called_with(result) restore.assert_called_with('M', 'EX', 'RK', client, False) def test_qos_restore_visible(self): client = self.channel.client = Mock(name='client') client.zrevrangebyscore.return_value = [ (1, 10), (2, 20), (3, 30), ] qos = redis.QoS(self.channel) restore = qos.restore_by_tag = Mock(name='restore_by_tag') qos._vrestore_count = 1 qos.restore_visible() self.assertFalse(client.zrevrangebyscore.called) self.assertEqual(qos._vrestore_count, 2) qos._vrestore_count = 0 qos.restore_visible() restore.assert_has_calls([ call(1, client), call(2, client), call(3, client), ]) self.assertEqual(qos._vrestore_count, 1) qos._vrestore_count = 0 restore.reset_mock() client.zrevrangebyscore.return_value = [] qos.restore_visible() self.assertFalse(restore.called) self.assertEqual(qos._vrestore_count, 1) qos._vrestore_count = 0 
client.setnx.side_effect = redis.MutexHeld() qos.restore_visible() def test_basic_consume_when_fanout_queue(self): self.channel.exchange_declare(exchange='txconfan', type='fanout') self.channel.queue_declare(queue='txconfanq') self.channel.queue_bind(queue='txconfanq', exchange='txconfan') self.assertIn('txconfanq', self.channel._fanout_queues) self.channel.basic_consume('txconfanq', False, None, 1) self.assertIn('txconfanq', self.channel.active_fanout_queues) self.assertEqual(self.channel._fanout_to_queue.get('txconfan'), 'txconfanq') def test_basic_cancel_unknown_delivery_tag(self): self.assertIsNone(self.channel.basic_cancel('txaseqwewq')) def test_subscribe_no_queues(self): self.channel.subclient = Mock() self.channel.active_fanout_queues.clear() self.channel._subscribe() self.assertFalse(self.channel.subclient.subscribe.called) def test_subscribe(self): self.channel.subclient = Mock() self.channel.active_fanout_queues.add('a') self.channel.active_fanout_queues.add('b') self.channel._fanout_queues.update(a='a', b='b') self.channel._subscribe() self.assertTrue(self.channel.subclient.subscribe.called) s_args, _ = self.channel.subclient.subscribe.call_args self.assertItemsEqual(s_args[0], ['a', 'b']) self.channel.subclient.connection._sock = None self.channel._subscribe() self.channel.subclient.connection.connect.assert_called_with() def test_handle_unsubscribe_message(self): s = self.channel.subclient s.subscribed = True self.channel._handle_message(s, ['unsubscribe', 'a', 0]) self.assertFalse(s.subscribed) def test_handle_pmessage_message(self): self.assertDictEqual( self.channel._handle_message( self.channel.subclient, ['pmessage', 'pattern', 'channel', 'data'], ), { 'type': 'pmessage', 'pattern': 'pattern', 'channel': 'channel', 'data': 'data', }, ) def test_handle_message(self): self.assertDictEqual( self.channel._handle_message( self.channel.subclient, ['type', 'channel', 'data'], ), { 'type': 'type', 'pattern': None, 'channel': 'channel', 'data': 'data', }, ) def test_brpop_start_but_no_queues(self): self.assertIsNone(self.channel._brpop_start()) def test_receive(self): s = self.channel.subclient = Mock() self.channel._fanout_to_queue['a'] = 'b' s.parse_response.return_value = ['message', 'a', dumps({'hello': 'world'})] payload, queue = self.channel._receive() self.assertDictEqual(payload, {'hello': 'world'}) self.assertEqual(queue, 'b') def test_receive_raises(self): self.channel._in_listen = True s = self.channel.subclient = Mock() s.parse_response.side_effect = KeyError('foo') with self.assertRaises(redis.Empty): self.channel._receive() self.assertFalse(self.channel._in_listen) def test_receive_empty(self): s = self.channel.subclient = Mock() s.parse_response.return_value = None with self.assertRaises(redis.Empty): self.channel._receive() def test_receive_different_message_Type(self): s = self.channel.subclient = Mock() s.parse_response.return_value = ['pmessage', '/foo/', 0, 'data'] with self.assertRaises(redis.Empty): self.channel._receive() def test_brpop_read_raises(self): c = self.channel.client = Mock() c.parse_response.side_effect = KeyError('foo') with self.assertRaises(redis.Empty): self.channel._brpop_read() c.connection.disconnect.assert_called_with() def test_brpop_read_gives_None(self): c = self.channel.client = Mock() c.parse_response.return_value = None with self.assertRaises(redis.Empty): self.channel._brpop_read() def test_poll_error(self): c = self.channel.client = Mock() c.parse_response = Mock() self.channel._poll_error('BRPOP') 
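# Illustrative sketch (not part of the original test suite):
# test_put_priority below documents how the redis transport emulates
# message priority: each logical queue is backed by a small fixed set of
# redis lists, and the published priority selects one of them (the test
# shows 313 collapsing to level 9 and a missing priority to level 0).
# Publishing is plain kombu usage; this sketch assumes a redis server on
# localhost and a hypothetical queue name 'sketch'.
from kombu import Connection

with Connection('redis://localhost:6379/0') as conn:
    producer = conn.Producer()
    producer.publish({'n': 1}, routing_key='sketch', priority=3)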
c.parse_response.assert_called_with('BRPOP') c.parse_response.side_effect = KeyError('foo') self.assertIsNone(self.channel._poll_error('BRPOP')) def test_put_fanout(self): self.channel._in_poll = False c = self.channel.client = Mock() body = {'hello': 'world'} self.channel._put_fanout('exchange', body) c.publish.assert_called_with('exchange', dumps(body)) def test_put_priority(self): client = self.channel.client = Mock(name='client') msg1 = {'properties': {'delivery_info': {'priority': 3}}} self.channel._put('george', msg1) client.lpush.assert_called_with( self.channel._q_for_pri('george', 3), dumps(msg1), ) msg2 = {'properties': {'delivery_info': {'priority': 313}}} self.channel._put('george', msg2) client.lpush.assert_called_with( self.channel._q_for_pri('george', 9), dumps(msg2), ) msg3 = {'properties': {'delivery_info': {}}} self.channel._put('george', msg3) client.lpush.assert_called_with( self.channel._q_for_pri('george', 0), dumps(msg3), ) def test_delete(self): x = self.channel self.channel._in_poll = False delete = x.client.delete = Mock() srem = x.client.srem = Mock() x._delete('queue', 'exchange', 'routing_key', None) delete.assert_has_call('queue') srem.assert_has_call(x.keyprefix_queue % ('exchange', ), x.sep.join(['routing_key', '', 'queue'])) def test_has_queue(self): self.channel._in_poll = False exists = self.channel.client.exists = Mock() exists.return_value = True self.assertTrue(self.channel._has_queue('foo')) exists.assert_has_call('foo') exists.return_value = False self.assertFalse(self.channel._has_queue('foo')) def test_close_when_closed(self): self.channel.closed = True self.channel.close() def test_close_deletes_autodelete_fanout_queues(self): self.channel._fanout_queues = ['foo', 'bar'] self.channel.auto_delete_queues = ['foo'] self.channel.queue_delete = Mock(name='queue_delete') self.channel.close() self.channel.queue_delete.assert_has_calls([call('foo')]) def test_close_client_close_raises(self): c = self.channel.client = Mock() c.connection.disconnect.side_effect = self.channel.ResponseError() self.channel.close() c.connection.disconnect.assert_called_with() def test_invalid_database_raises_ValueError(self): with self.assertRaises(ValueError): self.channel.connection.client.virtual_host = 'dwqeq' self.channel._connparams() def test_connparams_allows_slash_in_db(self): self.channel.connection.client.virtual_host = '/123' self.assertEqual(self.channel._connparams()['db'], 123) def test_connparams_db_can_be_int(self): self.channel.connection.client.virtual_host = 124 self.assertEqual(self.channel._connparams()['db'], 124) def test_new_queue_with_auto_delete(self): redis.Channel._new_queue(self.channel, 'george', auto_delete=False) self.assertNotIn('george', self.channel.auto_delete_queues) redis.Channel._new_queue(self.channel, 'elaine', auto_delete=True) self.assertIn('elaine', self.channel.auto_delete_queues) def test_connparams_regular_hostname(self): self.channel.connection.client.hostname = 'george.vandelay.com' self.assertEqual( self.channel._connparams()['host'], 'george.vandelay.com', ) def test_rotate_cycle_ValueError(self): cycle = self.channel._queue_cycle = ['kramer', 'jerry'] self.channel._rotate_cycle('kramer') self.assertEqual(cycle, ['jerry', 'kramer']) self.channel._rotate_cycle('elaine') @skip_if_not_module('redis') def test_get_client(self): import redis as R KombuRedis = redis.Channel._get_client(self.channel) self.assertTrue(KombuRedis) Rv = getattr(R, 'VERSION', None) try: R.VERSION = (2, 4, 0) with self.assertRaises(VersionMismatch): 
redis.Channel._get_client(self.channel) finally: if Rv is not None: R.VERSION = Rv @skip_if_not_module('redis') def test_get_response_error(self): from redis.exceptions import ResponseError self.assertIs(redis.Channel._get_response_error(self.channel), ResponseError) def test_avail_client_when_not_in_poll(self): self.channel._in_poll = False c = self.channel.client = Mock() with self.channel.conn_or_acquire() as client: self.assertIs(client, c) def test_avail_client_when_in_poll(self): self.channel._in_poll = True self.channel._pool = Mock() cc = self.channel._create_client = Mock() client = cc.return_value = Mock() with self.channel.conn_or_acquire(): pass self.channel.pool.release.assert_called_with(client.connection) cc.assert_called_with() def test_register_with_event_loop(self): transport = self.connection.transport transport.cycle = Mock(name='cycle') transport.cycle.fds = {12: 'LISTEN', 13: 'BRPOP'} conn = Mock(name='conn') loop = Mock(name='loop') redis.Transport.register_with_event_loop(transport, conn, loop) transport.cycle.on_poll_init.assert_called_with(loop.poller) loop.call_repeatedly.assert_called_with( 10, transport.cycle.maybe_restore_messages, ) self.assertTrue(loop.on_tick.add.called) on_poll_start = loop.on_tick.add.call_args[0][0] on_poll_start() transport.cycle.on_poll_start.assert_called_with() loop.add_reader.assert_has_calls([ call(12, transport.on_readable, 12), call(13, transport.on_readable, 13), ]) def test_transport_on_readable(self): transport = self.connection.transport cycle = transport.cycle = Mock(name='cyle') cycle.on_readable.return_value = None redis.Transport.on_readable(transport, 13) cycle.on_readable.assert_called_with(13) cycle.on_readable.reset_mock() queue = Mock(name='queue') ret = (Mock(name='message'), queue) cycle.on_readable.return_value = ret with self.assertRaises(KeyError): redis.Transport.on_readable(transport, 14) cb = transport._callbacks[queue] = Mock(name='callback') redis.Transport.on_readable(transport, 14) cb.assert_called_with(ret[0]) @skip_if_not_module('redis') def test_transport_get_errors(self): self.assertTrue(redis.Transport._get_errors(self.connection.transport)) @skip_if_not_module('redis') def test_transport_driver_version(self): self.assertTrue( redis.Transport.driver_version(self.connection.transport), ) @skip_if_not_module('redis') def test_transport_get_errors_when_InvalidData_used(self): from redis import exceptions class ID(Exception): pass DataError = getattr(exceptions, 'DataError', None) InvalidData = getattr(exceptions, 'InvalidData', None) exceptions.InvalidData = ID exceptions.DataError = None try: errors = redis.Transport._get_errors(self.connection.transport) self.assertTrue(errors) self.assertIn(ID, errors[1]) finally: if DataError is not None: exceptions.DataError = DataError if InvalidData is not None: exceptions.InvalidData = InvalidData def test_empty_queues_key(self): channel = self.channel channel._in_poll = False key = channel.keyprefix_queue % 'celery' # Everything is fine, there is a list of queues. channel.client.sadd(key, 'celery\x06\x16\x06\x16celery') self.assertListEqual(channel.get_table('celery'), [('celery', '', 'celery')]) # ... then for some reason, the _kombu.binding.celery key gets lost channel.client.srem(key) # which raises a channel error so that the consumer/publisher # can recover by redeclaring the required entities. 
with self.assertRaises(InconsistencyError): self.channel.get_table('celery') @skip_if_not_module('redis') def test_socket_connection(self): with patch('kombu.transport.redis.Channel._create_client'): with Connection('redis+socket:///tmp/redis.sock') as conn: connparams = conn.default_channel._connparams() self.assertEqual(connparams['connection_class'], redis.redis.UnixDomainSocketConnection) self.assertEqual(connparams['path'], '/tmp/redis.sock') class test_Redis(Case): def setUp(self): self.connection = Connection(transport=Transport) self.exchange = Exchange('test_Redis', type='direct') self.queue = Queue('test_Redis', self.exchange, 'test_Redis') def tearDown(self): self.connection.close() def test_publish__get(self): channel = self.connection.channel() producer = Producer(channel, self.exchange, routing_key='test_Redis') self.queue(channel).declare() producer.publish({'hello': 'world'}) self.assertDictEqual(self.queue(channel).get().payload, {'hello': 'world'}) self.assertIsNone(self.queue(channel).get()) self.assertIsNone(self.queue(channel).get()) self.assertIsNone(self.queue(channel).get()) def test_publish__consume(self): connection = Connection(transport=Transport) channel = connection.channel() producer = Producer(channel, self.exchange, routing_key='test_Redis') consumer = Consumer(channel, queues=[self.queue]) producer.publish({'hello2': 'world2'}) _received = [] def callback(message_data, message): _received.append(message_data) message.ack() consumer.register_callback(callback) consumer.consume() self.assertIn(channel, channel.connection.cycle._channels) try: connection.drain_events(timeout=1) self.assertTrue(_received) with self.assertRaises(socket.timeout): connection.drain_events(timeout=0.01) finally: channel.close() def test_purge(self): channel = self.connection.channel() producer = Producer(channel, self.exchange, routing_key='test_Redis') self.queue(channel).declare() for i in range(10): producer.publish({'hello': 'world-%s' % (i, )}) self.assertEqual(channel._size('test_Redis'), 10) self.assertEqual(self.queue(channel).purge(), 10) channel.close() def test_db_values(self): Connection(virtual_host=1, transport=Transport).channel() Connection(virtual_host='1', transport=Transport).channel() Connection(virtual_host='/1', transport=Transport).channel() with self.assertRaises(Exception): Connection('redis:///foo').channel() def test_db_port(self): c1 = Connection(port=None, transport=Transport).channel() c1.close() c2 = Connection(port=9999, transport=Transport).channel() c2.close() def test_close_poller_not_active(self): c = Connection(transport=Transport).channel() cycle = c.connection.cycle c.client.connection c.close() self.assertNotIn(c, cycle._channels) def test_close_ResponseError(self): c = Connection(transport=Transport).channel() c.client.bgsave_raises_ResponseError = True c.close() def test_close_disconnects(self): c = Connection(transport=Transport).channel() conn1 = c.client.connection conn2 = c.subclient.connection c.close() self.assertTrue(conn1.disconnected) self.assertTrue(conn2.disconnected) def test_get__Empty(self): channel = self.connection.channel() with self.assertRaises(Empty): channel._get('does-not-exist') channel.close() def test_get_client(self): myredis, exceptions = _redis_modules() @module_exists(myredis, exceptions) def _do_test(): conn = Connection(transport=Transport) chan = conn.channel() self.assertTrue(chan.Client) self.assertTrue(chan.ResponseError) self.assertTrue(conn.transport.connection_errors) 
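# Illustrative sketch (not part of the original test suite):
# test_socket_connection above maps 'redis+socket://' URLs onto redis-py's
# UnixDomainSocketConnection; the path after the scheme is the socket file.
# The socket path below is a placeholder.
from kombu import Connection

conn = Connection('redis+socket:///tmp/redis.sock')  # lazy: connects on use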
self.assertTrue(conn.transport.channel_errors) _do_test() def _redis_modules(): class ConnectionError(Exception): pass class AuthenticationError(Exception): pass class InvalidData(Exception): pass class InvalidResponse(Exception): pass class ResponseError(Exception): pass exceptions = types.ModuleType('redis.exceptions') exceptions.ConnectionError = ConnectionError exceptions.AuthenticationError = AuthenticationError exceptions.InvalidData = InvalidData exceptions.InvalidResponse = InvalidResponse exceptions.ResponseError = ResponseError class Redis(object): pass myredis = types.ModuleType('redis') myredis.exceptions = exceptions myredis.Redis = Redis return myredis, exceptions class test_MultiChannelPoller(Case): def setUp(self): self.Poller = redis.MultiChannelPoller def test_on_poll_start(self): p = self.Poller() p._channels = [] p.on_poll_start() p._register_BRPOP = Mock(name='_register_BRPOP') p._register_LISTEN = Mock(name='_register_LISTEN') chan1 = Mock(name='chan1') p._channels = [chan1] chan1.active_queues = [] chan1.active_fanout_queues = [] p.on_poll_start() chan1.active_queues = ['q1'] chan1.active_fanout_queues = ['q2'] chan1.qos.can_consume.return_value = False p.on_poll_start() p._register_LISTEN.assert_called_with(chan1) self.assertFalse(p._register_BRPOP.called) chan1.qos.can_consume.return_value = True p._register_LISTEN.reset_mock() p.on_poll_start() p._register_BRPOP.assert_called_with(chan1) p._register_LISTEN.assert_called_with(chan1) def test_on_poll_init(self): p = self.Poller() chan1 = Mock(name='chan1') p._channels = [] poller = Mock(name='poller') p.on_poll_init(poller) self.assertIs(p.poller, poller) p._channels = [chan1] p.on_poll_init(poller) chan1.qos.restore_visible.assert_called_with( num=chan1.unacked_restore_limit, ) def test_handle_event(self): p = self.Poller() chan = Mock(name='chan') p._fd_to_chan[13] = chan, 'BRPOP' chan.handlers = {'BRPOP': Mock(name='BRPOP')} chan.qos.can_consume.return_value = False p.handle_event(13, redis.READ) self.assertFalse(chan.handlers['BRPOP'].called) chan.qos.can_consume.return_value = True p.handle_event(13, redis.READ) chan.handlers['BRPOP'].assert_called_with() p.handle_event(13, redis.ERR) chan._poll_error.assert_called_with('BRPOP') p.handle_event(13, ~(redis.READ | redis.ERR)) def test_fds(self): p = self.Poller() p._fd_to_chan = {1: 2} self.assertDictEqual(p.fds, p._fd_to_chan) def test_close_unregisters_fds(self): p = self.Poller() poller = p.poller = Mock() p._chan_to_sock.update({1: 1, 2: 2, 3: 3}) p.close() self.assertEqual(poller.unregister.call_count, 3) u_args = poller.unregister.call_args_list self.assertItemsEqual(u_args, [((1, ), {}), ((2, ), {}), ((3, ), {})]) def test_close_when_unregister_raises_KeyError(self): p = self.Poller() p.poller = Mock() p._chan_to_sock.update({1: 1}) p.poller.unregister.side_effect = KeyError(1) p.close() def test_close_resets_state(self): p = self.Poller() p.poller = Mock() p._channels = Mock() p._fd_to_chan = Mock() p._chan_to_sock = Mock() p._chan_to_sock.itervalues.return_value = [] p._chan_to_sock.values.return_value = [] # py3k p.close() p._channels.clear.assert_called_with() p._fd_to_chan.clear.assert_called_with() p._chan_to_sock.clear.assert_called_with() self.assertIsNone(p.poller) def test_register_when_registered_reregisters(self): p = self.Poller() p.poller = Mock() channel, client, type = Mock(), Mock(), Mock() sock = client.connection._sock = Mock() sock.fileno.return_value = 10 p._chan_to_sock = {(channel, client, type): 6} p._register(channel, client, type) 
p.poller.unregister.assert_called_with(6) self.assertTupleEqual(p._fd_to_chan[10], (channel, type)) self.assertEqual(p._chan_to_sock[(channel, client, type)], sock) p.poller.register.assert_called_with(sock, p.eventflags) # when client not connected yet client.connection._sock = None def after_connected(): client.connection._sock = Mock() client.connection.connect.side_effect = after_connected p._register(channel, client, type) client.connection.connect.assert_called_with() def test_register_BRPOP(self): p = self.Poller() channel = Mock() channel.client.connection._sock = None p._register = Mock() channel._in_poll = False p._register_BRPOP(channel) self.assertEqual(channel._brpop_start.call_count, 1) self.assertEqual(p._register.call_count, 1) channel.client.connection._sock = Mock() p._chan_to_sock[(channel, channel.client, 'BRPOP')] = True channel._in_poll = True p._register_BRPOP(channel) self.assertEqual(channel._brpop_start.call_count, 1) self.assertEqual(p._register.call_count, 1) def test_register_LISTEN(self): p = self.Poller() channel = Mock() channel.subclient.connection._sock = None channel._in_listen = False p._register = Mock() p._register_LISTEN(channel) p._register.assert_called_with(channel, channel.subclient, 'LISTEN') self.assertEqual(p._register.call_count, 1) self.assertEqual(channel._subscribe.call_count, 1) channel._in_listen = True channel.subclient.connection._sock = Mock() p._register_LISTEN(channel) self.assertEqual(p._register.call_count, 1) self.assertEqual(channel._subscribe.call_count, 1) def create_get(self, events=None, queues=None, fanouts=None): _pr = [] if events is None else events _aq = [] if queues is None else queues _af = [] if fanouts is None else fanouts p = self.Poller() p.poller = Mock() p.poller.poll.return_value = _pr p._register_BRPOP = Mock() p._register_LISTEN = Mock() channel = Mock() p._channels = [channel] channel.active_queues = _aq channel.active_fanout_queues = _af return p, channel def test_get_no_actions(self): p, channel = self.create_get() with self.assertRaises(redis.Empty): p.get() def test_qos_reject(self): p, channel = self.create_get() qos = redis.QoS(channel) qos.ack = Mock(name='Qos.ack') qos.reject(1234) qos.ack.assert_called_with(1234) def test_get_brpop_qos_allow(self): p, channel = self.create_get(queues=['a_queue']) channel.qos.can_consume.return_value = True with self.assertRaises(redis.Empty): p.get() p._register_BRPOP.assert_called_with(channel) def test_get_brpop_qos_disallow(self): p, channel = self.create_get(queues=['a_queue']) channel.qos.can_consume.return_value = False with self.assertRaises(redis.Empty): p.get() self.assertFalse(p._register_BRPOP.called) def test_get_listen(self): p, channel = self.create_get(fanouts=['f_queue']) with self.assertRaises(redis.Empty): p.get() p._register_LISTEN.assert_called_with(channel) def test_get_receives_ERR(self): p, channel = self.create_get(events=[(1, eventio.ERR)]) p._fd_to_chan[1] = (channel, 'BRPOP') with self.assertRaises(redis.Empty): p.get() channel._poll_error.assert_called_with('BRPOP') def test_get_receives_multiple(self): p, channel = self.create_get(events=[(1, eventio.ERR), (1, eventio.ERR)]) p._fd_to_chan[1] = (channel, 'BRPOP') with self.assertRaises(redis.Empty): p.get() channel._poll_error.assert_called_with('BRPOP') class test_Mutex(Case): @skip_if_not_module('redis') def test_mutex(self, lock_id='xxx'): client = Mock(name='client') with patch('kombu.transport.redis.uuid') as uuid: # Won uuid.return_value = lock_id client.setnx.return_value = True 
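# ----------------------------------------------------------------------
# [Illustrative aside] test_mutex, whose mock setup continues below,
# drives a SETNX-based distributed lock.  A rough sketch of that
# protocol as the test reads it -- simplified, and not kombu's exact
# Mutex implementation (which releases the key under WATCH/MULTI):

from contextlib import contextmanager


class SketchMutexHeld(Exception):
    pass


@contextmanager
def sketch_mutex(client, name, expire, lock_id):
    if client.setnx(name, lock_id):    # atomically claim the key
        client.expire(name, expire)    # the lock must always time out
        try:
            yield
        finally:
            # release only if we still own the key, so a lock that
            # expired and was re-acquired by someone else survives
            if client.get(name) == lock_id:
                client.delete(name)
    else:
        if not client.ttl(name):       # repair a missing expiry
            client.expire(name, expire)
        raise SketchMutexHeld()
# ----------------------------------------------------------------------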
pipe = client.pipeline.return_value = Mock(name='pipe') pipe.get.return_value = lock_id held = False with redis.Mutex(client, 'foo1', 100): held = True self.assertTrue(held) client.setnx.assert_called_with('foo1', lock_id) pipe.get.return_value = 'yyy' held = False with redis.Mutex(client, 'foo1', 100): held = True self.assertTrue(held) # Did not win client.expire.reset_mock() pipe.get.return_value = lock_id client.setnx.return_value = False with self.assertRaises(redis.MutexHeld): held = False with redis.Mutex(client, 'foo1', '100'): held = True self.assertFalse(held) client.ttl.return_value = 0 with self.assertRaises(redis.MutexHeld): held = False with redis.Mutex(client, 'foo1', '100'): held = True self.assertFalse(held) self.assertTrue(client.expire.called) # Wins but raises WatchError (and that is ignored) client.setnx.return_value = True pipe.watch.side_effect = redis.redis.WatchError() held = False with redis.Mutex(client, 'foo1', 100): held = True self.assertTrue(held) kombu-3.0.7/kombu/tests/transport/test_sqlalchemy.py0000644000076500000000000000431112237554371023346 0ustar asksolwheel00000000000000from __future__ import absolute_import from kombu import Connection from kombu.tests.case import Case, SkipTest, patch class test_sqlalchemy(Case): def setUp(self): try: import sqlalchemy # noqa except ImportError: raise SkipTest('sqlalchemy not installed') def test_url_parser(self): with patch('kombu.transport.sqlalchemy.Channel._open'): url = 'sqlalchemy+sqlite:///celerydb.sqlite' Connection(url).connect() url = 'sqla+sqlite:///celerydb.sqlite' Connection(url).connect() # Should prevent regression fixed by f187ccd url = 'sqlb+sqlite:///celerydb.sqlite' with self.assertRaises(KeyError): Connection(url).connect() def test_simple_queueing(self): conn = Connection('sqlalchemy+sqlite:///:memory:') conn.connect() channel = conn.channel() self.assertEqual( channel.queue_cls.__table__.name, 'kombu_queue' ) self.assertEqual( channel.message_cls.__table__.name, 'kombu_message' ) channel._put('celery', 'DATA') assert channel._get('celery') == 'DATA' def test_custom_table_names(self): raise SkipTest('causes global side effect') conn = Connection('sqlalchemy+sqlite:///:memory:', transport_options={ 'queue_tablename': 'my_custom_queue', 'message_tablename': 'my_custom_message' }) conn.connect() channel = conn.channel() self.assertEqual( channel.queue_cls.__table__.name, 'my_custom_queue' ) self.assertEqual( channel.message_cls.__table__.name, 'my_custom_message' ) channel._put('celery', 'DATA') assert channel._get('celery') == 'DATA' def test_clone(self): hostname = 'sqlite:///celerydb.sqlite' x = Connection('+'.join(['sqla', hostname])) self.assertEqual(x.uri_prefix, 'sqla') self.assertEqual(x.hostname, hostname) clone = x.clone() self.assertEqual(clone.hostname, hostname) self.assertEqual(clone.uri_prefix, 'sqla') kombu-3.0.7/kombu/tests/transport/test_transport.py0000644000076500000000000000247512237554371023251 0ustar asksolwheel00000000000000from __future__ import absolute_import from kombu import transport from kombu.tests.case import Case, Mock, patch class test_supports_librabbitmq(Case): def test_eventlet(self): with patch('kombu.transport._detect_environment') as de: de.return_value = 'eventlet' self.assertFalse(transport.supports_librabbitmq()) class test_transport(Case): def test_resolve_transport(self): from kombu.transport.memory import Transport self.assertIs(transport.resolve_transport( 'kombu.transport.memory:Transport'), Transport) 
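# ----------------------------------------------------------------------
# [Illustrative aside] resolve_transport, exercised in this test,
# accepts an alias ('memory'), a fully qualified 'module:Class' path,
# a lazy callable alias, or the transport class itself.  A stripped
# down sketch of that lookup (function name and error handling are
# simplified here):

def sketch_resolve_transport(transport, aliases):
    if isinstance(transport, str):
        transport = aliases.get(transport, transport)
        if callable(transport):  # lazy alias, cf. _ghettoq further down
            transport = transport()
        module_name, _, cls_name = transport.partition(':')
        module = __import__(module_name, fromlist=[cls_name])
        return getattr(module, cls_name)
    return transport  # already a transport class


# e.g.:
# sketch_resolve_transport(
#     'memory', {'memory': 'kombu.transport.memory:Transport'})
# ----------------------------------------------------------------------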
self.assertIs(transport.resolve_transport(Transport), Transport) def test_resolve_transport_alias_callable(self): m = transport.TRANSPORT_ALIASES['George'] = Mock(name='lazyalias') try: transport.resolve_transport('George') m.assert_called_with() finally: transport.TRANSPORT_ALIASES.pop('George') def test_resolve_transport_alias(self): self.assertTrue(transport.resolve_transport('pyamqp')) class test_transport_ghettoq(Case): @patch('warnings.warn') def test_compat(self, warn): x = transport._ghettoq('Redis', 'redis', 'redis') self.assertEqual(x(), 'kombu.transport.redis.Transport') self.assertTrue(warn.called) kombu-3.0.7/kombu/tests/transport/virtual/0000755000076500000000000000000012247127370021256 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/tests/transport/virtual/__init__.py0000644000076500000000000000000012075774634023367 0ustar asksolwheel00000000000000kombu-3.0.7/kombu/tests/transport/virtual/test_base.py0000644000076500000000000004105612237554371023613 0ustar asksolwheel00000000000000from __future__ import absolute_import import warnings from kombu import Connection from kombu.exceptions import ResourceError, ChannelError from kombu.transport import virtual from kombu.utils import uuid from kombu.compression import compress from kombu.tests.case import Case, Mock, patch, redirect_stdouts def client(**kwargs): return Connection(transport='kombu.transport.virtual:Transport', **kwargs) def memory_client(): return Connection(transport='memory') class test_BrokerState(Case): def test_constructor(self): s = virtual.BrokerState() self.assertTrue(hasattr(s, 'exchanges')) self.assertTrue(hasattr(s, 'bindings')) t = virtual.BrokerState(exchanges=16, bindings=32) self.assertEqual(t.exchanges, 16) self.assertEqual(t.bindings, 32) class test_QoS(Case): def setUp(self): self.q = virtual.QoS(client().channel(), prefetch_count=10) def tearDown(self): self.q._on_collect.cancel() def test_constructor(self): self.assertTrue(self.q.channel) self.assertTrue(self.q.prefetch_count) self.assertFalse(self.q._delivered.restored) self.assertTrue(self.q._on_collect) @redirect_stdouts def test_can_consume(self, stdout, stderr): _restored = [] class RestoreChannel(virtual.Channel): do_restore = True def _restore(self, message): _restored.append(message) self.assertTrue(self.q.can_consume()) for i in range(self.q.prefetch_count - 1): self.q.append(i, uuid()) self.assertTrue(self.q.can_consume()) self.q.append(i + 1, uuid()) self.assertFalse(self.q.can_consume()) tag1 = next(iter(self.q._delivered)) self.q.ack(tag1) self.assertTrue(self.q.can_consume()) tag2 = uuid() self.q.append(i + 2, tag2) self.assertFalse(self.q.can_consume()) self.q.reject(tag2) self.assertTrue(self.q.can_consume()) self.q.channel = RestoreChannel(self.q.channel.connection) tag3 = uuid() self.q.append(i + 3, tag3) self.q.reject(tag3, requeue=True) self.q._flush() self.q.restore_unacked_once() self.assertListEqual(_restored, [11, 9, 8, 7, 6, 5, 4, 3, 2, 1]) self.assertTrue(self.q._delivered.restored) self.assertFalse(self.q._delivered) self.q.restore_unacked_once() self.q._delivered.restored = False self.q.restore_unacked_once() self.assertTrue(stderr.getvalue()) self.assertFalse(stdout.getvalue()) self.q.restore_at_shutdown = False self.q.restore_unacked_once() def test_get(self): self.q._delivered['foo'] = 1 self.assertEqual(self.q.get('foo'), 1) class test_Message(Case): def test_create(self): c = client().channel() data = c.prepare_message('the quick brown fox...') tag = data['properties']['delivery_tag'] = uuid() message = 
c.message_to_python(data) self.assertIsInstance(message, virtual.Message) self.assertIs(message, c.message_to_python(message)) self.assertEqual(message.body, 'the quick brown fox...'.encode('utf-8')) self.assertTrue(message.delivery_tag, tag) def test_create_no_body(self): virtual.Message(Mock(), { 'body': None, 'properties': {'delivery_tag': 1}}) def test_serializable(self): c = client().channel() body, content_type = compress('the quick brown fox...', 'gzip') data = c.prepare_message(body, headers={'compression': content_type}) tag = data['properties']['delivery_tag'] = uuid() message = c.message_to_python(data) dict_ = message.serializable() self.assertEqual(dict_['body'], 'the quick brown fox...'.encode('utf-8')) self.assertEqual(dict_['properties']['delivery_tag'], tag) self.assertFalse('compression' in dict_['headers']) class test_AbstractChannel(Case): def test_get(self): with self.assertRaises(NotImplementedError): virtual.AbstractChannel()._get('queue') def test_put(self): with self.assertRaises(NotImplementedError): virtual.AbstractChannel()._put('queue', 'm') def test_size(self): self.assertEqual(virtual.AbstractChannel()._size('queue'), 0) def test_purge(self): with self.assertRaises(NotImplementedError): virtual.AbstractChannel()._purge('queue') def test_delete(self): with self.assertRaises(NotImplementedError): virtual.AbstractChannel()._delete('queue') def test_new_queue(self): self.assertIsNone(virtual.AbstractChannel()._new_queue('queue')) def test_has_queue(self): self.assertTrue(virtual.AbstractChannel()._has_queue('queue')) def test_poll(self): class Cycle(object): called = False def get(self): self.called = True return True cycle = Cycle() self.assertTrue(virtual.AbstractChannel()._poll(cycle)) self.assertTrue(cycle.called) class test_Channel(Case): def setUp(self): self.channel = client().channel() def tearDown(self): if self.channel._qos is not None: self.channel._qos._on_collect.cancel() def test_exceeds_channel_max(self): c = client() t = c.transport avail = t._avail_channel_ids = Mock(name='_avail_channel_ids') avail.pop.side_effect = IndexError() with self.assertRaises(ResourceError): virtual.Channel(t) def test_exchange_bind_interface(self): with self.assertRaises(NotImplementedError): self.channel.exchange_bind('dest', 'src', 'key') def test_exchange_unbind_interface(self): with self.assertRaises(NotImplementedError): self.channel.exchange_unbind('dest', 'src', 'key') def test_queue_unbind_interface(self): with self.assertRaises(NotImplementedError): self.channel.queue_unbind('dest', 'ex', 'key') def test_management(self): m = self.channel.connection.client.get_manager() self.assertTrue(m) m.get_bindings() m.close() def test_exchange_declare(self): c = self.channel with self.assertRaises(ChannelError): c.exchange_declare('test_exchange_declare', 'direct', durable=True, auto_delete=True, passive=True) c.exchange_declare('test_exchange_declare', 'direct', durable=True, auto_delete=True) c.exchange_declare('test_exchange_declare', 'direct', durable=True, auto_delete=True, passive=True) self.assertIn('test_exchange_declare', c.state.exchanges) # can declare again with same values c.exchange_declare('test_exchange_declare', 'direct', durable=True, auto_delete=True) self.assertIn('test_exchange_declare', c.state.exchanges) # using different values raises NotEquivalentError with self.assertRaises(virtual.NotEquivalentError): c.exchange_declare('test_exchange_declare', 'direct', durable=False, auto_delete=True) def test_exchange_delete(self, ex='test_exchange_delete'): 
class PurgeChannel(virtual.Channel): purged = [] def _purge(self, queue): self.purged.append(queue) c = PurgeChannel(self.channel.connection) c.exchange_declare(ex, 'direct', durable=True, auto_delete=True) self.assertIn(ex, c.state.exchanges) self.assertNotIn(ex, c.state.bindings) # no bindings yet c.exchange_delete(ex) self.assertNotIn(ex, c.state.exchanges) c.exchange_declare(ex, 'direct', durable=True, auto_delete=True) c.queue_declare(ex) c.queue_bind(ex, ex, ex) self.assertTrue(c.state.bindings[ex]) c.exchange_delete(ex) self.assertNotIn(ex, c.state.bindings) self.assertIn(ex, c.purged) def test_queue_delete__if_empty(self, n='test_queue_delete__if_empty'): class PurgeChannel(virtual.Channel): purged = [] size = 30 def _purge(self, queue): self.purged.append(queue) def _size(self, queue): return self.size c = PurgeChannel(self.channel.connection) c.exchange_declare(n) c.queue_declare(n) c.queue_bind(n, n, n) c.queue_bind(n, n, n) # tests code path that returns # if queue already bound. c.queue_delete(n, if_empty=True) self.assertIn(n, c.state.bindings) c.size = 0 c.queue_delete(n, if_empty=True) self.assertNotIn(n, c.state.bindings) self.assertIn(n, c.purged) def test_queue_purge(self, n='test_queue_purge'): class PurgeChannel(virtual.Channel): purged = [] def _purge(self, queue): self.purged.append(queue) c = PurgeChannel(self.channel.connection) c.exchange_declare(n) c.queue_declare(n) c.queue_bind(n, n, n) c.queue_purge(n) self.assertIn(n, c.purged) def test_basic_publish__get__consume__restore(self, n='test_basic_publish'): c = memory_client().channel() c.exchange_declare(n) c.queue_declare(n) c.queue_bind(n, n, n) c.queue_declare(n + '2') c.queue_bind(n + '2', n, n) m = c.prepare_message('nthex quick brown fox...') c.basic_publish(m, n, n) r1 = c.message_to_python(c.basic_get(n)) self.assertTrue(r1) self.assertEqual(r1.body, 'nthex quick brown fox...'.encode('utf-8')) self.assertIsNone(c.basic_get(n)) consumer_tag = uuid() c.basic_consume(n + '2', False, consumer_tag=consumer_tag, callback=lambda *a: None) self.assertIn(n + '2', c._active_queues) r2, _ = c.drain_events() r2 = c.message_to_python(r2) self.assertEqual(r2.body, 'nthex quick brown fox...'.encode('utf-8')) self.assertEqual(r2.delivery_info['exchange'], n) self.assertEqual(r2.delivery_info['routing_key'], n) with self.assertRaises(virtual.Empty): c.drain_events() c.basic_cancel(consumer_tag) c._restore(r2) r3 = c.message_to_python(c.basic_get(n)) self.assertTrue(r3) self.assertEqual(r3.body, 'nthex quick brown fox...'.encode('utf-8')) self.assertIsNone(c.basic_get(n)) def test_basic_ack(self): class MockQoS(virtual.QoS): was_acked = False def ack(self, delivery_tag): self.was_acked = True self.channel._qos = MockQoS(self.channel) self.channel.basic_ack('foo') self.assertTrue(self.channel._qos.was_acked) def test_basic_recover__requeue(self): class MockQoS(virtual.QoS): was_restored = False def restore_unacked(self): self.was_restored = True self.channel._qos = MockQoS(self.channel) self.channel.basic_recover(requeue=True) self.assertTrue(self.channel._qos.was_restored) def test_restore_unacked_raises_BaseException(self): q = self.channel.qos q._flush = Mock() q._delivered = {1: 1} q.channel._restore = Mock() q.channel._restore.side_effect = SystemExit errors = q.restore_unacked() self.assertIsInstance(errors[0][0], SystemExit) self.assertEqual(errors[0][1], 1) self.assertFalse(q._delivered) @patch('kombu.transport.virtual.emergency_dump_state') @patch('kombu.transport.virtual.say') def 
test_restore_unacked_once_when_unrestored(self, say, emergency_dump_state): q = self.channel.qos q._flush = Mock() class State(dict): restored = False q._delivered = State({1: 1}) ru = q.restore_unacked = Mock() exc = None try: raise KeyError() except KeyError as exc_: exc = exc_ ru.return_value = [(exc, 1)] self.channel.do_restore = True q.restore_unacked_once() self.assertTrue(say.called) self.assertTrue(emergency_dump_state.called) def test_basic_recover(self): with self.assertRaises(NotImplementedError): self.channel.basic_recover(requeue=False) def test_basic_reject(self): class MockQoS(virtual.QoS): was_rejected = False def reject(self, delivery_tag, requeue=False): self.was_rejected = True self.channel._qos = MockQoS(self.channel) self.channel.basic_reject('foo') self.assertTrue(self.channel._qos.was_rejected) def test_basic_qos(self): self.channel.basic_qos(prefetch_count=128) self.assertEqual(self.channel._qos.prefetch_count, 128) def test_lookup__undeliverable(self, n='test_lookup__undeliverable'): warnings.resetwarnings() with warnings.catch_warnings(record=True) as log: self.assertListEqual( self.channel._lookup(n, n, 'ae.undeliver'), ['ae.undeliver'], ) self.assertTrue(log) self.assertIn('could not be delivered', log[0].message.args[0]) def test_context(self): x = self.channel.__enter__() self.assertIs(x, self.channel) x.__exit__() self.assertTrue(x.closed) def test_cycle_property(self): self.assertTrue(self.channel.cycle) def test_flow(self): with self.assertRaises(NotImplementedError): self.channel.flow(False) def test_close_when_no_connection(self): self.channel.connection = None self.channel.close() self.assertTrue(self.channel.closed) def test_drain_events_has_get_many(self): c = self.channel c._get_many = Mock() c._poll = Mock() c._consumers = [1] c._qos = Mock() c._qos.can_consume.return_value = True c.drain_events(timeout=10.0) c._get_many.assert_called_with(c._active_queues, timeout=10.0) def test_get_exchanges(self): self.channel.exchange_declare(exchange='foo') self.assertTrue(self.channel.get_exchanges()) def test_basic_cancel_not_in_active_queues(self): c = self.channel c._consumers.add('x') c._tag_to_queue['x'] = 'foo' c._active_queues = Mock() c._active_queues.remove.side_effect = ValueError() c.basic_cancel('x') c._active_queues.remove.assert_called_with('foo') def test_basic_cancel_unknown_ctag(self): self.assertIsNone(self.channel.basic_cancel('unknown-tag')) def test_list_bindings(self): c = self.channel c.exchange_declare(exchange='foo') c.queue_declare(queue='q') c.queue_bind(queue='q', exchange='foo', routing_key='rk') self.assertIn(('q', 'foo', 'rk'), list(c.list_bindings())) def test_after_reply_message_received(self): c = self.channel c.queue_delete = Mock() c.after_reply_message_received('foo') c.queue_delete.assert_called_with('foo') def test_queue_delete_unknown_queue(self): self.assertIsNone(self.channel.queue_delete('xiwjqjwel')) def test_queue_declare_passive(self): has_queue = self.channel._has_queue = Mock() has_queue.return_value = False with self.assertRaises(ChannelError): self.channel.queue_declare(queue='21wisdjwqe', passive=True) class test_Transport(Case): def setUp(self): self.transport = client().transport def test_custom_polling_interval(self): x = client(transport_options=dict(polling_interval=32.3)) self.assertEqual(x.transport.polling_interval, 32.3) def test_close_connection(self): c1 = self.transport.create_channel(self.transport) c2 = self.transport.create_channel(self.transport) self.assertEqual(len(self.transport.channels), 2) 
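# ----------------------------------------------------------------------
# [Illustrative aside] The publish/get tests above use the virtual
# channel API directly.  The same round-trip against the in-memory
# transport, as a self-contained sketch (queue/exchange names are
# arbitrary):

from kombu import Connection


def demo_memory_roundtrip():
    with Connection('memory://') as conn:
        chan = conn.channel()
        chan.exchange_declare('demo_ex', 'direct')
        chan.queue_declare('demo_q')
        chan.queue_bind('demo_q', 'demo_ex', 'demo_rk')
        message = chan.prepare_message('the quick brown fox...')
        chan.basic_publish(message, 'demo_ex', 'demo_rk')
        received = chan.message_to_python(chan.basic_get('demo_q'))
        assert received.body == 'the quick brown fox...'.encode('utf-8')
# ----------------------------------------------------------------------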
self.transport.close_connection(self.transport) self.assertFalse(self.transport.channels) del(c1) # so pyflakes doesn't complain del(c2) def test_drain_channel(self): channel = self.transport.create_channel(self.transport) with self.assertRaises(virtual.Empty): self.transport._drain_channel(channel) kombu-3.0.7/kombu/tests/transport/virtual/test_exchange.py0000644000076500000000000001133212237554371024455 0ustar asksolwheel00000000000000from __future__ import absolute_import from kombu import Connection from kombu.transport.virtual import exchange from kombu.tests.case import Case, Mock from kombu.tests.mocks import Transport class ExchangeCase(Case): type = None def setUp(self): if self.type: self.e = self.type(Connection(transport=Transport).channel()) class test_Direct(ExchangeCase): type = exchange.DirectExchange table = [('rFoo', None, 'qFoo'), ('rFoo', None, 'qFox'), ('rBar', None, 'qBar'), ('rBaz', None, 'qBaz')] def test_lookup(self): self.assertListEqual( self.e.lookup(self.table, 'eFoo', 'rFoo', None), ['qFoo', 'qFox'], ) self.assertListEqual( self.e.lookup(self.table, 'eMoz', 'rMoz', 'DEFAULT'), [], ) self.assertListEqual( self.e.lookup(self.table, 'eBar', 'rBar', None), ['qBar'], ) class test_Fanout(ExchangeCase): type = exchange.FanoutExchange table = [(None, None, 'qFoo'), (None, None, 'qFox'), (None, None, 'qBar')] def test_lookup(self): self.assertListEqual( self.e.lookup(self.table, 'eFoo', 'rFoo', None), ['qFoo', 'qFox', 'qBar'], ) def test_deliver_when_fanout_supported(self): self.e.channel = Mock() self.e.channel.supports_fanout = True message = Mock() self.e.deliver(message, 'exchange', None) self.e.channel._put_fanout.assert_called_with('exchange', message) def test_deliver_when_fanout_unsupported(self): self.e.channel = Mock() self.e.channel.supports_fanout = False self.e.deliver(Mock(), 'exchange', None) self.assertFalse(self.e.channel._put_fanout.called) class test_Topic(ExchangeCase): type = exchange.TopicExchange table = [ ('stock.#', None, 'rFoo'), ('stock.us.*', None, 'rBar'), ] def setUp(self): super(test_Topic, self).setUp() self.table = [(rkey, self.e.key_to_pattern(rkey), queue) for rkey, _, queue in self.table] def test_prepare_bind(self): x = self.e.prepare_bind('qFoo', 'eFoo', 'stock.#', {}) self.assertTupleEqual(x, ('stock.#', r'^stock\..*?$', 'qFoo')) def test_lookup(self): self.assertListEqual( self.e.lookup(self.table, 'eFoo', 'stock.us.nasdaq', None), ['rFoo', 'rBar'], ) self.assertTrue(self.e._compiled) self.assertListEqual( self.e.lookup(self.table, 'eFoo', 'stock.europe.OSE', None), ['rFoo'], ) self.assertListEqual( self.e.lookup(self.table, 'eFoo', 'stockxeuropexOSE', None), [], ) self.assertListEqual( self.e.lookup(self.table, 'eFoo', 'candy.schleckpulver.snap_crackle', None), [], ) def test_deliver(self): self.e.channel = Mock() self.e.channel._lookup.return_value = ('a', 'b') message = Mock() self.e.deliver(message, 'exchange', 'rkey') expected = [(('a', message), {}), (('b', message), {})] self.assertListEqual(self.e.channel._put.call_args_list, expected) class test_ExchangeType(ExchangeCase): type = exchange.ExchangeType def test_lookup(self): with self.assertRaises(NotImplementedError): self.e.lookup([], 'eFoo', 'rFoo', None) def test_prepare_bind(self): self.assertTupleEqual( self.e.prepare_bind('qFoo', 'eFoo', 'rFoo', {}), ('rFoo', None, 'qFoo'), ) def test_equivalent(self): e1 = dict( type='direct', durable=True, auto_delete=True, arguments={}, ) self.assertTrue( self.e.equivalent(e1, 'eFoo', 'direct', True, True, {}), ) 
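# ----------------------------------------------------------------------
# [Illustrative aside] test_prepare_bind above shows the topic key
# 'stock.#' compiling to the regex r'^stock\..*?$'.  A tiny
# demonstration of why the lookups tested here match the way they do:

import re

_pat = re.compile(r'^stock\..*?$')
assert _pat.match('stock.us.nasdaq')       # '#' spans several words
assert not _pat.match('stockxeuropexOSE')  # the dot is matched literally
# ----------------------------------------------------------------------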
self.assertFalse( self.e.equivalent(e1, 'eFoo', 'topic', True, True, {}), ) self.assertFalse( self.e.equivalent(e1, 'eFoo', 'direct', False, True, {}), ) self.assertFalse( self.e.equivalent(e1, 'eFoo', 'direct', True, False, {}), ) self.assertFalse( self.e.equivalent(e1, 'eFoo', 'direct', True, True, {'expires': 3000}), ) e2 = dict(e1, arguments={'expires': 3000}) self.assertTrue( self.e.equivalent(e2, 'eFoo', 'direct', True, True, {'expires': 3000}), ) self.assertFalse( self.e.equivalent(e2, 'eFoo', 'direct', True, True, {'expires': 6000}), ) kombu-3.0.7/kombu/tests/transport/virtual/test_scheduling.py0000644000076500000000000000345312237554371025025 0ustar asksolwheel00000000000000from __future__ import absolute_import from kombu.transport.virtual.scheduling import FairCycle from kombu.tests.case import Case class MyEmpty(Exception): pass def consume(fun, n): r = [] for i in range(n): r.append(fun()) return r class test_FairCycle(Case): def test_cycle(self): resources = ['a', 'b', 'c', 'd', 'e'] def echo(r, timeout=None): return r # cycle should be ['a', 'b', 'c', 'd', 'e', ... repeat] cycle = FairCycle(echo, resources, MyEmpty) for i in range(len(resources)): self.assertEqual(cycle.get(), (resources[i], resources[i])) for i in range(len(resources)): self.assertEqual(cycle.get(), (resources[i], resources[i])) def test_cycle_breaks(self): resources = ['a', 'b', 'c', 'd', 'e'] def echo(r): if r == 'c': raise MyEmpty(r) return r cycle = FairCycle(echo, resources, MyEmpty) self.assertEqual( consume(cycle.get, len(resources)), [('a', 'a'), ('b', 'b'), ('d', 'd'), ('e', 'e'), ('a', 'a')], ) self.assertEqual( consume(cycle.get, len(resources)), [('b', 'b'), ('d', 'd'), ('e', 'e'), ('a', 'a'), ('b', 'b')], ) cycle2 = FairCycle(echo, ['c', 'c'], MyEmpty) with self.assertRaises(MyEmpty): consume(cycle2.get, 3) def test_cycle_no_resources(self): cycle = FairCycle(None, [], MyEmpty) cycle.pos = 10 with self.assertRaises(MyEmpty): cycle._next() def test__repr__(self): self.assertTrue(repr(FairCycle(lambda x: x, [1, 2, 3], MyEmpty))) kombu-3.0.7/kombu/tests/utils/0000755000076500000000000000000012247127370016674 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/tests/utils/__init__.py0000644000076500000000000000000012237554371020777 0ustar asksolwheel00000000000000kombu-3.0.7/kombu/tests/utils/test_amq_manager.py0000644000076500000000000000232612237554371022564 0ustar asksolwheel00000000000000from __future__ import absolute_import from kombu import Connection from kombu.tests.case import Case, mask_modules, module_exists, patch class test_get_manager(Case): @mask_modules('pyrabbit') def test_without_pyrabbit(self): with self.assertRaises(ImportError): Connection('amqp://').get_manager() @module_exists('pyrabbit') def test_with_pyrabbit(self): with patch('pyrabbit.Client', create=True) as Client: manager = Connection('amqp://').get_manager() self.assertIsNotNone(manager) Client.assert_called_with( 'localhost:15672', 'guest', 'guest', ) @module_exists('pyrabbit') def test_transport_options(self): with patch('pyrabbit.Client', create=True) as Client: manager = Connection('amqp://', transport_options={ 'manager_hostname': 'admin.mq.vandelay.com', 'manager_port': 808, 'manager_userid': 'george', 'manager_password': 'bosco', }).get_manager() self.assertIsNotNone(manager) Client.assert_called_with( 'admin.mq.vandelay.com:808', 'george', 'bosco', ) kombu-3.0.7/kombu/tests/utils/test_debug.py0000644000076500000000000000327612237554371021407 0ustar asksolwheel00000000000000from __future__ import absolute_import 
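# ----------------------------------------------------------------------
# [Illustrative aside] Logwrapped, tested below, is a logging proxy:
# attribute access falls through to the wrapped instance, but method
# calls are logged first.  A minimal sketch of the idea -- this is a
# simplified stand-in, not kombu's implementation:

class SketchLogwrapped(object):

    def __init__(self, instance, logger, ident=None):
        self.instance = instance
        self.logger = logger
        self.ident = ident

    def __getattr__(self, name):
        attr = getattr(self.instance, name)
        if not callable(attr):
            return attr

        def _wrapped(*args, **kwargs):
            prefix = '%s ' % (self.ident, ) if self.ident else ''
            self.logger.debug('%s%s(...)', prefix, name)
            return attr(*args, **kwargs)
        return _wrapped
# ----------------------------------------------------------------------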
import logging from kombu.utils.debug import ( setup_logging, Logwrapped, ) from kombu.tests.case import Case, Mock, patch class test_setup_logging(Case): def test_adds_handlers_sets_level(self): with patch('kombu.utils.debug.get_logger') as get_logger: logger = get_logger.return_value = Mock() setup_logging(loggers=['kombu.test']) get_logger.assert_called_with('kombu.test') self.assertTrue(logger.addHandler.called) logger.setLevel.assert_called_with(logging.DEBUG) class test_Logwrapped(Case): def test_wraps(self): with patch('kombu.utils.debug.get_logger') as get_logger: logger = get_logger.return_value = Mock() W = Logwrapped(Mock(), 'kombu.test') get_logger.assert_called_with('kombu.test') self.assertIsNotNone(W.instance) self.assertIs(W.logger, logger) W.instance.__repr__ = lambda s: 'foo' self.assertEqual(repr(W), 'foo') W.instance.some_attr = 303 self.assertEqual(W.some_attr, 303) W.instance.some_method.__name__ = 'some_method' W.some_method(1, 2, kw=1) W.instance.some_method.assert_called_with(1, 2, kw=1) W.some_method() W.instance.some_method.assert_called_with() W.some_method(kw=1) W.instance.some_method.assert_called_with(kw=1) W.ident = 'ident' W.some_method(kw=1) self.assertTrue(logger.debug.called) self.assertIn('ident', logger.debug.call_args[0][0]) self.assertEqual(dir(W), dir(W.instance)) kombu-3.0.7/kombu/tests/utils/test_encoding.py0000644000076500000000000000573212237554371022106 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import from __future__ import unicode_literals import sys from contextlib import contextmanager from kombu.five import bytes_t, string_t from kombu.utils.encoding import safe_str, default_encoding from kombu.tests.case import Case, SkipTest, patch @contextmanager def clean_encoding(): old_encoding = sys.modules.pop('kombu.utils.encoding', None) import kombu.utils.encoding try: yield kombu.utils.encoding finally: if old_encoding: sys.modules['kombu.utils.encoding'] = old_encoding class test_default_encoding(Case): @patch('sys.getfilesystemencoding') def test_default(self, getdefaultencoding): getdefaultencoding.return_value = 'ascii' with clean_encoding() as encoding: enc = encoding.default_encoding() if sys.platform.startswith('java'): self.assertEqual(enc, 'utf-8') else: self.assertEqual(enc, 'ascii') getdefaultencoding.assert_called_with() class test_encoding_utils(Case): def setUp(self): if sys.version_info >= (3, 0): raise SkipTest('not relevant on py3k') def test_str_to_bytes(self): with clean_encoding() as e: self.assertIsInstance(e.str_to_bytes('foobar'), bytes_t) def test_from_utf8(self): with clean_encoding() as e: self.assertIsInstance(e.from_utf8('foobar'), bytes_t) def test_default_encode(self): with clean_encoding() as e: self.assertTrue(e.default_encode(b'foo')) class test_safe_str(Case): def setUp(self): self._cencoding = patch('sys.getfilesystemencoding') self._encoding = self._cencoding.__enter__() self._encoding.return_value = 'ascii' def tearDown(self): self._cencoding.__exit__() def test_when_bytes(self): self.assertEqual(safe_str('foo'), 'foo') def test_when_unicode(self): self.assertIsInstance(safe_str('foo'), string_t) def test_when_encoding_utf8(self): with patch('sys.getfilesystemencoding') as encoding: encoding.return_value = 'utf-8' self.assertEqual(default_encoding(), 'utf-8') s = 'The quiæk fåx jømps øver the lazy dåg' res = safe_str(s) self.assertIsInstance(res, str) def test_when_containing_high_chars(self): with patch('sys.getfilesystemencoding') as encoding: 
            encoding.return_value = 'ascii'
            s = 'The quiæk fåx jømps øver the lazy dåg'
            res = safe_str(s)
            self.assertIsInstance(res, str)
            self.assertEqual(len(s), len(res))

    def test_when_not_string(self):
        o = object()
        self.assertEqual(safe_str(o), repr(o))

    def test_when_unrepresentable(self):

        class O(object):
            def __repr__(self):
                raise KeyError('foo')

        self.assertIn('<Unrepresentable', safe_str(O()))
kombu-3.0.7/kombu/tests/utils/test_utils.py
from __future__ import absolute_import

import pickle
import sys

from functools import wraps

if sys.version_info >= (3, 0):
    from io import StringIO, BytesIO
else:
    from StringIO import StringIO, StringIO as BytesIO  # noqa

from kombu import utils
from kombu.five import string_t
from kombu.tests.case import (
    Case, Mock, patch,
    redirect_stdouts, mask_modules, module_exists, skip_if_module,
)


class OldString(object):

    def __init__(self, value):
        self.value = value

    def __str__(self):
        return self.value

    def split(self, *args, **kwargs):
        return self.value.split(*args, **kwargs)

    def rsplit(self, *args, **kwargs):
        return self.value.rsplit(*args, **kwargs)


class test_kombu_module(Case):

    def test_dir(self):
        import kombu
        self.assertTrue(dir(kombu))


class test_utils(Case):

    def test_maybe_list(self):
        self.assertEqual(utils.maybe_list(None), [])
        self.assertEqual(utils.maybe_list(1), [1])
        self.assertEqual(utils.maybe_list([1, 2, 3]), [1, 2, 3])

    def test_fxrange_no_repeatlast(self):
        self.assertEqual(list(utils.fxrange(1.0, 3.0, 1.0)),
                         [1.0, 2.0, 3.0])

    def test_fxrangemax(self):
        self.assertEqual(list(utils.fxrangemax(1.0, 3.0, 1.0, 30.0)),
                         [1.0, 2.0, 3.0, 3.0, 3.0, 3.0,
                          3.0, 3.0, 3.0, 3.0, 3.0])
        self.assertEqual(list(utils.fxrangemax(1.0, None, 1.0, 30.0)),
                         [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])

    def test_reprkwargs(self):
        self.assertTrue(utils.reprkwargs({'foo': 'bar', 1: 2, 'k': 'v'}))

    def test_reprcall(self):
        self.assertTrue(
            utils.reprcall('add', (2, 2), {'copy': True}),
        )


class test_UUID(Case):

    def test_uuid4(self):
        self.assertNotEqual(utils.uuid4(), utils.uuid4())

    def test_uuid(self):
        i1 = utils.uuid()
        i2 = utils.uuid()
        self.assertIsInstance(i1, str)
        self.assertNotEqual(i1, i2)

    @skip_if_module('__pypy__')
    def test_uuid_without_ctypes(self):
        old_utils = sys.modules.pop('kombu.utils')

        @mask_modules('ctypes')
        def with_ctypes_masked():
            from kombu.utils import ctypes, uuid

            self.assertIsNone(ctypes)
            tid = uuid()
            self.assertTrue(tid)
            self.assertIsInstance(tid, string_t)

        try:
            with_ctypes_masked()
        finally:
            sys.modules['kombu.utils'] = old_utils


class test_Misc(Case):

    def test_kwdict(self):

        def f(**kwargs):
            return kwargs

        kw = {'foo': 'foo', 'bar': 'bar'}
        self.assertTrue(f(**utils.kwdict(kw)))


class MyStringIO(StringIO):

    def close(self):
        pass


class MyBytesIO(BytesIO):

    def close(self):
        pass


class test_emergency_dump_state(Case):

    @redirect_stdouts
    def test_dump(self, stdout, stderr):
        fh = MyBytesIO()

        utils.emergency_dump_state({'foo': 'bar'},
                                   open_file=lambda n, m: fh)
        self.assertDictEqual(pickle.loads(fh.getvalue()), {'foo': 'bar'})
        self.assertTrue(stderr.getvalue())
        self.assertFalse(stdout.getvalue())

    @redirect_stdouts
    def test_dump_second_strategy(self, stdout, stderr):
        fh = MyStringIO()

        def raise_something(*args, **kwargs):
            raise KeyError('foo')

        utils.emergency_dump_state(
            {'foo': 'bar'},
            open_file=lambda n, m: fh, dump=raise_something
        )
        self.assertIn('foo', fh.getvalue())
        self.assertIn('bar', fh.getvalue())
        self.assertTrue(stderr.getvalue())
        self.assertFalse(stdout.getvalue())


def insomnia(fun):

    @wraps(fun)
    def _inner(*args, **kwargs):

        def mysleep(i):
            pass

        prev_sleep = utils.sleep
        utils.sleep = mysleep
        try:
            return fun(*args, **kwargs)
        finally:
            utils.sleep = prev_sleep

    return _inner


class test_retry_over_time(Case):

    def setUp(self):
        self.index = 0

    class Predicate(Exception):
pass def myfun(self): if self.index < 9: raise self.Predicate() return 42 def errback(self, exc, intervals, retries): interval = next(intervals) sleepvals = (None, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 16.0) self.index += 1 self.assertEqual(interval, sleepvals[self.index]) return interval @insomnia def test_simple(self): prev_count, utils.count = utils.count, Mock() try: utils.count.return_value = list(range(1)) x = utils.retry_over_time(self.myfun, self.Predicate, errback=None, interval_max=14) self.assertIsNone(x) utils.count.return_value = list(range(10)) cb = Mock() x = utils.retry_over_time(self.myfun, self.Predicate, errback=self.errback, callback=cb, interval_max=14) self.assertEqual(x, 42) self.assertEqual(self.index, 9) cb.assert_called_with() finally: utils.count = prev_count @insomnia def test_retry_once(self): self.assertRaises( self.Predicate, utils.retry_over_time, self.myfun, self.Predicate, max_retries=1, errback=self.errback, interval_max=14, ) self.assertEqual(self.index, 1) # no errback self.assertRaises( self.Predicate, utils.retry_over_time, self.myfun, self.Predicate, max_retries=1, errback=None, interval_max=14, ) @insomnia def test_retry_never(self): self.assertRaises( self.Predicate, utils.retry_over_time, self.myfun, self.Predicate, max_retries=0, errback=self.errback, interval_max=14, ) self.assertEqual(self.index, 0) class test_cached_property(Case): def test_deleting(self): class X(object): xx = False @utils.cached_property def foo(self): return 42 @foo.deleter # noqa def foo(self, value): self.xx = value x = X() del(x.foo) self.assertFalse(x.xx) x.__dict__['foo'] = 'here' del(x.foo) self.assertEqual(x.xx, 'here') def test_when_access_from_class(self): class X(object): xx = None @utils.cached_property def foo(self): return 42 @foo.setter # noqa def foo(self, value): self.xx = 10 desc = X.__dict__['foo'] self.assertIs(X.foo, desc) self.assertIs(desc.__get__(None), desc) self.assertIs(desc.__set__(None, 1), desc) self.assertIs(desc.__delete__(None), desc) self.assertTrue(desc.setter(1)) x = X() x.foo = 30 self.assertEqual(x.xx, 10) del(x.foo) class test_symbol_by_name(Case): def test_instance_returns_instance(self): instance = object() self.assertIs(utils.symbol_by_name(instance), instance) def test_returns_default(self): default = object() self.assertIs( utils.symbol_by_name('xyz.ryx.qedoa.weq:foz', default=default), default, ) def test_no_default(self): with self.assertRaises(ImportError): utils.symbol_by_name('xyz.ryx.qedoa.weq:foz') def test_imp_reraises_ValueError(self): imp = Mock() imp.side_effect = ValueError() with self.assertRaises(ValueError): utils.symbol_by_name('kombu.Connection', imp=imp) def test_package(self): from kombu.entity import Exchange self.assertIs( utils.symbol_by_name('.entity:Exchange', package='kombu'), Exchange, ) self.assertTrue(utils.symbol_by_name(':Consumer', package='kombu')) class test_ChannelPromise(Case): def test_repr(self): self.assertIn( 'foo', repr(utils.ChannelPromise(lambda: 'foo')), ) class test_entrypoints(Case): @mask_modules('pkg_resources') def test_without_pkg_resources(self): self.assertListEqual(list(utils.entrypoints('kombu.test')), []) @module_exists('pkg_resources') def test_with_pkg_resources(self): with patch('pkg_resources.iter_entry_points', create=True) as iterep: eps = iterep.return_value = [Mock(), Mock()] self.assertTrue(list(utils.entrypoints('kombu.test'))) iterep.assert_called_with('kombu.test') eps[0].load.assert_called_with() eps[1].load.assert_called_with() class test_shufflecycle(Case): 
def test_shuffles(self): prev_repeat, utils.repeat = utils.repeat, Mock() try: utils.repeat.return_value = list(range(10)) values = set(['A', 'B', 'C']) cycle = utils.shufflecycle(values) seen = set() for i in range(10): next(cycle) utils.repeat.assert_called_with(None) self.assertTrue(seen.issubset(values)) with self.assertRaises(StopIteration): next(cycle) next(cycle) finally: utils.repeat = prev_repeat kombu-3.0.7/kombu/transport/0000755000076500000000000000000012247127370016426 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/__init__.py0000644000076500000000000000712312243752124020537 0ustar asksolwheel00000000000000""" kombu.transport =============== Built-in transports. """ from __future__ import absolute_import from kombu.five import string_t from kombu.syn import _detect_environment from kombu.utils import symbol_by_name def supports_librabbitmq(): if _detect_environment() == 'default': try: import librabbitmq # noqa except ImportError: # pragma: no cover pass else: # pragma: no cover return True def _ghettoq(name, new, alias=None): xxx = new # stupid enclosing def __inner(): import warnings _new = callable(xxx) and xxx() or xxx gtransport = 'ghettoq.taproot.{0}'.format(name) ktransport = 'kombu.transport.{0}.Transport'.format(_new) this = alias or name warnings.warn(""" Ghettoq does not work with Kombu, but there is now a built-in version of the {0} transport. You should replace {1!r} with: {2!r} """.format(name, gtransport, this)) return ktransport return __inner TRANSPORT_ALIASES = { 'amqp': 'kombu.transport.pyamqp:Transport', 'pyamqp': 'kombu.transport.pyamqp:Transport', 'librabbitmq': 'kombu.transport.librabbitmq:Transport', 'memory': 'kombu.transport.memory:Transport', 'redis': 'kombu.transport.redis:Transport', 'SQS': 'kombu.transport.SQS:Transport', 'sqs': 'kombu.transport.SQS:Transport', 'beanstalk': 'kombu.transport.beanstalk:Transport', 'mongodb': 'kombu.transport.mongodb:Transport', 'couchdb': 'kombu.transport.couchdb:Transport', 'zookeeper': 'kombu.transport.zookeeper:Transport', 'django': 'kombu.transport.django:Transport', 'sqlalchemy': 'kombu.transport.sqlalchemy:Transport', 'sqla': 'kombu.transport.sqlalchemy:Transport', 'SLMQ': 'kombu.transport.SLMQ.Transport', 'slmq': 'kombu.transport.SLMQ.Transport', 'ghettoq.taproot.Redis': _ghettoq('Redis', 'redis', 'redis'), 'ghettoq.taproot.Database': _ghettoq('Database', 'django', 'django'), 'ghettoq.taproot.MongoDB': _ghettoq('MongoDB', 'mongodb'), 'ghettoq.taproot.Beanstalk': _ghettoq('Beanstalk', 'beanstalk'), 'ghettoq.taproot.CouchDB': _ghettoq('CouchDB', 'couchdb'), 'filesystem': 'kombu.transport.filesystem:Transport', 'zeromq': 'kombu.transport.zmq:Transport', 'zmq': 'kombu.transport.zmq:Transport', 'amqplib': 'kombu.transport.amqplib:Transport', } _transport_cache = {} def resolve_transport(transport=None): if isinstance(transport, string_t): try: transport = TRANSPORT_ALIASES[transport] except KeyError: if '.' not in transport and ':' not in transport: from kombu.utils.text import fmatch_best alt = fmatch_best(transport, TRANSPORT_ALIASES) if alt: raise KeyError( 'No such transport: {0}. Did you mean {1}?'.format( transport, alt)) raise KeyError('No such transport: {0}'.format(transport)) else: if callable(transport): transport = transport() return symbol_by_name(transport) return transport def get_transport_cls(transport=None): """Get transport class by name. 
The transport string is the full path to a transport class, e.g.:: "kombu.transport.pyamqp:Transport" If the name does not include `"."` (is not fully qualified), the alias table will be consulted. """ if transport not in _transport_cache: _transport_cache[transport] = resolve_transport(transport) return _transport_cache[transport] kombu-3.0.7/kombu/transport/amqplib.py0000644000076500000000000003074012237554371020435 0ustar asksolwheel00000000000000""" kombu.transport.amqplib ======================= amqplib transport. """ from __future__ import absolute_import import errno import socket try: from ssl import SSLError except ImportError: class SSLError(Exception): # noqa pass from struct import unpack from amqplib import client_0_8 as amqp from amqplib.client_0_8 import transport from amqplib.client_0_8.channel import Channel as _Channel from amqplib.client_0_8.exceptions import AMQPConnectionException from amqplib.client_0_8.exceptions import AMQPChannelException from kombu.five import items from kombu.utils.encoding import str_to_bytes from kombu.utils.amq_manager import get_manager from . import base DEFAULT_PORT = 5672 HAS_MSG_PEEK = hasattr(socket, 'MSG_PEEK') # amqplib's handshake mistakenly identifies as protocol version 1191, # this breaks in RabbitMQ tip, which no longer falls back to # 0-8 for unknown ids. transport.AMQP_PROTOCOL_HEADER = str_to_bytes('AMQP\x01\x01\x08\x00') # - fixes warnings when socket is not connected. class TCPTransport(transport.TCPTransport): def read_frame(self): frame_type, channel, size = unpack('>BHI', self._read(7, True)) payload = self._read(size) ch = ord(self._read(1)) if ch == 206: # '\xce' return frame_type, channel, payload else: raise Exception( 'Framing Error, received 0x%02x while expecting 0xce' % ch) def _read(self, n, initial=False): while len(self._read_buffer) < n: try: s = self.sock.recv(65536) except socket.error as exc: if not initial and exc.errno in (errno.EAGAIN, errno.EINTR): continue raise if not s: raise IOError('Socket closed') self._read_buffer += s result = self._read_buffer[:n] self._read_buffer = self._read_buffer[n:] return result def __del__(self): try: self.close() except Exception: pass finally: self.sock = None transport.TCPTransport = TCPTransport class SSLTransport(transport.SSLTransport): def __init__(self, host, connect_timeout, ssl): if isinstance(ssl, dict): self.sslopts = ssl self.sslobj = None transport._AbstractTransport.__init__(self, host, connect_timeout) def read_frame(self): frame_type, channel, size = unpack('>BHI', self._read(7, True)) payload = self._read(size) ch = ord(self._read(1)) if ch == 206: # '\xce' return frame_type, channel, payload else: raise Exception( 'Framing Error, received 0x%02x while expecting 0xce' % ch) def _read(self, n, initial=False): result = '' while len(result) < n: try: s = self.sslobj.read(n - len(result)) except socket.error as exc: if not initial and exc.errno in (errno.EAGAIN, errno.EINTR): continue raise if not s: raise IOError('Socket closed') result += s return result def __del__(self): try: self.close() except Exception: pass finally: self.sock = None transport.SSLTransport = SSLTransport class Connection(amqp.Connection): # pragma: no cover def _do_close(self, *args, **kwargs): # amqplib does not ignore socket errors when connection # is closed on the remote end. 
try: super(Connection, self)._do_close(*args, **kwargs) except socket.error: pass def _dispatch_basic_return(self, channel, args, msg): reply_code = args.read_short() reply_text = args.read_shortstr() exchange = args.read_shortstr() routing_key = args.read_shortstr() exc = AMQPChannelException(reply_code, reply_text, (50, 60)) if channel.events['basic_return']: for callback in channel.events['basic_return']: callback(exc, exchange, routing_key, msg) else: raise exc def __init__(self, *args, **kwargs): super(Connection, self).__init__(*args, **kwargs) self._method_override = {(60, 50): self._dispatch_basic_return} def drain_events(self, timeout=None): """Wait for an event on a channel.""" chanmap = self.channels chanid, method_sig, args, content = self._wait_multiple( chanmap, None, timeout=timeout) channel = chanmap[chanid] if (content and channel.auto_decode and hasattr(content, 'content_encoding')): try: content.body = content.body.decode(content.content_encoding) except Exception: pass amqp_method = self._method_override.get(method_sig) or \ channel._METHOD_MAP.get(method_sig, None) if amqp_method is None: raise Exception('Unknown AMQP method (%d, %d)' % method_sig) if content is None: return amqp_method(channel, args) else: return amqp_method(channel, args, content) def read_timeout(self, timeout=None): if timeout is None: return self.method_reader.read_method() sock = self.transport.sock prev = sock.gettimeout() if prev != timeout: sock.settimeout(timeout) try: try: return self.method_reader.read_method() except SSLError as exc: # http://bugs.python.org/issue10272 if 'timed out' in str(exc): raise socket.timeout() # Non-blocking SSL sockets can throw SSLError if 'The operation did not complete' in str(exc): raise socket.timeout() raise finally: if prev != timeout: sock.settimeout(prev) def _wait_multiple(self, channels, allowed_methods, timeout=None): for channel_id, channel in items(channels): method_queue = channel.method_queue for queued_method in method_queue: method_sig = queued_method[0] if (allowed_methods is None or method_sig in allowed_methods or method_sig == (20, 40)): method_queue.remove(queued_method) method_sig, args, content = queued_method return channel_id, method_sig, args, content # Nothing queued, need to wait for a method from the peer read_timeout = self.read_timeout wait = self.wait while 1: channel, method_sig, args, content = read_timeout(timeout) if (channel in channels and allowed_methods is None or method_sig in allowed_methods or method_sig == (20, 40)): return channel, method_sig, args, content # Not the channel and/or method we were looking for. Queue # this method for later channels[channel].method_queue.append((method_sig, args, content)) # # If we just queued up a method for channel 0 (the Connection # itself) it's probably a close method in reaction to some # error, so deal with it right away. 
# if channel == 0: wait() def channel(self, channel_id=None): try: return self.channels[channel_id] except KeyError: return Channel(self, channel_id) class Message(base.Message): def __init__(self, channel, msg, **kwargs): props = msg.properties super(Message, self).__init__( channel, body=msg.body, delivery_tag=msg.delivery_tag, content_type=props.get('content_type'), content_encoding=props.get('content_encoding'), delivery_info=msg.delivery_info, properties=msg.properties, headers=props.get('application_headers') or {}, **kwargs) class Channel(_Channel, base.StdChannel): Message = Message events = {'basic_return': set()} def __init__(self, *args, **kwargs): self.no_ack_consumers = set() super(Channel, self).__init__(*args, **kwargs) def prepare_message(self, body, priority=None, content_type=None, content_encoding=None, headers=None, properties=None): """Encapsulate data into a AMQP message.""" return amqp.Message(body, priority=priority, content_type=content_type, content_encoding=content_encoding, application_headers=headers, **properties) def message_to_python(self, raw_message): """Convert encoded message body back to a Python value.""" return self.Message(self, raw_message) def close(self): try: super(Channel, self).close() finally: self.connection = None def basic_consume(self, *args, **kwargs): consumer_tag = super(Channel, self).basic_consume(*args, **kwargs) if kwargs['no_ack']: self.no_ack_consumers.add(consumer_tag) return consumer_tag def basic_cancel(self, consumer_tag, **kwargs): self.no_ack_consumers.discard(consumer_tag) return super(Channel, self).basic_cancel(consumer_tag, **kwargs) class Transport(base.Transport): Connection = Connection default_port = DEFAULT_PORT # it's very annoying that amqplib sometimes raises AttributeError # if the connection is lost, but nothing we can do about that here. 
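# ----------------------------------------------------------------------
# [Illustrative aside] The connection_errors / channel_errors tuples
# declared just below are what Connection.ensure_connection and
# Connection.ensure consult when deciding whether an operation can be
# retried.  A usage sketch -- broker URL and retry policy are invented
# for illustration:

from kombu import Connection


def demo_ensure():
    conn = Connection('amqp://guest:guest@localhost:5672//')
    conn.ensure_connection(max_retries=3)  # retries connection_errors
    producer = conn.Producer()
    publish = conn.ensure(producer, producer.publish, max_retries=3)
    publish({'hello': 'world'}, routing_key='demo_q')
# ----------------------------------------------------------------------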
connection_errors = ( base.Transport.connection_errors + ( AMQPConnectionException, socket.error, IOError, OSError, AttributeError) ) channel_errors = base.Transport.channel_errors + (AMQPChannelException, ) driver_name = 'amqplib' driver_type = 'amqp' supports_ev = True def __init__(self, client, **kwargs): self.client = client self.default_port = kwargs.get('default_port') or self.default_port def create_channel(self, connection): return connection.channel() def drain_events(self, connection, **kwargs): return connection.drain_events(**kwargs) def establish_connection(self): """Establish connection to the AMQP broker.""" conninfo = self.client for name, default_value in items(self.default_connection_params): if not getattr(conninfo, name, None): setattr(conninfo, name, default_value) if conninfo.hostname == 'localhost': conninfo.hostname = '127.0.0.1' conn = self.Connection(host=conninfo.host, userid=conninfo.userid, password=conninfo.password, login_method=conninfo.login_method, virtual_host=conninfo.virtual_host, insist=conninfo.insist, ssl=conninfo.ssl, connect_timeout=conninfo.connect_timeout) conn.client = self.client return conn def close_connection(self, connection): """Close the AMQP broker connection.""" connection.client = None connection.close() def is_alive(self, connection): if HAS_MSG_PEEK: sock = connection.transport.sock prev = sock.gettimeout() sock.settimeout(0.0001) try: sock.recv(1, socket.MSG_PEEK) except socket.timeout: pass except socket.error: return False finally: sock.settimeout(prev) return True def verify_connection(self, connection): return connection.channels is not None and self.is_alive(connection) def register_with_event_loop(self, connection, loop): loop.add_reader(connection.method_reader.source.sock, self.on_readable, connection, loop) @property def default_connection_params(self): return {'userid': 'guest', 'password': 'guest', 'port': self.default_port, 'hostname': 'localhost', 'login_method': 'AMQPLAIN'} def get_manager(self, *args, **kwargs): return get_manager(self.client, *args, **kwargs) kombu-3.0.7/kombu/transport/base.py0000644000076500000000000001061012237554371017714 0ustar asksolwheel00000000000000""" kombu.transport.base ==================== Base transport interface. 
""" from __future__ import absolute_import import errno import socket from kombu.exceptions import ChannelError, ConnectionError from kombu.message import Message from kombu.utils import cached_property from kombu.utils.compat import get_errno __all__ = ['Message', 'StdChannel', 'Management', 'Transport'] def _LeftBlank(obj, method): return NotImplementedError( 'Transport {0.__module__}.{0.__name__} does not implement {1}'.format( obj.__class__, method)) class StdChannel(object): no_ack_consumers = None def Consumer(self, *args, **kwargs): from kombu.messaging import Consumer return Consumer(self, *args, **kwargs) def Producer(self, *args, **kwargs): from kombu.messaging import Producer return Producer(self, *args, **kwargs) def get_bindings(self): raise _LeftBlank(self, 'get_bindings') def after_reply_message_received(self, queue): """reply queue semantics: can be used to delete the queue after transient reply message received.""" pass def __enter__(self): return self def __exit__(self, *exc_info): self.close() class Management(object): def __init__(self, transport): self.transport = transport def get_bindings(self): raise _LeftBlank(self, 'get_bindings') class Transport(object): """Base class for transports.""" Management = Management #: The :class:`~kombu.Connection` owning this instance. client = None #: Set to True if :class:`~kombu.Connection` should pass the URL #: unmodified. can_parse_url = False #: Default port used when no port has been specified. default_port = None #: Tuple of errors that can happen due to connection failure. connection_errors = (ConnectionError, ) #: Tuple of errors that can happen due to channel/method failure. channel_errors = (ChannelError, ) #: Type of driver, can be used to separate transports #: using the AMQP protocol (driver_type: 'amqp'), #: Redis (driver_type: 'redis'), etc... driver_type = 'N/A' #: Name of driver library (e.g. 'py-amqp', 'redis', 'beanstalkc'). driver_name = 'N/A' #: Whether this transports support heartbeats, #: and that the :meth:`heartbeat_check` method has any effect. supports_heartbeats = False #: Set to true if the transport supports the AIO interface. 
supports_ev = False __reader = None def __init__(self, client, **kwargs): self.client = client def establish_connection(self): raise _LeftBlank(self, 'establish_connection') def close_connection(self, connection): raise _LeftBlank(self, 'close_connection') def create_channel(self, connection): raise _LeftBlank(self, 'create_channel') def close_channel(self, connection): raise _LeftBlank(self, 'close_channel') def drain_events(self, connection, **kwargs): raise _LeftBlank(self, 'drain_events') def heartbeat_check(self, connection, rate=2): pass def driver_version(self): return 'N/A' def register_with_event_loop(self, loop): pass def unregister_from_event_loop(self, loop): pass def verify_connection(self, connection): return True def _make_reader(self, connection, timeout=socket.timeout, error=socket.error, get_errno=get_errno, _unavail=(errno.EAGAIN, errno.EINTR)): drain_events = connection.drain_events def _read(loop): if not connection.connected: raise ConnectionError('Socket was disconnected') try: drain_events(timeout=0) except timeout: return except error as exc: if get_errno(exc) in _unavail: return raise loop.call_soon(_read, loop) return _read def on_readable(self, connection, loop): reader = self.__reader if reader is None: reader = self.__reader = self._make_reader(connection) reader(loop) @property def default_connection_params(self): return {} def get_manager(self, *args, **kwargs): return self.Management(self) @cached_property def manager(self): return self.get_manager() kombu-3.0.7/kombu/transport/beanstalk.py0000644000076500000000000000710412243671543020750 0ustar asksolwheel00000000000000""" kombu.transport.beanstalk ========================= Beanstalk transport. :copyright: (c) 2010 - 2013 by David Ziegler. :license: BSD, see LICENSE for more details. """ from __future__ import absolute_import import beanstalkc import socket from anyjson import loads, dumps from kombu.five import Empty from kombu.utils.encoding import bytes_to_str from . 
import virtual DEFAULT_PORT = 11300 __author__ = 'David Ziegler ' class Channel(virtual.Channel): _client = None def _parse_job(self, job): item, dest = None, None if job: try: item = loads(bytes_to_str(job.body)) dest = job.stats()['tube'] except Exception: job.bury() else: job.delete() else: raise Empty() return item, dest def _put(self, queue, message, **kwargs): extra = {} priority = message['properties']['delivery_info']['priority'] ttr = message['properties'].get('ttr') if ttr is not None: extra['ttr'] = ttr self.client.use(queue) self.client.put(dumps(message), priority=priority, **extra) def _get(self, queue): if queue not in self.client.watching(): self.client.watch(queue) [self.client.ignore(active) for active in self.client.watching() if active != queue] job = self.client.reserve(timeout=1) item, dest = self._parse_job(job) return item def _get_many(self, queues, timeout=1): # timeout of None will cause beanstalk to timeout waiting # for a new request if timeout is None: timeout = 1 watching = self.client.watching() [self.client.watch(active) for active in queues if active not in watching] [self.client.ignore(active) for active in watching if active not in queues] job = self.client.reserve(timeout=timeout) return self._parse_job(job) def _purge(self, queue): if queue not in self.client.watching(): self.client.watch(queue) [self.client.ignore(active) for active in self.client.watching() if active != queue] count = 0 while 1: job = self.client.reserve(timeout=1) if job: job.delete() count += 1 else: break return count def _size(self, queue): return 0 def _open(self): conninfo = self.connection.client host = conninfo.hostname or 'localhost' port = conninfo.port or DEFAULT_PORT conn = beanstalkc.Connection(host=host, port=port) conn.connect() return conn def close(self): if self._client is not None: return self._client.close() super(Channel, self).close() @property def client(self): if self._client is None: self._client = self._open() return self._client class Transport(virtual.Transport): Channel = Channel polling_interval = 1 default_port = DEFAULT_PORT connection_errors = ( virtual.Transport.connection_errors + ( socket.error, beanstalkc.SocketError, IOError) ) channel_errors = ( virtual.Transport.channel_errors + ( socket.error, IOError, beanstalkc.SocketError, beanstalkc.BeanstalkcException) ) driver_type = 'beanstalk' driver_name = 'beanstalkc' def driver_version(self): return beanstalkc.__version__ kombu-3.0.7/kombu/transport/couchdb.py0000644000076500000000000000660512243671543020420 0ustar asksolwheel00000000000000""" kombu.transport.couchdb ======================= CouchDB transport. :copyright: (c) 2010 - 2013 by David Clymer. :license: BSD, see LICENSE for more details. """ from __future__ import absolute_import import socket import couchdb from anyjson import loads, dumps from kombu.five import Empty from kombu.utils import uuid4 from kombu.utils.encoding import bytes_to_str from . 
import virtual DEFAULT_PORT = 5984 DEFAULT_DATABASE = 'kombu_default' __author__ = 'David Clymer ' def create_message_view(db): from couchdb import design view = design.ViewDefinition('kombu', 'messages', """ function (doc) { if (doc.queue && doc.payload) emit(doc.queue, doc); } """) if not view.get_doc(db): view.sync(db) class Channel(virtual.Channel): _client = None view_created = False def _put(self, queue, message, **kwargs): self.client.save({'_id': uuid4().hex, 'queue': queue, 'payload': dumps(message)}) def _get(self, queue): result = self._query(queue, limit=1) if not result: raise Empty() item = result.rows[0].value self.client.delete(item) return loads(bytes_to_str(item['payload'])) def _purge(self, queue): result = self._query(queue) for item in result: self.client.delete(item.value) return len(result) def _size(self, queue): return len(self._query(queue)) def _open(self): conninfo = self.connection.client dbname = conninfo.virtual_host proto = conninfo.ssl and 'https' or 'http' if not dbname or dbname == '/': dbname = DEFAULT_DATABASE port = conninfo.port or DEFAULT_PORT server = couchdb.Server('%s://%s:%s/' % (proto, conninfo.hostname, port)) # Use username and password if available try: server.resource.credentials = (conninfo.userid, conninfo.password) except AttributeError: pass try: return server[dbname] except couchdb.http.ResourceNotFound: return server.create(dbname) def _query(self, queue, **kwargs): if not self.view_created: # if the message view is not yet set up, we'll need it now. create_message_view(self.client) self.view_created = True return self.client.view('kombu/messages', key=queue, **kwargs) @property def client(self): if self._client is None: self._client = self._open() return self._client class Transport(virtual.Transport): Channel = Channel polling_interval = 1 default_port = DEFAULT_PORT connection_errors = ( virtual.Transport.connection_errors + ( socket.error, couchdb.HTTPError, couchdb.ServerError, couchdb.Unauthorized) ) channel_errors = ( virtual.Transport.channel_errors + ( couchdb.HTTPError, couchdb.ServerError, couchdb.PreconditionFailed, couchdb.ResourceConflict, couchdb.ResourceNotFound) ) driver_type = 'couchdb' driver_name = 'couchdb' def driver_version(self): return couchdb.__version__ kombu-3.0.7/kombu/transport/django/0000755000076500000000000000000012247127370017670 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/django/__init__.py0000644000076500000000000000354312243671543022010 0ustar asksolwheel00000000000000"""Kombu transport using the Django database as a message store.""" from __future__ import absolute_import from anyjson import loads, dumps from django.conf import settings from django.core import exceptions as errors from kombu.five import Empty from kombu.transport import virtual from kombu.utils.encoding import bytes_to_str from .models import Queue VERSION = (1, 0, 0) __version__ = '.'.join(map(str, VERSION)) POLLING_INTERVAL = getattr(settings, 'KOMBU_POLLING_INTERVAL', getattr(settings, 'DJKOMBU_POLLING_INTERVAL', 5.0)) class Channel(virtual.Channel): def _new_queue(self, queue, **kwargs): Queue.objects.get_or_create(name=queue) def _put(self, queue, message, **kwargs): Queue.objects.publish(queue, dumps(message)) def basic_consume(self, queue, *args, **kwargs): qinfo = self.state.bindings[queue] exchange = qinfo[0] if self.typeof(exchange).type == 'fanout': return super(Channel, self).basic_consume(queue, *args, **kwargs) def _get(self, queue): #self.refresh_connection() m = Queue.objects.fetch(queue) if m: 
return loads(bytes_to_str(m)) raise Empty() def _size(self, queue): return Queue.objects.size(queue) def _purge(self, queue): return Queue.objects.purge(queue) def refresh_connection(self): from django import db db.close_connection() class Transport(virtual.Transport): Channel = Channel default_port = 0 polling_interval = POLLING_INTERVAL channel_errors = ( virtual.Transport.channel_errors + ( errors.ObjectDoesNotExist, errors.MultipleObjectsReturned) ) driver_type = 'sql' driver_name = 'django' def driver_version(self): import django return '.'.join(map(str, django.VERSION)) kombu-3.0.7/kombu/transport/django/management/0000755000076500000000000000000012247127370022004 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/django/management/__init__.py0000644000076500000000000000000012064115765024105 0ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/django/management/commands/0000755000076500000000000000000012247127370023605 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/django/management/commands/__init__.py0000644000076500000000000000000012064115765025706 0ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/django/management/commands/clean_kombu_messages.py0000644000076500000000000000106412237554371030332 0ustar asksolwheel00000000000000from __future__ import absolute_import from django.core.management.base import BaseCommand def pluralize(desc, value): if value > 1: return desc + 's' return desc class Command(BaseCommand): requires_model_validation = True def handle(self, *args, **options): from kombu.transport.django.models import Message count = Message.objects.filter(visible=False).count() print('Removing {0} invisible {1} from database... '.format( count, pluralize('message', count))) Message.objects.cleanup() kombu-3.0.7/kombu/transport/django/managers.py0000644000076500000000000000447612223041316022037 0ustar asksolwheel00000000000000from __future__ import absolute_import from django.db import transaction, connection, models try: from django.db import connections, router except ImportError: # pre-Django 1.2 connections = router = None # noqa class QueueManager(models.Manager): def publish(self, queue_name, payload): queue, created = self.get_or_create(name=queue_name) queue.messages.create(payload=payload) def fetch(self, queue_name): try: queue = self.get(name=queue_name) except self.model.DoesNotExist: return return queue.messages.pop() def size(self, queue_name): return self.get(name=queue_name).messages.count() def purge(self, queue_name): try: queue = self.get(name=queue_name) except self.model.DoesNotExist: return messages = queue.messages.all() count = messages.count() messages.delete() return count def select_for_update(qs): try: return qs.select_for_update() except AttributeError: return qs class MessageManager(models.Manager): _messages_received = [0] cleanup_every = 10 @transaction.commit_manually def pop(self): try: resultset = select_for_update( self.filter(visible=True).order_by('sent_at', 'id') ) result = resultset[0:1].get() result.visible = False result.save() recv = self.__class__._messages_received recv[0] += 1 if not recv[0] % self.cleanup_every: self.cleanup() transaction.commit() return result.payload except self.model.DoesNotExist: transaction.commit() except: transaction.rollback() def cleanup(self): cursor = self.connection_for_write().cursor() try: cursor.execute( 'DELETE FROM %s WHERE visible=%%s' % ( self.model._meta.db_table, ), (False, ) ) except: transaction.rollback_unless_managed() else: 
transaction.commit_unless_managed() def connection_for_write(self): if connections: return connections[router.db_for_write(self.model)] return connection kombu-3.0.7/kombu/transport/django/migrations/0000755000076500000000000000000012247127370022044 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/django/migrations/0001_initial.py0000644000076500000000000000460312237554371024516 0ustar asksolwheel00000000000000# encoding: utf-8 from __future__ import absolute_import # flake8: noqa import datetime from south.db import db from south.v2 import SchemaMigration from django.db import models class Migration(SchemaMigration): def forwards(self, orm): # Adding model 'Queue' db.create_table('djkombu_queue', ( ('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)), ('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=200)), )) db.send_create_signal('django', ['Queue']) # Adding model 'Message' db.create_table('djkombu_message', ( ('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)), ('visible', self.gf('django.db.models.fields.BooleanField')(default=True, db_index=True)), ('sent_at', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, null=True, db_index=True, blank=True)), ('payload', self.gf('django.db.models.fields.TextField')()), ('queue', self.gf('django.db.models.fields.related.ForeignKey')(related_name='messages', to=orm['django.Queue'])), )) db.send_create_signal('django', ['Message']) def backwards(self, orm): # Deleting model 'Queue' db.delete_table('djkombu_queue') # Deleting model 'Message' db.delete_table('djkombu_message') models = { 'django.message': { 'Meta': {'object_name': 'Message', 'db_table': "'djkombu_message'"}, 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'payload': ('django.db.models.fields.TextField', [], {}), 'queue': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'messages'", 'to': "orm['django.Queue']"}), 'sent_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'null': 'True', 'db_index': 'True', 'blank': 'True'}), 'visible': ('django.db.models.fields.BooleanField', [], {'default': 'True', 'db_index': 'True'}) }, 'django.queue': { 'Meta': {'object_name': 'Queue', 'db_table': "'djkombu_queue'"}, 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '200'}) } } complete_apps = ['django'] kombu-3.0.7/kombu/transport/django/migrations/__init__.py0000644000076500000000000000000012064115765024145 0ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/django/models.py0000644000076500000000000000165212223041316021516 0ustar asksolwheel00000000000000from __future__ import absolute_import from django.db import models from django.utils.translation import ugettext_lazy as _ from .managers import QueueManager, MessageManager class Queue(models.Model): name = models.CharField(_('name'), max_length=200, unique=True) objects = QueueManager() class Meta: db_table = 'djkombu_queue' verbose_name = _('queue') verbose_name_plural = _('queues') class Message(models.Model): visible = models.BooleanField(default=True, db_index=True) sent_at = models.DateTimeField(null=True, blank=True, db_index=True, auto_now_add=True) payload = models.TextField(_('payload'), null=False) queue = models.ForeignKey(Queue, related_name='messages') objects = MessageManager() class Meta: db_table = 'djkombu_message' verbose_name = _('message') 
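        # (the verbose names above and below are what e.g. the Django
        #  admin displays; the _() wrapper is ugettext_lazy, so they are
        #  translatable)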
verbose_name_plural = _('messages') kombu-3.0.7/kombu/transport/filesystem.py0000644000076500000000000001263112237554371021173 0ustar asksolwheel00000000000000""" kombu.transport.filesystem ========================== Transport using the file system as the message store. """ from __future__ import absolute_import from anyjson import loads, dumps import os import shutil import uuid import tempfile from . import virtual from kombu.exceptions import ChannelError from kombu.five import Empty, monotonic from kombu.utils import cached_property from kombu.utils.encoding import bytes_to_str, str_to_bytes VERSION = (1, 0, 0) __version__ = ".".join(map(str, VERSION)) # needs win32all to work on Windows if os.name == 'nt': import win32con import win32file import pywintypes LOCK_EX = win32con.LOCKFILE_EXCLUSIVE_LOCK # 0 is the default LOCK_SH = 0 # noqa LOCK_NB = win32con.LOCKFILE_FAIL_IMMEDIATELY # noqa __overlapped = pywintypes.OVERLAPPED() def lock(file, flags): hfile = win32file._get_osfhandle(file.fileno()) win32file.LockFileEx(hfile, flags, 0, 0xffff0000, __overlapped) def unlock(file): hfile = win32file._get_osfhandle(file.fileno()) win32file.UnlockFileEx(hfile, 0, 0xffff0000, __overlapped) elif os.name == 'posix': import fcntl from fcntl import LOCK_EX, LOCK_SH, LOCK_NB # noqa def lock(file, flags): # noqa fcntl.flock(file.fileno(), flags) def unlock(file): # noqa fcntl.flock(file.fileno(), fcntl.LOCK_UN) else: raise RuntimeError( 'Filesystem plugin only defined for NT and POSIX platforms') class Channel(virtual.Channel): def _put(self, queue, payload, **kwargs): """Put `message` onto `queue`.""" filename = '%s_%s.%s.msg' % (int(round(monotonic() * 1000)), uuid.uuid4(), queue) filename = os.path.join(self.data_folder_out, filename) try: f = open(filename, 'wb') lock(f, LOCK_EX) f.write(str_to_bytes(dumps(payload))) except (IOError, OSError): raise ChannelError( 'Cannot add file {0!r} to directory'.format(filename)) finally: unlock(f) f.close() def _get(self, queue): """Get next message from `queue`.""" queue_find = '.' + queue + '.msg' folder = os.listdir(self.data_folder_in) folder = sorted(folder) while len(folder) > 0: filename = folder.pop(0) # only handle message for the requested queue if filename.find(queue_find) < 0: continue if self.store_processed: processed_folder = self.processed_folder else: processed_folder = tempfile.gettempdir() try: # move the file to the tmp/processed folder shutil.move(os.path.join(self.data_folder_in, filename), processed_folder) except IOError: pass # file could be locked, or removed in meantime so ignore filename = os.path.join(processed_folder, filename) try: f = open(filename, 'rb') payload = f.read() f.close() if not self.store_processed: os.remove(filename) except (IOError, OSError): raise ChannelError( 'Cannot read file {0!r} from queue.'.format(filename)) return loads(bytes_to_str(payload)) raise Empty() def _purge(self, queue): """Remove all messages from `queue`.""" count = 0 queue_find = '.' + queue + '.msg' folder = os.listdir(self.data_folder_in) while len(folder) > 0: filename = folder.pop() try: # only purge messages for the requested queue if filename.find(queue_find) < 0: continue filename = os.path.join(self.data_folder_in, filename) os.remove(filename) count += 1 except OSError: # we simply ignore its existence, as it was probably # processed by another worker pass return count def _size(self, queue): """Return the number of messages in `queue` as an :class:`int`.""" count = 0 queue_find = "." 
+ queue + '.msg' folder = os.listdir(self.data_folder_in) while len(folder) > 0: filename = folder.pop() # only handle message for the requested queue if filename.find(queue_find) < 0: continue count += 1 return count @property def transport_options(self): return self.connection.client.transport_options @cached_property def data_folder_in(self): return self.transport_options.get('data_folder_in', 'data_in') @cached_property def data_folder_out(self): return self.transport_options.get('data_folder_out', 'data_out') @cached_property def store_processed(self): return self.transport_options.get('store_processed', False) @cached_property def processed_folder(self): return self.transport_options.get('processed_folder', 'processed') class Transport(virtual.Transport): Channel = Channel default_port = 0 driver_type = 'filesystem' driver_name = 'filesystem' def driver_version(self): return 'N/A' kombu-3.0.7/kombu/transport/librabbitmq.py0000644000076500000000000001136312237554371021300 0ustar asksolwheel00000000000000""" kombu.transport.librabbitmq =========================== `librabbitmq`_ transport. .. _`librabbitmq`: http://pypi.python.org/librabbitmq/ """ from __future__ import absolute_import import os import socket try: import librabbitmq as amqp from librabbitmq import ChannelError, ConnectionError except ImportError: # pragma: no cover try: import pylibrabbitmq as amqp # noqa from pylibrabbitmq import ChannelError, ConnectionError # noqa except ImportError: raise ImportError('No module named librabbitmq') from kombu.five import items, values from kombu.utils.amq_manager import get_manager from . import base DEFAULT_PORT = 5672 NO_SSL_ERROR = """\ ssl not supported by librabbitmq, please use pyamqp:// or stunnel\ """ class Message(base.Message): def __init__(self, channel, props, info, body): super(Message, self).__init__( channel, body=body, delivery_info=info, properties=props, delivery_tag=info.get('delivery_tag'), content_type=props.get('content_type'), content_encoding=props.get('content_encoding'), headers=props.get('headers')) class Channel(amqp.Channel, base.StdChannel): Message = Message def prepare_message(self, body, priority=None, content_type=None, content_encoding=None, headers=None, properties=None): """Encapsulate data into a AMQP message.""" properties = properties if properties is not None else {} properties.update({'content_type': content_type, 'content_encoding': content_encoding, 'headers': headers, 'priority': priority}) return body, properties class Connection(amqp.Connection): Channel = Channel Message = Message class Transport(base.Transport): Connection = Connection default_port = DEFAULT_PORT connection_errors = ( base.Transport.connection_errors + ( ConnectionError, socket.error, IOError, OSError) ) channel_errors = ( base.Transport.channel_errors + (ChannelError, ) ) driver_type = 'amqp' driver_name = 'librabbitmq' supports_ev = True def __init__(self, client, **kwargs): self.client = client self.default_port = kwargs.get('default_port') or self.default_port self.__reader = None def driver_version(self): return amqp.__version__ def create_channel(self, connection): return connection.channel() def drain_events(self, connection, **kwargs): return connection.drain_events(**kwargs) def establish_connection(self): """Establish connection to the AMQP broker.""" conninfo = self.client for name, default_value in items(self.default_connection_params): if not getattr(conninfo, name, None): setattr(conninfo, name, default_value) if conninfo.ssl: raise 
NotImplementedError(NO_SSL_ERROR) opts = dict({ 'host': conninfo.host, 'userid': conninfo.userid, 'password': conninfo.password, 'virtual_host': conninfo.virtual_host, 'login_method': conninfo.login_method, 'insist': conninfo.insist, 'ssl': conninfo.ssl, 'connect_timeout': conninfo.connect_timeout, }, **conninfo.transport_options or {}) conn = self.Connection(**opts) conn.client = self.client self.client.drain_events = conn.drain_events return conn def close_connection(self, connection): """Close the AMQP broker connection.""" self.client.drain_events = None connection.close() def _collect(self, connection): if connection is not None: for channel in values(connection.channels): channel.connection = None try: os.close(connection.fileno()) except OSError: pass connection.channels.clear() connection.callbacks.clear() self.client.drain_events = None self.client = None def verify_connection(self, connection): return connection.connected def register_with_event_loop(self, connection, loop): loop.add_reader( connection.fileno(), self.on_readable, connection, loop, ) def get_manager(self, *args, **kwargs): return get_manager(self.client, *args, **kwargs) @property def default_connection_params(self): return {'userid': 'guest', 'password': 'guest', 'port': self.default_port, 'hostname': 'localhost', 'login_method': 'AMQPLAIN'} kombu-3.0.7/kombu/transport/memory.py0000644000076500000000000000337012237554371020317 0ustar asksolwheel00000000000000""" kombu.transport.memory ====================== In-memory transport. """ from __future__ import absolute_import from kombu.five import Queue, values from . import virtual class Channel(virtual.Channel): queues = {} do_restore = False supports_fanout = True def _has_queue(self, queue, **kwargs): return queue in self.queues def _new_queue(self, queue, **kwargs): if queue not in self.queues: self.queues[queue] = Queue() def _get(self, queue, timeout=None): return self._queue_for(queue).get(block=False) def _queue_for(self, queue): if queue not in self.queues: self.queues[queue] = Queue() return self.queues[queue] def _queue_bind(self, *args): pass def _put_fanout(self, exchange, message, routing_key=None, **kwargs): for queue in self._lookup(exchange, routing_key): self._queue_for(queue).put(message) def _put(self, queue, message, **kwargs): self._queue_for(queue).put(message) def _size(self, queue): return self._queue_for(queue).qsize() def _delete(self, queue, *args): self.queues.pop(queue, None) def _purge(self, queue): q = self._queue_for(queue) size = q.qsize() q.queue.clear() return size def close(self): super(Channel, self).close() for queue in values(self.queues): queue.empty() self.queues = {} def after_reply_message_received(self, queue): pass class Transport(virtual.Transport): Channel = Channel #: memory backend state is global. state = virtual.BrokerState() driver_type = 'memory' driver_name = 'memory' def driver_version(self): return 'N/A' kombu-3.0.7/kombu/transport/mongodb.py0000644000076500000000000001631112243671543020431 0ustar asksolwheel00000000000000""" kombu.transport.mongodb ======================= MongoDB transport. :copyright: (c) 2010 - 2013 by Flavio Percoco Premoli. :license: BSD, see LICENSE for more details. """ from __future__ import absolute_import import pymongo from pymongo import errors from anyjson import loads, dumps from pymongo import MongoClient from kombu.five import Empty from kombu.syn import _detect_environment from kombu.utils.encoding import bytes_to_str from . 
import virtual DEFAULT_HOST = '127.0.0.1' DEFAULT_PORT = 27017 __author__ = """\ Flavio [FlaPer87] Percoco Premoli ;\ Scott Lyons ;\ """ class Channel(virtual.Channel): _client = None supports_fanout = True _fanout_queues = {} def __init__(self, *vargs, **kwargs): super_ = super(Channel, self) super_.__init__(*vargs, **kwargs) self._queue_cursors = {} self._queue_readcounts = {} def _new_queue(self, queue, **kwargs): pass def _get(self, queue): try: if queue in self._fanout_queues: msg = next(self._queue_cursors[queue]) self._queue_readcounts[queue] += 1 return loads(bytes_to_str(msg['payload'])) else: msg = self.client.command( 'findandmodify', 'messages', query={'queue': queue}, sort={'_id': pymongo.ASCENDING}, remove=True, ) except errors.OperationFailure as exc: if 'No matching object found' in exc.args[0]: raise Empty() raise except StopIteration: raise Empty() # as of mongo 2.0 empty results won't raise an error if msg['value'] is None: raise Empty() return loads(bytes_to_str(msg['value']['payload'])) def _size(self, queue): if queue in self._fanout_queues: return (self._queue_cursors[queue].count() - self._queue_readcounts[queue]) return self.client.messages.find({'queue': queue}).count() def _put(self, queue, message, **kwargs): self.client.messages.insert({'payload': dumps(message), 'queue': queue}) def _purge(self, queue): size = self._size(queue) if queue in self._fanout_queues: cursor = self._queue_cursors[queue] cursor.rewind() self._queue_cursors[queue] = cursor.skip(cursor.count()) else: self.client.messages.remove({'queue': queue}) return size def _open(self, scheme='mongodb://'): # See mongodb uri documentation: # http://www.mongodb.org/display/DOCS/Connections client = self.connection.client options = client.transport_options hostname = client.hostname or DEFAULT_HOST dbname = client.virtual_host if dbname in ['/', None]: dbname = "kombu_default" if not hostname.startswith(scheme): hostname = scheme + hostname if not hostname[len(scheme):]: hostname += 'localhost' # XXX What does this do? 
[ask] urest = hostname[len(scheme):] if '/' in urest: if not client.userid: urest = urest.replace('/' + client.virtual_host, '/') hostname = ''.join([scheme, urest]) # At this point we expect the hostname to be something like # (considering replica set form too): # # mongodb://[username:password@]host1[:port1][,host2[:port2], # ...[,hostN[:portN]]][/[?options]] options.setdefault('auto_start_request', True) mongoconn = MongoClient( host=hostname, ssl=client.ssl, auto_start_request=options['auto_start_request'], use_greenlets=_detect_environment() != 'default', ) database = getattr(mongoconn, dbname) version = mongoconn.server_info()['version'] if tuple(map(int, version.split('.')[:2])) < (1, 3): raise NotImplementedError( 'Kombu requires MongoDB version 1.3+ (server is {0})'.format( version)) self.db = database col = database.messages col.ensure_index([('queue', 1), ('_id', 1)], background=True) if 'messages.broadcast' not in database.collection_names(): capsize = options.get('capped_queue_size') or 100000 database.create_collection('messages.broadcast', size=capsize, capped=True) self.bcast = getattr(database, 'messages.broadcast') self.bcast.ensure_index([('queue', 1)]) self.routing = getattr(database, 'messages.routing') self.routing.ensure_index([('queue', 1), ('exchange', 1)]) return database #TODO: Store a more complete exchange metatable in the routing collection def get_table(self, exchange): """Get table of bindings for ``exchange``.""" localRoutes = frozenset(self.state.exchanges[exchange]['table']) brokerRoutes = self.client.messages.routing.find( {'exchange': exchange} ) return localRoutes | frozenset((r['routing_key'], r['pattern'], r['queue']) for r in brokerRoutes) def _put_fanout(self, exchange, message, **kwargs): """Deliver fanout message.""" self.client.messages.broadcast.insert({'payload': dumps(message), 'queue': exchange}) def _queue_bind(self, exchange, routing_key, pattern, queue): if self.typeof(exchange).type == 'fanout': cursor = self.bcast.find(query={'queue': exchange}, sort=[('$natural', 1)], tailable=True) # Fast forward the cursor past old events self._queue_cursors[queue] = cursor.skip(cursor.count()) self._queue_readcounts[queue] = cursor.count() self._fanout_queues[queue] = exchange meta = {'exchange': exchange, 'queue': queue, 'routing_key': routing_key, 'pattern': pattern} self.client.messages.routing.update(meta, meta, upsert=True) def queue_delete(self, queue, **kwargs): self.routing.remove({'queue': queue}) super(Channel, self).queue_delete(queue, **kwargs) if queue in self._fanout_queues: self._queue_cursors[queue].close() self._queue_cursors.pop(queue, None) self._fanout_queues.pop(queue, None) @property def client(self): if self._client is None: self._client = self._open() return self._client class Transport(virtual.Transport): Channel = Channel can_parse_url = True polling_interval = 1 default_port = DEFAULT_PORT connection_errors = ( virtual.Transport.connection_errors + (errors.ConnectionFailure, ) ) channel_errors = ( virtual.Transport.channel_errors + ( errors.ConnectionFailure, errors.OperationFailure) ) driver_type = 'mongodb' driver_name = 'pymongo' def driver_version(self): return pymongo.version kombu-3.0.7/kombu/transport/pyamqp.py0000644000076500000000000001037112243671543020313 0ustar asksolwheel00000000000000""" kombu.transport.pyamqp ====================== pure python amqp transport. """ from __future__ import absolute_import import amqp from kombu.five import items from kombu.utils.amq_manager import get_manager from . 
import base DEFAULT_PORT = 5672 class Message(base.Message): def __init__(self, channel, msg, **kwargs): props = msg.properties super(Message, self).__init__( channel, body=msg.body, delivery_tag=msg.delivery_tag, content_type=props.get('content_type'), content_encoding=props.get('content_encoding'), delivery_info=msg.delivery_info, properties=msg.properties, headers=props.get('application_headers') or {}, **kwargs) class Channel(amqp.Channel, base.StdChannel): Message = Message def prepare_message(self, body, priority=None, content_type=None, content_encoding=None, headers=None, properties=None, _Message=amqp.Message): """Prepares message so that it can be sent using this transport.""" return _Message( body, priority=priority, content_type=content_type, content_encoding=content_encoding, application_headers=headers, **properties or {} ) def message_to_python(self, raw_message): """Convert encoded message body back to a Python value.""" return self.Message(self, raw_message) class Connection(amqp.Connection): Channel = Channel class Transport(base.Transport): Connection = Connection default_port = DEFAULT_PORT # it's very annoying that pyamqp sometimes raises AttributeError # if the connection is lost, but nothing we can do about that here. connection_errors = amqp.Connection.connection_errors channel_errors = amqp.Connection.channel_errors recoverable_connection_errors = \ amqp.Connection.recoverable_connection_errors recoverable_channel_errors = amqp.Connection.recoverable_channel_errors driver_name = 'py-amqp' driver_type = 'amqp' supports_heartbeats = True supports_ev = True def __init__(self, client, default_port=None, **kwargs): self.client = client self.default_port = default_port or self.default_port def driver_version(self): return amqp.__version__ def create_channel(self, connection): return connection.channel() def drain_events(self, connection, **kwargs): return connection.drain_events(**kwargs) def establish_connection(self): """Establish connection to the AMQP broker.""" conninfo = self.client for name, default_value in items(self.default_connection_params): if not getattr(conninfo, name, None): setattr(conninfo, name, default_value) if conninfo.hostname == 'localhost': conninfo.hostname = '127.0.0.1' opts = dict({ 'host': conninfo.host, 'userid': conninfo.userid, 'password': conninfo.password, 'login_method': conninfo.login_method, 'virtual_host': conninfo.virtual_host, 'insist': conninfo.insist, 'ssl': conninfo.ssl, 'connect_timeout': conninfo.connect_timeout, 'heartbeat': conninfo.heartbeat, }, **conninfo.transport_options or {}) conn = self.Connection(**opts) conn.client = self.client return conn def verify_connection(self, connection): return connection.connected def close_connection(self, connection): """Close the AMQP broker connection.""" connection.client = None connection.close() def register_with_event_loop(self, connection, loop): loop.add_reader(connection.sock, self.on_readable, connection, loop) def heartbeat_check(self, connection, rate=2): return connection.heartbeat_tick(rate=rate) @property def default_connection_params(self): return {'userid': 'guest', 'password': 'guest', 'port': self.default_port, 'hostname': 'localhost', 'login_method': 'AMQPLAIN'} def get_manager(self, *args, **kwargs): return get_manager(self.client, *args, **kwargs) kombu-3.0.7/kombu/transport/pyro.py0000644000076500000000000000464412237554371020005 0ustar asksolwheel00000000000000""" kombu.transport.pyro ====================== Pyro transport. 
Requires the :mod:`Pyro4` library to be installed. """ from __future__ import absolute_import import sys from kombu.five import reraise from kombu.utils import cached_property from . import virtual try: import Pyro4 as pyro from Pyro4.errors import NamingError except ImportError: # pragma: no cover pyro = NamingError = None # noqa DEFAULT_PORT = 9090 E_LOOKUP = """\ Unable to locate pyro nameserver {0.virtual_host} on host {0.hostname}\ """ class Channel(virtual.Channel): def queues(self): return self.shared_queues.get_queue_names() def _new_queue(self, queue, **kwargs): if queue not in self.queues(): self.shared_queues.new_queue(queue) def _get(self, queue, timeout=None): queue = self._queue_for(queue) msg = self.shared_queues._get(queue) return msg def _queue_for(self, queue): if queue not in self.queues(): self.shared_queues.new_queue(queue) return queue def _put(self, queue, message, **kwargs): queue = self._queue_for(queue) self.shared_queues._put(queue, message) def _size(self, queue): return self.shared_queues._size(queue) def _delete(self, queue, *args): self.shared_queues._delete(queue) def _purge(self, queue): return self.shared_queues._purge(queue) def after_reply_message_received(self, queue): pass @cached_property def shared_queues(self): return self.connection.shared_queues class Transport(virtual.Transport): Channel = Channel #: memory backend state is global. state = virtual.BrokerState() default_port = DEFAULT_PORT driver_type = driver_name = 'pyro' def _open(self): conninfo = self.client pyro.config.HMAC_KEY = conninfo.virtual_host try: nameserver = pyro.locateNS(host=conninfo.hostname, port=self.default_port) # name of registered pyro object uri = nameserver.lookup(conninfo.virtual_host) return pyro.Proxy(uri) except NamingError: reraise(NamingError, NamingError(E_LOOKUP.format(conninfo)), sys.exc_info()[2]) def driver_version(self): return pyro.__version__ @cached_property def shared_queues(self): return self._open() kombu-3.0.7/kombu/transport/redis.py0000644000076500000000000007105712243752202020112 0ustar asksolwheel00000000000000""" kombu.transport.redis ===================== Redis transport. """ from __future__ import absolute_import import socket from bisect import bisect from contextlib import contextmanager from time import time from anyjson import loads, dumps from kombu.exceptions import InconsistencyError, VersionMismatch from kombu.five import Empty, values, string_t from kombu.log import get_logger from kombu.utils import cached_property, uuid from kombu.utils.eventio import poll, READ, ERR from kombu.utils.encoding import bytes_to_str from kombu.utils.url import _parse_url NO_ROUTE_ERROR = """ Cannot route message for exchange {0!r}: Table empty or key no longer exists. Probably the key ({1!r}) has been removed from the Redis database. """ try: from billiard.util import register_after_fork except ImportError: # pragma: no cover try: from multiprocessing.util import register_after_fork # noqa except ImportError: def register_after_fork(*args, **kwargs): # noqa pass try: import redis except ImportError: # pragma: no cover redis = None # noqa from . import virtual logger = get_logger('kombu.transport.redis') crit, warn = logger.critical, logger.warn DEFAULT_PORT = 6379 DEFAULT_DB = 0 PRIORITY_STEPS = [0, 3, 6, 9] # This implementation may seem overly complex, but I assure you there is # a good reason for doing it this way. 
# # Consuming from several connections enables us to emulate channels, # which means we can have different service guarantees for individual # channels. # # So we need to consume messages from multiple connections simultaneously, # and using epoll means we don't have to do so using multiple threads. # # Also it means we can easily use PUBLISH/SUBSCRIBE to do fanout # exchanges (broadcast), as an alternative to pushing messages to fanout-bound # queues manually. class MutexHeld(Exception): pass @contextmanager def Mutex(client, name, expire): lock_id = uuid() i_won = client.setnx(name, lock_id) try: if i_won: client.expire(name, expire) yield else: if not client.ttl(name): client.expire(name, expire) raise MutexHeld() finally: if i_won: pipe = client.pipeline(True) try: pipe.watch(name) if pipe.get(name) == lock_id: pipe.multi() pipe.delete(name) pipe.execute() pipe.unwatch() except redis.WatchError: pass class QoS(virtual.QoS): restore_at_shutdown = True def __init__(self, *args, **kwargs): super(QoS, self).__init__(*args, **kwargs) self._vrestore_count = 0 def append(self, message, delivery_tag): delivery = message.delivery_info EX, RK = delivery['exchange'], delivery['routing_key'] with self.pipe_or_acquire() as pipe: pipe.zadd(self.unacked_index_key, delivery_tag, time()) \ .hset(self.unacked_key, delivery_tag, dumps([message._raw, EX, RK])) \ .execute() super(QoS, self).append(message, delivery_tag) def restore_unacked(self): for tag in self._delivered: self.restore_by_tag(tag) self._delivered.clear() def ack(self, delivery_tag): self._remove_from_indices(delivery_tag).execute() super(QoS, self).ack(delivery_tag) def reject(self, delivery_tag, requeue=False): if requeue: self.restore_by_tag(delivery_tag, leftmost=True) self.ack(delivery_tag) @contextmanager def pipe_or_acquire(self, pipe=None): if pipe: yield pipe else: with self.channel.conn_or_acquire() as client: yield client.pipeline() def _remove_from_indices(self, delivery_tag, pipe=None): with self.pipe_or_acquire(pipe) as pipe: return pipe.zrem(self.unacked_index_key, delivery_tag) \ .hdel(self.unacked_key, delivery_tag) def restore_visible(self, start=0, num=10, interval=10): self._vrestore_count += 1 if (self._vrestore_count - 1) % interval: return with self.channel.conn_or_acquire() as client: ceil = time() - self.visibility_timeout try: with Mutex(client, self.unacked_mutex_key, self.unacked_mutex_expire): visible = client.zrevrangebyscore( self.unacked_index_key, ceil, 0, start=num and start, num=num, withscores=True) for tag, score in visible or []: self.restore_by_tag(tag, client) except MutexHeld: pass def restore_by_tag(self, tag, client=None, leftmost=False): with self.channel.conn_or_acquire(client) as client: p, _, _ = self._remove_from_indices( tag, client.pipeline().hget(self.unacked_key, tag)).execute() if p: M, EX, RK = loads(bytes_to_str(p)) # json is unicode self.channel._do_restore_message(M, EX, RK, client, leftmost) @cached_property def unacked_key(self): return self.channel.unacked_key @cached_property def unacked_index_key(self): return self.channel.unacked_index_key @cached_property def unacked_mutex_key(self): return self.channel.unacked_mutex_key @cached_property def unacked_mutex_expire(self): return self.channel.unacked_mutex_expire @cached_property def visibility_timeout(self): return self.channel.visibility_timeout class MultiChannelPoller(object): eventflags = READ | ERR def __init__(self): # active channels self._channels = set() # file descriptor -> channel map. 
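        # (for illustration only: with two registered sockets this map
        #  might look like {6: (<Channel>, 'BRPOP'), 7: (<Channel>, 'LISTEN')},
        #  where the fd numbers are made up; each entry points back at the
        #  owning channel and the command mode the socket was registered
        #  for, which is how handle_event() dispatches on a bare fd)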
self._fd_to_chan = {} # channel -> socket map self._chan_to_sock = {} # poll implementation (epoll/kqueue/select) self.poller = poll() def close(self): for fd in values(self._chan_to_sock): try: self.poller.unregister(fd) except (KeyError, ValueError): pass self._channels.clear() self._fd_to_chan.clear() self._chan_to_sock.clear() self.poller = None def add(self, channel): self._channels.add(channel) def discard(self, channel): self._channels.discard(channel) def _register(self, channel, client, type): if (channel, client, type) in self._chan_to_sock: self._unregister(channel, client, type) if client.connection._sock is None: # not connected yet. client.connection.connect() sock = client.connection._sock self._fd_to_chan[sock.fileno()] = (channel, type) self._chan_to_sock[(channel, client, type)] = sock self.poller.register(sock, self.eventflags) def _unregister(self, channel, client, type): self.poller.unregister(self._chan_to_sock[(channel, client, type)]) def _register_BRPOP(self, channel): """enable BRPOP mode for channel.""" ident = channel, channel.client, 'BRPOP' if channel.client.connection._sock is None or \ ident not in self._chan_to_sock: channel._in_poll = False self._register(*ident) if not channel._in_poll: # send BRPOP channel._brpop_start() def _register_LISTEN(self, channel): """enable LISTEN mode for channel.""" if channel.subclient.connection._sock is None: channel._in_listen = False self._register(channel, channel.subclient, 'LISTEN') if not channel._in_listen: channel._subscribe() # send SUBSCRIBE def on_poll_start(self): for channel in self._channels: if channel.active_queues: # BRPOP mode? if channel.qos.can_consume(): self._register_BRPOP(channel) if channel.active_fanout_queues: # LISTEN mode? self._register_LISTEN(channel) def on_poll_init(self, poller): self.poller = poller for channel in self._channels: return channel.qos.restore_visible( num=channel.unacked_restore_limit, ) def maybe_restore_messages(self): for channel in self._channels: if channel.active_queues: # only need to do this once, as they are not local to channel. return channel.qos.restore_visible( num=channel.unacked_restore_limit, ) def on_readable(self, fileno): chan, type = self._fd_to_chan[fileno] if chan.qos.can_consume(): return chan.handlers[type]() def handle_event(self, fileno, event): if event & READ: return self.on_readable(fileno), self elif event & ERR: chan, type = self._fd_to_chan[fileno] chan._poll_error(type) def get(self, timeout=None): for channel in self._channels: if channel.active_queues: # BRPOP mode? if channel.qos.can_consume(): self._register_BRPOP(channel) if channel.active_fanout_queues: # LISTEN mode? self._register_LISTEN(channel) events = self.poller.poll(timeout) for fileno, event in events or []: ret = self.handle_event(fileno, event) if ret: return ret # - no new data, so try to restore messages. # - reset active redis commands. self.maybe_restore_messages() raise Empty() @property def fds(self): return self._fd_to_chan class Channel(virtual.Channel): QoS = QoS _client = None _subclient = None supports_fanout = True keyprefix_queue = '_kombu.binding.%s' keyprefix_fanout = '/{db}.' 
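    # For illustration: with the default prefix above and database 0, a
    # fanout exchange named 'events' (an example name) is published to the
    # Redis pub/sub channel '/0.events'.  Passing the ``fanout_prefix``
    # transport option as True keeps this prefix enabled, scoping
    # broadcasts to the current virtual host:
    #
    #   Connection('redis://', transport_options={'fanout_prefix': True})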
sep = '\x06\x16' _in_poll = False _in_listen = False _fanout_queues = {} ack_emulation = True unacked_key = 'unacked' unacked_index_key = 'unacked_index' unacked_mutex_key = 'unacked_mutex' unacked_mutex_expire = 300 # 5 minutes unacked_restore_limit = None visibility_timeout = 3600 # 1 hour priority_steps = PRIORITY_STEPS socket_timeout = None max_connections = 10 #: Transport option to enable disable fanout keyprefix. #: Should be enabled by default, but that is not #: backwards compatible. Can also be string, in which #: case it changes the default prefix ('/{db}.') into to something #: else. The prefix must include a leading slash and a trailing dot. fanout_prefix = False _pool = None from_transport_options = ( virtual.Channel.from_transport_options + ('ack_emulation', 'unacked_key', 'unacked_index_key', 'unacked_mutex_key', 'unacked_mutex_expire', 'visibility_timeout', 'unacked_restore_limit', 'fanout_prefix', 'socket_timeout', 'max_connections', 'priority_steps') # <-- do not add comma here! ) def __init__(self, *args, **kwargs): super_ = super(Channel, self) super_.__init__(*args, **kwargs) if not self.ack_emulation: # disable visibility timeout self.QoS = virtual.QoS self._queue_cycle = [] self.Client = self._get_client() self.ResponseError = self._get_response_error() self.active_fanout_queues = set() self.auto_delete_queues = set() self._fanout_to_queue = {} self.handlers = {'BRPOP': self._brpop_read, 'LISTEN': self._receive} if self.fanout_prefix: if isinstance(self.fanout_prefix, string_t): self.keyprefix_fanout = self.fanout_prefix else: # previous versions did not set a fanout, so cannot enable # by default. self.keyprefix_fanout = '' # Evaluate connection. try: self.client.info() except Exception: if self._pool: self._pool.disconnect() raise self.connection.cycle.add(self) # add to channel poller. # copy errors, in case channel closed but threads still # are still waiting for data. 
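        # (i.e. a consumer blocked in _receive() or _brpop_read() can
        #  still match self.connection_errors even after the owning
        #  connection object has gone away)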
self.connection_errors = self.connection.connection_errors register_after_fork(self, self._after_fork) def _after_fork(self): if self._pool is not None: self._pool.disconnect() def _do_restore_message(self, payload, exchange, routing_key, client=None, leftmost=False): with self.conn_or_acquire(client) as client: try: try: payload['headers']['redelivered'] = True except KeyError: pass for queue in self._lookup(exchange, routing_key): (client.lpush if leftmost else client.rpush)( queue, dumps(payload), ) except Exception: crit('Could not restore message: %r', payload, exc_info=True) def _restore(self, message, leftmost=False): tag = message.delivery_tag with self.conn_or_acquire() as client: P, _ = client.pipeline() \ .hget(self.unacked_key, tag) \ .hdel(self.unacked_key, tag) \ .execute() if P: M, EX, RK = loads(bytes_to_str(P)) # json is unicode self._do_restore_message(M, EX, RK, client, leftmost) def _restore_at_beginning(self, message): return self._restore(message, leftmost=True) def _next_delivery_tag(self): return uuid() def basic_consume(self, queue, *args, **kwargs): if queue in self._fanout_queues: exchange = self._fanout_queues[queue] self.active_fanout_queues.add(queue) self._fanout_to_queue[exchange] = queue ret = super(Channel, self).basic_consume(queue, *args, **kwargs) self._update_cycle() return ret def basic_cancel(self, consumer_tag): try: queue = self._tag_to_queue[consumer_tag] except KeyError: return try: self.active_fanout_queues.discard(queue) self._fanout_to_queue.pop(self._fanout_queues[queue]) except KeyError: pass ret = super(Channel, self).basic_cancel(consumer_tag) self._update_cycle() return ret def _subscribe(self): prefix = self.keyprefix_fanout keys = [''.join([prefix, self._fanout_queues[queue]]) for queue in self.active_fanout_queues] if not keys: return c = self.subclient if c.connection._sock is None: c.connection.connect() self._in_listen = True self.subclient.subscribe(keys) def _handle_message(self, client, r): if r[0] == 'unsubscribe' and r[2] == 0: client.subscribed = False elif r[0] == 'pmessage': return {'type': r[0], 'pattern': r[1], 'channel': r[2], 'data': r[3]} else: return {'type': r[0], 'pattern': None, 'channel': r[1], 'data': r[2]} def _receive(self): c = self.subclient response = None try: response = c.parse_response() except self.connection_errors: self._in_listen = False raise Empty() if response is not None: payload = self._handle_message(c, response) if bytes_to_str(payload['type']) == 'message': channel = bytes_to_str(payload['channel']) if payload['data']: if channel[0] == '/': _, _, channel = channel.partition('.') try: message = loads(bytes_to_str(payload['data'])) except (TypeError, ValueError): warn('Cannot process event on channel %r: %r', channel, payload, exc_info=1) return message, self._fanout_to_queue[channel] raise Empty() def _brpop_start(self, timeout=1): queues = self._consume_cycle() if not queues: return keys = [self._q_for_pri(queue, pri) for pri in PRIORITY_STEPS for queue in queues] + [timeout or 0] self._in_poll = True self.client.connection.send_command('BRPOP', *keys) def _brpop_read(self, **options): try: try: dest__item = self.client.parse_response(self.client.connection, 'BRPOP', **options) except self.connection_errors: # if there's a ConnectionError, disconnect so the next # iteration will reconnect automatically. 
self.client.connection.disconnect() raise Empty() if dest__item: dest, item = dest__item dest = bytes_to_str(dest).rsplit(self.sep, 1)[0] self._rotate_cycle(dest) return loads(bytes_to_str(item)), dest else: raise Empty() finally: self._in_poll = False def _poll_error(self, type, **options): try: self.client.parse_response(type) except self.connection_errors: pass def _get(self, queue): with self.conn_or_acquire() as client: for pri in PRIORITY_STEPS: item = client.rpop(self._q_for_pri(queue, pri)) if item: return loads(bytes_to_str(item)) raise Empty() def _size(self, queue): with self.conn_or_acquire() as client: cmds = client.pipeline() for pri in PRIORITY_STEPS: cmds = cmds.llen(self._q_for_pri(queue, pri)) sizes = cmds.execute() return sum(size for size in sizes if isinstance(size, int)) def _q_for_pri(self, queue, pri): pri = self.priority(pri) return '%s%s%s' % ((queue, self.sep, pri) if pri else (queue, '', '')) def priority(self, n): steps = self.priority_steps return steps[bisect(steps, n) - 1] def _put(self, queue, message, **kwargs): """Deliver message.""" try: pri = max(min(int( message['properties']['delivery_info']['priority']), 9), 0) except (TypeError, ValueError, KeyError): pri = 0 with self.conn_or_acquire() as client: client.lpush(self._q_for_pri(queue, pri), dumps(message)) def _put_fanout(self, exchange, message, **kwargs): """Deliver fanout message.""" with self.conn_or_acquire() as client: client.publish( ''.join([self.keyprefix_fanout, exchange]), dumps(message), ) def _new_queue(self, queue, auto_delete=False, **kwargs): if auto_delete: self.auto_delete_queues.add(queue) def _queue_bind(self, exchange, routing_key, pattern, queue): if self.typeof(exchange).type == 'fanout': # Mark exchange as fanout. self._fanout_queues[queue] = exchange with self.conn_or_acquire() as client: client.sadd(self.keyprefix_queue % (exchange, ), self.sep.join([routing_key or '', pattern or '', queue or ''])) def _delete(self, queue, exchange, routing_key, pattern, *args): self.auto_delete_queues.discard(queue) with self.conn_or_acquire() as client: client.srem(self.keyprefix_queue % (exchange, ), self.sep.join([routing_key or '', pattern or '', queue or ''])) cmds = client.pipeline() for pri in PRIORITY_STEPS: cmds = cmds.delete(self._q_for_pri(queue, pri)) cmds.execute() def _has_queue(self, queue, **kwargs): with self.conn_or_acquire() as client: cmds = client.pipeline() for pri in PRIORITY_STEPS: cmds = cmds.exists(self._q_for_pri(queue, pri)) return any(cmds.execute()) def get_table(self, exchange): key = self.keyprefix_queue % exchange with self.conn_or_acquire() as client: values = client.smembers(key) if not values: raise InconsistencyError(NO_ROUTE_ERROR.format(exchange, key)) return [tuple(bytes_to_str(val).split(self.sep)) for val in values] def _purge(self, queue): with self.conn_or_acquire() as client: cmds = client.pipeline() for pri in PRIORITY_STEPS: priq = self._q_for_pri(queue, pri) cmds = cmds.llen(priq).delete(priq) sizes = cmds.execute() return sum(sizes[::2]) def close(self): if self._pool: self._pool.disconnect() if not self.closed: # remove from channel poller. 
self.connection.cycle.discard(self) # delete fanout bindings for queue in self._fanout_queues: if queue in self.auto_delete_queues: self.queue_delete(queue) # Close connections for attr in 'client', 'subclient': try: self.__dict__[attr].connection.disconnect() except (KeyError, AttributeError, self.ResponseError): pass super(Channel, self).close() def _prepare_virtual_host(self, vhost): if not isinstance(vhost, int): if not vhost or vhost == '/': vhost = DEFAULT_DB elif vhost.startswith('/'): vhost = vhost[1:] try: vhost = int(vhost) except ValueError: raise ValueError( 'Database is int between 0 and limit - 1, not {0}'.format( vhost, )) return vhost def _connparams(self): conninfo = self.connection.client connparams = {'host': conninfo.hostname or '127.0.0.1', 'port': conninfo.port or DEFAULT_PORT, 'virtual_host': conninfo.virtual_host, 'password': conninfo.password, 'max_connections': self.max_connections, 'socket_timeout': self.socket_timeout} host = connparams['host'] if '://' in host: scheme, _, _, _, _, path, query = _parse_url(host) if scheme == 'socket': connparams.update({ 'connection_class': redis.UnixDomainSocketConnection, 'path': '/' + path}, **query) connparams.pop('host', None) connparams.pop('port', None) connparams['db'] = self._prepare_virtual_host( connparams.pop('virtual_host', None)) return connparams def _create_client(self): return self.Client(connection_pool=self.pool) def _get_pool(self): params = self._connparams() self.keyprefix_fanout = self.keyprefix_fanout.format(db=params['db']) return redis.ConnectionPool(**params) def _get_client(self): if redis.VERSION < (2, 4, 4): raise VersionMismatch( 'Redis transport requires redis-py versions 2.4.4 or later. ' 'You have {0.__version__}'.format(redis)) # KombuRedis maintains a connection attribute on its instance and # uses that when executing commands # This was added after redis-py was changed. class KombuRedis(redis.Redis): # pragma: no cover def __init__(self, *args, **kwargs): super(KombuRedis, self).__init__(*args, **kwargs) self.connection = self.connection_pool.get_connection('_') return KombuRedis @contextmanager def conn_or_acquire(self, client=None): if client: yield client else: if self._in_poll: client = self._create_client() try: yield client finally: self.pool.release(client.connection) else: yield self.client @property def pool(self): if self._pool is None: self._pool = self._get_pool() return self._pool @cached_property def client(self): """Client used to publish messages, BRPOP etc.""" return self._create_client() @cached_property def subclient(self): """Pub/Sub connection used to consume fanout queues.""" client = self._create_client() pubsub = client.pubsub() pool = pubsub.connection_pool pubsub.connection = pool.get_connection('pubsub', pubsub.shard_hint) return pubsub def _update_cycle(self): """Update fair cycle between queues. We cycle between queues fairly to make sure that each queue is equally likely to be consumed from, so that a very busy queue will not block others. This works by using Redis's `BRPOP` command and by rotating the most recently used queue to the end of the list. See Kombu github issue #166 for more discussion of this method. 
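        For illustration (queue names made up): if the cycle is
        ['q1', 'q2', 'q3'] and a message is received from 'q2', the
        rotation leaves ['q1', 'q3', 'q2'], so the queue that was just
        served moves to the back of the line.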
""" self._queue_cycle = list(self.active_queues) def _consume_cycle(self): """Get a fresh list of queues from the queue cycle.""" active = len(self.active_queues) return self._queue_cycle[0:active] def _rotate_cycle(self, used): """Move most recently used queue to end of list.""" cycle = self._queue_cycle try: cycle.append(cycle.pop(cycle.index(used))) except ValueError: pass def _get_response_error(self): from redis import exceptions return exceptions.ResponseError @property def active_queues(self): """Set of queues being consumed from (excluding fanout queues).""" return set(queue for queue in self._active_queues if queue not in self.active_fanout_queues) class Transport(virtual.Transport): Channel = Channel polling_interval = None # disable sleep between unsuccessful polls. default_port = DEFAULT_PORT supports_ev = True driver_type = 'redis' driver_name = 'redis' def __init__(self, *args, **kwargs): super(Transport, self).__init__(*args, **kwargs) # Get redis-py exceptions. self.connection_errors, self.channel_errors = self._get_errors() # All channels share the same poller. self.cycle = MultiChannelPoller() def driver_version(self): return redis.__version__ def register_with_event_loop(self, connection, loop): cycle = self.cycle cycle.on_poll_init(loop.poller) cycle_poll_start = cycle.on_poll_start add_reader = loop.add_reader on_readable = self.on_readable def on_poll_start(): cycle_poll_start() [add_reader(fd, on_readable, fd) for fd in cycle.fds] loop.on_tick.add(on_poll_start) loop.call_repeatedly(10, cycle.maybe_restore_messages) def on_readable(self, fileno): """Handle AIO event for one of our file descriptors.""" item = self.cycle.on_readable(fileno) if item: message, queue = item if not queue or queue not in self._callbacks: raise KeyError( 'Message for queue {0!r} without consumers: {1}'.format( queue, message)) self._callbacks[queue](message) def _get_errors(self): """Utility to import redis-py's exceptions at runtime.""" from redis import exceptions # This exception suddenly changed name between redis-py versions if hasattr(exceptions, 'InvalidData'): DataError = exceptions.InvalidData else: DataError = exceptions.DataError return ( (virtual.Transport.connection_errors + ( InconsistencyError, socket.error, exceptions.ConnectionError, exceptions.AuthenticationError)), (virtual.Transport.channel_errors + ( DataError, exceptions.InvalidResponse, exceptions.ResponseError)), ) kombu-3.0.7/kombu/transport/SLMQ.py0000644000076500000000000001365112243671543017564 0ustar asksolwheel00000000000000""" kombu.transport.SLMQ ==================== SoftLayer Message Queue transport. """ from __future__ import absolute_import import socket import string from anyjson import loads, dumps import os from kombu.five import Empty, text_t from kombu.utils import cached_property # , uuid from kombu.utils.encoding import bytes_to_str, safe_str from . import virtual try: from softlayer_messaging import get_client from softlayer_messaging.errors import ResponseError except ImportError: # pragma: no cover get_client = ResponseError = None # noqa # dots are replaced by dash, all other punctuation replaced by underscore. CHARS_REPLACE_TABLE = dict( (ord(c), 0x5f) for c in string.punctuation if c not in '_') class Channel(virtual.Channel): default_visibility_timeout = 1800 # 30 minutes. 
domain_format = 'kombu%(vhost)s' _slmq = None _queue_cache = {} _noack_queues = set() def __init__(self, *args, **kwargs): if get_client is None: raise ImportError( 'SLMQ transport requires the softlayer_messaging library', ) super(Channel, self).__init__(*args, **kwargs) queues = self.slmq.queues() for queue in queues: self._queue_cache[queue] = queue def basic_consume(self, queue, no_ack, *args, **kwargs): if no_ack: self._noack_queues.add(queue) return super(Channel, self).basic_consume(queue, no_ack, *args, **kwargs) def basic_cancel(self, consumer_tag): if consumer_tag in self._consumers: queue = self._tag_to_queue[consumer_tag] self._noack_queues.discard(queue) return super(Channel, self).basic_cancel(consumer_tag) def entity_name(self, name, table=CHARS_REPLACE_TABLE): """Format AMQP queue name into a valid SLMQ queue name.""" return text_t(safe_str(name)).translate(table) def _new_queue(self, queue, **kwargs): """Ensure a queue exists in SLMQ.""" queue = self.entity_name(self.queue_name_prefix + queue) try: return self._queue_cache[queue] except KeyError: try: self.slmq.create_queue( queue, visibility_timeout=self.visibility_timeout) except ResponseError: pass q = self._queue_cache[queue] = self.slmq.queue(queue) return q def _delete(self, queue, *args): """Delete queue by name.""" queue_name = self.entity_name(queue) self._queue_cache.pop(queue_name, None) self.slmq.queue(queue_name).delete(force=True) super(Channel, self)._delete(queue_name) def _put(self, queue, message, **kwargs): """Put message onto queue.""" q = self._new_queue(queue) q.push(dumps(message)) def _get(self, queue): """Try to retrieve a single message off ``queue``.""" q = self._new_queue(queue) rs = q.pop(1) if rs['items']: m = rs['items'][0] payload = loads(bytes_to_str(m['body'])) if queue in self._noack_queues: q.message(m['id']).delete() else: payload['properties']['delivery_info'].update({ 'slmq_message_id': m['id'], 'slmq_queue_name': q.name}) return payload raise Empty() def basic_ack(self, delivery_tag): delivery_info = self.qos.get(delivery_tag).delivery_info try: queue = delivery_info['slmq_queue_name'] except KeyError: pass else: self.delete_message(queue, delivery_info['slmq_message_id']) super(Channel, self).basic_ack(delivery_tag) def _size(self, queue): """Return the number of messages in a queue.""" return self._new_queue(queue).detail()['message_count'] def _purge(self, queue): """Delete all current messages in a queue.""" q = self._new_queue(queue) n = 0 results = q.pop(10) while results['items']: for m in results['items']: self.delete_message(queue, m['id']) n += 1 results = q.pop(10) return n def delete_message(self, queue, message_id): q = self.slmq.queue(self.entity_name(queue)) return q.message(message_id).delete() @property def slmq(self): if self._slmq is None: conninfo = self.conninfo account = os.environ.get('SLMQ_ACCOUNT', conninfo.virtual_host) user = os.environ.get('SL_USERNAME', conninfo.userid) api_key = os.environ.get('SL_API_KEY', conninfo.password) host = os.environ.get('SLMQ_HOST', conninfo.hostname) port = os.environ.get('SLMQ_PORT', conninfo.port) secure = bool(os.environ.get( 'SLMQ_SECURE', self.transport_options.get('secure', True))) if secure: endpoint = "https://%s" % host else: endpoint = "http://%s" % host if port: endpoint = "%s:%s" % (endpoint, port) self._slmq = get_client(account, endpoint=endpoint) self._slmq.authenticate(user, api_key) return self._slmq @property def conninfo(self): return self.connection.client @property def transport_options(self): return 
self.connection.client.transport_options @cached_property def visibility_timeout(self): return (self.transport_options.get('visibility_timeout') or self.default_visibility_timeout) @cached_property def queue_name_prefix(self): return self.transport_options.get('queue_name_prefix', '') class Transport(virtual.Transport): Channel = Channel polling_interval = 1 default_port = None connection_errors = ( virtual.Transport.connection_errors + ( ResponseError, socket.error ) ) kombu-3.0.7/kombu/transport/sqlalchemy/0000755000076500000000000000000012247127370020570 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/sqlalchemy/__init__.py0000644000076500000000000001202512243671543022703 0ustar asksolwheel00000000000000"""Kombu transport using SQLAlchemy as the message store.""" # SQLAlchemy overrides != False to have special meaning and pep8 complains # flake8: noqa from __future__ import absolute_import from anyjson import loads, dumps from sqlalchemy import create_engine from sqlalchemy.exc import OperationalError from sqlalchemy.orm import sessionmaker from kombu.five import Empty from kombu.transport import virtual from kombu.utils import cached_property from kombu.utils.encoding import bytes_to_str from .models import (ModelBase, Queue as QueueBase, Message as MessageBase, class_registry, metadata) VERSION = (1, 1, 0) __version__ = '.'.join(map(str, VERSION)) class Channel(virtual.Channel): _session = None _engines = {} # engine cache def __init__(self, connection, **kwargs): self._configure_entity_tablenames(connection.client.transport_options) super(Channel, self).__init__(connection, **kwargs) def _configure_entity_tablenames(self, opts): self.queue_tablename = opts.get('queue_tablename', 'kombu_queue') self.message_tablename = opts.get('message_tablename', 'kombu_message') # # Define the model definitions. This registers the declarative # classes with the active SQLAlchemy metadata object. This *must* be # done prior to the ``create_engine`` call. 
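    # A hedged configuration sketch (the table names and database URL
    # are illustrative; the 'sqla' transport alias is assumed):
    #
    #     from kombu import Connection
    #     conn = Connection('sqla+sqlite:///kombu.db', transport_options={
    #         'queue_tablename': 'my_queues',
    #         'message_tablename': 'my_messages',
    #     })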
# self.queue_cls and self.message_cls def _engine_from_config(self): conninfo = self.connection.client transport_options = conninfo.transport_options.copy() transport_options.pop('queue_tablename', None) transport_options.pop('message_tablename', None) return create_engine(conninfo.hostname, **transport_options) def _open(self): conninfo = self.connection.client if conninfo.hostname not in self._engines: engine = self._engine_from_config() Session = sessionmaker(bind=engine) metadata.create_all(engine) self._engines[conninfo.hostname] = engine, Session return self._engines[conninfo.hostname] @property def session(self): if self._session is None: _, Session = self._open() self._session = Session() return self._session def _get_or_create(self, queue): obj = self.session.query(self.queue_cls) \ .filter(self.queue_cls.name == queue).first() if not obj: obj = self.queue_cls(queue) self.session.add(obj) try: self.session.commit() except OperationalError: self.session.rollback() return obj def _new_queue(self, queue, **kwargs): self._get_or_create(queue) def _put(self, queue, payload, **kwargs): obj = self._get_or_create(queue) message = self.message_cls(dumps(payload), obj) self.session.add(message) try: self.session.commit() except OperationalError: self.session.rollback() def _get(self, queue): obj = self._get_or_create(queue) if self.session.bind.name == 'sqlite': self.session.execute('BEGIN IMMEDIATE TRANSACTION') try: msg = self.session.query(self.message_cls) \ .with_lockmode('update') \ .filter(self.message_cls.queue_id == obj.id) \ .filter(self.message_cls.visible != False) \ .order_by(self.message_cls.sent_at) \ .order_by(self.message_cls.id) \ .limit(1) \ .first() if msg: msg.visible = False return loads(bytes_to_str(msg.payload)) raise Empty() finally: self.session.commit() def _query_all(self, queue): obj = self._get_or_create(queue) return self.session.query(self.message_cls) \ .filter(self.message_cls.queue_id == obj.id) def _purge(self, queue): count = self._query_all(queue).delete(synchronize_session=False) try: self.session.commit() except OperationalError: self.session.rollback() return count def _size(self, queue): return self._query_all(queue).count() def _declarative_cls(self, name, base, ns): if name in class_registry: return class_registry[name] return type(name, (base, ModelBase), ns) @cached_property def queue_cls(self): return self._declarative_cls( 'Queue', QueueBase, {'__tablename__': self.queue_tablename} ) @cached_property def message_cls(self): return self._declarative_cls( 'Message', MessageBase, {'__tablename__': self.message_tablename} ) class Transport(virtual.Transport): Channel = Channel can_parse_url = True default_port = 0 driver_type = 'sql' driver_name = 'sqlalchemy' def driver_version(self): import sqlalchemy return sqlalchemy.__version__ kombu-3.0.7/kombu/transport/sqlalchemy/models.py0000644000076500000000000000365112237554371022436 0ustar asksolwheel00000000000000from __future__ import absolute_import import datetime from sqlalchemy import (Column, Integer, String, Text, DateTime, Sequence, Boolean, ForeignKey, SmallInteger) from sqlalchemy.orm import relation from sqlalchemy.ext.declarative import declarative_base, declared_attr from sqlalchemy.schema import MetaData class_registry = {} metadata = MetaData() ModelBase = declarative_base(metadata=metadata, class_registry=class_registry) class Queue(object): __table_args__ = {'sqlite_autoincrement': True, 'mysql_engine': 'InnoDB'} id = Column(Integer, Sequence('queue_id_sequence'), primary_key=True, 
autoincrement=True) name = Column(String(200), unique=True) def __init__(self, name): self.name = name def __str__(self): return '<Queue({self.name})>'.format(self=self) @declared_attr def messages(cls): return relation('Message', backref='queue', lazy='noload') class Message(object): __table_args__ = {'sqlite_autoincrement': True, 'mysql_engine': 'InnoDB'} id = Column(Integer, Sequence('message_id_sequence'), primary_key=True, autoincrement=True) visible = Column(Boolean, default=True, index=True) sent_at = Column('timestamp', DateTime, nullable=True, index=True, onupdate=datetime.datetime.now) payload = Column(Text, nullable=False) version = Column(SmallInteger, nullable=False, default=1) __mapper_args__ = {'version_id_col': version} def __init__(self, payload, queue): self.payload = payload self.queue = queue def __str__(self): return '<Message: {0.sent_at} {0.payload} {0.queue_id}>'.format(self) @declared_attr def queue_id(self): return Column( Integer, ForeignKey( '%s.id' % class_registry['Queue'].__tablename__, name='FK_kombu_message_queue' ) ) kombu-3.0.7/kombu/transport/SQS.py0000644000076500000000000003102112243671543017445 0ustar asksolwheel00000000000000""" kombu.transport.SQS =================== Amazon SQS transport. """ from __future__ import absolute_import import socket import string from anyjson import loads, dumps import boto from boto import exception from boto import sdb as _sdb from boto import sqs as _sqs from boto.sdb.domain import Domain from boto.sdb.connection import SDBConnection from boto.sqs.connection import SQSConnection from boto.sqs.message import Message from kombu.five import Empty, range, text_t from kombu.utils import cached_property, uuid from kombu.utils.encoding import bytes_to_str, safe_str from . import virtual # dots are replaced by dash, all other punctuation # replaced by underscore. CHARS_REPLACE_TABLE = dict((ord(c), 0x5f) for c in string.punctuation if c not in '-_.') CHARS_REPLACE_TABLE[0x2e] = 0x2d # '.' -> '-' def maybe_int(x): try: return int(x) except ValueError: return x BOTO_VERSION = tuple(maybe_int(part) for part in boto.__version__.split('.')) W_LONG_POLLING = BOTO_VERSION >= (2, 8) class Table(Domain): """Amazon SimpleDB domain describing the message routing table.""" # caches queues already bound, so we don't have to declare them again. _already_bound = set() def routes_for(self, exchange): """Iterator giving all routes for an exchange.""" return self.select("""WHERE exchange = '%s'""" % exchange) def get_queue(self, queue): """Get binding for queue.""" qid = self._get_queue_id(queue) if qid: return self.get_item(qid) def create_binding(self, queue): """Get binding item for queue. Creates the item if it doesn't exist. """ item = self.get_queue(queue) if item: return item, item['id'] id = uuid() return self.new_item(id), id def queue_bind(self, exchange, routing_key, pattern, queue): if queue not in self._already_bound: binding, id = self.create_binding(queue) binding.update(exchange=exchange, routing_key=routing_key or '', pattern=pattern or '', queue=queue or '', id=id) binding.save() self._already_bound.add(queue) def queue_delete(self, queue): """delete queue by name.""" self._already_bound.discard(queue) item = self._get_queue_item(queue) if item: self.delete_item(item) def exchange_delete(self, exchange): """Delete all routes for `exchange`.""" for item in self.routes_for(exchange): self.delete_item(item['id']) def get_item(self, item_name): """Uses `consistent_read` by default.""" # Domain is an old-style class, can't use super().
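        # Strategy for the loop below: try a cheap eventually-consistent
        # read first, then fall back to a consistent read in case the
        # item has not propagated yet.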
for consistent_read in (False, True): item = Domain.get_item(self, item_name, consistent_read) if item: return item def select(self, query='', next_token=None, consistent_read=True, max_items=None): """Uses `consistent_read` by default.""" query = """SELECT * FROM `%s` %s""" % (self.name, query) return Domain.select(self, query, next_token, consistent_read, max_items) def _try_first(self, query='', **kwargs): for c in (False, True): for item in self.select(query, consistent_read=c, **kwargs): return item def get_exchanges(self): return list(set(i['exchange'] for i in self.select())) def _get_queue_item(self, queue): return self._try_first("""WHERE queue = '%s' limit 1""" % queue) def _get_queue_id(self, queue): item = self._get_queue_item(queue) if item: return item['id'] class Channel(virtual.Channel): Table = Table default_region = 'us-east-1' default_visibility_timeout = 1800 # 30 minutes. default_wait_time_seconds = 0 # disabled see #198 domain_format = 'kombu%(vhost)s' _sdb = None _sqs = None _queue_cache = {} _noack_queues = set() def __init__(self, *args, **kwargs): super(Channel, self).__init__(*args, **kwargs) # SQS blows up when you try to create a new queue if one already # exists with a different visibility_timeout, so this prepopulates # the queue_cache to protect us from recreating # queues that are known to already exist. queues = self.sqs.get_all_queues(prefix=self.queue_name_prefix) for queue in queues: self._queue_cache[queue.name] = queue self._fanout_queues = set() def basic_consume(self, queue, no_ack, *args, **kwargs): if no_ack: self._noack_queues.add(queue) return super(Channel, self).basic_consume(queue, no_ack, *args, **kwargs) def basic_cancel(self, consumer_tag): if consumer_tag in self._consumers: queue = self._tag_to_queue[consumer_tag] self._noack_queues.discard(queue) return super(Channel, self).basic_cancel(consumer_tag) def entity_name(self, name, table=CHARS_REPLACE_TABLE): """Format AMQP queue name into a legal SQS queue name.""" return text_t(safe_str(name)).translate(table) def _new_queue(self, queue, **kwargs): """Ensures a queue exists in SQS.""" # Translate to SQS name for consistency with initial # _queue_cache population. queue = self.entity_name(self.queue_name_prefix + queue) try: return self._queue_cache[queue] except KeyError: q = self._queue_cache[queue] = self.sqs.create_queue( queue, self.visibility_timeout, ) return q def queue_bind(self, queue, exchange=None, routing_key='', arguments=None, **kwargs): super(Channel, self).queue_bind(queue, exchange, routing_key, arguments, **kwargs) if self.typeof(exchange).type == 'fanout': self._fanout_queues.add(queue) def _queue_bind(self, *args): """Bind ``queue`` to ``exchange`` with routing key. Route will be stored in SDB if so enabled. """ if self.supports_fanout: self.table.queue_bind(*args) def get_table(self, exchange): """Get routing table. Retrieved from SDB if :attr:`supports_fanout`. 
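        Each route is returned as a ``(routing_key, pattern, queue)``
        tuple, mirroring the in-memory binding table used by the
        virtual transport.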
""" if self.supports_fanout: return [(r['routing_key'], r['pattern'], r['queue']) for r in self.table.routes_for(exchange)] return super(Channel, self).get_table(exchange) def get_exchanges(self): if self.supports_fanout: return self.table.get_exchanges() return super(Channel, self).get_exchanges() def _delete(self, queue, *args): """delete queue by name.""" self._queue_cache.pop(queue, None) if self.supports_fanout: self.table.queue_delete(queue) super(Channel, self)._delete(queue) def exchange_delete(self, exchange, **kwargs): """Delete exchange by name.""" if self.supports_fanout: self.table.exchange_delete(exchange) super(Channel, self).exchange_delete(exchange, **kwargs) def _has_queue(self, queue, **kwargs): """Return True if ``queue`` was previously declared.""" if self.supports_fanout: return bool(self.table.get_queue(queue)) return super(Channel, self)._has_queue(queue) def _put(self, queue, message, **kwargs): """Put message onto queue.""" q = self._new_queue(queue) m = Message() m.set_body(dumps(message)) q.write(m) def _put_fanout(self, exchange, message, **kwargs): """Deliver fanout message to all queues in ``exchange``.""" for route in self.table.routes_for(exchange): self._put(route['queue'], message, **kwargs) def _get(self, queue): """Try to retrieve a single message off ``queue``.""" q = self._new_queue(queue) if W_LONG_POLLING and queue not in self._fanout_queues: rs = q.get_messages(1, wait_time_seconds=self.wait_time_seconds) else: # boto < 2.8 rs = q.get_messages(1) if rs: m = rs[0] payload = loads(bytes_to_str(rs[0].get_body())) if queue in self._noack_queues: q.delete_message(m) else: payload['properties']['delivery_info'].update({ 'sqs_message': m, 'sqs_queue': q, }) return payload raise Empty() def _restore(self, message, unwanted_delivery_info=('sqs_message', 'sqs_queue')): for unwanted_key in unwanted_delivery_info: # Remove objects that aren't JSON serializable (Issue #1108). message.delivery_info.pop(unwanted_key, None) return super(Channel, self)._restore(message) def basic_ack(self, delivery_tag): delivery_info = self.qos.get(delivery_tag).delivery_info try: queue = delivery_info['sqs_queue'] except KeyError: pass else: queue.delete_message(delivery_info['sqs_message']) super(Channel, self).basic_ack(delivery_tag) def _size(self, queue): """Return the number of messages in a queue.""" return self._new_queue(queue).count() def _purge(self, queue): """Delete all current messages in a queue.""" q = self._new_queue(queue) # SQS is slow at registering messages, so run for a few # iterations to ensure messages are deleted. size = 0 for i in range(10): size += q.count() if not size: break q.clear() return size def close(self): super(Channel, self).close() for conn in (self._sqs, self._sdb): if conn: try: conn.close() except AttributeError as exc: # FIXME ??? 
if "can't set attribute" not in str(exc): raise def _get_regioninfo(self, regions): if self.region: for _r in regions: if _r.name == self.region: return _r def _aws_connect_to(self, fun, regions): conninfo = self.conninfo region = self._get_regioninfo(regions) return fun(region=region, aws_access_key_id=conninfo.userid, aws_secret_access_key=conninfo.password, port=conninfo.port) def _next_delivery_tag(self): return uuid() # See #73 @property def sqs(self): if self._sqs is None: self._sqs = self._aws_connect_to(SQSConnection, _sqs.regions()) return self._sqs @property def sdb(self): if self._sdb is None: self._sdb = self._aws_connect_to(SDBConnection, _sdb.regions()) return self._sdb @property def table(self): name = self.entity_name( self.domain_format % {'vhost': self.conninfo.virtual_host}) d = self.sdb.get_object( 'CreateDomain', {'DomainName': name}, self.Table) d.name = name return d @property def conninfo(self): return self.connection.client @property def transport_options(self): return self.connection.client.transport_options @cached_property def visibility_timeout(self): return (self.transport_options.get('visibility_timeout') or self.default_visibility_timeout) @cached_property def queue_name_prefix(self): return self.transport_options.get('queue_name_prefix', '') @cached_property def supports_fanout(self): return self.transport_options.get('sdb_persistence', False) @cached_property def region(self): return self.transport_options.get('region') or self.default_region @cached_property def wait_time_seconds(self): return self.transport_options.get('wait_time_seconds', self.default_wait_time_seconds) class Transport(virtual.Transport): Channel = Channel polling_interval = 1 wait_time_seconds = 0 default_port = None connection_errors = ( virtual.Transport.connection_errors + (exception.SQSError, socket.error) ) channel_errors = ( virtual.Transport.channel_errors + (exception.SQSDecodeError, ) ) driver_type = 'sqs' driver_name = 'sqs' kombu-3.0.7/kombu/transport/virtual/0000755000076500000000000000000012247127370020114 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/transport/virtual/__init__.py0000644000076500000000000006400012243752202022217 0ustar asksolwheel00000000000000""" kombu.transport.virtual ======================= Virtual transport implementation. Emulates the AMQ API for non-AMQ transports. """ from __future__ import absolute_import, unicode_literals import base64 import socket import sys import warnings from array import array from itertools import count from multiprocessing.util import Finalize from time import sleep from amqp.protocol import queue_declare_ok_t from kombu.exceptions import ResourceError, ChannelError from kombu.five import Empty, items, monotonic from kombu.utils import emergency_dump_state, kwdict, say, uuid from kombu.utils.compat import OrderedDict from kombu.utils.encoding import str_to_bytes, bytes_to_str from kombu.transport import base from .scheduling import FairCycle from .exchange import STANDARD_EXCHANGE_TYPES ARRAY_TYPE_H = 'H' if sys.version_info[0] == 3 else b'H' UNDELIVERABLE_FMT = """\ Message could not be delivered: No queues bound to exchange {exchange!r} \ using binding key {routing_key!r}. 
""" NOT_EQUIVALENT_FMT = """\ Cannot redeclare exchange {0!r} in vhost {1!r} with \ different type, durable, autodelete or arguments value.\ """ class Base64(object): def encode(self, s): return bytes_to_str(base64.b64encode(str_to_bytes(s))) def decode(self, s): return base64.b64decode(str_to_bytes(s)) class NotEquivalentError(Exception): """Entity declaration is not equivalent to the previous declaration.""" pass class UndeliverableWarning(UserWarning): """The message could not be delivered to a queue.""" pass class BrokerState(object): #: exchange declarations. exchanges = None #: active bindings. bindings = None def __init__(self, exchanges=None, bindings=None): self.exchanges = {} if exchanges is None else exchanges self.bindings = {} if bindings is None else bindings def clear(self): self.exchanges.clear() self.bindings.clear() class QoS(object): """Quality of Service guarantees. Only supports `prefetch_count` at this point. :param channel: AMQ Channel. :keyword prefetch_count: Initial prefetch count (defaults to 0). """ #: current prefetch count value prefetch_count = 0 #: :class:`~collections.OrderedDict` of active messages. #: *NOTE*: Can only be modified by the consuming thread. _delivered = None #: acks can be done by other threads than the consuming thread. #: Instead of a mutex, which doesn't perform well here, we mark #: the delivery tags as dirty, so subsequent calls to append() can remove #: them. _dirty = None #: If disabled, unacked messages won't be restored at shutdown. restore_at_shutdown = True def __init__(self, channel, prefetch_count=0): self.channel = channel self.prefetch_count = prefetch_count or 0 self._delivered = OrderedDict() self._delivered.restored = False self._dirty = set() self._quick_ack = self._dirty.add self._quick_append = self._delivered.__setitem__ self._on_collect = Finalize( self, self.restore_unacked_once, exitpriority=1, ) def can_consume(self): """Return true if the channel can be consumed from. Used to ensure the client adhers to currently active prefetch limits. """ pcount = self.prefetch_count return not pcount or len(self._delivered) - len(self._dirty) < pcount def append(self, message, delivery_tag): """Append message to transactional state.""" if self._dirty: self._flush() self._quick_append(delivery_tag, message) def get(self, delivery_tag): return self._delivered[delivery_tag] def _flush(self): """Flush dirty (acked/rejected) tags from.""" dirty = self._dirty delivered = self._delivered while 1: try: dirty_tag = dirty.pop() except KeyError: break delivered.pop(dirty_tag, None) def ack(self, delivery_tag): """Acknowledge message and remove from transactional state.""" self._quick_ack(delivery_tag) def reject(self, delivery_tag, requeue=False): """Remove from transactional state and requeue message.""" if requeue: self.channel._restore_at_beginning(self._delivered[delivery_tag]) self._quick_ack(delivery_tag) def restore_unacked(self): """Restore all unacknowledged messages.""" self._flush() delivered = self._delivered errors = [] restore = self.channel._restore pop_message = delivered.popitem while delivered: try: _, message = pop_message() except KeyError: # pragma: no cover break try: restore(message) except BaseException as exc: errors.append((exc, message)) delivered.clear() return errors def restore_unacked_once(self): """Restores all unacknowledged messages at shutdown/gc collect. Will only be done once for each instance. 
""" self._on_collect.cancel() self._flush() state = self._delivered if not self.restore_at_shutdown or not self.channel.do_restore: return if getattr(state, 'restored', None): assert not state return try: if state: say('Restoring {0!r} unacknowledged message(s).', len(self._delivered)) unrestored = self.restore_unacked() if unrestored: errors, messages = list(zip(*unrestored)) say('UNABLE TO RESTORE {0} MESSAGES: {1}', len(errors), errors) emergency_dump_state(messages) finally: state.restored = True class Message(base.Message): def __init__(self, channel, payload, **kwargs): self._raw = payload properties = payload['properties'] body = payload.get('body') if body: body = channel.decode_body(body, properties.get('body_encoding')) kwargs.update({ 'body': body, 'delivery_tag': properties['delivery_tag'], 'content_type': payload.get('content-type'), 'content_encoding': payload.get('content-encoding'), 'headers': payload.get('headers'), 'properties': properties, 'delivery_info': properties.get('delivery_info'), 'postencode': 'utf-8', }) super(Message, self).__init__(channel, **kwdict(kwargs)) def serializable(self): props = self.properties body, _ = self.channel.encode_body(self.body, props.get('body_encoding')) headers = dict(self.headers) # remove compression header headers.pop('compression', None) return { 'body': body, 'properties': props, 'content-type': self.content_type, 'content-encoding': self.content_encoding, 'headers': headers, } class AbstractChannel(object): """This is an abstract class defining the channel methods you'd usually want to implement in a virtual channel. Do not subclass directly, but rather inherit from :class:`Channel` instead. """ def _get(self, queue, timeout=None): """Get next message from `queue`.""" raise NotImplementedError('Virtual channels must implement _get') def _put(self, queue, message): """Put `message` onto `queue`.""" raise NotImplementedError('Virtual channels must implement _put') def _purge(self, queue): """Remove all messages from `queue`.""" raise NotImplementedError('Virtual channels must implement _purge') def _size(self, queue): """Return the number of messages in `queue` as an :class:`int`.""" return 0 def _delete(self, queue, *args, **kwargs): """Delete `queue`. This just purges the queue, if you need to do more you can override this method. """ self._purge(queue) def _new_queue(self, queue, **kwargs): """Create new queue. Your transport can override this method if it needs to do something whenever a new queue is declared. """ pass def _has_queue(self, queue, **kwargs): """Verify that queue exists. Should return :const:`True` if the queue exists or :const:`False` otherwise. """ return True def _poll(self, cycle, timeout=None): """Poll a list of queues for available messages.""" return cycle.get() class Channel(AbstractChannel, base.StdChannel): """Virtual channel. :param connection: The transport instance this channel is part of. """ #: message class used. Message = Message #: QoS class used. QoS = QoS #: flag to restore unacked messages when channel #: goes out of scope. do_restore = True #: mapping of exchange types and corresponding classes. exchange_types = dict(STANDARD_EXCHANGE_TYPES) #: flag set if the channel supports fanout exchanges. supports_fanout = False #: Binary <-> ASCII codecs. codecs = {'base64': Base64()} #: Default body encoding. #: NOTE: ``transport_options['body_encoding']`` will override this value. body_encoding = 'base64' #: counter used to generate delivery tags for this channel. 
_delivery_tags = count(1) #: Optional queue where messages with no route is delivered. #: Set by ``transport_options['deadletter_queue']``. deadletter_queue = None # List of options to transfer from :attr:`transport_options`. from_transport_options = ('body_encoding', 'deadletter_queue') def __init__(self, connection, **kwargs): self.connection = connection self._consumers = set() self._cycle = None self._tag_to_queue = {} self._active_queues = [] self._qos = None self.closed = False # instantiate exchange types self.exchange_types = dict( (typ, cls(self)) for typ, cls in items(self.exchange_types) ) try: self.channel_id = self.connection._avail_channel_ids.pop() except IndexError: raise ResourceError( 'No free channel ids, current={0}, channel_max={1}'.format( len(self.connection.channels), self.connection.channel_max), (20, 10), ) topts = self.connection.client.transport_options for opt_name in self.from_transport_options: try: setattr(self, opt_name, topts[opt_name]) except KeyError: pass def exchange_declare(self, exchange=None, type='direct', durable=False, auto_delete=False, arguments=None, nowait=False, passive=False): """Declare exchange.""" type = type or 'direct' exchange = exchange or 'amq.%s' % type if passive: if exchange not in self.state.exchanges: raise ChannelError( 'NOT_FOUND - no exchange {0!r} in vhost {1!r}'.format( exchange, self.connection.client.virtual_host or '/'), (50, 10), 'Channel.exchange_declare', '404', ) return try: prev = self.state.exchanges[exchange] if not self.typeof(exchange).equivalent(prev, exchange, type, durable, auto_delete, arguments): raise NotEquivalentError(NOT_EQUIVALENT_FMT.format( exchange, self.connection.client.virtual_host or '/')) except KeyError: self.state.exchanges[exchange] = { 'type': type, 'durable': durable, 'auto_delete': auto_delete, 'arguments': arguments or {}, 'table': [], } def exchange_delete(self, exchange, if_unused=False, nowait=False): """Delete `exchange` and all its bindings.""" for rkey, _, queue in self.get_table(exchange): self.queue_delete(queue, if_unused=True, if_empty=True) self.state.exchanges.pop(exchange, None) def queue_declare(self, queue=None, passive=False, **kwargs): """Declare queue.""" queue = queue or 'amq.gen-%s' % uuid() if passive and not self._has_queue(queue, **kwargs): raise ChannelError( 'NOT_FOUND - no queue {0!r} in vhost {1!r}'.format( queue, self.connection.client.virtual_host or '/'), (50, 10), 'Channel.queue_declare', '404', ) else: self._new_queue(queue, **kwargs) return queue_declare_ok_t(queue, self._size(queue), 0) def queue_delete(self, queue, if_unused=False, if_empty=False, **kwargs): """Delete queue.""" if if_empty and self._size(queue): return try: exchange, routing_key, arguments = self.state.bindings[queue] except KeyError: return meta = self.typeof(exchange).prepare_bind( queue, exchange, routing_key, arguments, ) self._delete(queue, exchange, *meta) self.state.bindings.pop(queue, None) def after_reply_message_received(self, queue): self.queue_delete(queue) def exchange_bind(self, destination, source='', routing_key='', nowait=False, arguments=None): raise NotImplementedError('transport does not support exchange_bind') def exchange_unbind(self, destination, source='', routing_key='', nowait=False, arguments=None): raise NotImplementedError('transport does not support exchange_unbind') def queue_bind(self, queue, exchange=None, routing_key='', arguments=None, **kwargs): """Bind `queue` to `exchange` with `routing key`.""" if queue in self.state.bindings: return exchange = 
exchange or 'amq.direct' table = self.state.exchanges[exchange].setdefault('table', []) self.state.bindings[queue] = exchange, routing_key, arguments meta = self.typeof(exchange).prepare_bind( queue, exchange, routing_key, arguments, ) table.append(meta) if self.supports_fanout: self._queue_bind(exchange, *meta) def queue_unbind(self, queue, exchange=None, routing_key='', arguments=None, **kwargs): raise NotImplementedError('transport does not support queue_unbind') def list_bindings(self): return ((queue, exchange, rkey) for exchange in self.state.exchanges for rkey, pattern, queue in self.get_table(exchange)) def queue_purge(self, queue, **kwargs): """Remove all ready messages from queue.""" return self._purge(queue) def basic_publish(self, message, exchange, routing_key, **kwargs): """Publish message.""" message['body'], body_encoding = self.encode_body( message['body'], self.body_encoding, ) props = message['properties'] props.update( body_encoding=body_encoding, delivery_tag=next(self._delivery_tags), ) props['delivery_info'].update( exchange=exchange, routing_key=routing_key, ) self.typeof(exchange).deliver( message, exchange, routing_key, **kwargs ) def basic_consume(self, queue, no_ack, callback, consumer_tag, **kwargs): """Consume from `queue`""" self._tag_to_queue[consumer_tag] = queue self._active_queues.append(queue) def _callback(raw_message): message = self.Message(self, raw_message) if not no_ack: self.qos.append(message, message.delivery_tag) return callback(message) self.connection._callbacks[queue] = _callback self._consumers.add(consumer_tag) self._reset_cycle() def basic_cancel(self, consumer_tag): """Cancel consumer by consumer tag.""" if consumer_tag in self._consumers: self._consumers.remove(consumer_tag) self._reset_cycle() queue = self._tag_to_queue.pop(consumer_tag, None) try: self._active_queues.remove(queue) except ValueError: pass self.connection._callbacks.pop(queue, None) def basic_get(self, queue, no_ack=False, **kwargs): """Get message by direct access (synchronous).""" try: message = self.Message(self, self._get(queue)) if not no_ack: self.qos.append(message, message.delivery_tag) return message except Empty: pass def basic_ack(self, delivery_tag): """Acknowledge message.""" self.qos.ack(delivery_tag) def basic_recover(self, requeue=False): """Recover unacked messages.""" if requeue: return self.qos.restore_unacked() raise NotImplementedError('Does not support recover(requeue=False)') def basic_reject(self, delivery_tag, requeue=False): """Reject message.""" self.qos.reject(delivery_tag, requeue=requeue) def basic_qos(self, prefetch_size=0, prefetch_count=0, apply_global=False): """Change QoS settings for this channel. Only `prefetch_count` is supported. """ self.qos.prefetch_count = prefetch_count def get_exchanges(self): return list(self.state.exchanges) def get_table(self, exchange): """Get table of bindings for `exchange`.""" return self.state.exchanges[exchange]['table'] def typeof(self, exchange, default='direct'): """Get the exchange type instance for `exchange`.""" try: type = self.state.exchanges[exchange]['type'] except KeyError: type = default return self.exchange_types[type] def _lookup(self, exchange, routing_key, default=None): """Find all queues matching `routing_key` for the given `exchange`. Must return the string `default` if no queues matched. 
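        For example, with ``transport_options={'deadletter_queue':
        'ae.undeliver'}`` (an illustrative queue name) an unroutable
        message is delivered to that queue instead of being dropped,
        after an :class:`UndeliverableWarning` is emitted.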
""" if default is None: default = self.deadletter_queue try: R = self.typeof(exchange).lookup( self.get_table(exchange), exchange, routing_key, default, ) except KeyError: R = [] if not R and default is not None: warnings.warn(UndeliverableWarning(UNDELIVERABLE_FMT.format( exchange=exchange, routing_key=routing_key)), ) self._new_queue(default) R = [default] return R def _restore(self, message): """Redeliver message to its original destination.""" delivery_info = message.delivery_info message = message.serializable() message['redelivered'] = True for queue in self._lookup( delivery_info['exchange'], delivery_info['routing_key']): self._put(queue, message) def _restore_at_beginning(self, message): return self._restore(message) def drain_events(self, timeout=None): if self._consumers and self.qos.can_consume(): if hasattr(self, '_get_many'): return self._get_many(self._active_queues, timeout=timeout) return self._poll(self.cycle, timeout=timeout) raise Empty() def message_to_python(self, raw_message): """Convert raw message to :class:`Message` instance.""" if not isinstance(raw_message, self.Message): return self.Message(self, payload=raw_message) return raw_message def prepare_message(self, body, priority=None, content_type=None, content_encoding=None, headers=None, properties=None): """Prepare message data.""" properties = properties or {} info = properties.setdefault('delivery_info', {}) info['priority'] = priority or 0 return {'body': body, 'content-encoding': content_encoding, 'content-type': content_type, 'headers': headers or {}, 'properties': properties or {}} def flow(self, active=True): """Enable/disable message flow. :raises NotImplementedError: as flow is not implemented by the base virtual implementation. """ raise NotImplementedError('virtual channels do not support flow.') def close(self): """Close channel, cancel all consumers, and requeue unacked messages.""" if not self.closed: self.closed = True for consumer in list(self._consumers): self.basic_cancel(consumer) if self._qos: self._qos.restore_unacked_once() if self._cycle is not None: self._cycle.close() self._cycle = None if self.connection is not None: self.connection.close_channel(self) self.exchange_types = None def encode_body(self, body, encoding=None): if encoding: return self.codecs.get(encoding).encode(body), encoding return body, encoding def decode_body(self, body, encoding=None): if encoding: return self.codecs.get(encoding).decode(body) return body def _reset_cycle(self): self._cycle = FairCycle(self._get, self._active_queues, Empty) def __enter__(self): return self def __exit__(self, *exc_info): self.close() @property def state(self): """Broker state containing exchanges and bindings.""" return self.connection.state @property def qos(self): """:class:`QoS` manager for this channel.""" if self._qos is None: self._qos = self.QoS(self) return self._qos @property def cycle(self): if self._cycle is None: self._reset_cycle() return self._cycle class Management(base.Management): def __init__(self, transport): super(Management, self).__init__(transport) self.channel = transport.client.channel() def get_bindings(self): return [dict(destination=q, source=e, routing_key=r) for q, e, r in self.channel.list_bindings()] def close(self): self.channel.close() class Transport(base.Transport): """Virtual transport. :param client: :class:`~kombu.Connection` instance """ Channel = Channel Cycle = FairCycle Management = Management #: :class:`BrokerState` containing declared exchanges and #: bindings (set by constructor). 
state = BrokerState() #: :class:`~kombu.transport.virtual.scheduling.FairCycle` instance #: used to fairly drain events from channels (set by constructor). cycle = None #: port number used when no port is specified. default_port = None #: active channels. channels = None #: queue/callback map. _callbacks = None #: Time to sleep between unsuccessful polls. polling_interval = 1.0 #: Max number of channels channel_max = 65535 def __init__(self, client, **kwargs): self.client = client self.channels = [] self._avail_channels = [] self._callbacks = {} self.cycle = self.Cycle(self._drain_channel, self.channels, Empty) polling_interval = client.transport_options.get('polling_interval') if polling_interval is not None: self.polling_interval = polling_interval self._avail_channel_ids = array( ARRAY_TYPE_H, range(self.channel_max, 0, -1), ) def create_channel(self, connection): try: return self._avail_channels.pop() except IndexError: channel = self.Channel(connection) self.channels.append(channel) return channel def close_channel(self, channel): try: self._avail_channel_ids.append(channel.channel_id) try: self.channels.remove(channel) except ValueError: pass finally: channel.connection = None def establish_connection(self): # creates channel to verify connection. # this channel is then used as the next requested channel. # (returned by ``create_channel``). self._avail_channels.append(self.create_channel(self)) return self # for drain events def close_connection(self, connection): self.cycle.close() for l in self._avail_channels, self.channels: while l: try: channel = l.pop() except (IndexError, KeyError): # pragma: no cover pass else: channel.close() def drain_events(self, connection, timeout=None): loop = 0 time_start = monotonic() get = self.cycle.get polling_interval = self.polling_interval while 1: try: item, channel = get(timeout=timeout) except Empty: if timeout and monotonic() - time_start >= timeout: raise socket.timeout() loop += 1 if polling_interval is not None: sleep(polling_interval) else: break message, queue = item if not queue or queue not in self._callbacks: raise KeyError( 'Message for queue {0!r} without consumers: {1}'.format( queue, message)) self._callbacks[queue](message) def _drain_channel(self, channel, timeout=None): return channel.drain_events(timeout=timeout) @property def default_connection_params(self): return {'port': self.default_port, 'hostname': 'localhost'} kombu-3.0.7/kombu/transport/virtual/exchange.py0000644000076500000000000001070612237554371022260 0ustar asksolwheel00000000000000""" kombu.transport.virtual.exchange ================================ Implementations of the standard exchanges defined by the AMQ protocol (excluding the `headers` exchange). """ from __future__ import absolute_import from kombu.utils import escape_regex import re class ExchangeType(object): """Implements the specifics for an exchange type. :param channel: AMQ Channel """ type = None def __init__(self, channel): self.channel = channel def lookup(self, table, exchange, routing_key, default): """Lookup all queues matching `routing_key` in `exchange`. :returns: `default` if no queues matched. 
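        The `table` argument is the list of
        ``(routing_key, pattern, queue)`` tuples produced by
        :meth:`prepare_bind` for every binding on the exchange.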
""" raise NotImplementedError('subclass responsibility') def prepare_bind(self, queue, exchange, routing_key, arguments): """Return tuple of `(routing_key, regex, queue)` to be stored for bindings to this exchange.""" return routing_key, None, queue def equivalent(self, prev, exchange, type, durable, auto_delete, arguments): """Return true if `prev` and `exchange` is equivalent.""" return (type == prev['type'] and durable == prev['durable'] and auto_delete == prev['auto_delete'] and (arguments or {}) == (prev['arguments'] or {})) class DirectExchange(ExchangeType): """The `direct` exchange routes based on exact routing keys.""" type = 'direct' def lookup(self, table, exchange, routing_key, default): return [queue for rkey, _, queue in table if rkey == routing_key] def deliver(self, message, exchange, routing_key, **kwargs): _lookup = self.channel._lookup _put = self.channel._put for queue in _lookup(exchange, routing_key): _put(queue, message, **kwargs) class TopicExchange(ExchangeType): """The `topic` exchange routes messages based on words separated by dots, using wildcard characters ``*`` (any single word), and ``#`` (one or more words).""" type = 'topic' #: map of wildcard to regex conversions wildcards = {'*': r'.*?[^\.]', '#': r'.*?'} #: compiled regex cache _compiled = {} def lookup(self, table, exchange, routing_key, default): return [queue for rkey, pattern, queue in table if self._match(pattern, routing_key)] def deliver(self, message, exchange, routing_key, **kwargs): _lookup = self.channel._lookup _put = self.channel._put deadletter = self.channel.deadletter_queue for queue in [q for q in _lookup(exchange, routing_key) if q and q != deadletter]: _put(queue, message, **kwargs) def prepare_bind(self, queue, exchange, routing_key, arguments): return routing_key, self.key_to_pattern(routing_key), queue def key_to_pattern(self, rkey): """Get the corresponding regex for any routing key.""" return '^%s$' % ('\.'.join( self.wildcards.get(word, word) for word in escape_regex(rkey, '.#*').split('.') )) def _match(self, pattern, string): """Same as :func:`re.match`, except the regex is compiled and cached, then reused on subsequent matches with the same pattern.""" try: compiled = self._compiled[pattern] except KeyError: compiled = self._compiled[pattern] = re.compile(pattern, re.U) return compiled.match(string) class FanoutExchange(ExchangeType): """The `fanout` exchange implements broadcast messaging by delivering copies of all messages to all queues bound to the exchange. To support fanout the virtual channel needs to store the table as shared state. This requires that the `Channel.supports_fanout` attribute is set to true, and the `Channel._queue_bind` and `Channel.get_table` methods are implemented. See the redis backend for an example implementation of these methods. """ type = 'fanout' def lookup(self, table, exchange, routing_key, default): return [queue for _, _, queue in table] def deliver(self, message, exchange, routing_key, **kwargs): if self.channel.supports_fanout: self.channel._put_fanout(exchange, message, **kwargs) #: Map of standard exchange types and corresponding classes. STANDARD_EXCHANGE_TYPES = {'direct': DirectExchange, 'topic': TopicExchange, 'fanout': FanoutExchange} kombu-3.0.7/kombu/transport/virtual/scheduling.py0000644000076500000000000000241512237554371022621 0ustar asksolwheel00000000000000""" kombu.transport.virtual.scheduling ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Consumer utilities. 
""" from __future__ import absolute_import from itertools import count class FairCycle(object): """Consume from a set of resources, where each resource gets an equal chance to be consumed from.""" def __init__(self, fun, resources, predicate=Exception): self.fun = fun self.resources = resources self.predicate = predicate self.pos = 0 def _next(self): while 1: try: resource = self.resources[self.pos] self.pos += 1 return resource except IndexError: self.pos = 0 if not self.resources: raise self.predicate() def get(self, **kwargs): for tried in count(0): # for infinity resource = self._next() try: return self.fun(resource, **kwargs), resource except self.predicate: if tried >= len(self.resources) - 1: raise def close(self): pass def __repr__(self): return ''.format( self=self, size=len(self.resources)) kombu-3.0.7/kombu/transport/zmq.py0000644000076500000000000002067312237554371017623 0ustar asksolwheel00000000000000""" kombu.transport.zmq =================== ZeroMQ transport. """ from __future__ import absolute_import import errno import os import socket try: import zmq from zmq import ZMQError except ImportError: zmq = ZMQError = None # noqa from kombu.five import Empty from kombu.log import get_logger from kombu.serialization import pickle from kombu.utils import cached_property from kombu.utils.eventio import poll, READ from . import virtual logger = get_logger('kombu.transport.zmq') DEFAULT_PORT = 5555 DEFAULT_HWM = 128 DEFAULT_INCR = 1 dumps, loads = pickle.dumps, pickle.loads class MultiChannelPoller(object): eventflags = READ def __init__(self): # active channels self._channels = set() # file descriptor -> channel map self._fd_to_chan = {} # poll implementation (epoll/kqueue/select) self.poller = poll() def close(self): for fd in self._fd_to_chan: try: self.poller.unregister(fd) except KeyError: pass self._channels.clear() self._fd_to_chan.clear() self.poller = None def add(self, channel): self._channels.add(channel) def discard(self, channel): self._channels.discard(channel) self._fd_to_chan.pop(channel.client.connection.fd, None) def _register(self, channel): conn = channel.client.connection self._fd_to_chan[conn.fd] = channel self.poller.register(conn.fd, self.eventflags) def on_poll_start(self): for channel in self._channels: self._register(channel) def on_readable(self, fileno): chan = self._fd_to_chan[fileno] return chan.drain_events(), chan def get(self, timeout=None): self.on_poll_start() events = self.poller.poll(timeout) for fileno, _ in events or []: return self.on_readable(fileno) raise Empty() @property def fds(self): return self._fd_to_chan class Client(object): def __init__(self, uri='tcp://127.0.0.1', port=DEFAULT_PORT, hwm=DEFAULT_HWM, swap_size=None, enable_sink=True, context=None): try: scheme, parts = uri.split('://') except ValueError: scheme = 'tcp' parts = uri endpoints = parts.split(';') self.port = port if scheme != 'tcp': raise NotImplementedError('Currently only TCP can be used') self.context = context or zmq.Context.instance() if enable_sink: self.sink = self.context.socket(zmq.PULL) self.sink.bind('tcp://*:{0.port}'.format(self)) else: self.sink = None self.vent = self.context.socket(zmq.PUSH) if hasattr(zmq, 'SNDHWM'): self.vent.setsockopt(zmq.SNDHWM, hwm) else: self.vent.setsockopt(zmq.HWM, hwm) if swap_size: self.vent.setsockopt(zmq.SWAP, swap_size) for endpoint in endpoints: if scheme == 'tcp' and ':' not in endpoint: endpoint += ':' + str(DEFAULT_PORT) endpoint = ''.join([scheme, '://', endpoint]) self.connect(endpoint) def connect(self, endpoint): 
self.vent.connect(endpoint) def get(self, queue=None, timeout=None): sink = self.sink try: if timeout is not None: prev_timeout, sink.RCVTIMEO = sink.RCVTIMEO, timeout try: return sink.recv() finally: sink.RCVTIMEO = prev_timeout else: return sink.recv() except ZMQError as exc: if exc.errno == zmq.EAGAIN: raise socket.error(errno.EAGAIN, exc.strerror) else: raise def put(self, queue, message, **kwargs): return self.vent.send(message) def close(self): if self.sink and not self.sink.closed: self.sink.close() if not self.vent.closed: self.vent.close() @property def connection(self): if self.sink: return self.sink return self.vent class Channel(virtual.Channel): Client = Client hwm = DEFAULT_HWM swap_size = None enable_sink = True port_incr = DEFAULT_INCR from_transport_options = ( virtual.Channel.from_transport_options + ('hwm', 'swap_size', 'enable_sink', 'port_incr') ) def __init__(self, *args, **kwargs): super_ = super(Channel, self) super_.__init__(*args, **kwargs) # Evaluate socket self.client.connection.closed self.connection.cycle.add(self) self.connection_errors = self.connection.connection_errors def _get(self, queue, timeout=None): try: return loads(self.client.get(queue, timeout)) except socket.error as exc: if exc.errno == errno.EAGAIN and timeout != 0: raise Empty() else: raise def _put(self, queue, message, **kwargs): self.client.put(queue, dumps(message, -1), **kwargs) def _purge(self, queue): return 0 def _poll(self, cycle, timeout=None): return cycle.get(timeout=timeout) def close(self): if not self.closed: self.connection.cycle.discard(self) try: self.__dict__['client'].close() except KeyError: pass super(Channel, self).close() def _prepare_port(self, port): return (port + self.channel_id - 1) * self.port_incr def _create_client(self): conninfo = self.connection.client port = self._prepare_port(conninfo.port or DEFAULT_PORT) return self.Client(uri=conninfo.hostname or 'tcp://127.0.0.1', port=port, hwm=self.hwm, swap_size=self.swap_size, enable_sink=self.enable_sink, context=self.connection.context) @cached_property def client(self): return self._create_client() class Transport(virtual.Transport): Channel = Channel can_parse_url = True default_port = DEFAULT_PORT driver_type = 'zeromq' driver_name = 'zmq' connection_errors = virtual.Transport.connection_errors + (ZMQError, ) supports_ev = True polling_interval = None def __init__(self, *args, **kwargs): if zmq is None: raise ImportError('The zmq library is not installed') super(Transport, self).__init__(*args, **kwargs) self.cycle = MultiChannelPoller() def driver_version(self): return zmq.__version__ def register_with_event_loop(self, connection, loop): cycle = self.cycle cycle.poller = loop.poller add_reader = loop.add_reader on_readable = self.on_readable cycle_poll_start = cycle.on_poll_start def on_poll_start(): cycle_poll_start() [add_reader(fd, on_readable, fd) for fd in cycle.fds] loop.on_tick.add(on_poll_start) def on_readable(self, fileno): self._handle_event(self.cycle.on_readable(fileno)) def drain_events(self, connection, timeout=None): more_to_read = False for channel in connection.channels: try: evt = channel.cycle.get(timeout=timeout) except socket.error as exc: if exc.errno == errno.EAGAIN: continue raise else: connection._handle_event((evt, channel)) more_to_read = True if not more_to_read: raise socket.error(errno.EAGAIN, os.strerror(errno.EAGAIN)) def _handle_event(self, evt): item, channel = evt message, queue = item if not queue or queue not in self._callbacks: raise KeyError( 'Message for queue {0!r} 
without consumers: {1}'.format( queue, message)) self._callbacks[queue](message) def establish_connection(self): self.context.closed return super(Transport, self).establish_connection() def close_connection(self, connection): super(Transport, self).close_connection(connection) try: connection.__dict__['context'].term() except KeyError: pass @cached_property def context(self): return zmq.Context(1) kombu-3.0.7/kombu/transport/zookeeper.py0000644000076500000000000001216112243671543021006 0ustar asksolwheel00000000000000""" kombu.transport.zookeeper ========================= Zookeeper transport. :copyright: (c) 2010 - 2013 by Mahendra M. :license: BSD, see LICENSE for more details. **Synopsis** Connects to a zookeeper node as <server>:<port>/<vhost>. The <vhost> becomes the base for all the other znodes. So we can use it like a vhost. This uses the built-in kazoo recipe for queues. **References** - https://zookeeper.apache.org/doc/trunk/recipes.html#sc_recipes_Queues - https://kazoo.readthedocs.org/en/latest/api/recipe/queue.html **Limitations** This queue does not offer reliable consumption. An entry is removed from the queue prior to being processed. So if an error occurs, the consumer has to re-queue the item or it will be lost. """ from __future__ import absolute_import import os import socket from anyjson import loads, dumps from kombu.five import Empty from kombu.utils.encoding import bytes_to_str from . import virtual MAX_PRIORITY = 9 try: import kazoo from kazoo.client import KazooClient from kazoo.recipe.queue import Queue KZ_CONNECTION_ERRORS = ( kazoo.exceptions.SystemErrorException, kazoo.exceptions.ConnectionLossException, kazoo.exceptions.MarshallingErrorException, kazoo.exceptions.UnimplementedException, kazoo.exceptions.OperationTimeoutException, kazoo.exceptions.NoAuthException, kazoo.exceptions.InvalidACLException, kazoo.exceptions.AuthFailedException, kazoo.exceptions.SessionExpiredException, ) KZ_CHANNEL_ERRORS = ( kazoo.exceptions.RuntimeInconsistencyException, kazoo.exceptions.DataInconsistencyException, kazoo.exceptions.BadArgumentsException, kazoo.exceptions.MarshallingErrorException, kazoo.exceptions.UnimplementedException, kazoo.exceptions.OperationTimeoutException, kazoo.exceptions.ApiErrorException, kazoo.exceptions.NoNodeException, kazoo.exceptions.NoAuthException, kazoo.exceptions.NodeExistsException, kazoo.exceptions.NoChildrenForEphemeralsException, kazoo.exceptions.NotEmptyException, kazoo.exceptions.SessionExpiredException, kazoo.exceptions.InvalidCallbackException, socket.error, ) except ImportError: kazoo = None # noqa KZ_CONNECTION_ERRORS = KZ_CHANNEL_ERRORS = () # noqa DEFAULT_PORT = 2181 __author__ = 'Mahendra M <mahendra.m@gmail.com>' class Channel(virtual.Channel): _client = None _queues = {} def _get_path(self, queue_name): return os.path.join(self.vhost, queue_name) def _get_queue(self, queue_name): queue = self._queues.get(queue_name, None) if queue is None: queue = Queue(self.client, self._get_path(queue_name)) self._queues[queue_name] = queue # Ensure that the queue is created len(queue) return queue def _put(self, queue, message, **kwargs): try: priority = message['properties']['delivery_info']['priority'] except KeyError: priority = 0 queue = self._get_queue(queue) queue.put(dumps(message), priority=(MAX_PRIORITY - priority)) def _get(self, queue): queue = self._get_queue(queue) msg = queue.get() if msg is None: raise Empty() return loads(bytes_to_str(msg)) def _purge(self, queue): count = 0 queue = self._get_queue(queue) while True: msg = queue.get() if msg is None: break count += 1 return count
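    # A hedged usage sketch (assumes a local ZooKeeper on the default
    # port and that the 'zookeeper' transport alias is installed):
    #
    #     from kombu import Connection
    #     with Connection('zookeeper://localhost:2181/') as conn:
    #         q = conn.SimpleQueue('test')
    #         q.put({'hello': 'world'})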
def _delete(self, queue, *args, **kwargs): if self._has_queue(queue): self._purge(queue) self.client.delete(self._get_path(queue)) def _size(self, queue): queue = self._get_queue(queue) return len(queue) def _new_queue(self, queue, **kwargs): if not self._has_queue(queue): queue = self._get_queue(queue) def _has_queue(self, queue): return self.client.exists(self._get_path(queue)) is not None def _open(self): conninfo = self.connection.client port = conninfo.port or DEFAULT_PORT conn_str = '%s:%s' % (conninfo.hostname, port) self.vhost = os.path.join('/', conninfo.virtual_host[0:-1]) conn = KazooClient(conn_str) conn.start() return conn @property def client(self): if self._client is None: self._client = self._open() return self._client class Transport(virtual.Transport): Channel = Channel polling_interval = 1 default_port = DEFAULT_PORT connection_errors = ( virtual.Transport.connection_errors + KZ_CONNECTION_ERRORS ) channel_errors = ( virtual.Transport.channel_errors + KZ_CHANNEL_ERRORS ) driver_type = 'zookeeper' driver_name = 'kazoo' def __init__(self, *args, **kwargs): if kazoo is None: raise ImportError('The kazoo library is not installed') super(Transport, self).__init__(*args, **kwargs) def driver_version(self): return kazoo.__version__ kombu-3.0.7/kombu/utils/0000755000076500000000000000000012247127370015532 5ustar asksolwheel00000000000000kombu-3.0.7/kombu/utils/__init__.py0000644000076500000000000002767112237554371017664 0ustar asksolwheel00000000000000""" kombu.utils =========== Internal utilities. """ from __future__ import absolute_import, print_function import importlib import random import sys from contextlib import contextmanager from itertools import count, repeat from time import sleep from uuid import UUID, uuid4 as _uuid4, _uuid_generate_random from kombu.five import int_types, items, reraise, string_t from .encoding import default_encode, safe_repr as _safe_repr try: import ctypes except: ctypes = None # noqa try: from io import UnsupportedOperation FILENO_ERRORS = (AttributeError, ValueError, UnsupportedOperation) except ImportError: # pragma: no cover # Py2 FILENO_ERRORS = (AttributeError, ValueError) # noqa __all__ = ['EqualityDict', 'say', 'uuid', 'kwdict', 'maybe_list', 'fxrange', 'fxrangemax', 'retry_over_time', 'emergency_dump_state', 'cached_property', 'reprkwargs', 'reprcall', 'nested', 'fileno', 'maybe_fileno'] def symbol_by_name(name, aliases={}, imp=None, package=None, sep='.', default=None, **kwargs): """Get symbol by qualified name. The name should be the full dot-separated path to the class:: modulename.ClassName Example:: celery.concurrency.processes.TaskPool ^- class name or using ':' to separate module and symbol:: celery.concurrency.processes:TaskPool If `aliases` is provided, a dict containing short name/long name mappings, the name is looked up in the aliases first. Examples: >>> symbol_by_name('celery.concurrency.processes.TaskPool') >>> symbol_by_name('default', { ... 'default': 'celery.concurrency.processes.TaskPool'}) # Does not try to look up non-string names. 
>>> from celery.concurrency.processes import TaskPool >>> symbol_by_name(TaskPool) is TaskPool True """ if imp is None: imp = importlib.import_module if not isinstance(name, string_t): return name # already a class name = aliases.get(name) or name sep = ':' if ':' in name else sep module_name, _, cls_name = name.rpartition(sep) if not module_name: cls_name, module_name = None, package if package else cls_name try: try: module = imp(module_name, package=package, **kwargs) except ValueError as exc: reraise(ValueError, ValueError("Couldn't import {0!r}: {1}".format(name, exc)), sys.exc_info()[2]) return getattr(module, cls_name) if cls_name else module except (ImportError, AttributeError): if default is None: raise return default def eqhash(o): try: return o.__eqhash__() except AttributeError: return hash(o) class EqualityDict(dict): def __getitem__(self, key): h = eqhash(key) if h not in self: return self.__missing__(key) return dict.__getitem__(self, h) def __setitem__(self, key, value): return dict.__setitem__(self, eqhash(key), value) def __delitem__(self, key): return dict.__delitem__(self, eqhash(key)) def say(m, *fargs, **fkwargs): print(str(m).format(*fargs, **fkwargs), file=sys.stderr) def uuid4(): # Workaround for http://bugs.python.org/issue4607 if ctypes and _uuid_generate_random: # pragma: no cover buffer = ctypes.create_string_buffer(16) _uuid_generate_random(buffer) return UUID(bytes=buffer.raw) return _uuid4() def uuid(): """Generate a unique id, having - hopefully - a very small chance of collision. For now this is provided by :func:`uuid.uuid4`. """ return str(uuid4()) gen_unique_id = uuid if sys.version_info >= (2, 6, 5): def kwdict(kwargs): return kwargs else: def kwdict(kwargs): # pragma: no cover # noqa """Make sure keyword arguments are not in Unicode. This should be fixed in newer Python versions, see: http://bugs.python.org/issue4978. """ return dict((key.encode('utf-8'), value) for key, value in items(kwargs)) def maybe_list(v): if v is None: return [] if hasattr(v, '__iter__'): return v return [v] def fxrange(start=1.0, stop=None, step=1.0, repeatlast=False): cur = start * 1.0 while 1: if not stop or cur <= stop: yield cur cur += step else: if not repeatlast: break yield cur - step def fxrangemax(start=1.0, stop=None, step=1.0, max=100.0): sum_, cur = 0, start * 1.0 while 1: if sum_ >= max: break yield cur if stop: cur = min(cur + step, stop) else: cur += step sum_ += cur def retry_over_time(fun, catch, args=[], kwargs={}, errback=None, max_retries=None, interval_start=2, interval_step=2, interval_max=30, callback=None): """Retry the function over and over until max retries is exceeded. For each retry we sleep for a while before we try again; this interval is increased for every retry until the max seconds is reached. :param fun: The function to try. :param catch: Exceptions to catch, can be either tuple or a single exception class. :keyword args: Positional arguments passed on to the function. :keyword kwargs: Keyword arguments passed on to the function. :keyword errback: Callback for when an exception in ``catch`` is raised. The callback must take two arguments: ``exc`` and ``interval``, where ``exc`` is the exception instance, and ``interval`` is the time in seconds to sleep next. :keyword max_retries: Maximum number of retries before we give up. If this is not set, we will retry forever. :keyword interval_start: How long (in seconds) we start sleeping between retries. :keyword interval_step: By how much the interval is increased for each retry.
:keyword interval_max: Maximum number of seconds to sleep between retries. """ retries = 0 interval_range = fxrange(interval_start, interval_max + interval_start, interval_step, repeatlast=True) for retries in count(): try: return fun(*args, **kwargs) except catch as exc: if max_retries is not None and retries >= max_retries: raise if callback: callback() tts = (errback(exc, interval_range, retries) if errback else next(interval_range)) if tts: for i in range(int(tts / interval_step)): if callback: callback() sleep(interval_step) def emergency_dump_state(state, open_file=open, dump=None): from pprint import pformat from tempfile import mktemp if dump is None: import pickle dump = pickle.dump persist = mktemp() say('EMERGENCY DUMP STATE TO FILE -> {0} <-', persist) fh = open_file(persist, 'w') try: try: dump(state, fh, protocol=0) except Exception as exc: say('Cannot pickle state: {0!r}. Fallback to pformat.', exc) fh.write(default_encode(pformat(state))) finally: fh.flush() fh.close() return persist class cached_property(object): """Property descriptor that caches the return value of the get function. *Examples* .. code-block:: python @cached_property def connection(self): return Connection() @connection.setter # Prepares stored value def connection(self, value): if value is None: raise TypeError('Connection must be a connection') return value @connection.deleter def connection(self, value): # Additional action to do at del(self.attr) if value is not None: print('Connection {0!r} deleted'.format(value)) """ def __init__(self, fget=None, fset=None, fdel=None, doc=None): self.__get = fget self.__set = fset self.__del = fdel self.__doc__ = doc or fget.__doc__ self.__name__ = fget.__name__ self.__module__ = fget.__module__ def __get__(self, obj, type=None): if obj is None: return self try: return obj.__dict__[self.__name__] except KeyError: value = obj.__dict__[self.__name__] = self.__get(obj) return value def __set__(self, obj, value): if obj is None: return self if self.__set is not None: value = self.__set(obj, value) obj.__dict__[self.__name__] = value def __delete__(self, obj): if obj is None: return self try: value = obj.__dict__.pop(self.__name__) except KeyError: pass else: if self.__del is not None: self.__del(obj, value) def setter(self, fset): return self.__class__(self.__get, fset, self.__del) def deleter(self, fdel): return self.__class__(self.__get, self.__set, fdel) def reprkwargs(kwargs, sep=', ', fmt='{0}={1}'): return sep.join(fmt.format(k, _safe_repr(v)) for k, v in items(kwargs)) def reprcall(name, args=(), kwargs={}, sep=', '): return '{0}({1}{2}{3})'.format( name, sep.join(map(_safe_repr, args or ())), (args and kwargs) and sep or '', reprkwargs(kwargs, sep), ) @contextmanager def nested(*managers): # pragma: no cover # flake8: noqa """Combine multiple context managers into a single nested context manager.""" exits = [] vars = [] exc = (None, None, None) try: try: for mgr in managers: exit = mgr.__exit__ enter = mgr.__enter__ vars.append(enter()) exits.append(exit) yield vars except: exc = sys.exc_info() finally: while exits: exit = exits.pop() try: if exit(*exc): exc = (None, None, None) except: exc = sys.exc_info() if exc != (None, None, None): # Don't rely on sys.exc_info() still containing # the right information.
Another exception may # have been raised and caught by an exit method reraise(exc[0], exc[1], exc[2]) finally: del(exc) def shufflecycle(it): it = list(it) # don't modify callers list shuffle = random.shuffle for _ in repeat(None): shuffle(it) yield it[0] def entrypoints(namespace): try: from pkg_resources import iter_entry_points except ImportError: return iter([]) return ((ep, ep.load()) for ep in iter_entry_points(namespace)) class ChannelPromise(object): def __init__(self, contract): self.__contract__ = contract def __call__(self): try: return self.__value__ except AttributeError: value = self.__value__ = self.__contract__() return value def __repr__(self): return '<promise: %r>' % (self(), ) def escape_regex(p, white=''): # what's up with re.escape? that code must be neglected or something return ''.join(c if c.isalnum() or c in white else ('\\000' if c == '\000' else '\\' + c) for c in p) def fileno(f): if isinstance(f, int_types): return f return f.fileno() def maybe_fileno(f): """Get object fileno, or :const:`None` if not defined.""" try: return fileno(f) except FILENO_ERRORS: pass kombu-3.0.7/kombu/utils/amq_manager.py0000644000076500000000000000121712237554371020361 0ustar asksolwheel00000000000000from __future__ import absolute_import def get_manager(client, hostname=None, port=None, userid=None, password=None): import pyrabbit opt = client.transport_options.get def get(name, val, default): return (val if val is not None else opt('manager_%s' % name) or getattr(client, name, None) or default) host = get('hostname', hostname, 'localhost') port = port if port is not None else opt('manager_port', 15672) userid = get('userid', userid, 'guest') password = get('password', password, 'guest') return pyrabbit.Client('%s:%s' % (host, port), userid, password) kombu-3.0.7/kombu/utils/compat.py0000644000076500000000000000302112241157622017360 0ustar asksolwheel00000000000000""" kombu.utils.compat ================== Helps compatibility with older Python versions. """ from __future__ import absolute_import ############## timedelta_seconds() -> delta.total_seconds #################### from datetime import timedelta HAVE_TIMEDELTA_TOTAL_SECONDS = hasattr(timedelta, 'total_seconds') if HAVE_TIMEDELTA_TOTAL_SECONDS: # pragma: no cover def timedelta_seconds(delta): """Convert :class:`datetime.timedelta` to seconds. Doesn't account for negative values. """ return max(delta.total_seconds(), 0) else: # pragma: no cover def timedelta_seconds(delta): # noqa """Convert :class:`datetime.timedelta` to seconds. Doesn't account for negative values. """ if delta.days < 0: return 0 return delta.days * 86400 + delta.seconds + (delta.microseconds / 10e5) ############## socket.error.errno ############################################ def get_errno(exc): """:exc:`socket.error` and :exc:`IOError` first got the ``.errno`` attribute in Py2.7""" try: return exc.errno except AttributeError: try: # e.args = (errno, reason) if isinstance(exc.args, tuple) and len(exc.args) == 2: return exc.args[0] except AttributeError: pass return 0 ############## collections.OrderedDict ####################################### try: from collections import OrderedDict except ImportError: from ordereddict import OrderedDict # noqa kombu-3.0.7/kombu/utils/debug.py0000644000076500000000000000320412243752077017175 0ustar asksolwheel00000000000000""" kombu.utils.debug ================= Debugging support.
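Example (a minimal sketch; the broker URL is illustrative only):

.. code-block:: python

    from kombu import Connection
    from kombu.utils.debug import Logwrapped, setup_logging

    setup_logging()  # attach a StreamHandler to the kombu loggers
    conn = Logwrapped(Connection('amqp://localhost//'),
                      logger='kombu.connection')
    conn.connect()  # the proxied call is logged before being delegated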
""" from __future__ import absolute_import import logging from functools import wraps from kombu.five import items from kombu.log import get_logger __all__ = ['setup_logging', 'Logwrapped'] def setup_logging(loglevel=logging.DEBUG, loggers=['kombu.connection', 'kombu.channel']): for logger in loggers: l = get_logger(logger) l.addHandler(logging.StreamHandler()) l.setLevel(loglevel) class Logwrapped(object): __ignore = ('__enter__', '__exit__') def __init__(self, instance, logger=None, ident=None): self.instance = instance self.logger = get_logger(logger) self.ident = ident def __getattr__(self, key): meth = getattr(self.instance, key) if not callable(meth) or key in self.__ignore: return meth @wraps(meth) def __wrapped(*args, **kwargs): info = '' if self.ident: info += self.ident.format(self.instance) info += '{0.__name__}('.format(meth) if args: info += ', '.join(map(repr, args)) if kwargs: if args: info += ', ' info += ', '.join('{k}={v!r}'.format(k=key, v=value) for key, value in items(kwargs)) info += ')' self.logger.debug(info) return meth(*args, **kwargs) return __wrapped def __repr__(self): return repr(self.instance) def __dir__(self): return dir(self.instance) kombu-3.0.7/kombu/utils/encoding.py0000644000076500000000000000601512241157704017672 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- """ kombu.utils.encoding ~~~~~~~~~~~~~~~~~~~~~ Utilities to encode text, and to safely emit text from running applications without crashing with the infamous :exc:`UnicodeDecodeError` exception. """ from __future__ import absolute_import import sys import traceback from kombu.five import text_t is_py3k = sys.version_info >= (3, 0) #: safe_str takes encoding from this file by default. #: :func:`set_default_encoding_file` can used to set the #: default output file. 
default_encoding_file = None def set_default_encoding_file(file): global default_encoding_file default_encoding_file = file def get_default_encoding_file(): return default_encoding_file if sys.platform.startswith('java'): # pragma: no cover def default_encoding(file=None): return 'utf-8' else: def default_encoding(file=None): # noqa file = file or get_default_encoding_file() return getattr(file, 'encoding', None) or sys.getfilesystemencoding() if is_py3k: # pragma: no cover def str_to_bytes(s): if isinstance(s, str): return s.encode() return s def bytes_to_str(s): if isinstance(s, bytes): return s.decode() return s def from_utf8(s, *args, **kwargs): return s def ensure_bytes(s): if not isinstance(s, bytes): return str_to_bytes(s) return s def default_encode(obj): return obj str_t = str else: def str_to_bytes(s): # noqa if isinstance(s, unicode): return s.encode() return s def bytes_to_str(s): # noqa return s def from_utf8(s, *args, **kwargs): # noqa return s.encode('utf-8', *args, **kwargs) def default_encode(obj, file=None): # noqa return unicode(obj, default_encoding(file)) str_t = unicode ensure_bytes = str_to_bytes try: bytes_t = bytes except NameError: # pragma: no cover bytes_t = str # noqa def safe_str(s, errors='replace'): s = bytes_to_str(s) if not isinstance(s, (text_t, bytes)): return safe_repr(s, errors) return _safe_str(s, errors) if is_py3k: def _safe_str(s, errors='replace', file=None): if isinstance(s, str): return s try: return str(s) except Exception as exc: return '<Unrepresentable {0!r}: {1!r} {2!r}>'.format( type(s), exc, '\n'.join(traceback.format_stack())) else: def _safe_str(s, errors='replace', file=None): # noqa encoding = default_encoding(file) try: if isinstance(s, unicode): return s.encode(encoding, errors) return unicode(s, encoding, errors) except Exception as exc: return '<Unrepresentable {0!r}: {1!r} {2!r}>'.format( type(s), exc, '\n'.join(traceback.format_stack())) def safe_repr(o, errors='replace'): try: return repr(o) except Exception: return _safe_str(o, errors) kombu-3.0.7/kombu/utils/eventio.py0000644000076500000000000001663312243671543017564 0ustar asksolwheel00000000000000""" kombu.utils.eventio =================== Evented IO support for multiple platforms.
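A rough usage sketch (``sock`` is assumed to be any object with a
``fileno()`` method):

.. code-block:: python

    from kombu.utils.eventio import poll, READ, WRITE

    poller = poll()  # picks epoll, kqueue or select() for this platform
    poller.register(sock, READ | WRITE)
    for fd, events in poller.poll(1.0) or []:
        if events & READ:
            pass  # fd is ready for reading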
""" from __future__ import absolute_import import errno import select as __select__ import socket from kombu.five import int_types _selectf = __select__.select _selecterr = __select__.error epoll = getattr(__select__, 'epoll', None) kqueue = getattr(__select__, 'kqueue', None) kevent = getattr(__select__, 'kevent', None) KQ_EV_ADD = getattr(__select__, 'KQ_EV_ADD', 1) KQ_EV_DELETE = getattr(__select__, 'KQ_EV_DELETE', 2) KQ_EV_ENABLE = getattr(__select__, 'KQ_EV_ENABLE', 4) KQ_EV_CLEAR = getattr(__select__, 'KQ_EV_CLEAR', 32) KQ_EV_ERROR = getattr(__select__, 'KQ_EV_ERROR', 16384) KQ_EV_EOF = getattr(__select__, 'KQ_EV_EOF', 32768) KQ_FILTER_READ = getattr(__select__, 'KQ_FILTER_READ', -1) KQ_FILTER_WRITE = getattr(__select__, 'KQ_FILTER_WRITE', -2) KQ_FILTER_AIO = getattr(__select__, 'KQ_FILTER_AIO', -3) KQ_FILTER_VNODE = getattr(__select__, 'KQ_FILTER_VNODE', -4) KQ_FILTER_PROC = getattr(__select__, 'KQ_FILTER_PROC', -5) KQ_FILTER_SIGNAL = getattr(__select__, 'KQ_FILTER_SIGNAL', -6) KQ_FILTER_TIMER = getattr(__select__, 'KQ_FILTER_TIMER', -7) KQ_NOTE_LOWAT = getattr(__select__, 'KQ_NOTE_LOWAT', 1) KQ_NOTE_DELETE = getattr(__select__, 'KQ_NOTE_DELETE', 1) KQ_NOTE_WRITE = getattr(__select__, 'KQ_NOTE_WRITE', 2) KQ_NOTE_EXTEND = getattr(__select__, 'KQ_NOTE_EXTEND', 4) KQ_NOTE_ATTRIB = getattr(__select__, 'KQ_NOTE_ATTRIB', 8) KQ_NOTE_LINK = getattr(__select__, 'KQ_NOTE_LINK', 16) KQ_NOTE_RENAME = getattr(__select__, 'KQ_NOTE_RENAME', 32) KQ_NOTE_REVOKE = getattr(__select__, 'kQ_NOTE_REVOKE', 64) from kombu.syn import detect_environment from . import fileno from .compat import get_errno __all__ = ['poll'] READ = POLL_READ = 0x001 WRITE = POLL_WRITE = 0x004 ERR = POLL_ERR = 0x008 | 0x010 try: SELECT_BAD_FD = set((errno.EBADF, errno.WSAENOTSOCK)) except AttributeError: SELECT_BAD_FD = set((errno.EBADF,)) class Poller(object): def poll(self, timeout): try: return self._poll(timeout) except Exception as exc: if get_errno(exc) != errno.EINTR: raise class _epoll(Poller): def __init__(self): self._epoll = epoll() def register(self, fd, events): try: self._epoll.register(fd, events) except Exception as exc: if get_errno(exc) != errno.EEXIST: raise def unregister(self, fd): try: self._epoll.unregister(fd) except (socket.error, ValueError, KeyError): pass except (IOError, OSError) as exc: if get_errno(exc) != errno.ENOENT: raise def _poll(self, timeout): return self._epoll.poll(timeout if timeout is not None else -1) def close(self): self._epoll.close() class _kqueue(Poller): w_fflags = (KQ_NOTE_WRITE | KQ_NOTE_EXTEND | KQ_NOTE_ATTRIB | KQ_NOTE_DELETE) def __init__(self): self._kqueue = kqueue() self._active = {} self.on_file_change = None self._kcontrol = self._kqueue.control def register(self, fd, events): self._control(fd, events, KQ_EV_ADD) self._active[fd] = events def unregister(self, fd): events = self._active.pop(fd, None) if events: try: self._control(fd, events, KQ_EV_DELETE) except socket.error: pass def watch_file(self, fd): ev = kevent(fd, filter=KQ_FILTER_VNODE, flags=KQ_EV_ADD | KQ_EV_ENABLE | KQ_EV_CLEAR, fflags=self.w_fflags) self._kcontrol([ev], 0) def unwatch_file(self, fd): ev = kevent(fd, filter=KQ_FILTER_VNODE, flags=KQ_EV_DELETE, fflags=self.w_fflags) self._kcontrol([ev], 0) def _control(self, fd, events, flags): if not events: return kevents = [] if events & WRITE: kevents.append(kevent(fd, filter=KQ_FILTER_WRITE, flags=flags)) if not kevents or events & READ: kevents.append( kevent(fd, filter=KQ_FILTER_READ, flags=flags), ) control = self._kcontrol for e in kevents: try: 
control([e], 0) except ValueError: pass def _poll(self, timeout): kevents = self._kcontrol(None, 1000, timeout) events, file_changes = {}, [] for k in kevents: fd = k.ident if k.filter == KQ_FILTER_READ: events[fd] = events.get(fd, 0) | READ elif k.filter == KQ_FILTER_WRITE: if k.flags & KQ_EV_EOF: events[fd] = ERR else: events[fd] = events.get(fd, 0) | WRITE elif k.filter == KQ_EV_ERROR: events[fd] = events.get(fd, 0) | ERR elif k.filter == KQ_FILTER_VNODE: if k.fflags & KQ_NOTE_DELETE: self.unregister(fd) file_changes.append(k) if file_changes: self.on_file_change(file_changes) return list(events.items()) def close(self): self._kqueue.close() class _select(Poller): def __init__(self): self._all = (self._rfd, self._wfd, self._efd) = set(), set(), set() def register(self, fd, events): fd = fileno(fd) if events & ERR: self._efd.add(fd) if events & WRITE: self._wfd.add(fd) if events & READ: self._rfd.add(fd) def _remove_bad(self): for fd in self._rfd | self._wfd | self._efd: try: _selectf([fd], [], [], 0) except (_selecterr, socket.error) as exc: if get_errno(exc) in SELECT_BAD_FD: self.unregister(fd) def unregister(self, fd): fd = fileno(fd) self._rfd.discard(fd) self._wfd.discard(fd) self._efd.discard(fd) def _poll(self, timeout): try: read, write, error = _selectf( self._rfd, self._wfd, self._efd, timeout, ) except (_selecterr, socket.error) as exc: if get_errno(exc) == errno.EINTR: return elif get_errno(exc) in SELECT_BAD_FD: return self._remove_bad() raise events = {} for fd in read: if not isinstance(fd, int_types): fd = fd.fileno() events[fd] = events.get(fd, 0) | READ for fd in write: if not isinstance(fd, int_types): fd = fd.fileno() events[fd] = events.get(fd, 0) | WRITE for fd in error: if not isinstance(fd, int_types): fd = fd.fileno() events[fd] = events.get(fd, 0) | ERR return list(events.items()) def close(self): self._rfd.clear() self._wfd.clear() self._efd.clear() def _get_poller(): if detect_environment() != 'default': # greenlet return _select elif epoll: # Py2.6+ Linux return _epoll elif kqueue: # Py2.6+ on BSD / Darwin return _select # was: _kqueue else: return _select def poll(*args, **kwargs): return _get_poller()(*args, **kwargs) kombu-3.0.7/kombu/utils/functional.py0000644000076500000000000000402512237554371020253 0ustar asksolwheel00000000000000from __future__ import absolute_import import sys from collections import Iterable, Mapping from kombu.five import string_t __all__ = ['lazy', 'maybe_evaluate', 'is_list', 'maybe_list'] class lazy(object): """Holds lazy evaluation. Evaluated when called or if the :meth:`evaluate` method is called. The function is re-evaluated on every call. Overloaded operations that will evaluate the promise: :meth:`__str__`, :meth:`__repr__`, :meth:`__cmp__`. 
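A small example:

    >>> from kombu.utils.functional import lazy
    >>> double = lazy(lambda x: x * 2, 21)
    >>> double()
    42
    >>> double.evaluate()
    42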
""" def __init__(self, fun, *args, **kwargs): self._fun = fun self._args = args self._kwargs = kwargs def __call__(self): return self.evaluate() def evaluate(self): return self._fun(*self._args, **self._kwargs) def __str__(self): return str(self()) def __repr__(self): return repr(self()) def __eq__(self, rhs): return self() == rhs def __ne__(self, rhs): return self() != rhs def __deepcopy__(self, memo): memo[id(self)] = self return self def __reduce__(self): return (self.__class__, (self._fun, ), {'_args': self._args, '_kwargs': self._kwargs}) if sys.version_info[0] < 3: def __cmp__(self, rhs): if isinstance(rhs, self.__class__): return -cmp(rhs, self()) return cmp(self(), rhs) def maybe_evaluate(value): """Evaluates if the value is a :class:`lazy` instance.""" if isinstance(value, lazy): return value.evaluate() return value def is_list(l, scalars=(Mapping, string_t), iters=(Iterable, )): """Return true if the object is iterable (but not if object is a mapping or string).""" return isinstance(l, iters) and not isinstance(l, scalars or ()) def maybe_list(l, scalars=(Mapping, string_t)): """Return list of one element if ``l`` is a scalar.""" return l if l is None or is_list(l, scalars) else [l] # Compat names (before kombu 3.0) promise = lazy maybe_promise = maybe_evaluate kombu-3.0.7/kombu/utils/limits.py0000644000076500000000000000347012237554371017415 0ustar asksolwheel00000000000000""" kombu.utils.limits ================== Token bucket implementation for rate limiting. """ from __future__ import absolute_import from kombu.five import monotonic __all__ = ['TokenBucket'] class TokenBucket(object): """Token Bucket Algorithm. See http://en.wikipedia.org/wiki/Token_Bucket Most of this code was stolen from an entry in the ASPN Python Cookbook: http://code.activestate.com/recipes/511490/ .. admonition:: Thread safety This implementation may not be thread safe. """ #: The rate in tokens/second that the bucket will be refilled fill_rate = None #: Maximum number of tokensin the bucket. capacity = 1 #: Timestamp of the last time a token was taken out of the bucket. timestamp = None def __init__(self, fill_rate, capacity=1): self.capacity = float(capacity) self._tokens = capacity self.fill_rate = float(fill_rate) self.timestamp = monotonic() def can_consume(self, tokens=1): """Return :const:`True` if the number of tokens can be consumed from the bucket.""" if tokens <= self._get_tokens(): self._tokens -= tokens return True return False def expected_time(self, tokens=1): """Return the time (in seconds) when a new token is expected to be available. This will also consume a token from the bucket. 
""" _tokens = self._get_tokens() tokens = max(tokens, _tokens) return (tokens - _tokens) / self.fill_rate def _get_tokens(self): if self._tokens < self.capacity: now = monotonic() delta = self.fill_rate * (now - self.timestamp) self._tokens = min(self.capacity, self._tokens + delta) self.timestamp = now return self._tokens kombu-3.0.7/kombu/utils/text.py0000644000076500000000000000076412234207745017100 0ustar asksolwheel00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import from difflib import SequenceMatcher def fmatch_iter(needle, haystack, min_ratio=0.6): for key in haystack: ratio = SequenceMatcher(None, needle, key).ratio() if ratio >= min_ratio: yield ratio, key def fmatch_best(needle, haystack, min_ratio=0.6): try: return sorted( fmatch_iter(needle, haystack, min_ratio), reverse=True, )[0][1] except IndexError: pass kombu-3.0.7/kombu/utils/url.py0000644000076500000000000000262412237554371016716 0ustar asksolwheel00000000000000from __future__ import absolute_import try: from urllib.parse import unquote, urlparse, parse_qsl except ImportError: from urllib import unquote # noqa from urlparse import urlparse, parse_qsl # noqa from . import kwdict def _parse_url(url): scheme = urlparse(url).scheme schemeless = url[len(scheme) + 3:] # parse with HTTP URL semantics parts = urlparse('http://' + schemeless) # The first pymongo.Connection() argument (host) can be # a mongodb connection URI. If this is the case, don't # use port but let pymongo get the port(s) from the URI instead. # This enables the use of replica sets and sharding. # See pymongo.Connection() for more info. port = scheme != 'mongodb' and parts.port or None hostname = schemeless if scheme == 'mongodb' else parts.hostname path = parts.path or '' path = path[1:] if path and path[0] == '/' else path return (scheme, unquote(hostname or '') or None, port, unquote(parts.username or '') or None, unquote(parts.password or '') or None, unquote(path or '') or None, kwdict(dict(parse_qsl(parts.query)))) def parse_url(url): scheme, host, port, user, password, path, query = _parse_url(url) return dict(transport=scheme, hostname=host, port=port, userid=user, password=password, virtual_host=path, **query) kombu-3.0.7/kombu.egg-info/0000755000076500000000000000000012247127370016064 5ustar asksolwheel00000000000000kombu-3.0.7/kombu.egg-info/dependency_links.txt0000644000076500000000000000000112247127365022136 0ustar asksolwheel00000000000000 kombu-3.0.7/kombu.egg-info/not-zip-safe0000644000076500000000000000000112247127365020316 0ustar asksolwheel00000000000000 kombu-3.0.7/kombu.egg-info/PKG-INFO0000644000076500000000000003503212247127365017170 0ustar asksolwheel00000000000000Metadata-Version: 1.1 Name: kombu Version: 3.0.7 Summary: Messaging library for Python Home-page: http://kombu.readthedocs.org Author: Ask Solem Author-email: ask@celeryproject.org License: UNKNOWN Description: .. _kombu-index: ======================================== kombu - Messaging library for Python ======================================== :Version: 3.0.7 `Kombu` is a messaging library for Python. The aim of `Kombu` is to make messaging in Python as easy as possible by providing an idiomatic high-level interface for the AMQ protocol, and also provide proven and tested solutions to common messaging problems. `AMQP`_ is the Advanced Message Queuing Protocol, an open standard protocol for message orientation, queuing, routing, reliability and security, for which the `RabbitMQ`_ messaging server is the most popular implementation. 
Features ======== * Allows application authors to support several message server solutions by using pluggable transports. * AMQP transport using the `py-amqp`_ or `librabbitmq`_ client libraries. * High performance AMQP transport written in C - when using `librabbitmq`_ This is automatically enabled if librabbitmq is installed:: $ pip install librabbitmq * Virtual transports make it really easy to add support for non-AMQP transports. There is already built-in support for `Redis`_, `Beanstalk`_, `Amazon SQS`_, `CouchDB`_, `MongoDB`_, ZeroMQ, `ZooKeeper`_, `SoftLayer MQ`_ and `Pyro`_. * You can also use the SQLAlchemy and Django ORM transports to use a database as the broker. * In-memory transport for unit testing. * Supports automatic encoding, serialization and compression of message payloads. * Consistent exception handling across transports. * The ability to ensure that an operation is performed by gracefully handling connection and channel errors. * Several annoyances with `amqplib`_ have been fixed, like supporting timeouts and the ability to wait for events on more than one channel. * Projects already using `carrot`_ can easily be ported by using a compatibility layer. For an introduction to AMQP you should read the article `Rabbits and warrens`_, and the `Wikipedia article about AMQP`_. .. _`RabbitMQ`: http://www.rabbitmq.com/ .. _`AMQP`: http://amqp.org .. _`py-amqp`: http://pypi.python.org/pypi/amqp/ .. _`Redis`: http://code.google.com/p/redis/ .. _`Amazon SQS`: http://aws.amazon.com/sqs/ .. _`MongoDB`: http://www.mongodb.org/ .. _`CouchDB`: http://couchdb.apache.org/ .. _`Zookeeper`: https://zookeeper.apache.org/ .. _`Beanstalk`: http://kr.github.com/beanstalkd/ .. _`Rabbits and warrens`: http://blogs.digitar.com/jjww/2009/01/rabbits-and-warrens/ .. _`amqplib`: http://barryp.org/software/py-amqplib/ .. _`Wikipedia article about AMQP`: http://en.wikipedia.org/wiki/AMQP .. _`carrot`: http://pypi.python.org/pypi/carrot/ .. _`librabbitmq`: http://pypi.python.org/pypi/librabbitmq .. _`Pyro`: http://pythonhosting.org/Pyro .. _`SoftLayer MQ`: http://www.softlayer.com/services/additional/message-queue ..
_transport-comparison: Transport Comparison ====================
+---------------+----------+------------+------------+---------------+
| **Client**    | **Type** | **Direct** | **Topic**  | **Fanout**    |
+---------------+----------+------------+------------+---------------+
| *amqp*        | Native   | Yes        | Yes        | Yes           |
+---------------+----------+------------+------------+---------------+
| *redis*       | Virtual  | Yes        | Yes        | Yes (PUB/SUB) |
+---------------+----------+------------+------------+---------------+
| *mongodb*     | Virtual  | Yes        | Yes        | Yes           |
+---------------+----------+------------+------------+---------------+
| *beanstalk*   | Virtual  | Yes        | Yes [#f1]_ | No            |
+---------------+----------+------------+------------+---------------+
| *SQS*         | Virtual  | Yes        | Yes [#f1]_ | Yes [#f2]_    |
+---------------+----------+------------+------------+---------------+
| *couchdb*     | Virtual  | Yes        | Yes [#f1]_ | No            |
+---------------+----------+------------+------------+---------------+
| *zookeeper*   | Virtual  | Yes        | Yes [#f1]_ | No            |
+---------------+----------+------------+------------+---------------+
| *in-memory*   | Virtual  | Yes        | Yes [#f1]_ | No            |
+---------------+----------+------------+------------+---------------+
| *django*      | Virtual  | Yes        | Yes [#f1]_ | No            |
+---------------+----------+------------+------------+---------------+
| *sqlalchemy*  | Virtual  | Yes        | Yes [#f1]_ | No            |
+---------------+----------+------------+------------+---------------+
| *SLMQ*        | Virtual  | Yes        | Yes [#f1]_ | No            |
+---------------+----------+------------+------------+---------------+
.. [#f1] Declarations only kept in memory, so exchanges/queues must be declared by all clients that need them. .. [#f2] Fanout supported via storing routing tables in SimpleDB. Disabled by default, but can be enabled by using the ``supports_fanout`` transport option. Documentation ------------- Kombu is using Sphinx, and the latest documentation can be found here: http://kombu.readthedocs.org/ Quick overview -------------- :: from kombu import Connection, Exchange, Queue media_exchange = Exchange('media', 'direct', durable=True) video_queue = Queue('video', exchange=media_exchange, routing_key='video') def process_media(body, message): print body message.ack() # connections with Connection('amqp://guest:guest@localhost//') as conn: # produce producer = conn.Producer(serializer='json') producer.publish({'name': '/tmp/lolcat1.avi', 'size': 1301013}, exchange=media_exchange, routing_key='video', declare=[video_queue]) # the declare above, makes sure the video queue is declared # so that the messages can be delivered. # It's a best practice in Kombu to have both publishers and # consumers declare the queue. You can also declare the # queue manually using: # video_queue(conn).declare() # consume with conn.Consumer(video_queue, callbacks=[process_media]) as consumer: # Process messages and handle events on all channels while True: conn.drain_events() # Consume from several queues on the same channel: video_queue = Queue('video', exchange=media_exchange, routing_key='video') image_queue = Queue('image', exchange=media_exchange, routing_key='image') with connection.Consumer([video_queue, image_queue], callbacks=[process_media]) as consumer: while True: connection.drain_events() Or handle channels manually:: with connection.channel() as channel: producer = Producer(channel, ...)
consumer = Consumer(channel) All objects can be used outside of with statements too, just remember to close the objects after use:: from kombu import Connection, Consumer, Producer connection = Connection() # ... connection.release() consumer = Consumer(channel_or_connection, ...) consumer.register_callback(my_callback) consumer.consume() # .... consumer.cancel() `Exchange` and `Queue` are simply declarations that can be pickled and used in configuration files etc. They also support operations, but to do so they need to be bound to a channel. Binding exchanges and queues to a connection will make it use that connection's default channel. :: >>> exchange = Exchange('tasks', 'direct') >>> connection = Connection() >>> bound_exchange = exchange(connection) >>> bound_exchange.delete() # the original exchange is not affected, and stays unbound. >>> exchange.delete() raises NotBoundError: Can't call delete on Exchange not bound to a channel. Installation ============ You can install `Kombu` either via the Python Package Index (PyPI) or from source. To install using `pip`,:: $ pip install kombu To install using `easy_install`,:: $ easy_install kombu If you have downloaded a source tarball you can install it by doing the following,:: $ python setup.py build # python setup.py install # as root Terminology =========== There are some concepts you should be familiar with before starting: * Producers Producers send messages to an exchange. * Exchanges Messages are sent to exchanges. Exchanges are named and can be configured to use one of several routing algorithms. The exchange routes the messages to consumers by matching the routing key in the message with the routing key the consumer provides when binding to the exchange. * Consumers Consumers declare a queue, bind it to an exchange and receive messages from it. * Queues Queues receive messages sent to exchanges. The queues are declared by consumers. * Routing keys Every message has a routing key. The interpretation of the routing key depends on the exchange type. There are four default exchange types defined by the AMQP standard, and vendors can define custom types (see your vendor's manual for details). These are the default exchange types defined by AMQP/0.8: * Direct exchange Matches if the routing key property of the message and the `routing_key` attribute of the consumer are identical. * Fan-out exchange Always matches, even if the binding does not have a routing key. * Topic exchange Matches the routing key property of the message by a primitive pattern matching scheme. The message routing key then consists of words separated by dots (`"."`, like domain names), and two special characters are available: star (`"*"`) and hash (`"#"`). The star matches any word, and the hash matches zero or more words. For example `"*.stock.#"` matches the routing keys `"usd.stock"` and `"eur.stock.db"` but not `"stock.nasdaq"`. Getting Help ============ Mailing list ------------ Join the `carrot-users`_ mailing list. .. _`carrot-users`: http://groups.google.com/group/carrot-users/ Bug tracker =========== If you have any suggestions, bug reports or annoyances please report them to our issue tracker at http://github.com/celery/kombu/issues/ Contributing ============ Development of `Kombu` happens at Github: http://github.com/celery/kombu You are highly encouraged to participate in the development. If you don't like Github (for some reason) you're welcome to send regular patches. License ======= This software is licensed under the `New BSD License`.
See the `LICENSE` file in the top distribution directory for the full license text. .. image:: https://d2weczhvl823v0.cloudfront.net/celery/kombu/trend.png :alt: Bitdeli badge :target: https://bitdeli.com/free Platform: any Classifier: Development Status :: 5 - Production/Stable Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Classifier: Programming Language :: Python :: Implementation :: Jython Classifier: Intended Audience :: Developers Classifier: Topic :: Communications Classifier: Topic :: System :: Distributed Computing Classifier: Topic :: System :: Networking Classifier: Topic :: Software Development :: Libraries :: Python Modules kombu-3.0.7/kombu.egg-info/requires.txt0000644000076500000000000000054212247127365020471 0ustar asksolwheel00000000000000anyjson>=0.3.3 amqp>=1.3.3,<2.0 [sqs] boto>=2.13.3 [pyro] pyro4 [redis] redis>=2.8.0 [mongodb] pymongo>=2.6.2 [zeromq] pyzmq>=13.1.0 [zookeeper] kazoo>=1.3.1 [couchdb] couchdb [librabbitmq] librabbitmq>=1.0.1 [msgpack] msgpack-python>=0.3.0 [sqlalchemy] sqlalchemy [yaml] PyYAML>=3.10 [beanstalk] beanstalkc [slmq] softlayer_messaging>=1.0.3kombu-3.0.7/kombu.egg-info/SOURCES.txt0000644000076500000000000001721712247127365017764 0ustar asksolwheel00000000000000AUTHORS Changelog FAQ INSTALL LICENSE MANIFEST.in README.rst THANKS TODO setup.cfg setup.py docs/Makefile docs/changelog.rst docs/conf.py docs/faq.rst docs/index.rst docs/introduction.rst docs/.static/.keep docs/.templates/sidebarintro.html docs/.templates/sidebarlogo.html docs/_ext/applyxrefs.py docs/_ext/literals_to_xrefs.py docs/_theme/celery/theme.conf docs/_theme/celery/static/celery.css_t docs/images/kombu.jpg docs/images/kombusmall.jpg docs/reference/index.rst docs/reference/kombu.abstract.rst docs/reference/kombu.async.hub.rst docs/reference/kombu.async.rst docs/reference/kombu.async.semaphore.rst docs/reference/kombu.async.timer.rst docs/reference/kombu.clocks.rst docs/reference/kombu.common.rst docs/reference/kombu.compat.rst docs/reference/kombu.compression.rst docs/reference/kombu.connection.rst docs/reference/kombu.exceptions.rst docs/reference/kombu.five.rst docs/reference/kombu.log.rst docs/reference/kombu.message.rst docs/reference/kombu.mixins.rst docs/reference/kombu.pidbox.rst docs/reference/kombu.pools.rst docs/reference/kombu.rst docs/reference/kombu.serialization.rst docs/reference/kombu.simple.rst docs/reference/kombu.syn.rst docs/reference/kombu.transport.SLMQ.rst docs/reference/kombu.transport.SQS.rst docs/reference/kombu.transport.amqplib.rst docs/reference/kombu.transport.base.rst docs/reference/kombu.transport.beanstalk.rst docs/reference/kombu.transport.couchdb.rst docs/reference/kombu.transport.django.management.commands.clean_kombu_messages.rst docs/reference/kombu.transport.django.managers.rst docs/reference/kombu.transport.django.models.rst docs/reference/kombu.transport.django.rst docs/reference/kombu.transport.filesystem.rst docs/reference/kombu.transport.librabbitmq.rst docs/reference/kombu.transport.memory.rst docs/reference/kombu.transport.mongodb.rst 
docs/reference/kombu.transport.pyamqp.rst docs/reference/kombu.transport.pyro.rst docs/reference/kombu.transport.redis.rst docs/reference/kombu.transport.rst docs/reference/kombu.transport.sqlalchemy.models.rst docs/reference/kombu.transport.sqlalchemy.rst docs/reference/kombu.transport.virtual.exchange.rst docs/reference/kombu.transport.virtual.rst docs/reference/kombu.transport.virtual.scheduling.rst docs/reference/kombu.transport.zmq.rst docs/reference/kombu.transport.zookeeper.rst docs/reference/kombu.utils.amq_manager.rst docs/reference/kombu.utils.compat.rst docs/reference/kombu.utils.debug.rst docs/reference/kombu.utils.encoding.rst docs/reference/kombu.utils.eventio.rst docs/reference/kombu.utils.functional.rst docs/reference/kombu.utils.limits.rst docs/reference/kombu.utils.rst docs/reference/kombu.utils.text.rst docs/reference/kombu.utils.url.rst docs/userguide/connections.rst docs/userguide/consumers.rst docs/userguide/examples.rst docs/userguide/index.rst docs/userguide/introduction.rst docs/userguide/pools.rst docs/userguide/producers.rst docs/userguide/serialization.rst docs/userguide/simple.rst examples/complete_receive.py examples/complete_send.py examples/hello_consumer.py examples/hello_publisher.py examples/simple_eventlet_receive.py examples/simple_eventlet_send.py examples/simple_receive.py examples/simple_send.py examples/simple_task_queue/__init__.py examples/simple_task_queue/client.py examples/simple_task_queue/queues.py examples/simple_task_queue/tasks.py examples/simple_task_queue/worker.py extra/doc2ghpages extra/release/bump_version.py extra/release/doc4allmods extra/release/flakeplus.py extra/release/jython-run-tests extra/release/removepyc.sh extra/release/verify-reference-index.sh funtests/__init__.py funtests/setup.cfg funtests/setup.py funtests/transport.py funtests/tests/__init__.py funtests/tests/test_SLMQ.py funtests/tests/test_SQS.py funtests/tests/test_amqp.py funtests/tests/test_amqplib.py funtests/tests/test_beanstalk.py funtests/tests/test_couchdb.py funtests/tests/test_django.py funtests/tests/test_librabbitmq.py funtests/tests/test_mongodb.py funtests/tests/test_pyamqp.py funtests/tests/test_redis.py funtests/tests/test_sqla.py funtests/tests/test_zookeeper.py kombu/__init__.py kombu/abstract.py kombu/clocks.py kombu/common.py kombu/compat.py kombu/compression.py kombu/connection.py kombu/entity.py kombu/exceptions.py kombu/five.py kombu/log.py kombu/message.py kombu/messaging.py kombu/mixins.py kombu/pidbox.py kombu/pools.py kombu/serialization.py kombu/simple.py kombu/syn.py kombu.egg-info/PKG-INFO kombu.egg-info/SOURCES.txt kombu.egg-info/dependency_links.txt kombu.egg-info/not-zip-safe kombu.egg-info/requires.txt kombu.egg-info/top_level.txt kombu/async/__init__.py kombu/async/hub.py kombu/async/semaphore.py kombu/async/timer.py kombu/tests/__init__.py kombu/tests/case.py kombu/tests/mocks.py kombu/tests/test_clocks.py kombu/tests/test_common.py kombu/tests/test_compat.py kombu/tests/test_compression.py kombu/tests/test_connection.py kombu/tests/test_entities.py kombu/tests/test_log.py kombu/tests/test_messaging.py kombu/tests/test_mixins.py kombu/tests/test_pidbox.py kombu/tests/test_pools.py kombu/tests/test_serialization.py kombu/tests/test_simple.py kombu/tests/test_syn.py kombu/tests/async/__init__.py kombu/tests/async/test_hub.py kombu/tests/transport/__init__.py kombu/tests/transport/test_amqplib.py kombu/tests/transport/test_base.py kombu/tests/transport/test_filesystem.py kombu/tests/transport/test_librabbitmq.py 
kombu/tests/transport/test_memory.py kombu/tests/transport/test_mongodb.py kombu/tests/transport/test_pyamqp.py kombu/tests/transport/test_redis.py kombu/tests/transport/test_sqlalchemy.py kombu/tests/transport/test_transport.py kombu/tests/transport/virtual/__init__.py kombu/tests/transport/virtual/test_base.py kombu/tests/transport/virtual/test_exchange.py kombu/tests/transport/virtual/test_scheduling.py kombu/tests/utils/__init__.py kombu/tests/utils/test_amq_manager.py kombu/tests/utils/test_debug.py kombu/tests/utils/test_encoding.py kombu/tests/utils/test_functional.py kombu/tests/utils/test_utils.py kombu/transport/SLMQ.py kombu/transport/SQS.py kombu/transport/__init__.py kombu/transport/amqplib.py kombu/transport/base.py kombu/transport/beanstalk.py kombu/transport/couchdb.py kombu/transport/filesystem.py kombu/transport/librabbitmq.py kombu/transport/memory.py kombu/transport/mongodb.py kombu/transport/pyamqp.py kombu/transport/pyro.py kombu/transport/redis.py kombu/transport/zmq.py kombu/transport/zookeeper.py kombu/transport/django/__init__.py kombu/transport/django/managers.py kombu/transport/django/models.py kombu/transport/django/management/__init__.py kombu/transport/django/management/commands/__init__.py kombu/transport/django/management/commands/clean_kombu_messages.py kombu/transport/django/migrations/0001_initial.py kombu/transport/django/migrations/__init__.py kombu/transport/sqlalchemy/__init__.py kombu/transport/sqlalchemy/models.py kombu/transport/virtual/__init__.py kombu/transport/virtual/exchange.py kombu/transport/virtual/scheduling.py kombu/utils/__init__.py kombu/utils/amq_manager.py kombu/utils/compat.py kombu/utils/debug.py kombu/utils/encoding.py kombu/utils/eventio.py kombu/utils/functional.py kombu/utils/limits.py kombu/utils/text.py kombu/utils/url.py requirements/default.txt requirements/dev.txt requirements/docs.txt requirements/funtest.txt requirements/pkgutils.txt requirements/py26.txt requirements/test-ci.txt requirements/test.txt requirements/test3.txt requirements/extras/beanstalk.txt requirements/extras/couchdb.txt requirements/extras/kazoo.txt requirements/extras/librabbitmq.txt requirements/extras/mongodb.txt requirements/extras/msgpack.txt requirements/extras/pyro.txt requirements/extras/redis.txt requirements/extras/slmq.txt requirements/extras/sqlalchemy.txt requirements/extras/sqs.txt requirements/extras/yaml.txt requirements/extras/zeromq.txt requirements/extras/zookeeper.txtkombu-3.0.7/kombu.egg-info/top_level.txt0000644000076500000000000000000612247127365020616 0ustar asksolwheel00000000000000kombu kombu-3.0.7/LICENSE0000644000076500000000000000305112237554371014265 0ustar asksolwheel00000000000000Copyright (c) 2012-2013 GoPivotal, Inc. All rights reserved. Copyright (c) 2009-2012, Ask Solem & contributors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Ask Solem nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL Ask Solem OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. kombu-3.0.7/MANIFEST.in0000644000076500000000000000056112064115765015017 0ustar asksolwheel00000000000000include AUTHORS include Changelog include FAQ include INSTALL include LICENSE include MANIFEST.in include README.rst include README include THANKS include TODO include setup.cfg recursive-include extra * recursive-include docs * recursive-include kombu *.py recursive-include requirements *.txt recursive-include funtests *.py setup.cfg recursive-include examples *.py kombu-3.0.7/PKG-INFO0000644000076500000000000003503212247127370014355 0ustar asksolwheel00000000000000Metadata-Version: 1.1 Name: kombu Version: 3.0.7 Summary: Messaging library for Python Home-page: http://kombu.readthedocs.org Author: Ask Solem Author-email: ask@celeryproject.org License: UNKNOWN Description: (identical to the kombu.egg-info/PKG-INFO description above) kombu-3.0.7/README.rst0000600000076500000240000000000012247127137022712 1kombu-3.0.7/docs/introduction.rstustar asksolstaff00000000000000kombu-3.0.7/requirements/0000755000076500000000000000000012247127370016000 5ustar asksolwheel00000000000000kombu-3.0.7/requirements/default.txt0000644000076500000000000000004012243671543020161 0ustar asksolwheel00000000000000anyjson>=0.3.3 amqp>=1.3.3,<2.0 kombu-3.0.7/requirements/dev.txt0000644000076500000000000000006112237554371017320 0ustar asksolwheel00000000000000https://github.com/celery/py-amqp/zipball/master kombu-3.0.7/requirements/docs.txt0000644000076500000000000000005612064115765017474 0ustar asksolwheel00000000000000Sphinx sphinxcontrib-issuetracker>=0.9 Django kombu-3.0.7/requirements/extras/0000755000076500000000000000000012247127370017306 5ustar asksolwheel00000000000000kombu-3.0.7/requirements/extras/beanstalk.txt0000644000076500000000000000001312237554371022011 0ustar asksolwheel00000000000000beanstalkc kombu-3.0.7/requirements/extras/couchdb.txt0000644000076500000000000000001012237554371021451 0ustar asksolwheel00000000000000couchdb kombu-3.0.7/requirements/extras/kazoo.txt0000644000076500000000000000000112237554371021165 0ustar asksolwheel00000000000000 kombu-3.0.7/requirements/extras/librabbitmq.txt0000644000076500000000000000002312237554371022336 0ustar asksolwheel00000000000000librabbitmq>=1.0.1 kombu-3.0.7/requirements/extras/mongodb.txt0000644000076500000000000000001712237554371021476 0ustar asksolwheel00000000000000pymongo>=2.6.2 kombu-3.0.7/requirements/extras/msgpack.txt0000644000076500000000000000002612237554371021476 0ustar asksolwheel00000000000000msgpack-python>=0.3.0
kombu-3.0.7/requirements/extras/pyro.txt0000644000076500000000000000000612237554371021040 0ustar asksolwheel00000000000000pyro4 kombu-3.0.7/requirements/extras/redis.txt0000644000076500000000000000001512237554371021155 0ustar asksolwheel00000000000000redis>=2.8.0 kombu-3.0.7/requirements/extras/slmq.txt0000644000076500000000000000003312237554371021023 0ustar asksolwheel00000000000000softlayer_messaging>=1.0.3 kombu-3.0.7/requirements/extras/sqlalchemy.txt0000644000076500000000000000001312237554371022207 0ustar asksolwheel00000000000000sqlalchemy kombu-3.0.7/requirements/extras/sqs.txt0000644000076500000000000000001512237554371020655 0ustar asksolwheel00000000000000boto>=2.13.3 kombu-3.0.7/requirements/extras/yaml.txt0000644000076500000000000000001512237554371021011 0ustar asksolwheel00000000000000PyYAML>=3.10 kombu-3.0.7/requirements/extras/zeromq.txt0000644000076500000000000000001612237554371021365 0ustar asksolwheel00000000000000pyzmq>=13.1.0 kombu-3.0.7/requirements/extras/zookeeper.txt0000644000076500000000000000001512237554371022052 0ustar asksolwheel00000000000000kazoo>=1.3.1 kombu-3.0.7/requirements/funtest.txt0000644000076500000000000000037412223041317020223 0ustar asksolwheel00000000000000# redis transport redis # MongoDB transport pymongo # CouchDB transport couchdb # Beanstalk transport beanstalkc # Zookeeper transport kazoo # SQLAlchemy transport kombu-sqlalchemy # Django ORM transport Django django-kombu # SQS transport boto kombu-3.0.7/requirements/pkgutils.txt0000644000076500000000000000002412064115765020401 0ustar asksolwheel00000000000000paver flake8 Sphinx kombu-3.0.7/requirements/py26.txt0000644000076500000000000000002612064115765017341 0ustar asksolwheel00000000000000importlib ordereddict kombu-3.0.7/requirements/test-ci.txt0000644000076500000000000000011512237554371020112 0ustar asksolwheel00000000000000coverage>=3.0 redis PyYAML msgpack-python>0.2.0 # 0.2.0 dropped 2.5 support kombu-3.0.7/requirements/test.txt0000644000076500000000000000004712064115765017523 0ustar asksolwheel00000000000000nose nose-cover3 unittest2>=0.5.0 mock kombu-3.0.7/requirements/test3.txt0000644000076500000000000000005512234207745017604 0ustar asksolwheel00000000000000setuptools>=0.7 nose nose-cover3 mock>=0.7.0 kombu-3.0.7/setup.cfg0000644000076500000000000000136412247127370015102 0ustar asksolwheel00000000000000[nosetests] verbosity = 1 detailed-errors = 1 where = kombu/tests cover3-branch = 1 cover3-html = 1 cover3-package = kombu cover3-exclude = kombu kombu.five kombu.transport.mongodb kombu.transport.filesystem kombu.utils.compat kombu.utils.eventio kombu.utils.finalize kombu.transport.amqplib kombu.transport.couchdb kombu.transport.beanstalk kombu.transport.sqlalchemy* kombu.transport.SQS kombu.transport.zookeeper kombu.transport.zmq kombu.transport.django* kombu.transport.pyro [build_sphinx] source-dir = docs/ build-dir = docs/.build all_files = 1 [upload_sphinx] upload-dir = docs/.build/html [bdist_rpm] requires = anyjson >= 0.3.3 amqp >= 1.3.3 importlib ordereddict [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 kombu-3.0.7/setup.py0000644000076500000000000001151712243671543014776 0ustar asksolwheel00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os import sys import codecs extra = {} PY3 = sys.version_info[0] == 3 if sys.version_info < (2, 6): raise Exception('Kombu requires Python 2.6 or higher.') try: from setuptools import setup except ImportError: from distutils.core import setup # noqa from distutils.command.install import INSTALL_SCHEMES # -- Parse 
kombu-3.0.7/setup.cfg0000644000076500000000000000136412247127370015102 0ustar asksolwheel00000000000000
[nosetests]
verbosity = 1
detailed-errors = 1
where = kombu/tests
cover3-branch = 1
cover3-html = 1
cover3-package = kombu
cover3-exclude = kombu
                 kombu.five
                 kombu.transport.mongodb
                 kombu.transport.filesystem
                 kombu.utils.compat
                 kombu.utils.eventio
                 kombu.utils.finalize
                 kombu.transport.amqplib
                 kombu.transport.couchdb
                 kombu.transport.beanstalk
                 kombu.transport.sqlalchemy*
                 kombu.transport.SQS
                 kombu.transport.zookeeper
                 kombu.transport.zmq
                 kombu.transport.django*
                 kombu.transport.pyro

[build_sphinx]
source-dir = docs/
build-dir = docs/.build
all_files = 1

[upload_sphinx]
upload-dir = docs/.build/html

[bdist_rpm]
requires = anyjson >= 0.3.3
           amqp >= 1.3.3
           importlib
           ordereddict

[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
kombu-3.0.7/setup.py0000644000076500000000000001151712243671543014776 0ustar asksolwheel00000000000000
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import sys
import codecs

extra = {}

PY3 = sys.version_info[0] == 3

if sys.version_info < (2, 6):
    raise Exception('Kombu requires Python 2.6 or higher.')

try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup  # noqa

from distutils.command.install import INSTALL_SCHEMES

# -- Parse meta
import re
re_meta = re.compile(r'__(\w+?)__\s*=\s*(.*)')
re_vers = re.compile(r'VERSION\s*=.*?\((.*?)\)')
re_doc = re.compile(r'^"""(.+?)"""')
rq = lambda s: s.strip("\"'")


def add_default(m):
    attr_name, attr_value = m.groups()
    return ((attr_name, rq(attr_value)), )


def add_version(m):
    v = list(map(rq, m.groups()[0].split(', ')))
    return (('VERSION', '.'.join(v[0:3]) + ''.join(v[3:])), )


def add_doc(m):
    return (('doc', m.groups()[0]), )

pats = {re_meta: add_default,
        re_vers: add_version,
        re_doc: add_doc}

here = os.path.abspath(os.path.dirname(__file__))
meta_fh = open(os.path.join(here, 'kombu/__init__.py'))
try:
    meta = {}
    for line in meta_fh:
        if line.strip() == '# -eof meta-':
            break
        for pattern, handler in pats.items():
            m = pattern.match(line.strip())
            if m:
                meta.update(handler(m))
finally:
    meta_fh.close()
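# Worked example (hypothetical input): if kombu/__init__.py contained
#     VERSION = (3, 0, 7)
#     __author__ = 'Ask Solem'
# then re_vers would capture '3, 0, 7' and add_version() would produce
# ('VERSION', '3.0.7'), while re_meta/add_default would produce
# ('author', 'Ask Solem'), so meta['VERSION'] == '3.0.7' below.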
# --
packages, data_files = [], []
root_dir = os.path.dirname(__file__)
if root_dir != '':
    os.chdir(root_dir)
src_dir = 'kombu'


def fullsplit(path, result=None):
    if result is None:
        result = []
    head, tail = os.path.split(path)
    if head == '':
        return [tail] + result
    if head == path:
        return result
    return fullsplit(head, [tail] + result)

# Install package data alongside pure-Python modules.
for scheme in list(INSTALL_SCHEMES.values()):
    scheme['data'] = scheme['purelib']

for dirpath, dirnames, filenames in os.walk(src_dir):
    # Ignore dirnames that start with '.'
    for i, dirname in enumerate(dirnames):
        if dirname.startswith('.'):
            del dirnames[i]
    for filename in filenames:
        if filename.endswith('.py'):
            packages.append('.'.join(fullsplit(dirpath)))
        else:
            data_files.append(
                [dirpath, [os.path.join(dirpath, f) for f in filenames]],
            )

if os.path.exists('README.rst'):
    long_description = codecs.open('README.rst', 'r', 'utf-8').read()
else:
    long_description = 'See http://pypi.python.org/pypi/kombu'

# -*- Installation Requires -*-

py_version = sys.version_info
is_jython = sys.platform.startswith('java')
is_pypy = hasattr(sys, 'pypy_version_info')


def strip_comments(l):
    return l.split('#', 1)[0].strip()


def reqs(*f):
    return [
        r for r in (
            strip_comments(l) for l in open(
                os.path.join(os.getcwd(), 'requirements', *f)).readlines()
        ) if r]

install_requires = reqs('default.txt')

if py_version[0:2] == (2, 6):
    install_requires.extend(reqs('py26.txt'))

# -*- Tests Requires -*-

tests_require = reqs('test3.txt' if PY3 else 'test.txt')

extras = lambda *p: reqs('extras', *p)
extras_require = extra['extras_require'] = {
    'msgpack': extras('msgpack.txt'),
    'yaml': extras('yaml.txt'),
    'redis': extras('redis.txt'),
    'mongodb': extras('mongodb.txt'),
    'sqs': extras('sqs.txt'),
    'couchdb': extras('couchdb.txt'),
    'beanstalk': extras('beanstalk.txt'),
    'zookeeper': extras('zookeeper.txt'),
    'zeromq': extras('zeromq.txt'),
    'sqlalchemy': extras('sqlalchemy.txt'),
    'librabbitmq': extras('librabbitmq.txt'),
    'pyro': extras('pyro.txt'),
    'slmq': extras('slmq.txt'),
}

setup(
    name='kombu',
    version=meta['VERSION'],
    description=meta['doc'],
    author=meta['author'],
    author_email=meta['contact'],
    url=meta['homepage'],
    platforms=['any'],
    packages=packages,
    data_files=data_files,
    zip_safe=False,
    test_suite='nose.collector',
    install_requires=install_requires,
    tests_require=tests_require,
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'License :: OSI Approved :: BSD License',
        'Operating System :: OS Independent',
        'Programming Language :: Python',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.3',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 2.6',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: Implementation :: CPython',
        'Programming Language :: Python :: Implementation :: PyPy',
        'Programming Language :: Python :: Implementation :: Jython',
        'Intended Audience :: Developers',
        'Topic :: Communications',
        'Topic :: System :: Distributed Computing',
        'Topic :: System :: Networking',
        'Topic :: Software Development :: Libraries :: Python Modules',
    ],
    long_description=long_description,
    **extra)
kombu-3.0.7/THANKS0000644000076500000000000000172412064115765014176 0ustar asksolwheel00000000000000
========
 THANKS
========

From ``carrot`` THANKS file
===========================

* Thanks to Barry Pederson for the py-amqplib library.
* Thanks to Grégoire Cachet for bug reports.
* Thanks to Martin Mahner for the Sphinx theme.
* Thanks to jcater for bug reports.
* Thanks to sebest for bug reports.
* Thanks to greut for bug reports.

From ``django-kombu`` THANKS file
=================================

* Thanks to Rajesh Dhawan and other authors of django-queue-service
  for the database model implementation.
  See http://code.google.com/p/django-queue-service/.

From ``kombu-sqlalchemy`` THANKS file
=====================================

* Thanks to Rajesh Dhawan and other authors of django-queue-service
  for the database model implementation.
  See http://code.google.com/p/django-queue-service/.

* Thanks to haridsv for the draft SQLAlchemy port (which can still be
  found at http://github.com/haridsv/celery-alchemy-poc)
kombu-3.0.7/TODO0000644000076500000000000000012212064115765013742 0ustar asksolwheel00000000000000
Please see our Issue Tracker at GitHub:
    http://github.com/celery/kombu/issues
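A closing note on the package discovery in ``setup.py`` above: ``fullsplit``
turns a file system path into the components of a dotted package name, which
the ``os.walk`` loop joins into entries for ``packages``. A small standalone
sketch of the same idea, using a hypothetical path::

    import os

    def fullsplit(path, result=None):
        # Recursively split a path into all of its directory components.
        if result is None:
            result = []
        head, tail = os.path.split(path)
        if head == '':
            return [tail] + result
        if head == path:
            return result
        return fullsplit(head, [tail] + result)

    # 'kombu/transport/virtual' -> 'kombu.transport.virtual'
    print('.'.join(fullsplit(os.path.join('kombu', 'transport', 'virtual'))))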