aiozmq-0.9.0/0000775000372000037200000000000013614330247013665 5ustar travistravis00000000000000aiozmq-0.9.0/MANIFEST.in0000664000372000037200000000013413614330211015410 0ustar travistravis00000000000000include LICENSE.txt include CHANGES.txt include README.rst graft aiozmq global-exclude *.pycaiozmq-0.9.0/LICENSE.txt0000664000372000037200000000245113614330211015501 0ustar travistravis00000000000000Copyright (c) 2013, 2014, 2015, Nikolay Kim and Andrew Svetlov All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
aiozmq-0.9.0/CHANGES.txt0000664000372000037200000000502013614330211015462 0ustar travistravis00000000000000CHANGES ------- 0.9.0 (2020-01-25) ^^^^^^^^^^^^^^^^^^ * Support Python 3.7 and 3.8 0.8.0 (2016-12-07) ^^^^^^^^^^^^^^^^^^ * Respect `events_backlog` parameter in zmq stream creation #86 0.7.1 (2015-09-20) ^^^^^^^^^^^^^^^^^^ * Fix monitoring events implementation * Make the library compatible with Python 3.5 0.7.0 (2015-07-31) ^^^^^^^^^^^^^^^^^^ * Implement monitoring ZMQ events #50 * Do deeper lookup for inherited classes #54 * Relax endpoint check #56 * Implement monitoring events for stream api #52 0.6.1 (2015-05-19) ^^^^^^^^^^^^^^^^^^ * Dynamically get list of pyzmq socket types 0.6.0 (2015-02-14) ^^^^^^^^^^^^^^^^^^ * Process asyncio specific exceptions as builtins. * Add repr(exception) to rpc server call logs if any * Add transport.get_write_buffer_limits() method * Add __repr__ to transport * Add zmq_type to tr.get_extra_info() * Add zmq streams 0.5.2 (2014-10-09) ^^^^^^^^^^^^^^^^^^ * Poll events after sending zmq message for eventless transport 0.5.1 (2014-09-27) ^^^^^^^^^^^^^^^^^^ * Fix loopless transport implementation. 0.5.0 (2014-08-23) ^^^^^^^^^^^^^^^^^^ * Support zmq devices in aiozmq.rpc.serve_rpc() * Add loopless 0MQ transport 0.4.1 (2014-07-03) ^^^^^^^^^^^^^^^^^^ * Add exclude_log_exceptions parameter to rpc servers. 0.4.0 (2014-05-28) ^^^^^^^^^^^^^^^^^^ * Implement pause_reading/resume_reading methods in ZmqTransport. 0.3.0 (2014-05-17) ^^^^^^^^^^^^^^^^^^ * Add limited support for Windows. * Fix unstable test execution, change ZmqEventLoop to use global shared zmq.Context by default. * Process cancellation on rpc servers and clients. 0.2.0 (2014-04-18) ^^^^^^^^^^^^^^^^^^ * msg in msg_received now is a list, not tuple * Allow to send empty msg by transport.write() * Add benchmarks * Derive ServiceClosedError from aiozmq.rpc.Error, not Exception * Implement logging from remote calls at server side (log_exceptions parameter). 
* Optimize byte counting in ZmqTransport. 0.1.3 (2014-04-10) ^^^^^^^^^^^^^^^^^^ * Function default values are not passed to an annotation. Add check for libzmq version (should be >= 3.0) 0.1.2 (2014-04-01) ^^^^^^^^^^^^^^^^^^ * Function default values are not passed to an annotation. 0.1.1 (2014-03-31) ^^^^^^^^^^^^^^^^^^ * Rename plural module names to single ones. 0.1.0 (2014-03-30) ^^^^^^^^^^^^^^^^^^ * Implement ZmqEventLoop with *create_zmq_connection* method which operates on zmq transport and protocol. * Implement ZmqEventLoopPolicy. * Introduce ZmqTransport and ZmqProtocol. * Implement zmq.rpc with RPC, PUSHPULL and PUBSUB protocols. aiozmq-0.9.0/aiozmq.egg-info/0000775000372000037200000000000013614330247016657 5ustar travistravis00000000000000aiozmq-0.9.0/aiozmq.egg-info/PKG-INFO0000664000372000037200000002135213614330246017756 0ustar travistravis00000000000000Metadata-Version: 2.1 Name: aiozmq Version: 0.9.0 Summary: ZeroMQ integration with asyncio. Home-page: http://aiozmq.readthedocs.org Author: Nikolay Kim Author-email: fafhrd91@gmail.com Maintainer: Jelle Zijlstra Maintainer-email: jelle.zijlstra@gmail.com License: BSD Download-URL: https://pypi.python.org/pypi/aiozmq Description: asyncio integration with ZeroMQ =============================== asyncio (PEP 3156) support for ZeroMQ. .. image:: https://travis-ci.com/aio-libs/aiozmq.svg?branch=master :target: https://travis-ci.com/aio-libs/aiozmq The difference between ``aiozmq`` and vanilla ``pyzmq`` (``zmq.asyncio``) is as follows. ``zmq.asyncio`` works only by replacing the *event loop* with a custom one. This approach works but has two disadvantages: 1. ``zmq.asyncio.ZMQEventLoop`` cannot be combined with other loop implementations (most notably the ultra-fast ``uvloop``). 2. It uses the internal ZMQ Poller, which handles ZMQ sockets quickly but is not intended to work well with many (thousands of) regular TCP sockets. In practice this means that ``zmq.asyncio`` is not recommended for use with web servers like ``aiohttp``. 
See also https://github.com/zeromq/pyzmq/issues/894 Documentation ------------- See http://aiozmq.readthedocs.org Simple high-level client-server RPC example: .. code-block:: python import asyncio import aiozmq.rpc class ServerHandler(aiozmq.rpc.AttrHandler): @aiozmq.rpc.method def remote_func(self, a:int, b:int) -> int: return a + b @asyncio.coroutine def go(): server = yield from aiozmq.rpc.serve_rpc( ServerHandler(), bind='tcp://127.0.0.1:5555') client = yield from aiozmq.rpc.connect_rpc( connect='tcp://127.0.0.1:5555') ret = yield from client.call.remote_func(1, 2) assert 3 == ret server.close() client.close() asyncio.get_event_loop().run_until_complete(go()) Low-level request-reply example: .. code-block:: python import asyncio import aiozmq import zmq @asyncio.coroutine def go(): router = yield from aiozmq.create_zmq_stream( zmq.ROUTER, bind='tcp://127.0.0.1:*') addr = list(router.transport.bindings())[0] dealer = yield from aiozmq.create_zmq_stream( zmq.DEALER, connect=addr) for i in range(10): msg = (b'data', b'ask', str(i).encode('utf-8')) dealer.write(msg) data = yield from router.read() router.write(data) answer = yield from dealer.read() print(answer) dealer.close() router.close() asyncio.get_event_loop().run_until_complete(go()) Comparison to pyzmq ------------------- `zmq.asyncio` provides an *asyncio-compatible loop* implementation. But it is based on `zmq.Poller`, which does not work well with heavy use of non-ZMQ sockets. E.g. if you build a web server handling thousands of parallel web requests (1000-5000), `pyzmq`'s internal Poller will be slow. `aiozmq` works with epoll natively; it needs no custom loop implementation and cooperates well with `uvloop`, for example. For details see https://github.com/zeromq/pyzmq/issues/894 Requirements ------------ * Python_ 3.5+ * pyzmq_ 13.1+ * optional submodule ``aiozmq.rpc`` requires msgpack_ 0.5+ License ------- aiozmq is offered under the BSD license. .. 
_python: https://www.python.org/ .. _pyzmq: https://pypi.python.org/pypi/pyzmq .. _asyncio: https://pypi.python.org/pypi/asyncio .. _msgpack: https://pypi.python.org/pypi/msgpack CHANGES ------- 0.9.0 (2020-01-25) ^^^^^^^^^^^^^^^^^^ * Support Python 3.7 and 3.8 0.8.0 (2016-12-07) ^^^^^^^^^^^^^^^^^^ * Respect `events_backlog` parameter in zmq stream creation #86 0.7.1 (2015-09-20) ^^^^^^^^^^^^^^^^^^ * Fix monitoring events implementation * Make the library compatible with Python 3.5 0.7.0 (2015-07-31) ^^^^^^^^^^^^^^^^^^ * Implement monitoring ZMQ events #50 * Do deeper lookup for inherited classes #54 * Relax endpoint check #56 * Implement monitoring events for stream api #52 0.6.1 (2015-05-19) ^^^^^^^^^^^^^^^^^^ * Dynamically get list of pyzmq socket types 0.6.0 (2015-02-14) ^^^^^^^^^^^^^^^^^^ * Process asyncio specific exceptions as builtins. * Add repr(exception) to rpc server call logs if any * Add transport.get_write_buffer_limits() method * Add __repr__ to transport * Add zmq_type to tr.get_extra_info() * Add zmq streams 0.5.2 (2014-10-09) ^^^^^^^^^^^^^^^^^^ * Poll events after sending zmq message for eventless transport 0.5.1 (2014-09-27) ^^^^^^^^^^^^^^^^^^ * Fix loopless transport implementation. 0.5.0 (2014-08-23) ^^^^^^^^^^^^^^^^^^ * Support zmq devices in aiozmq.rpc.serve_rpc() * Add loopless 0MQ transport 0.4.1 (2014-07-03) ^^^^^^^^^^^^^^^^^^ * Add exclude_log_exceptions parameter to rpc servers. 0.4.0 (2014-05-28) ^^^^^^^^^^^^^^^^^^ * Implement pause_reading/resume_reading methods in ZmqTransport. 0.3.0 (2014-05-17) ^^^^^^^^^^^^^^^^^^ * Add limited support for Windows. * Fix unstable test execution, change ZmqEventLoop to use global shared zmq.Context by default. * Process cancellation on rpc servers and clients. 
0.2.0 (2014-04-18) ^^^^^^^^^^^^^^^^^^ * msg in msg_received now is a list, not tuple * Allow to send empty msg by transport.write() * Add benchmarks * Derive ServiceClosedError from aiozmq.rpc.Error, not Exception * Implement logging from remote calls at server side (log_exceptions parameter). * Optimize byte counting in ZmqTransport. 0.1.3 (2014-04-10) ^^^^^^^^^^^^^^^^^^ * Function default values are not passed to an annotation. Add check for libzmq version (should be >= 3.0) 0.1.2 (2014-04-01) ^^^^^^^^^^^^^^^^^^ * Function default values are not passed to an annotation. 0.1.1 (2014-03-31) ^^^^^^^^^^^^^^^^^^ * Rename plural module names to single ones. 0.1.0 (2014-03-30) ^^^^^^^^^^^^^^^^^^ * Implement ZmqEventLoop with *create_zmq_connection* method which operates on zmq transport and protocol. * Implement ZmqEventLoopPolicy. * Introduce ZmqTransport and ZmqProtocol. * Implement zmq.rpc with RPC, PUSHPULL and PUBSUB protocols. Platform: POSIX Platform: Windows Platform: MacOS X Classifier: License :: OSI Approved :: BSD License Classifier: Intended Audience :: Developers Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Operating System :: POSIX Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: Microsoft :: Windows Classifier: Environment :: Web Environment Classifier: Development Status :: 4 - Beta Classifier: Framework :: AsyncIO Provides-Extra: rpc aiozmq-0.9.0/aiozmq.egg-info/dependency_links.txt0000664000372000037200000000000113614330246022724 0ustar travistravis00000000000000 aiozmq-0.9.0/aiozmq.egg-info/top_level.txt0000664000372000037200000000000713614330246021405 0ustar travistravis00000000000000aiozmq aiozmq-0.9.0/aiozmq.egg-info/SOURCES.txt0000664000372000037200000000112013614330246020544 0ustar 
travistravis00000000000000CHANGES.txt LICENSE.txt MANIFEST.in README.rst setup.cfg setup.py aiozmq/__init__.py aiozmq/_test_util.py aiozmq/core.py aiozmq/interface.py aiozmq/log.py aiozmq/selector.py aiozmq/stream.py aiozmq/util.py aiozmq.egg-info/PKG-INFO aiozmq.egg-info/SOURCES.txt aiozmq.egg-info/dependency_links.txt aiozmq.egg-info/entry_points.txt aiozmq.egg-info/requires.txt aiozmq.egg-info/top_level.txt aiozmq/cli/__init__.py aiozmq/cli/proxy.py aiozmq/rpc/__init__.py aiozmq/rpc/base.py aiozmq/rpc/log.py aiozmq/rpc/packer.py aiozmq/rpc/pipeline.py aiozmq/rpc/pubsub.py aiozmq/rpc/rpc.py aiozmq/rpc/util.pyaiozmq-0.9.0/aiozmq.egg-info/requires.txt0000664000372000037200000000005313614330246021254 0ustar travistravis00000000000000pyzmq!=17.1.2,>=13.1 [rpc] msgpack>=0.5.0 aiozmq-0.9.0/aiozmq.egg-info/entry_points.txt0000664000372000037200000000007013614330246022151 0ustar travistravis00000000000000[console_scripts] aiozmq-proxy = aiozmq.cli.proxy:main aiozmq-0.9.0/PKG-INFO0000664000372000037200000002135213614330247014765 0ustar travistravis00000000000000Metadata-Version: 2.1 Name: aiozmq Version: 0.9.0 Summary: ZeroMQ integration with asyncio. Home-page: http://aiozmq.readthedocs.org Author: Nikolay Kim Author-email: fafhrd91@gmail.com Maintainer: Jelle Zijlstra Maintainer-email: jelle.zijlstra@gmail.com License: BSD Download-URL: https://pypi.python.org/pypi/aiozmq Description: asyncio integration with ZeroMQ =============================== asyncio (PEP 3156) support for ZeroMQ. .. image:: https://travis-ci.com/aio-libs/aiozmq.svg?branch=master :target: https://travis-ci.com/aio-libs/aiozmq The difference between ``aiozmq`` and vanilla ``pyzmq`` (``zmq.asyncio``) is as follows. ``zmq.asyncio`` works only by replacing the *event loop* with a custom one. This approach works but has two disadvantages: 1. ``zmq.asyncio.ZMQEventLoop`` cannot be combined with other loop implementations (most notably the ultra-fast ``uvloop``). 2. 
It uses the internal ZMQ Poller, which handles ZMQ sockets quickly but is not intended to work well with many (thousands of) regular TCP sockets. In practice this means that ``zmq.asyncio`` is not recommended for use with web servers like ``aiohttp``. See also https://github.com/zeromq/pyzmq/issues/894 Documentation ------------- See http://aiozmq.readthedocs.org Simple high-level client-server RPC example: .. code-block:: python import asyncio import aiozmq.rpc class ServerHandler(aiozmq.rpc.AttrHandler): @aiozmq.rpc.method def remote_func(self, a:int, b:int) -> int: return a + b @asyncio.coroutine def go(): server = yield from aiozmq.rpc.serve_rpc( ServerHandler(), bind='tcp://127.0.0.1:5555') client = yield from aiozmq.rpc.connect_rpc( connect='tcp://127.0.0.1:5555') ret = yield from client.call.remote_func(1, 2) assert 3 == ret server.close() client.close() asyncio.get_event_loop().run_until_complete(go()) Low-level request-reply example: .. code-block:: python import asyncio import aiozmq import zmq @asyncio.coroutine def go(): router = yield from aiozmq.create_zmq_stream( zmq.ROUTER, bind='tcp://127.0.0.1:*') addr = list(router.transport.bindings())[0] dealer = yield from aiozmq.create_zmq_stream( zmq.DEALER, connect=addr) for i in range(10): msg = (b'data', b'ask', str(i).encode('utf-8')) dealer.write(msg) data = yield from router.read() router.write(data) answer = yield from dealer.read() print(answer) dealer.close() router.close() asyncio.get_event_loop().run_until_complete(go()) Comparison to pyzmq ------------------- `zmq.asyncio` provides an *asyncio-compatible loop* implementation. But it is based on `zmq.Poller`, which does not work well with heavy use of non-ZMQ sockets. E.g. if you build a web server handling thousands of parallel web requests (1000-5000), `pyzmq`'s internal Poller will be slow. `aiozmq` works with epoll natively; it needs no custom loop implementation and cooperates well with `uvloop`, for example. 
For details see https://github.com/zeromq/pyzmq/issues/894 Requirements ------------ * Python_ 3.5+ * pyzmq_ 13.1+ * optional submodule ``aiozmq.rpc`` requires msgpack_ 0.5+ License ------- aiozmq is offered under the BSD license. .. _python: https://www.python.org/ .. _pyzmq: https://pypi.python.org/pypi/pyzmq .. _asyncio: https://pypi.python.org/pypi/asyncio .. _msgpack: https://pypi.python.org/pypi/msgpack CHANGES ------- 0.9.0 (2020-01-25) ^^^^^^^^^^^^^^^^^^ * Support Python 3.7 and 3.8 0.8.0 (2016-12-07) ^^^^^^^^^^^^^^^^^^ * Respect `events_backlog` parameter in zmq stream creation #86 0.7.1 (2015-09-20) ^^^^^^^^^^^^^^^^^^ * Fix monitoring events implementation * Make the library compatible with Python 3.5 0.7.0 (2015-07-31) ^^^^^^^^^^^^^^^^^^ * Implement monitoring ZMQ events #50 * Do deeper lookup for inherited classes #54 * Relax endpoint check #56 * Implement monitoring events for stream api #52 0.6.1 (2015-05-19) ^^^^^^^^^^^^^^^^^^ * Dynamically get list of pyzmq socket types 0.6.0 (2015-02-14) ^^^^^^^^^^^^^^^^^^ * Process asyncio specific exceptions as builtins. * Add repr(exception) to rpc server call logs if any * Add transport.get_write_buffer_limits() method * Add __repr__ to transport * Add zmq_type to tr.get_extra_info() * Add zmq streams 0.5.2 (2014-10-09) ^^^^^^^^^^^^^^^^^^ * Poll events after sending zmq message for eventless transport 0.5.1 (2014-09-27) ^^^^^^^^^^^^^^^^^^ * Fix loopless transport implementation. 0.5.0 (2014-08-23) ^^^^^^^^^^^^^^^^^^ * Support zmq devices in aiozmq.rpc.serve_rpc() * Add loopless 0MQ transport 0.4.1 (2014-07-03) ^^^^^^^^^^^^^^^^^^ * Add exclude_log_exceptions parameter to rpc servers. 0.4.0 (2014-05-28) ^^^^^^^^^^^^^^^^^^ * Implement pause_reading/resume_reading methods in ZmqTransport. 0.3.0 (2014-05-17) ^^^^^^^^^^^^^^^^^^ * Add limited support for Windows. * Fix unstable test execution, change ZmqEventLoop to use global shared zmq.Context by default. * Process cancellation on rpc servers and clients. 
0.2.0 (2014-04-18) ^^^^^^^^^^^^^^^^^^ * msg in msg_received now is a list, not tuple * Allow to send empty msg by transport.write() * Add benchmarks * Derive ServiceClosedError from aiozmq.rpc.Error, not Exception * Implement logging from remote calls at server side (log_exceptions parameter). * Optimize byte counting in ZmqTransport. 0.1.3 (2014-04-10) ^^^^^^^^^^^^^^^^^^ * Function default values are not passed to an annotation. Add check for libzmq version (should be >= 3.0) 0.1.2 (2014-04-01) ^^^^^^^^^^^^^^^^^^ * Function default values are not passed to an annotation. 0.1.1 (2014-03-31) ^^^^^^^^^^^^^^^^^^ * Rename plural module names to single ones. 0.1.0 (2014-03-30) ^^^^^^^^^^^^^^^^^^ * Implement ZmqEventLoop with *create_zmq_connection* method which operates on zmq transport and protocol. * Implement ZmqEventLoopPolicy. * Introduce ZmqTransport and ZmqProtocol. * Implement zmq.rpc with RPC, PUSHPULL and PUBSUB protocols. Platform: POSIX Platform: Windows Platform: MacOS X Classifier: License :: OSI Approved :: BSD License Classifier: Intended Audience :: Developers Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Operating System :: POSIX Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: Microsoft :: Windows Classifier: Environment :: Web Environment Classifier: Development Status :: 4 - Beta Classifier: Framework :: AsyncIO Provides-Extra: rpc aiozmq-0.9.0/aiozmq/0000775000372000037200000000000013614330247015165 5ustar travistravis00000000000000aiozmq-0.9.0/aiozmq/interface.py0000664000372000037200000001736113614330211017476 0ustar travistravis00000000000000 import asyncio from asyncio import BaseProtocol, BaseTransport __all__ = ['ZmqTransport', 'ZmqProtocol'] class ZmqTransport(BaseTransport): """Interface for ZeroMQ 
transport.""" def write(self, data): """Write message to the transport. data is an iterable to send as a multipart message. This does not block; it buffers the data and arranges for it to be sent out asynchronously. """ raise NotImplementedError def abort(self): """Close the transport immediately. Buffered data will be lost. No more data will be received. The protocol's connection_lost() method will (eventually) be called with None as its argument. """ raise NotImplementedError def getsockopt(self, option): """Get ZeroMQ socket option. option is a constant like zmq.SUBSCRIBE, zmq.UNSUBSCRIBE, zmq.TYPE etc. For the list of available options please see: http://api.zeromq.org/master:zmq-getsockopt """ raise NotImplementedError def setsockopt(self, option, value): """Set ZeroMQ socket option. option is a constant like zmq.SUBSCRIBE, zmq.UNSUBSCRIBE, zmq.TYPE etc. value is the new option value; its type depends on the option name. For the list of available options please see: http://api.zeromq.org/master:zmq-setsockopt """ raise NotImplementedError def set_write_buffer_limits(self, high=None, low=None): """Set the high- and low-water limits for write flow control. These two values control when to call the protocol's pause_writing() and resume_writing() methods. If specified, the low-water limit must be less than or equal to the high-water limit. Neither value can be negative. The defaults are implementation-specific. If only the high-water limit is given, the low-water limit defaults to an implementation-specific value less than or equal to the high-water limit. Setting high to zero forces low to zero as well, and causes pause_writing() to be called whenever the buffer becomes non-empty. Setting low to zero causes resume_writing() to be called only once the buffer is empty. Use of zero for either limit is generally sub-optimal as it reduces opportunities for doing I/O and computation concurrently. 
""" raise NotImplementedError def get_write_buffer_limits(self): raise NotImplementedError def get_write_buffer_size(self): """Return the current size of the write buffer.""" raise NotImplementedError def pause_reading(self): """Pause the receiving end. No data will be passed to the protocol's msg_received() method until resume_reading() is called. """ raise NotImplementedError def resume_reading(self): """Resume the receiving end. Data received will once again be passed to the protocol's msg_received() method. """ raise NotImplementedError def bind(self, endpoint): """Bind transport to endpoint. endpoint is a string in the format transport://address as ZeroMQ requires. Return the bound endpoint, unwinding wildcards if needed. """ raise NotImplementedError def unbind(self, endpoint): """Unbind transport from endpoint. """ raise NotImplementedError def bindings(self): """Return an immutable set of endpoints bound to the transport. N.B. the returned endpoints include only those that have been bound via transport.bind or event_loop.create_zmq_connection calls and do not include bindings made to zmq_sock before create_zmq_connection was called. """ raise NotImplementedError def connect(self, endpoint): """Connect transport to endpoint. endpoint is a string in the format transport://address as ZeroMQ requires. For TCP connections endpoint should specify an IPv4 or IPv6 address, not a DNS name. Use yield from get_event_loop().getaddrinfo(host, port) to translate a DNS name into an address. Raise ValueError if endpoint is a TCP DNS address. Return the connected endpoint, unwinding wildcards if needed. """ raise NotImplementedError def disconnect(self, endpoint): """Disconnect transport from endpoint. """ raise NotImplementedError def connections(self): """Return an immutable set of endpoints connected to the transport. N.B. 
the returned endpoints include only those that have been connected via transport.connect or event_loop.create_zmq_connection calls and do not include connections made to zmq_sock before create_zmq_connection was called. """ raise NotImplementedError def subscribe(self, value): """Establish a new message filter on a SUB transport. A newly created SUB transport filters out all incoming messages, so you should call this method to establish an initial message filter. Value should be bytes. An empty (b'') value subscribes to all incoming messages. A non-empty value subscribes to all messages beginning with the specified prefix. Multiple filters may be attached to a single SUB transport, in which case a message shall be accepted if it matches at least one filter. """ raise NotImplementedError def unsubscribe(self, value): """Remove an existing message filter from a SUB transport. Value should be bytes. The filter specified must match an existing filter previously established with .subscribe(). If the transport has several instances of the same filter attached, .unsubscribe() removes only one instance, leaving the rest in place and functional. """ raise NotImplementedError def subscriptions(self): """Return an immutable set of subscriptions (bytes) subscribed on the transport. N.B. the returned subscriptions include only those that have been subscribed via transport.subscribe calls and do not include subscriptions made to zmq_sock before create_zmq_connection was called. """ raise NotImplementedError @asyncio.coroutine def enable_monitor(self, events=None): """Enable socket events to be reported for this socket. Socket events are passed to the ZmqProtocol's event_received method. This method is a coroutine. The socket event monitor capability requires libzmq >= 4 and pyzmq >= 14.4. events is a bitmask (e.g. zmq.EVENT_CONNECTED) defining the events to monitor. Default is all events (i.e. zmq.EVENT_ALL). 
For the list of available events please see: http://api.zeromq.org/4-0:zmq-socket-monitor Raise NotImplementedError if libzmq or pyzmq versions do not support socket monitoring. """ raise NotImplementedError def disable_monitor(self): """Stop the socket event monitor. """ raise NotImplementedError class ZmqProtocol(BaseProtocol): """Interface for ZeroMQ protocol.""" def msg_received(self, data): """Called when some ZeroMQ message is received. data is the multipart tuple of bytes with at least one item. """ def event_received(self, event): """Called when a ZeroMQ socket event is received. This method is only called when a socket monitor is enabled. :param event: A namedtuple containing 3 items `event`, `value`, and `endpoint`. """ aiozmq-0.9.0/aiozmq/stream.py0000664000372000037200000002232013614330211017020 0ustar travistravis00000000000000import collections import asyncio from .core import create_zmq_connection from .interface import ZmqProtocol class ZmqStreamClosed(Exception): """A stream was closed""" @asyncio.coroutine def create_zmq_stream(zmq_type, *, bind=None, connect=None, loop=None, zmq_sock=None, high_read=None, low_read=None, high_write=None, low_write=None, events_backlog=100): """A wrapper for create_zmq_connection() returning a Stream instance. The arguments are all the usual arguments to create_zmq_connection() except protocol_factory; the most common is the positional zmq_type, with various optional keyword arguments following. Additional optional keyword arguments are loop (to set the event loop instance to use) and high_read, low_read, high_write, low_write -- high and low watermarks for reading and writing respectively. events_backlog -- backlog size for monitoring events, 100 by default. It specifies the size of the event queue. If the count of unread events exceeds events_backlog, the oldest events are discarded. 
""" if loop is None: loop = asyncio.get_event_loop() stream = ZmqStream(loop=loop, high=high_read, low=low_read, events_backlog=events_backlog) tr, _ = yield from create_zmq_connection( lambda: stream._protocol, zmq_type, bind=bind, connect=connect, zmq_sock=zmq_sock, loop=loop) tr.set_write_buffer_limits(high_write, low_write) return stream class ZmqStreamProtocol(ZmqProtocol): """Helper class to adapt between ZmqProtocol and ZmqStream. This is a helper class to use ZmqStream instead of subclassing ZmqProtocol. """ def __init__(self, stream, loop): self._loop = loop self._stream = stream self._paused = False self._drain_waiter = None self._connection_lost = False def pause_writing(self): assert not self._paused self._paused = True def resume_writing(self): assert self._paused self._paused = False waiter = self._drain_waiter if waiter is not None: self._drain_waiter = None if not waiter.done(): waiter.set_result(None) def connection_made(self, transport): self._stream.set_transport(transport) def connection_lost(self, exc): self._connection_lost = True if exc is None: self._stream.feed_closing() else: self._stream.set_exception(exc) if not self._paused: return waiter = self._drain_waiter if waiter is None: return self._drain_waiter = None if waiter.done(): return if exc is None: waiter.set_result(None) else: waiter.set_exception(exc) @asyncio.coroutine def _drain_helper(self): if self._connection_lost: raise ConnectionResetError('Connection lost') if not self._paused: return waiter = self._drain_waiter assert waiter is None or waiter.cancelled() waiter = asyncio.Future(loop=self._loop) self._drain_waiter = waiter yield from waiter def msg_received(self, msg): self._stream.feed_msg(msg) def event_received(self, event): self._stream.feed_event(event) class ZmqStream: """Wraps a ZmqTransport. Has write() method and read() coroutine for writing and reading ZMQ messages. It adds drain() coroutine which can be used for waiting for flow control. 
It also adds a transport property which references the
    ZmqTransport directly.
    """

    def __init__(self, loop, *, high=None, low=None, events_backlog=100):
        self._transport = None
        self._protocol = ZmqStreamProtocol(self, loop=loop)
        self._loop = loop
        self._queue = collections.deque()
        self._event_queue = collections.deque(maxlen=events_backlog)
        self._closing = False  # Whether we're done.
        self._waiter = None  # A future.
        self._event_waiter = None  # A future.
        self._exception = None
        self._paused = False
        self._set_read_buffer_limits(high, low)
        self._queue_len = 0

    @property
    def transport(self):
        return self._transport

    def write(self, msg):
        self._transport.write(msg)

    def close(self):
        return self._transport.close()

    def get_extra_info(self, name, default=None):
        return self._transport.get_extra_info(name, default)

    @asyncio.coroutine
    def drain(self):
        """Flush the write buffer.

        The intended use is to write:

          w.write(data)
          yield from w.drain()
        """
        if self._exception is not None:
            raise self._exception
        yield from self._protocol._drain_helper()

    def exception(self):
        return self._exception

    def set_exception(self, exc):
        """Private"""
        self._exception = exc

        waiter = self._waiter
        if waiter is not None:
            self._waiter = None
            if not waiter.cancelled():
                waiter.set_exception(exc)

        waiter = self._event_waiter
        if waiter is not None:
            self._event_waiter = None
            if not waiter.cancelled():
                waiter.set_exception(exc)

    def set_transport(self, transport):
        """Private"""
        assert self._transport is None, 'Transport already set'
        self._transport = transport

    def _set_read_buffer_limits(self, high=None, low=None):
        if high is None:
            if low is None:
                high = 64 * 1024
            else:
                high = 4 * low
        if low is None:
            low = high // 4
        if not high >= low >= 0:
            raise ValueError('high (%r) must be >= low (%r) must be >= 0' %
                             (high, low))
        self._high_water = high
        self._low_water = low

    def set_read_buffer_limits(self, high=None, low=None):
        self._set_read_buffer_limits(high, low)
        self._maybe_resume_transport()

    def _maybe_resume_transport(self):
        if self._paused and self._queue_len <= self._low_water:
            self._paused = False
            self._transport.resume_reading()

    def feed_closing(self):
        """Private"""
        self._closing = True
        self._transport = None

        waiter = self._waiter
        if waiter is not None:
            self._waiter = None
            if not waiter.cancelled():
                waiter.set_exception(ZmqStreamClosed())

        waiter = self._event_waiter
        if waiter is not None:
            self._event_waiter = None
            if not waiter.cancelled():
                waiter.set_exception(ZmqStreamClosed())

    def at_closing(self):
        """Return True if the buffer is empty and 'feed_closing' was called."""
        return self._closing and not self._queue

    def feed_msg(self, msg):
        """Private"""
        assert not self._closing, 'feed_msg after feed_closing'

        msg_len = sum(len(i) for i in msg)
        self._queue.append((msg_len, msg))
        self._queue_len += msg_len

        waiter = self._waiter
        if waiter is not None:
            self._waiter = None
            if not waiter.cancelled():
                waiter.set_result(None)

        if (self._transport is not None and
                not self._paused and
                self._queue_len > self._high_water):
            self._transport.pause_reading()
            self._paused = True

    def feed_event(self, event):
        """Private"""
        assert not self._closing, 'feed_event after feed_closing'

        self._event_queue.append(event)

        event_waiter = self._event_waiter
        if event_waiter is not None:
            self._event_waiter = None
            if not event_waiter.cancelled():
                event_waiter.set_result(None)

    @asyncio.coroutine
    def read(self):
        if self._exception is not None:
            raise self._exception

        if self._closing:
            raise ZmqStreamClosed()

        if not self._queue_len:
            if self._waiter is not None:
                raise RuntimeError('read called while another coroutine is '
                                   'already waiting for incoming data')
            self._waiter = asyncio.Future(loop=self._loop)
            try:
                yield from self._waiter
            finally:
                self._waiter = None

        msg_len, msg = self._queue.popleft()
        self._queue_len -= msg_len
        self._maybe_resume_transport()
        return msg

    @asyncio.coroutine
    def read_event(self):
        if self._closing:
            raise ZmqStreamClosed()

        if not self._event_queue:
            if self._event_waiter is not None:
                raise RuntimeError('read_event called while another coroutine'
                                   ' is already waiting for incoming data')
            self._event_waiter = asyncio.Future(loop=self._loop)
            try:
                yield from self._event_waiter
            finally:
                self._event_waiter = None

        event = self._event_queue.popleft()
        return event
aiozmq-0.9.0/aiozmq/util.py0000664000372000037200000000075013614330211016505 0ustar travistravis00000000000000from collections.abc import Set


class _EndpointsSet(Set):

    __slots__ = ('_collection',)

    def __init__(self, collection):
        self._collection = collection

    def __len__(self):
        return len(self._collection)

    def __contains__(self, endpoint):
        return endpoint in self._collection

    def __iter__(self):
        return iter(self._collection)

    def __repr__(self):
        return '{' + ', '.join(sorted(self._collection)) + '}'

    __str__ = __repr__
aiozmq-0.9.0/aiozmq/_test_util.py0000664000372000037200000002432713614330211017711 0ustar travistravis00000000000000"""Private test support utilities"""

import contextlib
import functools
import logging
import platform
import socket
import sys
import time
import unittest


class Error(Exception):
    """Base class for regression test exceptions."""


class TestFailed(Error):
    """Test failed."""


def _requires_unix_version(sysname, min_version):  # pragma: no cover
    """Decorator raising SkipTest if the OS is `sysname` and the version is
    less than `min_version`.

    For example, @_requires_unix_version('FreeBSD', (7, 2)) raises SkipTest if
    the FreeBSD version is less than 7.2.
""" def decorator(func): @functools.wraps(func) def wrapper(*args, **kw): if platform.system() == sysname: version_txt = platform.release().split('-', 1)[0] try: version = tuple(map(int, version_txt.split('.'))) except ValueError: pass else: if version < min_version: min_version_txt = '.'.join(map(str, min_version)) raise unittest.SkipTest( "%s version %s or higher required, not %s" % (sysname, min_version_txt, version_txt)) return func(*args, **kw) wrapper.min_version = min_version return wrapper return decorator def requires_freebsd_version(*min_version): # pragma: no cover """Decorator raising SkipTest if the OS is FreeBSD and the FreeBSD version is less than `min_version`. For example, @requires_freebsd_version(7, 2) raises SkipTest if the FreeBSD version is less than 7.2. """ return _requires_unix_version('FreeBSD', min_version) def requires_linux_version(*min_version): # pragma: no cover """Decorator raising SkipTest if the OS is Linux and the Linux version is less than `min_version`. For example, @requires_linux_version(2, 6, 32) raises SkipTest if the Linux version is less than 2.6.32. """ return _requires_unix_version('Linux', min_version) def requires_mac_ver(*min_version): # pragma: no cover """Decorator raising SkipTest if the OS is Mac OS X and the OS X version if less than min_version. For example, @requires_mac_ver(10, 5) raises SkipTest if the OS X version is lesser than 10.5. 
""" def decorator(func): @functools.wraps(func) def wrapper(*args, **kw): if sys.platform == 'darwin': version_txt = platform.mac_ver()[0] try: version = tuple(map(int, version_txt.split('.'))) except ValueError: pass else: if version < min_version: min_version_txt = '.'.join(map(str, min_version)) raise unittest.SkipTest( "Mac OS X %s or higher required, not %s" % (min_version_txt, version_txt)) return func(*args, **kw) wrapper.min_version = min_version return wrapper return decorator # Don't use "localhost", since resolving it uses the DNS under recent # Windows versions (see issue #18792). HOST = "127.0.0.1" HOSTv6 = "::1" def _is_ipv6_enabled(): # pragma: no cover """Check whether IPv6 is enabled on this host.""" if socket.has_ipv6: sock = None try: sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) sock.bind((HOSTv6, 0)) return True except OSError: pass finally: if sock: sock.close() return False IPV6_ENABLED = _is_ipv6_enabled() def find_unused_port(family=socket.AF_INET, socktype=socket.SOCK_STREAM): # pragma: no cover """Returns an unused port that should be suitable for binding. This is achieved by creating a temporary socket with the same family and type as the 'sock' parameter (default is AF_INET, SOCK_STREAM), and binding it to the specified host address (defaults to 0.0.0.0) with the port set to 0, eliciting an unused ephemeral port from the OS. The temporary socket is then closed and deleted, and the ephemeral port is returned. Either this method or bind_port() should be used for any tests where a server socket needs to be bound to a particular port for the duration of the test. Which one to use depends on whether the calling code is creating a python socket, or if an unused port needs to be provided in a constructor or passed to an external program (i.e. the -accept argument to openssl's s_server mode). Always prefer bind_port() over find_unused_port() where possible. Hard coded ports should *NEVER* be used. 
As soon as a server socket is bound to a hard coded port, the ability to run multiple instances of the test simultaneously on the same host is compromised, which makes the test a ticking time bomb in a buildbot environment. On Unix buildbots, this may simply manifest as a failed test, which can be recovered from without intervention in most cases, but on Windows, the entire python process can completely and utterly wedge, requiring someone to log in to the buildbot and manually kill the affected process. (This is easy to reproduce on Windows, unfortunately, and can be traced to the SO_REUSEADDR socket option having different semantics on Windows versus Unix/Linux. On Unix, you can't have two AF_INET SOCK_STREAM sockets bind, listen and then accept connections on identical host/ports. An EADDRINUSE OSError will be raised at some point (depending on the platform and the order bind and listen were called on each socket). However, on Windows, if SO_REUSEADDR is set on the sockets, no EADDRINUSE will ever be raised when attempting to bind two identical host/ports. When accept() is called on each socket, the second caller's process will steal the port from the first caller, leaving them both in an awkwardly wedged state where they'll no longer respond to any signals or graceful kills, and must be forcibly killed via OpenProcess()/TerminateProcess(). The solution on Windows is to use the SO_EXCLUSIVEADDRUSE socket option instead of SO_REUSEADDR, which effectively affords the same semantics as SO_REUSEADDR on Unix. Given the propensity of Unix developers in the Open Source world compared to Windows ones, this is a common mistake. A quick look over OpenSSL's 0.9.8g source shows that they use SO_REUSEADDR when openssl.exe is called with the 's_server' option, for example. See http://bugs.python.org/issue2550 for more info. 
The following site also has a very thorough description about the implications of both REUSEADDR and EXCLUSIVEADDRUSE on Windows: http://msdn2.microsoft.com/en-us/library/ms740621(VS.85).aspx) XXX: although this approach is a vast improvement on previous attempts to elicit unused ports, it rests heavily on the assumption that the ephemeral port returned to us by the OS won't immediately be dished back out to some other process when we close and delete our temporary socket but before our calling code has a chance to bind the returned port. We can deal with this issue if/when we come across it. """ tempsock = socket.socket(family, socktype) port = bind_port(tempsock) tempsock.close() del tempsock return port def bind_port(sock, host=HOST): # pragma: no cover """Bind the socket to a free port and return the port number. Relies on ephemeral ports in order to ensure we are using an unbound port. This is important as many tests may be running simultaneously, especially in a buildbot environment. This method raises an exception if the sock.family is AF_INET and sock.type is SOCK_STREAM, *and* the socket has SO_REUSEADDR or SO_REUSEPORT set on it. Tests should *never* set these socket options for TCP/IP sockets. The only case for setting these options is testing multicasting via multiple UDP sockets. Additionally, if the SO_EXCLUSIVEADDRUSE socket option is available (i.e. on Windows), it will be set on the socket. This will prevent anyone else from bind()'ing to our host/port for the duration of the test. 
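The ephemeral-port trick described above can be sketched with the standard library alone. This simplified version (the function name is illustrative) omits the SO_REUSEADDR/SO_EXCLUSIVEADDRUSE handling that the real `find_unused_port`/`bind_port` pair performs:

```python
import socket

def sketch_find_unused_port(family=socket.AF_INET,
                            socktype=socket.SOCK_STREAM):
    """Bind a temporary socket to port 0 so the OS picks a free
    ephemeral port, then close the socket and return that port."""
    with socket.socket(family, socktype) as tempsock:
        tempsock.bind(('127.0.0.1', 0))
        port = tempsock.getsockname()[1]
    # The socket is now closed; the returned port *may* be handed to
    # another process before the caller binds it -- the race the
    # docstring warns about.
    return port
```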
""" if sock.family == socket.AF_INET and sock.type == socket.SOCK_STREAM: if hasattr(socket, 'SO_REUSEADDR'): if sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) == 1: raise TestFailed("tests should never set the SO_REUSEADDR " "socket option on TCP/IP sockets!") if hasattr(socket, 'SO_REUSEPORT'): try: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) if opt == 1: raise TestFailed("tests should never set the SO_REUSEPORT " "socket option on TCP/IP sockets!") except OSError: # Python's socket module was compiled using modern headers # thus defining SO_REUSEPORT but this process is running # under an older kernel that does not support SO_REUSEPORT. pass if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'): sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1) sock.bind((host, 0)) port = sock.getsockname()[1] return port def check_errno(errno, exc): assert isinstance(exc, OSError), exc assert exc.errno == errno, (exc, errno) class TestHandler(logging.Handler): def __init__(self, queue): super().__init__() self.queue = queue def emit(self, record): time.sleep(0) self.queue.put_nowait(record) @contextlib.contextmanager def log_hook(logname, queue): logger = logging.getLogger(logname) handler = TestHandler(queue) logger.addHandler(handler) level = logger.level logger.setLevel(logging.DEBUG) try: yield finally: logger.removeHandler(handler) logger.level = level class RpcMixin: def close_service(self, service): if service is None: return loop = service._loop service.close() loop.run_until_complete(service.wait_closed()) aiozmq-0.9.0/aiozmq/core.py0000664000372000037200000007566313614330211016477 0ustar travistravis00000000000000import asyncio import asyncio.events import errno import struct import sys import threading import weakref import zmq from collections import deque, namedtuple from collections.abc import Iterable from .interface import ZmqTransport, ZmqProtocol from .log import logger from .selector import ZmqSelector from .util import 
_EndpointsSet

if sys.platform == 'win32':
    from asyncio.windows_events import SelectorEventLoop
else:
    from asyncio.unix_events import SelectorEventLoop, SafeChildWatcher


__all__ = ['ZmqEventLoop', 'ZmqEventLoopPolicy', 'create_zmq_connection']


SocketEvent = namedtuple('SocketEvent', 'event value endpoint')


@asyncio.coroutine
def create_zmq_connection(protocol_factory, zmq_type, *,
                          bind=None, connect=None, zmq_sock=None, loop=None):
    """A coroutine which creates a ZeroMQ connection endpoint.

    The return value is a pair of (transport, protocol), where the
    transport supports the ZmqTransport interface.

    protocol_factory should instantiate an object implementing the
    ZmqProtocol interface.

    zmq_type is the type of the ZeroMQ socket (zmq.REQ, zmq.REP, zmq.PUB,
    zmq.SUB, zmq.PAIR, zmq.DEALER, zmq.ROUTER, zmq.PULL, zmq.PUSH, etc.)

    bind is a string or iterable of strings that specifies endpoints.
    Every endpoint is bound to the transport and accepts incoming
    connections. The other side should use the connect parameter to
    connect to this transport.
    See http://api.zeromq.org/master:zmq-bind for details.

    connect is a string or iterable of strings that specifies endpoints.
    Every endpoint connects the transport to the specified address. The
    other side should use the bind parameter to wait for incoming
    connections.
    See http://api.zeromq.org/master:zmq-connect for details.

    An endpoint is a string consisting of two parts as follows:
    transport://address.

    The transport part specifies the underlying transport protocol to use.
    The meaning of the address part is specific to the underlying transport
    protocol selected.

    The following transports are defined:

    inproc - local in-process (inter-thread) communication transport,
       see http://api.zeromq.org/master:zmq-inproc.
ipc - local inter-process communication transport, see http://api.zeromq.org/master:zmq-ipc tcp - unicast transport using TCP, see http://api.zeromq.org/master:zmq_tcp pgm, epgm - reliable multicast transport using PGM, see http://api.zeromq.org/master:zmq_pgm zmq_sock is a zmq.Socket instance to use preexisting object with created transport. """ if loop is None: loop = asyncio.get_event_loop() if isinstance(loop, ZmqEventLoop): ret = yield from loop.create_zmq_connection(protocol_factory, zmq_type, bind=bind, connect=connect, zmq_sock=zmq_sock) return ret try: if zmq_sock is None: zmq_sock = zmq.Context.instance().socket(zmq_type) elif zmq_sock.getsockopt(zmq.TYPE) != zmq_type: raise ValueError('Invalid zmq_sock type') except zmq.ZMQError as exc: raise OSError(exc.errno, exc.strerror) from exc protocol = protocol_factory() waiter = asyncio.Future(loop=loop) transport = _ZmqLooplessTransportImpl(loop, zmq_type, zmq_sock, protocol, waiter) yield from waiter try: if bind is not None: if isinstance(bind, str): bind = [bind] else: if not isinstance(bind, Iterable): raise ValueError('bind should be str or iterable') for endpoint in bind: yield from transport.bind(endpoint) if connect is not None: if isinstance(connect, str): connect = [connect] else: if not isinstance(connect, Iterable): raise ValueError('connect should be ' 'str or iterable') for endpoint in connect: yield from transport.connect(endpoint) return transport, protocol except OSError: # don't care if zmq_sock.close can raise exception # that should never happen zmq_sock.close() raise class ZmqEventLoop(SelectorEventLoop): """ZeroMQ event loop. Follows asyncio.AbstractEventLoop specification, in addition implements create_zmq_connection method for working with ZeroMQ sockets. 
""" def __init__(self, *, zmq_context=None): super().__init__(selector=ZmqSelector()) if zmq_context is None: self._zmq_context = zmq.Context.instance() else: self._zmq_context = zmq_context self._zmq_sockets = weakref.WeakSet() def close(self): for zmq_sock in self._zmq_sockets: if not zmq_sock.closed: zmq_sock.close() super().close() @asyncio.coroutine def create_zmq_connection(self, protocol_factory, zmq_type, *, bind=None, connect=None, zmq_sock=None): """A coroutine which creates a ZeroMQ connection endpoint. See aiozmq.create_zmq_connection() coroutine for details. """ try: if zmq_sock is None: zmq_sock = self._zmq_context.socket(zmq_type) elif zmq_sock.getsockopt(zmq.TYPE) != zmq_type: raise ValueError('Invalid zmq_sock type') except zmq.ZMQError as exc: raise OSError(exc.errno, exc.strerror) from exc protocol = protocol_factory() waiter = asyncio.Future(loop=self) transport = _ZmqTransportImpl(self, zmq_type, zmq_sock, protocol, waiter) yield from waiter try: if bind is not None: if isinstance(bind, str): bind = [bind] else: if not isinstance(bind, Iterable): raise ValueError('bind should be str or iterable') for endpoint in bind: yield from transport.bind(endpoint) if connect is not None: if isinstance(connect, str): connect = [connect] else: if not isinstance(connect, Iterable): raise ValueError('connect should be ' 'str or iterable') for endpoint in connect: yield from transport.connect(endpoint) self._zmq_sockets.add(zmq_sock) return transport, protocol except OSError: # don't care if zmq_sock.close can raise exception # that should never happen zmq_sock.close() raise class _ZmqEventProtocol(ZmqProtocol): """This protocol is used internally by aiozmq to receive messages from a socket event monitor socket. This protocol decodes each event message into a namedtuple and then passes them through to the protocol running the socket that is being monitored via the ZmqProtocol.event_received method. 
This design simplifies the API visible to the developer at the cost of adding some internal complexity - a hidden protocol that transfers events from the monitor protocol to the monitored socket's protocol. """ def __init__(self, loop, main_protocol): self._protocol = main_protocol self.wait_ready = asyncio.Future(loop=loop) self.wait_closed = asyncio.Future(loop=loop) def connection_made(self, transport): self.transport = transport self.wait_ready.set_result(True) def connection_lost(self, exc): self.wait_closed.set_result(exc) def msg_received(self, data): if len(data) != 2 or len(data[0]) != 6: raise RuntimeError( "Invalid event message format: {}".format(data)) event, value = struct.unpack("=hi", data[0]) endpoint = data[1].decode() self.event_received(SocketEvent(event, value, endpoint)) def event_received(self, evt): self._protocol.event_received(evt) class _BaseTransport(ZmqTransport): LOG_THRESHOLD_FOR_CONNLOST_WRITES = 5 ZMQ_TYPES = {getattr(zmq, name): name for name in ('PUB', 'SUB', 'REP', 'REQ', 'PUSH', 'PULL', 'DEALER', 'ROUTER', 'XPUB', 'XSUB', 'PAIR', 'STREAM') if hasattr(zmq, name)} def __init__(self, loop, zmq_type, zmq_sock, protocol): super().__init__(None) self._protocol_paused = False self._set_write_buffer_limits() self._extra['zmq_socket'] = zmq_sock self._extra['zmq_type'] = zmq_type self._loop = loop self._zmq_sock = zmq_sock self._zmq_type = zmq_type self._protocol = protocol self._closing = False self._buffer = deque() self._buffer_size = 0 self._bindings = set() self._connections = set() self._subscriptions = set() self._paused = False self._conn_lost = 0 self._monitor = None def __repr__(self): info = ['ZmqTransport', 'sock={}'.format(self._zmq_sock), 'type={}'.format(self.ZMQ_TYPES[self._zmq_type])] try: events = self._zmq_sock.getsockopt(zmq.EVENTS) if events & zmq.POLLIN: info.append('read=polling') else: info.append('read=idle') if events & zmq.POLLOUT: state = 'polling' else: state = 'idle' bufsize = self.get_write_buffer_size() 
info.append('write=<{}, bufsize={}>'.format(state, bufsize)) except zmq.ZMQError: pass return '<{}>'.format(' '.join(info)) def write(self, data): if not data: return for part in data: if not isinstance(part, (bytes, bytearray, memoryview)): raise TypeError('data argument must be iterable of ' 'byte-ish (%r)' % data) data_len = sum(len(part) for part in data) if self._conn_lost: if self._conn_lost >= self.LOG_THRESHOLD_FOR_CONNLOST_WRITES: logger.warning('write to closed ZMQ socket.') self._conn_lost += 1 return if not self._buffer: try: if self._do_send(data): return except Exception as exc: self._fatal_error(exc, 'Fatal write error on zmq socket transport') return self._buffer.append((data_len, data)) self._buffer_size += data_len self._maybe_pause_protocol() def can_write_eof(self): return False def abort(self): self._force_close(None) def _fatal_error(self, exc, message='Fatal error on transport'): # Should be called from exception handler only. self._loop.call_exception_handler({ 'message': message, 'exception': exc, 'transport': self, 'protocol': self._protocol, }) self._force_close(exc) def _call_connection_lost(self, exc): try: self._protocol.connection_lost(exc) finally: if not self._zmq_sock.closed: self._zmq_sock.close() self._zmq_sock = None self._protocol = None self._loop = None def _maybe_pause_protocol(self): size = self.get_write_buffer_size() if size <= self._high_water: return if not self._protocol_paused: self._protocol_paused = True try: self._protocol.pause_writing() except Exception as exc: self._loop.call_exception_handler({ 'message': 'protocol.pause_writing() failed', 'exception': exc, 'transport': self, 'protocol': self._protocol, }) def _maybe_resume_protocol(self): if (self._protocol_paused and self.get_write_buffer_size() <= self._low_water): self._protocol_paused = False try: self._protocol.resume_writing() except Exception as exc: self._loop.call_exception_handler({ 'message': 'protocol.resume_writing() failed', 'exception': exc, 
'transport': self, 'protocol': self._protocol, }) def _set_write_buffer_limits(self, high=None, low=None): if high is None: if low is None: high = 64*1024 else: high = 4*low if low is None: low = high // 4 if not high >= low >= 0: raise ValueError('high (%r) must be >= low (%r) must be >= 0' % (high, low)) self._high_water = high self._low_water = low def get_write_buffer_limits(self): return (self._low_water, self._high_water) def set_write_buffer_limits(self, high=None, low=None): self._set_write_buffer_limits(high=high, low=low) self._maybe_pause_protocol() def pause_reading(self): if self._closing: raise RuntimeError('Cannot pause_reading() when closing') if self._paused: raise RuntimeError('Already paused') self._paused = True self._do_pause_reading() def resume_reading(self): if not self._paused: raise RuntimeError('Not paused') self._paused = False if self._closing: return self._do_resume_reading() def getsockopt(self, option): while True: try: ret = self._zmq_sock.getsockopt(option) if option == zmq.LAST_ENDPOINT: ret = ret.decode('utf-8').rstrip('\x00') return ret except zmq.ZMQError as exc: if exc.errno == errno.EINTR: continue raise OSError(exc.errno, exc.strerror) from exc def setsockopt(self, option, value): while True: try: self._zmq_sock.setsockopt(option, value) if option == zmq.SUBSCRIBE: self._subscriptions.add(value) elif option == zmq.UNSUBSCRIBE: self._subscriptions.discard(value) return except zmq.ZMQError as exc: if exc.errno == errno.EINTR: continue raise OSError(exc.errno, exc.strerror) from exc def get_write_buffer_size(self): return self._buffer_size def bind(self, endpoint): fut = asyncio.Future(loop=self._loop) try: if not isinstance(endpoint, str): raise TypeError('endpoint should be str, got {!r}' .format(endpoint)) try: self._zmq_sock.bind(endpoint) real_endpoint = self.getsockopt(zmq.LAST_ENDPOINT) except zmq.ZMQError as exc: raise OSError(exc.errno, exc.strerror) from exc except Exception as exc: fut.set_exception(exc) else: 
self._bindings.add(real_endpoint) fut.set_result(real_endpoint) return fut def unbind(self, endpoint): fut = asyncio.Future(loop=self._loop) try: if not isinstance(endpoint, str): raise TypeError('endpoint should be str, got {!r}' .format(endpoint)) try: self._zmq_sock.unbind(endpoint) except zmq.ZMQError as exc: raise OSError(exc.errno, exc.strerror) from exc else: self._bindings.discard(endpoint) except Exception as exc: fut.set_exception(exc) else: fut.set_result(None) return fut def bindings(self): return _EndpointsSet(self._bindings) def connect(self, endpoint): fut = asyncio.Future(loop=self._loop) try: if not isinstance(endpoint, str): raise TypeError('endpoint should be str, got {!r}' .format(endpoint)) try: self._zmq_sock.connect(endpoint) except zmq.ZMQError as exc: raise OSError(exc.errno, exc.strerror) from exc except Exception as exc: fut.set_exception(exc) else: self._connections.add(endpoint) fut.set_result(endpoint) return fut def disconnect(self, endpoint): fut = asyncio.Future(loop=self._loop) try: if not isinstance(endpoint, str): raise TypeError('endpoint should be str, got {!r}' .format(endpoint)) try: self._zmq_sock.disconnect(endpoint) except zmq.ZMQError as exc: raise OSError(exc.errno, exc.strerror) from exc except Exception as exc: fut.set_exception(exc) else: self._connections.discard(endpoint) fut.set_result(None) return fut def connections(self): return _EndpointsSet(self._connections) def subscribe(self, value): if self._zmq_type != zmq.SUB: raise NotImplementedError("Not supported ZMQ socket type") if not isinstance(value, bytes): raise TypeError("value argument should be bytes") if value in self._subscriptions: return self.setsockopt(zmq.SUBSCRIBE, value) def unsubscribe(self, value): if self._zmq_type != zmq.SUB: raise NotImplementedError("Not supported ZMQ socket type") if not isinstance(value, bytes): raise TypeError("value argument should be bytes") self.setsockopt(zmq.UNSUBSCRIBE, value) def subscriptions(self): if 
self._zmq_type != zmq.SUB: raise NotImplementedError("Not supported ZMQ socket type") return _EndpointsSet(self._subscriptions) @asyncio.coroutine def enable_monitor(self, events=None): # The standard approach of binding and then connecting does not # work in this specific case. The event loop does not properly # detect messages on the inproc transport which means that event # messages get missed. # pyzmq's 'get_monitor_socket' method can't be used because this # performs the actions in the wrong order for use with an event # loop. # For more information on this issue see: # http://lists.zeromq.org/pipermail/zeromq-dev/2015-July/029181.html if (zmq.zmq_version_info() < (4,) or zmq.pyzmq_version_info() < (14, 4,)): raise NotImplementedError( "Socket monitor requires libzmq >= 4 and pyzmq >= 14.4, " "have libzmq:{}, pyzmq:{}".format( zmq.zmq_version(), zmq.pyzmq_version())) if self._monitor is None: addr = "inproc://monitor.s-{}".format(self._zmq_sock.FD) events = events or zmq.EVENT_ALL _, self._monitor = yield from create_zmq_connection( lambda: _ZmqEventProtocol(self._loop, self._protocol), zmq.PAIR, connect=addr, loop=self._loop) # bind must come after connect self._zmq_sock.monitor(addr, events) yield from self._monitor.wait_ready @asyncio.coroutine def disable_monitor(self): self._disable_monitor() def _disable_monitor(self): if self._monitor: self._zmq_sock.disable_monitor() self._monitor.transport.close() self._monitor = None class _ZmqTransportImpl(_BaseTransport): def __init__(self, loop, zmq_type, zmq_sock, protocol, waiter=None): super().__init__(loop, zmq_type, zmq_sock, protocol) self._loop.add_reader(self._zmq_sock, self._read_ready) self._loop.call_soon(self._protocol.connection_made, self) if waiter is not None: self._loop.call_soon(waiter.set_result, None) def _read_ready(self): try: try: data = self._zmq_sock.recv_multipart(zmq.NOBLOCK) except zmq.ZMQError as exc: if exc.errno in (errno.EAGAIN, errno.EINTR): return else: raise OSError(exc.errno, 
exc.strerror) from exc except Exception as exc: self._fatal_error(exc, 'Fatal read error on zmq socket transport') else: self._protocol.msg_received(data) def _do_send(self, data): try: self._zmq_sock.send_multipart(data, zmq.DONTWAIT) return True except zmq.ZMQError as exc: if exc.errno in (errno.EAGAIN, errno.EINTR): self._loop.add_writer(self._zmq_sock, self._write_ready) return False else: raise OSError(exc.errno, exc.strerror) from exc def _write_ready(self): assert self._buffer, 'Data should not be empty' try: try: self._zmq_sock.send_multipart(self._buffer[0][1], zmq.DONTWAIT) except zmq.ZMQError as exc: if exc.errno in (errno.EAGAIN, errno.EINTR): return else: raise OSError(exc.errno, exc.strerror) from exc except Exception as exc: self._fatal_error(exc, 'Fatal write error on zmq socket transport') else: sent_len, sent_data = self._buffer.popleft() self._buffer_size -= sent_len self._maybe_resume_protocol() if not self._buffer: self._loop.remove_writer(self._zmq_sock) if self._closing: self._call_connection_lost(None) def close(self): if self._closing: return self._closing = True if self._monitor: self._disable_monitor() if not self._paused: self._loop.remove_reader(self._zmq_sock) if not self._buffer: self._conn_lost += 1 self._loop.call_soon(self._call_connection_lost, None) def _force_close(self, exc): if self._conn_lost: return if self._monitor: self._disable_monitor() if self._buffer: self._buffer.clear() self._buffer_size = 0 self._loop.remove_writer(self._zmq_sock) if not self._closing: self._closing = True if not self._paused: if self._zmq_sock.closed: self._loop._remove_reader(self._zmq_sock) else: self._loop.remove_reader(self._zmq_sock) self._conn_lost += 1 self._loop.call_soon(self._call_connection_lost, exc) def _do_pause_reading(self): self._loop.remove_reader(self._zmq_sock) def _do_resume_reading(self): self._loop.add_reader(self._zmq_sock, self._read_ready) class _ZmqLooplessTransportImpl(_BaseTransport): def __init__(self, loop, zmq_type, 
zmq_sock, protocol, waiter): super().__init__(loop, zmq_type, zmq_sock, protocol) fd = zmq_sock.getsockopt(zmq.FD) self._fd = fd self._loop.add_reader(fd, self._read_ready) self._loop.call_soon(self._protocol.connection_made, self) self._loop.call_soon(waiter.set_result, None) self._soon_call = None def _read_ready(self): self._soon_call = None if self._zmq_sock is None: return events = self._zmq_sock.getsockopt(zmq.EVENTS) try_again = False if not self._paused and events & zmq.POLLIN: self._do_read() try_again = True if self._buffer and events & zmq.POLLOUT: self._do_write() if not try_again: try_again = bool(self._buffer) if try_again: postevents = self._zmq_sock.getsockopt(zmq.EVENTS) if postevents & zmq.POLLIN: schedule = True elif self._buffer and postevents & zmq.POLLOUT: schedule = True else: schedule = False if schedule: self._soon_call = self._loop.call_soon(self._read_ready) def _do_read(self): try: try: data = self._zmq_sock.recv_multipart(zmq.NOBLOCK) except zmq.ZMQError as exc: if exc.errno in (errno.EAGAIN, errno.EINTR): return else: raise OSError(exc.errno, exc.strerror) from exc except Exception as exc: self._fatal_error(exc, 'Fatal read error on zmq socket transport') else: self._protocol.msg_received(data) def _do_write(self): if not self._buffer: return try: try: self._zmq_sock.send_multipart(self._buffer[0][1], zmq.DONTWAIT) except zmq.ZMQError as exc: if exc.errno in (errno.EAGAIN, errno.EINTR): if self._soon_call is None: self._soon_call = self._loop.call_soon( self._read_ready) return else: raise OSError(exc.errno, exc.strerror) from exc except Exception as exc: self._fatal_error(exc, 'Fatal write error on zmq socket transport') else: sent_len, sent_data = self._buffer.popleft() self._buffer_size -= sent_len self._maybe_resume_protocol() if not self._buffer and self._closing: self._loop.remove_reader(self._fd) self._call_connection_lost(None) else: if self._soon_call is None: self._soon_call = self._loop.call_soon(self._read_ready) def 
_do_send(self, data): try: self._zmq_sock.send_multipart(data, zmq.DONTWAIT) if self._soon_call is None: self._soon_call = self._loop.call_soon(self._read_ready) return True except zmq.ZMQError as exc: if exc.errno not in (errno.EAGAIN, errno.EINTR): raise OSError(exc.errno, exc.strerror) from exc else: if self._soon_call is None: self._soon_call = self._loop.call_soon(self._read_ready) return False def close(self): if self._closing: return self._closing = True if self._monitor: self._disable_monitor() if not self._buffer: self._conn_lost += 1 if not self._paused: self._loop.remove_reader(self._fd) self._loop.call_soon(self._call_connection_lost, None) def _force_close(self, exc): if self._conn_lost: return if self._monitor: self._disable_monitor() if self._buffer: self._buffer.clear() self._buffer_size = 0 self._closing = True self._loop.remove_reader(self._fd) self._conn_lost += 1 self._loop.call_soon(self._call_connection_lost, exc) def _do_pause_reading(self): pass def _do_resume_reading(self): self._read_ready() def _call_connection_lost(self, exc): try: super()._call_connection_lost(exc) finally: self._soon_call = None class ZmqEventLoopPolicy(asyncio.AbstractEventLoopPolicy): """ZeroMQ policy implementation for accessing the event loop. In this policy, each thread has its own event loop. However, we only automatically create an event loop by default for the main thread; other threads by default have no event loop. """ class _Local(threading.local): _loop = None _set_called = False def __init__(self): self._local = self._Local() self._watcher = None def get_event_loop(self): """Get the event loop. If current thread is the main thread and there are no registered event loop for current thread then the call creates new event loop and registers it. Return an instance of ZmqEventLoop. Raise RuntimeError if there is no registered event loop for current thread. 
""" if (self._local._loop is None and not self._local._set_called and isinstance(threading.current_thread(), threading._MainThread)): self.set_event_loop(self.new_event_loop()) assert self._local._loop is not None, \ ('There is no current event loop in thread %r.' % threading.current_thread().name) return self._local._loop def new_event_loop(self): """Create a new event loop. You must call set_event_loop() to make this the current event loop. """ return ZmqEventLoop() def set_event_loop(self, loop): """Set the event loop. As a side effect, if a child watcher was set before, then calling .set_event_loop() from the main thread will call .attach_loop(loop) on the child watcher. """ self._local._set_called = True assert loop is None or isinstance(loop, asyncio.AbstractEventLoop), \ "loop should be None or AbstractEventLoop instance" self._local._loop = loop if (self._watcher is not None and isinstance(threading.current_thread(), threading._MainThread)): self._watcher.attach_loop(loop) if sys.platform != 'win32': def _init_watcher(self): with asyncio.events._lock: if self._watcher is None: # pragma: no branch self._watcher = SafeChildWatcher() if isinstance(threading.current_thread(), threading._MainThread): self._watcher.attach_loop(self._local._loop) def get_child_watcher(self): """Get the child watcher. If not yet set, a SafeChildWatcher object is automatically created. 
""" if self._watcher is None: self._init_watcher() return self._watcher def set_child_watcher(self, watcher): """Set the child watcher.""" assert watcher is None or \ isinstance(watcher, asyncio.AbstractChildWatcher), \ "watcher should be None or AbstractChildWatcher instance" if self._watcher is not None: self._watcher.close() self._watcher = watcher aiozmq-0.9.0/aiozmq/rpc/0000775000372000037200000000000013614330247015751 5ustar travistravis00000000000000aiozmq-0.9.0/aiozmq/rpc/pipeline.py0000664000372000037200000001166713614330211020132 0ustar travistravis00000000000000import asyncio from functools import partial import zmq from aiozmq import create_zmq_connection from .base import ( NotFoundError, ParametersError, Service, ServiceClosedError, _BaseProtocol, _BaseServerProtocol, ) from .log import logger from .util import ( _MethodCall, ) @asyncio.coroutine def connect_pipeline(*, connect=None, bind=None, loop=None, translation_table=None): """A coroutine that creates and connects/binds Pipeline client instance. Usually for this function you need to use *connect* parameter, but ZeroMQ does not forbid to use *bind*. translation_table -- an optional table for custom value translators. loop -- an optional parameter to point ZmqEventLoop instance. If loop is None then default event loop will be given by asyncio.get_event_loop() call. Returns PipelineClient instance. """ if loop is None: loop = asyncio.get_event_loop() transp, proto = yield from create_zmq_connection( lambda: _ClientProtocol(loop, translation_table=translation_table), zmq.PUSH, connect=connect, bind=bind, loop=loop) return PipelineClient(loop, proto) @asyncio.coroutine def serve_pipeline(handler, *, connect=None, bind=None, loop=None, translation_table=None, log_exceptions=False, exclude_log_exceptions=(), timeout=None): """A coroutine that creates and connects/binds Pipeline server instance. Usually for this function you need to use *bind* parameter, but ZeroMQ does not forbid to use *connect*. 
    handler -- an object which processes incoming pipeline calls.
               Usually you want to pass an AttrHandler instance.

    log_exceptions -- log exceptions from remote calls if True.

    translation_table -- an optional table for custom value translators.

    exclude_log_exceptions -- sequence of exception classes that should
                              not be logged.

    timeout -- timeout for performing handling of async server calls.

    loop -- an optional parameter to point to a ZmqEventLoop instance.
            If loop is None then the default event loop is used, as
            returned by the asyncio.get_event_loop() call.

    Returns Service instance.
    """
    if loop is None:
        loop = asyncio.get_event_loop()
    trans, proto = yield from create_zmq_connection(
        lambda: _ServerProtocol(loop, handler,
                                translation_table=translation_table,
                                log_exceptions=log_exceptions,
                                exclude_log_exceptions=exclude_log_exceptions,
                                timeout=timeout),
        zmq.PULL, connect=connect, bind=bind, loop=loop)
    return Service(loop, proto)


class _ClientProtocol(_BaseProtocol):

    def call(self, name, args, kwargs):
        if self.transport is None:
            raise ServiceClosedError()
        bname = name.encode('utf-8')
        bargs = self.packer.packb(args)
        bkwargs = self.packer.packb(kwargs)
        self.transport.write([bname, bargs, bkwargs])
        fut = asyncio.Future(loop=self.loop)
        fut.set_result(None)
        return fut


class PipelineClient(Service):

    def __init__(self, loop, proto):
        super().__init__(loop, proto)

    @property
    def notify(self):
        """Return object for dynamic Pipeline calls.
        The usage is:
        yield from client.notify.ns.func(1, 2)
        """
        return _MethodCall(self._proto)


class _ServerProtocol(_BaseServerProtocol):

    def msg_received(self, data):
        bname, bargs, bkwargs = data
        args = self.packer.unpackb(bargs)
        kwargs = self.packer.unpackb(bkwargs)
        try:
            name = bname.decode('utf-8')
            func = self.dispatch(name)
            args, kwargs, ret_ann = self.check_args(func, args, kwargs)
        except (NotFoundError, ParametersError) as exc:
            fut = asyncio.Future(loop=self.loop)
            fut.set_exception(exc)
        else:
            if asyncio.iscoroutinefunction(func):
                fut = self.add_pending(func(*args, **kwargs))
            else:
                fut = asyncio.Future(loop=self.loop)
                try:
                    fut.set_result(func(*args, **kwargs))
                except Exception as exc:
                    fut.set_exception(exc)
        fut.add_done_callback(partial(self.process_call_result,
                                      name=name, args=args, kwargs=kwargs))

    def process_call_result(self, fut, *, name, args, kwargs):
        self.discard_pending(fut)
        try:
            if fut.result() is not None:
                logger.warning("Pipeline handler %r returned not None", name)
        except (NotFoundError, ParametersError) as exc:
            logger.exception("Call to %r caused error: %r", name, exc)
        except asyncio.CancelledError:
            return
        except Exception:
            self.try_log(fut, name, args, kwargs)
aiozmq-0.9.0/aiozmq/rpc/util.py0000664000372000037200000000273013614330211017271 0ustar travistravis00000000000000import asyncio
import builtins

from .base import NotFoundError, ParametersError


class _MethodCall:

    __slots__ = ('_proto', '_timeout', '_names')

    def __init__(self, proto, timeout=None, names=()):
        self._proto = proto
        self._timeout = timeout
        self._names = names

    def __getattr__(self, name):
        return self.__class__(self._proto, self._timeout,
                              self._names + (name,))

    def __call__(self, *args, **kwargs):
        if not self._names:
            raise ValueError('RPC method name is empty')
        fut = self._proto.call('.'.join(self._names), args, kwargs)
        loop = self._proto.loop
        return asyncio.Task(asyncio.wait_for(fut, timeout=self._timeout,
                                             loop=loop), loop=loop)


def _fill_error_table():
    # Fill error table with
standard exceptions error_table = {} for name in dir(builtins): val = getattr(builtins, name) if isinstance(val, type) and issubclass(val, Exception): error_table['builtins.'+name] = val for name in dir(asyncio): val = getattr(asyncio, name) if isinstance(val, type) and issubclass(val, Exception): error_table['asyncio.'+name] = val error_table['aiozmq.rpc.base.NotFoundError'] = NotFoundError error_table['aiozmq.rpc.base.ParametersError'] = ParametersError return error_table aiozmq-0.9.0/aiozmq/rpc/rpc.py0000664000372000037200000002421713614330211017104 0ustar travistravis00000000000000"""ZeroMQ RPC""" import asyncio import os import random import struct import sys import time from collections import ChainMap from functools import partial import zmq from aiozmq import create_zmq_connection from .base import ( GenericError, NotFoundError, ParametersError, Service, ServiceClosedError, _BaseProtocol, _BaseServerProtocol, ) from .log import logger from .util import ( _MethodCall, _fill_error_table, ) __all__ = [ 'connect_rpc', 'serve_rpc', ] @asyncio.coroutine def connect_rpc(*, connect=None, bind=None, loop=None, error_table=None, translation_table=None, timeout=None): """A coroutine that creates and connects/binds RPC client. Usually for this function you need to use *connect* parameter, but ZeroMQ does not forbid to use *bind*. error_table -- an optional table for custom exception translators. timeout -- an optional timeout for RPC calls. If timeout is not None and remote call takes longer than timeout seconds then asyncio.TimeoutError will be raised at client side. If the server will return an answer after timeout has been raised that answer **is ignored**. translation_table -- an optional table for custom value translators. loop -- an optional parameter to point ZmqEventLoop instance. If loop is None then default event loop will be given by asyncio.get_event_loop call. Returns a RPCClient instance. 
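The timeout behaviour can be sketched with the stdlib alone: the client wraps each pending call future in asyncio.wait_for(), so an unanswered request surfaces as asyncio.TimeoutError. Modern async/await syntax is used here instead of the yield from style of this code base.

```python
import asyncio


async def main():
    loop = asyncio.get_running_loop()
    pending_reply = loop.create_future()   # the reply never arrives
    try:
        # _MethodCall wraps the call future the same way.
        await asyncio.wait_for(pending_reply, timeout=0.01)
    except asyncio.TimeoutError:
        return 'timed out'

result = asyncio.run(main())
assert result == 'timed out'
```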
""" if loop is None: loop = asyncio.get_event_loop() transp, proto = yield from create_zmq_connection( lambda: _ClientProtocol(loop, error_table=error_table, translation_table=translation_table), zmq.DEALER, connect=connect, bind=bind, loop=loop) return RPCClient(loop, proto, timeout=timeout) @asyncio.coroutine def serve_rpc(handler, *, connect=None, bind=None, loop=None, translation_table=None, log_exceptions=False, exclude_log_exceptions=(), timeout=None): """A coroutine that creates and connects/binds RPC server instance. Usually for this function you need to use *bind* parameter, but ZeroMQ does not forbid to use *connect*. handler -- an object which processes incoming RPC calls. Usually you like to pass AttrHandler instance. log_exceptions -- log exceptions from remote calls if True. exclude_log_exceptions -- sequence of exception classes than should not be logged. translation_table -- an optional table for custom value translators. timeout -- timeout for performing handling of async server calls. loop -- an optional parameter to point ZmqEventLoop instance. If loop is None then default event loop will be given by asyncio.get_event_loop call. Returns Service instance. 
""" if loop is None: loop = asyncio.get_event_loop() transp, proto = yield from create_zmq_connection( lambda: _ServerProtocol(loop, handler, translation_table=translation_table, log_exceptions=log_exceptions, exclude_log_exceptions=exclude_log_exceptions, timeout=timeout), zmq.ROUTER, connect=connect, bind=bind, loop=loop) return Service(loop, proto) _default_error_table = _fill_error_table() class _ClientProtocol(_BaseProtocol): """Client protocol implementation.""" REQ_PREFIX = struct.Struct('=HH') REQ_SUFFIX = struct.Struct('=Ld') RESP = struct.Struct('=HHLd?') def __init__(self, loop, *, error_table=None, translation_table=None): super().__init__(loop, translation_table=translation_table) self.calls = {} self.prefix = self.REQ_PREFIX.pack(os.getpid() % 0x10000, random.randrange(0x10000)) self.counter = 0 if error_table is None: self.error_table = _default_error_table else: self.error_table = ChainMap(error_table, _default_error_table) def msg_received(self, data): try: header, banswer = data pid, rnd, req_id, timestamp, is_error = self.RESP.unpack(header) answer = self.packer.unpackb(banswer) except Exception: logger.critical("Cannot unpack %r", data, exc_info=sys.exc_info()) return call = self.calls.pop(req_id, None) if call is None: logger.critical("Unknown answer id: %d (%d %d %f %d) -> %s", req_id, pid, rnd, timestamp, is_error, answer) elif call.cancelled(): logger.debug("The future for request #%08x has been cancelled, " "skip the received result.", req_id) else: if is_error: call.set_exception(self._translate_error(*answer)) else: call.set_result(answer) def connection_lost(self, exc): super().connection_lost(exc) for call in self.calls.values(): if not call.cancelled(): call.cancel() def _translate_error(self, exc_type, exc_args, exc_repr): found = self.error_table.get(exc_type) if found is None: return GenericError(exc_type, exc_args, exc_repr) else: return found(*exc_args) def _new_id(self): self.counter += 1 if self.counter > 0xffffffff: 
self.counter = 0 return (self.prefix + self.REQ_SUFFIX.pack(self.counter, time.time()), self.counter) def call(self, name, args, kwargs): if self.transport is None: raise ServiceClosedError() bname = name.encode('utf-8') bargs = self.packer.packb(args) bkwargs = self.packer.packb(kwargs) header, req_id = self._new_id() assert req_id not in self.calls, (req_id, self.calls) fut = asyncio.Future(loop=self.loop) self.calls[req_id] = fut self.transport.write([header, bname, bargs, bkwargs]) return fut class RPCClient(Service): def __init__(self, loop, proto, *, timeout): super().__init__(loop, proto) self._timeout = timeout @property def call(self): """Return object for dynamic RPC calls. The usage is: ret = yield from client.call.ns.func(1, 2) """ return _MethodCall(self._proto, timeout=self._timeout) def with_timeout(self, timeout): """Return a new RPCClient instance with overriden timeout""" return self.__class__(self._loop, self._proto, timeout=timeout) def __enter__(self): return self def __exit__(self, exc_type, exc_value, exc_tb): return class _ServerProtocol(_BaseServerProtocol): REQ = struct.Struct('=HHLd') RESP_PREFIX = struct.Struct('=HH') RESP_SUFFIX = struct.Struct('=Ld?') def __init__(self, loop, handler, *, translation_table=None, log_exceptions=False, exclude_log_exceptions=(), timeout=None): super().__init__(loop, handler, translation_table=translation_table, log_exceptions=log_exceptions, exclude_log_exceptions=exclude_log_exceptions, timeout=timeout) self.prefix = self.RESP_PREFIX.pack(os.getpid() % 0x10000, random.randrange(0x10000)) def msg_received(self, data): try: *pre, header, bname, bargs, bkwargs = data pid, rnd, req_id, timestamp = self.REQ.unpack(header) name = bname.decode('utf-8') args = self.packer.unpackb(bargs) kwargs = self.packer.unpackb(bkwargs) except Exception: logger.critical("Cannot unpack %r", data, exc_info=sys.exc_info()) return try: func = self.dispatch(name) args, kwargs, ret_ann = self.check_args(func, args, kwargs) except 
(NotFoundError, ParametersError) as exc: fut = asyncio.Future(loop=self.loop) fut.add_done_callback(partial(self.process_call_result, req_id=req_id, pre=pre, name=name, args=args, kwargs=kwargs)) fut.set_exception(exc) else: if asyncio.iscoroutinefunction(func): fut = self.add_pending(func(*args, **kwargs)) else: fut = asyncio.Future(loop=self.loop) try: fut.set_result(func(*args, **kwargs)) except Exception as exc: fut.set_exception(exc) fut.add_done_callback(partial(self.process_call_result, req_id=req_id, pre=pre, return_annotation=ret_ann, name=name, args=args, kwargs=kwargs)) def process_call_result(self, fut, *, req_id, pre, name, args, kwargs, return_annotation=None): self.discard_pending(fut) self.try_log(fut, name, args, kwargs) if self.transport is None: return try: ret = fut.result() if return_annotation is not None: ret = return_annotation(ret) prefix = self.prefix + self.RESP_SUFFIX.pack(req_id, time.time(), False) self.transport.write(pre + [prefix, self.packer.packb(ret)]) except asyncio.CancelledError: return except Exception as exc: prefix = self.prefix + self.RESP_SUFFIX.pack(req_id, time.time(), True) exc_type = exc.__class__ exc_info = (exc_type.__module__ + '.' 
+ exc_type.__qualname__, exc.args, repr(exc)) self.transport.write(pre + [prefix, self.packer.packb(exc_info)]) aiozmq-0.9.0/aiozmq/rpc/packer.py0000664000372000037200000000506713614330211017567 0ustar travistravis00000000000000"""Private utility functions.""" from collections import ChainMap from datetime import datetime, date, time, timedelta, tzinfo from functools import partial from pickle import dumps, loads, HIGHEST_PROTOCOL from msgpack import ExtType, packb, unpackb _default = { 127: (date, partial(dumps, protocol=HIGHEST_PROTOCOL), loads), 126: (datetime, partial(dumps, protocol=HIGHEST_PROTOCOL), loads), 125: (time, partial(dumps, protocol=HIGHEST_PROTOCOL), loads), 124: (timedelta, partial(dumps, protocol=HIGHEST_PROTOCOL), loads), 123: (tzinfo, partial(dumps, protocol=HIGHEST_PROTOCOL), loads), } class _Packer: def __init__(self, *, translation_table=None): if translation_table is None: translation_table = _default else: translation_table = ChainMap(translation_table, _default) self.translation_table = translation_table self._pack_cache = {} self._unpack_cache = {} for code in sorted(self.translation_table): cls, packer, unpacker = self.translation_table[code] self._pack_cache[cls] = (code, packer) self._unpack_cache[code] = unpacker def packb(self, data): return packb(data, use_bin_type=True, default=self.ext_type_pack_hook) def unpackb(self, packed): return unpackb(packed, use_list=False, raw=False, ext_hook=self.ext_type_unpack_hook) def ext_type_pack_hook(self, obj, _sentinel=object()): obj_class = obj.__class__ hit = self._pack_cache.get(obj_class, _sentinel) if hit is None: # packer has been not found by previous long-lookup raise TypeError("Unknown type: {!r}".format(obj)) elif hit is _sentinel: # do long-lookup for code in sorted(self.translation_table): cls, packer, unpacker = self.translation_table[code] if isinstance(obj, cls): self._pack_cache[obj_class] = (code, packer) self._unpack_cache[code] = unpacker return ExtType(code, packer(obj)) 
else: self._pack_cache[obj_class] = None raise TypeError("Unknown type: {!r}".format(obj)) else: # do shortcut code, packer = hit return ExtType(code, packer(obj)) def ext_type_unpack_hook(self, code, data): try: unpacker = self._unpack_cache[code] return unpacker(data) except KeyError: return ExtType(code, data) aiozmq-0.9.0/aiozmq/rpc/pubsub.py0000664000372000037200000001676413614330211017630 0ustar travistravis00000000000000import asyncio from collections.abc import Iterable from functools import partial import zmq from aiozmq import create_zmq_connection from .base import ( NotFoundError, ParametersError, Service, ServiceClosedError, _BaseProtocol, _BaseServerProtocol, ) from .log import logger @asyncio.coroutine def connect_pubsub(*, connect=None, bind=None, loop=None, translation_table=None): """A coroutine that creates and connects/binds pubsub client. Usually for this function you need to use connect parameter, but ZeroMQ does not forbid to use bind. translation_table -- an optional table for custom value translators. loop -- an optional parameter to point ZmqEventLoop. If loop is None then default event loop will be given by asyncio.get_event_loop() call. Returns PubSubClient instance. """ if loop is None: loop = asyncio.get_event_loop() transp, proto = yield from create_zmq_connection( lambda: _ClientProtocol(loop, translation_table=translation_table), zmq.PUB, connect=connect, bind=bind, loop=loop) return PubSubClient(loop, proto) @asyncio.coroutine def serve_pubsub(handler, *, subscribe=None, connect=None, bind=None, loop=None, translation_table=None, log_exceptions=False, exclude_log_exceptions=(), timeout=None): """A coroutine that creates and connects/binds pubsub server instance. Usually for this function you need to use *bind* parameter, but ZeroMQ does not forbid to use *connect*. handler -- an object which processes incoming pipeline calls. Usually you like to pass AttrHandler instance. log_exceptions -- log exceptions from remote calls if True. 
    subscribe -- subscription specification: subscribe the server to
                 the given topics.  Allowed values are str, bytes, or
                 an iterable of str or bytes.

    translation_table -- an optional table for custom value translators.

    exclude_log_exceptions -- sequence of exception classes that should
                              not be logged.

    timeout -- timeout for performing handling of async server calls.

    loop -- an optional parameter to point to a ZmqEventLoop.  If loop
            is None then the default event loop is used, as returned
            by the asyncio.get_event_loop() call.

    Returns PubSubService instance.

    Raises OSError on system error.
    Raises TypeError if arguments have inappropriate type.
    """
    if loop is None:
        loop = asyncio.get_event_loop()
    transp, proto = yield from create_zmq_connection(
        lambda: _ServerProtocol(loop, handler,
                                translation_table=translation_table,
                                log_exceptions=log_exceptions,
                                exclude_log_exceptions=exclude_log_exceptions,
                                timeout=timeout),
        zmq.SUB, connect=connect, bind=bind, loop=loop)
    serv = PubSubService(loop, proto)
    if subscribe is not None:
        if isinstance(subscribe, (str, bytes)):
            subscribe = [subscribe]
        else:
            if not isinstance(subscribe, Iterable):
                raise TypeError('subscribe should be str, bytes or iterable')
        for topic in subscribe:
            serv.subscribe(topic)
    return serv


class _ClientProtocol(_BaseProtocol):

    def call(self, topic, name, args, kwargs):
        if self.transport is None:
            raise ServiceClosedError()
        if topic is None:
            btopic = b''
        elif isinstance(topic, str):
            btopic = topic.encode('utf-8')
        elif isinstance(topic, bytes):
            btopic = topic
        else:
            raise TypeError('topic argument should be None, str or bytes '
                            '({!r})'.format(topic))
        bname = name.encode('utf-8')
        bargs = self.packer.packb(args)
        bkwargs = self.packer.packb(kwargs)
        self.transport.write([btopic, bname, bargs, bkwargs])
        fut = asyncio.Future(loop=self.loop)
        fut.set_result(None)
        return fut


class PubSubClient(Service):

    def __init__(self, loop, proto):
        super().__init__(loop, proto)

    def publish(self, topic):
        """Return object for dynamic PubSub calls.
        The usage is:
        yield from client.publish('my_topic').ns.func(1, 2)

        The topic argument may be None; otherwise it must be an
        instance of str or bytes.
        """
        return _MethodCall(self._proto, topic)


class PubSubService(Service):

    def subscribe(self, topic):
        """Subscribe to the topic.

        topic argument must be str or bytes.
        Raises TypeError in other cases
        """
        if isinstance(topic, bytes):
            btopic = topic
        elif isinstance(topic, str):
            btopic = topic.encode('utf-8')
        else:
            raise TypeError('topic should be str or bytes, got {!r}'
                            .format(topic))
        self.transport.subscribe(btopic)

    def unsubscribe(self, topic):
        """Unsubscribe from the topic.

        topic argument must be str or bytes.
        Raises TypeError in other cases
        """
        if isinstance(topic, bytes):
            btopic = topic
        elif isinstance(topic, str):
            btopic = topic.encode('utf-8')
        else:
            raise TypeError('topic should be str or bytes, got {!r}'
                            .format(topic))
        self.transport.unsubscribe(btopic)


class _MethodCall:

    __slots__ = ('_proto', '_topic', '_names')

    def __init__(self, proto, topic, names=()):
        self._proto = proto
        self._topic = topic
        self._names = names

    def __getattr__(self, name):
        return self.__class__(self._proto, self._topic,
                              self._names + (name,))

    def __call__(self, *args, **kwargs):
        if not self._names:
            raise ValueError("PubSub method name is empty")
        return self._proto.call(self._topic, '.'.join(self._names),
                                args, kwargs)


class _ServerProtocol(_BaseServerProtocol):

    def msg_received(self, data):
        btopic, bname, bargs, bkwargs = data
        args = self.packer.unpackb(bargs)
        kwargs = self.packer.unpackb(bkwargs)
        try:
            name = bname.decode('utf-8')
            func = self.dispatch(name)
            args, kwargs, ret_ann = self.check_args(func, args, kwargs)
        except (NotFoundError, ParametersError) as exc:
            fut = asyncio.Future(loop=self.loop)
            fut.set_exception(exc)
        else:
            if asyncio.iscoroutinefunction(func):
                fut = self.add_pending(func(*args, **kwargs))
            else:
                fut = asyncio.Future(loop=self.loop)
                try:
                    fut.set_result(func(*args, **kwargs))
                except Exception as exc:
                    fut.set_exception(exc)
        fut.add_done_callback(partial(self.process_call_result,
                                      name=name, args=args, kwargs=kwargs))

    def process_call_result(self, fut, *, name, args, kwargs):
        self.discard_pending(fut)
        try:
            if fut.result() is not None:
                logger.warning("PubSub handler %r returned not None", name)
        except asyncio.CancelledError:
            return
        except (NotFoundError, ParametersError) as exc:
            logger.exception("Call to %r caused error: %r", name, exc)
        except Exception:
            self.try_log(fut, name, args, kwargs)
aiozmq-0.9.0/aiozmq/rpc/base.py0000664000372000037200000001766013614330211017236 0ustar travistravis00000000000000import abc
import asyncio
import inspect
import pprint
import textwrap
from types import MethodType

from .log import logger
from .packer import _Packer
from aiozmq import interface

if hasattr(asyncio, 'ensure_future'):
    ensure_future = asyncio.ensure_future
else:
    # Deprecated since Python version 3.4.4.
    ensure_future = getattr(asyncio, 'async')


class Error(Exception):
    """Base RPC exception"""


class GenericError(Error):
    """Error for all untranslated exceptions from rpc method calls."""

    def __init__(self, exc_type, args, exc_repr):
        super().__init__(exc_type, args, exc_repr)
        self.exc_type = exc_type
        self.arguments = args
        self.exc_repr = exc_repr

    def __repr__(self):
        return ('<Generic RPC Error {}{}: {}>'
                .format(self.exc_type, self.arguments, self.exc_repr))


class NotFoundError(Error, LookupError):
    """Error raised by server if RPC namespace/method lookup failed."""


class ParametersError(Error, ValueError):
    """Error raised by server when RPC method's parameters could not
    be validated against their annotations."""


class ServiceClosedError(Error):
    """RPC Service is closed."""


class AbstractHandler(metaclass=abc.ABCMeta):
    """Abstract class for server-side RPC handlers."""

    __slots__ = ()

    @abc.abstractmethod
    def __getitem__(self, key):
        raise KeyError

    @classmethod
    def __subclasshook__(cls, C):
        if issubclass(C, (str, bytes)):
            return False
        if cls is AbstractHandler:
            if any("__getitem__" in B.__dict__ for B in C.__mro__):
                return True
return NotImplemented class AttrHandler(AbstractHandler): """Base class for RPC handlers via attribute lookup.""" def __getitem__(self, key): try: return getattr(self, key) except AttributeError: raise KeyError def method(func): """Marks a decorated function as RPC endpoint handler. The func object may provide arguments and/or return annotations. If so annotations should be callable objects and they will be used to validate received arguments and/or return value. """ func.__rpc__ = {} func.__signature__ = sig = inspect.signature(func) for name, param in sig.parameters.items(): ann = param.annotation if ann is not param.empty and not callable(ann): raise ValueError("Expected {!r} annotation to be callable" .format(name)) ann = sig.return_annotation if ann is not sig.empty and not callable(ann): raise ValueError("Expected return annotation to be callable") return func class Service(asyncio.AbstractServer): """RPC service. Instances of Service (or descendants) are returned by coroutines that creates clients or servers. Implementation of AbstractServer. """ def __init__(self, loop, proto): self._loop = loop self._proto = proto @property def transport(self): """Return the transport. You can use the transport to dynamically bind/unbind, connect/disconnect etc. 
""" transport = self._proto.transport if transport is None: raise ServiceClosedError() return transport def close(self): if self._proto.closing: return self._proto.closing = True if self._proto.transport is None: return self._proto.transport.close() @asyncio.coroutine def wait_closed(self): if self._proto.transport is None: return waiter = asyncio.Future(loop=self._loop) self._proto.done_waiters.append(waiter) yield from waiter class _BaseProtocol(interface.ZmqProtocol): def __init__(self, loop, *, translation_table=None): self.loop = loop self.transport = None self.done_waiters = [] self.packer = _Packer(translation_table=translation_table) self.pending_waiters = set() self.closing = False def connection_made(self, transport): self.transport = transport def connection_lost(self, exc): self.transport = None for waiter in self.done_waiters: waiter.set_result(None) class _BaseServerProtocol(_BaseProtocol): def __init__(self, loop, handler, *, translation_table=None, log_exceptions=False, exclude_log_exceptions=(), timeout=None): super().__init__(loop, translation_table=translation_table) if not isinstance(handler, AbstractHandler): raise TypeError('handler must implement AbstractHandler') self.handler = handler self.log_exceptions = log_exceptions self.exclude_log_exceptions = exclude_log_exceptions self.timeout = timeout def connection_lost(self, exc): super().connection_lost(exc) for waiter in list(self.pending_waiters): if not waiter.cancelled(): waiter.cancel() def dispatch(self, name): if not name: raise NotFoundError(name) namespaces, sep, method = name.rpartition('.') handler = self.handler if namespaces: for part in namespaces.split('.'): try: handler = handler[part] except KeyError: raise NotFoundError(name) else: if not isinstance(handler, AbstractHandler): raise NotFoundError(name) try: func = handler[method] except KeyError: raise NotFoundError(name) else: if isinstance(func, MethodType): holder = func.__func__ else: holder = func if not hasattr(holder, 
'__rpc__'): raise NotFoundError(name) return func def check_args(self, func, args, kwargs): """Utility function for validating function arguments Returns validated (args, kwargs, return annotation) tuple """ try: sig = inspect.signature(func) bargs = sig.bind(*args, **kwargs) except TypeError as exc: raise ParametersError(repr(exc)) from exc else: arguments = bargs.arguments marker = object() for name, param in sig.parameters.items(): if param.annotation is param.empty: continue val = arguments.get(name, marker) if val is marker: continue # Skip default value try: arguments[name] = param.annotation(val) except (TypeError, ValueError) as exc: raise ParametersError( 'Invalid value for argument {!r}: {!r}' .format(name, exc)) from exc if sig.return_annotation is not sig.empty: return bargs.args, bargs.kwargs, sig.return_annotation return bargs.args, bargs.kwargs, None def try_log(self, fut, name, args, kwargs): try: fut.result() except Exception as exc: if self.log_exceptions: for e in self.exclude_log_exceptions: if isinstance(exc, e): return logger.exception(textwrap.dedent("""\ An exception %r from method %r call occurred. 
                args = %s
                kwargs = %s
                """), exc, name, pprint.pformat(args), pprint.pformat(kwargs))  # noqa

    def add_pending(self, coro):
        fut = ensure_future(coro, loop=self.loop)
        self.pending_waiters.add(fut)
        return fut

    def discard_pending(self, fut):
        self.pending_waiters.discard(fut)
aiozmq-0.9.0/aiozmq/rpc/__init__.py0000664000372000037200000000223613614330211020054 0ustar travistravis00000000000000"""ZeroMQ RPC/Pipeline/PubSub services"""

try:
    from msgpack import version as msgpack_version
except ImportError:  # pragma: no cover
    msgpack_version = (0,)

from .base import (
    method,
    AbstractHandler,
    AttrHandler,
    Error,
    GenericError,
    NotFoundError,
    ParametersError,
    ServiceClosedError,
    Service,
)
from .rpc import (
    connect_rpc,
    serve_rpc,
)
from .pipeline import (
    connect_pipeline,
    serve_pipeline,
)
from .pubsub import (
    connect_pubsub,
    serve_pubsub,
)
from .log import logger

_MSGPACK_VERSION = (0, 4, 0)
_MSGPACK_VERSION_STR = '.'.join(map(str, _MSGPACK_VERSION))

if msgpack_version < _MSGPACK_VERSION:  # pragma: no cover
    raise ImportError("aiozmq.rpc requires msgpack package"
                      " (version >= {})".format(_MSGPACK_VERSION_STR))

__all__ = [
    'method',
    'connect_rpc',
    'serve_rpc',
    'connect_pipeline',
    'serve_pipeline',
    'connect_pubsub',
    'serve_pubsub',
    'logger',
    'Error',
    'GenericError',
    'NotFoundError',
    'ParametersError',
    'AbstractHandler',
    'ServiceClosedError',
    'AttrHandler',
    'Service',
]
aiozmq-0.9.0/aiozmq/rpc/log.py0000664000372000037200000000012613614330211017072 0ustar travistravis00000000000000"""Logging configuration."""

import logging

logger = logging.getLogger(__package__)
aiozmq-0.9.0/aiozmq/selector.py0000664000372000037200000001430113614330211017350 0ustar travistravis00000000000000"""ZMQ poller for asyncio."""

import math
from collections.abc import Mapping
from errno import EINTR

from zmq import (ZMQError, POLLIN, POLLOUT, POLLERR,
                 Socket as ZMQSocket, Poller as ZMQPoller)

__all__ = ['ZmqSelector']

try:
    from asyncio.selectors import (BaseSelector, SelectorKey, EVENT_READ,
EVENT_WRITE) except ImportError: # pragma: no cover from selectors import BaseSelector, SelectorKey, EVENT_READ, EVENT_WRITE def _fileobj_to_fd(fileobj): """Return a file descriptor from a file object. Parameters: fileobj -- file object or file descriptor Returns: corresponding file descriptor or zmq.Socket instance Raises: ValueError if the object is invalid """ if isinstance(fileobj, int): fd = fileobj elif isinstance(fileobj, ZMQSocket): return fileobj else: try: fd = int(fileobj.fileno()) except (AttributeError, TypeError, ValueError): raise ValueError("Invalid file object: " "{!r}".format(fileobj)) from None if fd < 0: raise ValueError("Invalid file descriptor: {}".format(fd)) return fd class _SelectorMapping(Mapping): """Mapping of file objects to selector keys.""" def __init__(self, selector): self._selector = selector def __len__(self): return len(self._selector._fd_to_key) def __getitem__(self, fileobj): try: fd = self._selector._fileobj_lookup(fileobj) return self._selector._fd_to_key[fd] except KeyError: raise KeyError("{!r} is not registered".format(fileobj)) from None def __iter__(self): return iter(self._selector._fd_to_key) class ZmqSelector(BaseSelector): """A selector that can be used with asyncio's selector base event loops.""" def __init__(self): # this maps file descriptors to keys self._fd_to_key = {} # read-only mapping returned by get_map() self._map = _SelectorMapping(self) self._poller = ZMQPoller() def _fileobj_lookup(self, fileobj): """Return a file descriptor from a file object. This wraps _fileobj_to_fd() to do an exhaustive search in case the object is invalid but we still have it in our map. This is used by unregister() so we can unregister an object that was previously registered even if it is closed. It is also used by _SelectorMapping. """ try: return _fileobj_to_fd(fileobj) except ValueError: # Do an exhaustive search. for key in self._fd_to_key.values(): if key.fileobj is fileobj: return key.fd # Raise ValueError after all. 
            raise

    def register(self, fileobj, events, data=None):
        if (not events) or (events & ~(EVENT_READ | EVENT_WRITE)):
            raise ValueError("Invalid events: {!r}".format(events))

        key = SelectorKey(fileobj, self._fileobj_lookup(fileobj),
                          events, data)

        if key.fd in self._fd_to_key:
            raise KeyError("{!r} (FD {}) is already registered"
                           .format(fileobj, key.fd))

        z_events = 0
        if events & EVENT_READ:
            z_events |= POLLIN
        if events & EVENT_WRITE:
            z_events |= POLLOUT
        try:
            self._poller.register(key.fd, z_events)
        except ZMQError as exc:
            raise OSError(exc.errno, exc.strerror) from exc

        self._fd_to_key[key.fd] = key
        return key

    def unregister(self, fileobj):
        try:
            key = self._fd_to_key.pop(self._fileobj_lookup(fileobj))
        except KeyError:
            raise KeyError("{!r} is not registered".format(fileobj)) from None
        try:
            self._poller.unregister(key.fd)
        except ZMQError as exc:
            self._fd_to_key[key.fd] = key
            raise OSError(exc.errno, exc.strerror) from exc
        return key

    def modify(self, fileobj, events, data=None):
        try:
            fd = self._fileobj_lookup(fileobj)
            key = self._fd_to_key[fd]
        except KeyError:
            raise KeyError("{!r} is not registered".format(fileobj)) from None
        if data == key.data and events == key.events:
            return key
        if events != key.events:
            z_events = 0
            if events & EVENT_READ:
                z_events |= POLLIN
            if events & EVENT_WRITE:
                z_events |= POLLOUT
            try:
                self._poller.modify(fd, z_events)
            except ZMQError as exc:
                raise OSError(exc.errno, exc.strerror) from exc

        key = key._replace(data=data, events=events)
        self._fd_to_key[key.fd] = key
        return key

    def close(self):
        self._fd_to_key.clear()
        self._poller = None

    def get_map(self):
        return self._map

    def _key_from_fd(self, fd):
        """Return the key associated to a given file descriptor.
        Parameters:
        fd -- file descriptor

        Returns:
        corresponding key, or None if not found
        """
        try:
            return self._fd_to_key[fd]
        except KeyError:
            return None

    def select(self, timeout=None):
        if timeout is None:
            timeout = None
        elif timeout <= 0:
            timeout = 0
        else:
            # poll() has a resolution of 1 millisecond, round away from
            # zero to wait *at least* timeout seconds.
            timeout = math.ceil(timeout * 1e3)

        ready = []
        try:
            z_events = self._poller.poll(timeout)
        except ZMQError as exc:
            if exc.errno == EINTR:
                return ready
            else:
                raise OSError(exc.errno, exc.strerror) from exc

        for fd, evt in z_events:
            events = 0
            if evt & POLLIN:
                events |= EVENT_READ
            if evt & POLLOUT:
                events |= EVENT_WRITE
            if evt & POLLERR:
                events = EVENT_READ | EVENT_WRITE

            key = self._key_from_fd(fd)
            if key:
                ready.append((key, events & key.events))
        return ready

aiozmq-0.9.0/aiozmq/cli/proxy.py
import zmq
import sys
import argparse

from datetime import datetime


def get_arguments():
    ap = argparse.ArgumentParser(description="ZMQ Proxy tool")

    def common_arguments(ap):
        ap.add_argument('--front-bind', metavar="ADDR", action='append',
                        help="Binds frontend socket to specified address")
        ap.add_argument('--front-connect', metavar="ADDR", action='append',
                        help="Connects frontend socket to specified address")
        ap.add_argument('--back-bind', metavar="ADDR", action='append',
                        help="Binds backend socket to specified address")
        ap.add_argument('--back-connect', metavar="ADDR", action='append',
                        help="Connects backend socket to specified address")
        ap.add_argument('--monitor-bind', metavar="ADDR", action='append',
                        help="Creates and binds monitor socket"
                             " to specified address")
        ap.add_argument('--monitor-connect', metavar="ADDR", action='append',
                        help="Creates and connects monitor socket"
                             " to specified address")

    parsers = ap.add_subparsers(
        title="Commands",
        help="ZMQ Proxy tool commands")

    sub = parsers.add_parser(
        'queue',
        help="Creates Shared Queue proxy"
             " (frontend/backend sockets are ZMQ_ROUTER/ZMQ_DEALER)")
    sub.set_defaults(sock_types=(zmq.ROUTER, zmq.DEALER), action=serve_proxy)
    common_arguments(sub)

    sub = parsers.add_parser(
        'forwarder',
        help="Creates Forwarder proxy"
             " (frontend/backend sockets are ZMQ_XSUB/ZMQ_XPUB)")
    sub.set_defaults(sock_types=(zmq.XSUB, zmq.XPUB), action=serve_proxy)
    common_arguments(sub)

    sub = parsers.add_parser(
        'streamer',
        help="Creates Streamer proxy"
             " (frontend/backend sockets are ZMQ_PULL/ZMQ_PUSH)")
    sub.set_defaults(sock_types=(zmq.PULL, zmq.PUSH), action=serve_proxy)
    common_arguments(sub)

    sub = parsers.add_parser(
        'monitor',
        help="Connects/binds to monitor socket and dumps all traffic")
    sub.set_defaults(action=monitor)
    sub.add_argument('--connect', metavar="ADDR",
                     help="Connect to monitor socket")
    sub.add_argument('--bind', metavar="ADDR",
                     help="Bind monitor socket")

    return ap


def main():
    ap = get_arguments()
    options = ap.parse_args()
    options.action(options)


def serve_proxy(options):
    if not (options.front_connect or options.front_bind):
        print("No frontend socket address specified!", file=sys.stderr)
        sys.exit(1)
    if not (options.back_connect or options.back_bind):
        print("No backend socket address specified!", file=sys.stderr)
        sys.exit(1)
    ctx = zmq.Context.instance()
    front_type, back_type = options.sock_types
    front = ctx.socket(front_type)
    back = ctx.socket(back_type)
    if options.monitor_bind or options.monitor_connect:
        monitor = ctx.socket(zmq.PUB)
        bind_connect(monitor, options.monitor_bind, options.monitor_connect)
    else:
        monitor = None
    bind_connect(front, options.front_bind, options.front_connect)
    bind_connect(back, options.back_bind, options.back_connect)
    try:
        if monitor:
            zmq.proxy(front, back, monitor)
        else:
            zmq.proxy(front, back)
    except Exception:
        return
    finally:
        front.close()
        back.close()


def bind_connect(sock, bind=None, connect=None):
    if bind:
        for address in bind:
            sock.bind(address)
    if connect:
        for address in connect:
            sock.connect(address)


def monitor(options):
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.SUB)
    bind = [options.bind] if options.bind else []
    connect = [options.connect] if options.connect else []
    bind_connect(sock, bind, connect)
    sock.setsockopt(zmq.SUBSCRIBE, b'')
    try:
        while True:
            try:
                data = sock.recv()
            except KeyboardInterrupt:
                break
            except Exception as err:
                print("Error receiving message: {!r}".format(err))
            else:
                print(datetime.now().isoformat(),
                      "Message received: {!r}".format(data))
    finally:
        sock.close()
        ctx.term()


if __name__ == "__main__":
    main()

aiozmq-0.9.0/aiozmq/cli/__init__.py
aiozmq-0.9.0/aiozmq/__init__.py
import re
import sys
from collections import namedtuple

import zmq

from .core import ZmqEventLoop, ZmqEventLoopPolicy, create_zmq_connection
from .interface import ZmqTransport, ZmqProtocol
from .selector import ZmqSelector
from .stream import (ZmqStream, ZmqStreamProtocol, ZmqStreamClosed,
                     create_zmq_stream)

__all__ = ('ZmqSelector', 'ZmqEventLoop', 'ZmqEventLoopPolicy',
           'ZmqTransport', 'ZmqProtocol',
           'ZmqStream', 'ZmqStreamProtocol', 'create_zmq_stream',
           'ZmqStreamClosed', 'create_zmq_connection',
           'version_info', 'version')

__version__ = '0.9.0'
version = __version__ + ', Python ' + sys.version
VersionInfo = namedtuple('VersionInfo',
                         'major minor micro releaselevel serial')


def _parse_version(ver):
    RE = (r'^(?P<major>\d+)\.(?P<minor>\d+)\.'
          r'(?P<micro>\d+)((?P<releaselevel>[a-z]+)(?P<serial>\d+)?)?$')
    match = re.match(RE, ver)
    try:
        major = int(match.group('major'))
        minor = int(match.group('minor'))
        micro = int(match.group('micro'))
        levels = {'c': 'candidate',
                  'a': 'alpha',
                  'b': 'beta',
                  None: 'final'}
        releaselevel = levels[match.group('releaselevel')]
        serial = int(match.group('serial')) if match.group('serial') else 0
        return VersionInfo(major, minor, micro, releaselevel, serial)
    except Exception:
        raise ImportError("Invalid package version {}".format(ver))


version_info = _parse_version(__version__)


if zmq.zmq_version_info()[0] < 3:  # pragma: no cover
    raise ImportError("aiozmq doesn't support libzmq < 3.0")


# make pyflakes happy
(ZmqSelector, ZmqEventLoop, ZmqEventLoopPolicy, ZmqTransport, ZmqProtocol,
 ZmqStream, ZmqStreamProtocol, ZmqStreamClosed, create_zmq_stream,
 create_zmq_connection)

aiozmq-0.9.0/aiozmq/log.py
import logging

logger = logging.getLogger(__package__)

aiozmq-0.9.0/setup.cfg
[easy_install]
zip_ok = false

[nosetests]
nocapture = 1
cover-package = aiozmq
cover-erase = 1

[egg_info]
tag_build =
tag_date = 0

aiozmq-0.9.0/README.rst
asyncio integration with ZeroMQ
===============================

asyncio (PEP 3156) support for ZeroMQ.

.. image:: https://travis-ci.com/aio-libs/aiozmq.svg?branch=master
   :target: https://travis-ci.com/aio-libs/aiozmq

The difference between ``aiozmq`` and vanilla ``pyzmq`` (``zmq.asyncio``)
is that ``zmq.asyncio`` works only by replacing the *event loop* with a
custom one.  This approach works but has two disadvantages:

1. ``zmq.asyncio.ZMQEventLoop`` cannot be combined with other loop
   implementations (most notably the ultra-fast ``uvloop``).

2.
   It uses the internal ZMQ Poller, which supports ZMQ sockets quickly
   but is not intended to work fast with many (thousands of) regular TCP
   sockets.  In practice this means that ``zmq.asyncio`` is not
   recommended for use with web servers like ``aiohttp``.

See also https://github.com/zeromq/pyzmq/issues/894

Documentation
-------------

See http://aiozmq.readthedocs.org

Simple high-level client-server RPC example:

.. code-block:: python

    import asyncio
    import aiozmq.rpc


    class ServerHandler(aiozmq.rpc.AttrHandler):
        @aiozmq.rpc.method
        def remote_func(self, a: int, b: int) -> int:
            return a + b


    @asyncio.coroutine
    def go():
        server = yield from aiozmq.rpc.serve_rpc(
            ServerHandler(), bind='tcp://127.0.0.1:5555')
        client = yield from aiozmq.rpc.connect_rpc(
            connect='tcp://127.0.0.1:5555')

        ret = yield from client.call.remote_func(1, 2)
        assert 3 == ret

        server.close()
        client.close()

    asyncio.get_event_loop().run_until_complete(go())

Low-level request-reply example:

.. code-block:: python

    import asyncio
    import aiozmq
    import zmq


    @asyncio.coroutine
    def go():
        router = yield from aiozmq.create_zmq_stream(
            zmq.ROUTER,
            bind='tcp://127.0.0.1:*')

        addr = list(router.transport.bindings())[0]
        dealer = yield from aiozmq.create_zmq_stream(
            zmq.DEALER,
            connect=addr)

        for i in range(10):
            msg = (b'data', b'ask', str(i).encode('utf-8'))
            dealer.write(msg)
            data = yield from router.read()
            router.write(data)
            answer = yield from dealer.read()
            print(answer)

        dealer.close()
        router.close()

    asyncio.get_event_loop().run_until_complete(go())

Comparison to pyzmq
-------------------

``zmq.asyncio`` provides an *asyncio-compatible loop* implementation, but
it is based on ``zmq.Poller``, which doesn't work well with heavy non-zmq
socket usage.  E.g. if you build a web server that must handle thousands
of parallel web requests (1000-5000), pyzmq's internal Poller will be
slow.

``aiozmq`` works with epoll natively; it doesn't need a custom loop
implementation and cooperates well with ``uvloop``, for example.
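The "works with epoll natively" claim rests on the standard ``selectors``
contract that ``ZmqSelector`` (in ``aiozmq/selector.py`` above) implements on
top of ``zmq.Poller``.  A minimal stdlib-only sketch of that same
register/select contract, using a plain socket pair instead of ZMQ sockets:

.. code-block:: python

    import selectors
    import socket

    # DefaultSelector picks epoll/kqueue/poll; ZmqSelector implements the
    # same BaseSelector interface over zmq.Poller.
    sel = selectors.DefaultSelector()
    left, right = socket.socketpair()

    key = sel.register(right, selectors.EVENT_READ, data='peer')
    left.send(b'ping')                   # make `right` readable

    # select() returns a list of (SelectorKey, event_mask) pairs,
    # exactly the shape ZmqSelector.select() produces above.
    for ready_key, mask in sel.select(timeout=1.0):
        assert ready_key is key
        assert mask & selectors.EVENT_READ
        print(ready_key.fileobj.recv(4))  # -> b'ping'

    sel.unregister(right)
    left.close()
    right.close()

This is illustrative only (the variable names and the socket pair are not part
of aiozmq); it shows why any selector-based event loop can drive such a
selector without a custom loop implementation.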
For details see https://github.com/zeromq/pyzmq/issues/894

Requirements
------------

* Python_ 3.5+
* pyzmq_ 13.1+
* optional submodule ``aiozmq.rpc`` requires msgpack_ 0.5+

License
-------

aiozmq is offered under the BSD license.

.. _python: https://www.python.org/
.. _pyzmq: https://pypi.python.org/pypi/pyzmq
.. _asyncio: https://pypi.python.org/pypi/asyncio
.. _msgpack: https://pypi.python.org/pypi/msgpack

aiozmq-0.9.0/setup.py
import os
import re
import sys
from setuptools import setup, find_packages

install_requires = ['pyzmq>=13.1,!=17.1.2']
tests_require = install_requires + ['msgpack>=0.5.0']
extras_require = {'rpc': ['msgpack>=0.5.0']}


if sys.version_info < (3, 5):
    raise RuntimeError("aiozmq requires Python 3.5 or higher")


def read(f):
    return open(os.path.join(os.path.dirname(__file__), f)).read().strip()


def read_version():
    regexp = re.compile(r"^__version__\W*=\W*'([\d.abrc]+)'")
    init_py = os.path.join(os.path.dirname(__file__),
                           'aiozmq', '__init__.py')
    with open(init_py) as f:
        for line in f:
            match = regexp.match(line)
            if match is not None:
                return match.group(1)
        else:
            raise RuntimeError('Cannot find version in aiozmq/__init__.py')


classifiers = [
    'License :: OSI Approved :: BSD License',
    'Intended Audience :: Developers',
    'Programming Language :: Python :: 3',
    'Programming Language :: Python :: 3.5',
    'Programming Language :: Python :: 3.6',
    'Programming Language :: Python :: 3.7',
    'Programming Language :: Python :: 3.8',
    'Operating System :: POSIX',
    'Operating System :: MacOS :: MacOS X',
    'Operating System :: Microsoft :: Windows',
    'Environment :: Web Environment',
    'Development Status :: 4 - Beta',
    'Framework :: AsyncIO',
]


setup(name='aiozmq',
      version=read_version(),
      description=('ZeroMQ integration with asyncio.'),
      long_description='\n\n'.join((read('README.rst'),
                                    read('CHANGES.txt'))),
      classifiers=classifiers,
      platforms=['POSIX', 'Windows', 'MacOS X'],
      author='Nikolay Kim',
      author_email='fafhrd91@gmail.com',
      maintainer='Jelle Zijlstra',
      maintainer_email='jelle.zijlstra@gmail.com',
      url='http://aiozmq.readthedocs.org',
      download_url='https://pypi.python.org/pypi/aiozmq',
      license='BSD',
      packages=find_packages(),
      install_requires=install_requires,
      tests_require=tests_require,
      extras_require=extras_require,
      entry_points={
          'console_scripts': [
              'aiozmq-proxy = aiozmq.cli.proxy:main',
          ],
      },
      include_package_data=True)
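setup.py's ``read_version()`` scans ``aiozmq/__init__.py`` line by line for
the ``__version__`` assignment.  A self-contained sketch of the same
extraction, using the same regular expression; ``extract_version`` and
``sample`` are illustrative names that do not exist in the package:

.. code-block:: python

    import re

    # Same pattern read_version() compiles in setup.py.
    _VERSION_RE = re.compile(r"^__version__\W*=\W*'([\d.abrc]+)'")

    def extract_version(text):
        """Return the first __version__ assignment found in *text*."""
        for line in text.splitlines():
            match = _VERSION_RE.match(line)
            if match is not None:
                return match.group(1)
        raise RuntimeError('Cannot find version string')

    sample = "import sys\n__version__ = '0.9.0'\n"
    print(extract_version(sample))  # -> 0.9.0

Note the character class ``[\d.abrc]+`` also accepts pre-release suffixes
such as ``0.7.1rc2``, which is what ``_parse_version()`` in
``aiozmq/__init__.py`` then splits into release level and serial.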