eventlet-0.13.0/0000755000175000017500000000000012164600754014266 5ustar temototemoto00000000000000eventlet-0.13.0/eventlet.egg-info/0000755000175000017500000000000012164600754017606 5ustar temototemoto00000000000000eventlet-0.13.0/eventlet.egg-info/not-zip-safe0000644000175000017500000000000112163276741022040 0ustar temototemoto00000000000000 eventlet-0.13.0/eventlet.egg-info/PKG-INFO0000644000175000017500000000611712164600737020711 0ustar temototemoto00000000000000Metadata-Version: 1.1 Name: eventlet Version: 0.13.0 Summary: Highly concurrent networking library Home-page: http://eventlet.net Author: Linden Lab Author-email: eventletdev@lists.secondlife.com License: UNKNOWN Description: Eventlet is a concurrent networking library for Python that allows you to change how you run your code, not how you write it. It uses epoll or libevent for highly scalable non-blocking I/O. Coroutines ensure that the developer uses a blocking style of programming that is similar to threading, but provide the benefits of non-blocking I/O. The event dispatch is implicit, which means you can easily use Eventlet from the Python interpreter, or as a small part of a larger application. It's easy to get started using Eventlet, and easy to convert existing applications to use it. Start off by looking at the `examples`_, `common design patterns`_, and the list of `basic API primitives`_. .. _examples: http://eventlet.net/doc/examples.html .. _common design patterns: http://eventlet.net/doc/design_patterns.html .. 
_basic API primitives: http://eventlet.net/doc/basic_usage.html Quick Example =============== Here's something you can try right on the command line:: % python >>> import eventlet >>> from eventlet.green import urllib2 >>> gt = eventlet.spawn(urllib2.urlopen, 'http://eventlet.net') >>> gt2 = eventlet.spawn(urllib2.urlopen, 'http://secondlife.com') >>> gt2.wait() >>> gt.wait() Getting Eventlet ================== The easiest way to get Eventlet is to use easy_install or pip:: easy_install eventlet pip install eventlet The development `tip`_ is available via easy_install as well:: easy_install 'eventlet==dev' pip install 'eventlet==dev' .. _tip: http://bitbucket.org/eventlet/eventlet/get/tip.zip#egg=eventlet-dev Building the Docs Locally ========================= To build a complete set of HTML documentation, you must have Sphinx, which can be found at http://sphinx.pocoo.org/ (or installed with `easy_install sphinx`) cd doc make html The built html files can be found in doc/_build/html afterward. 
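The spawn/wait shape in the quick example above maps onto a pattern most Python programmers know from the standard library. As a rough stdlib-only analogue (not eventlet itself: the `fetch` stub below is hypothetical, standing in for `urllib2.urlopen(...).read()`, and OS threads replace green threads), the same control flow looks like this:

```python
# Stdlib sketch of the quick example's spawn/wait pattern.
# eventlet.spawn returns a GreenThread whose .wait() yields the function's
# return value; a Future's .result() has the same shape with OS threads.
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # hypothetical stand-in for urllib2.urlopen(url).read()
    return "body of %s" % url

with ThreadPoolExecutor() as pool:
    gt = pool.submit(fetch, 'http://eventlet.net')    # like eventlet.spawn(...)
    gt2 = pool.submit(fetch, 'http://secondlife.com')
    print(gt2.result())  # like gt2.wait()
    print(gt.result())   # like gt.wait()
```

The difference eventlet's description is pointing at: green threads are cooperatively scheduled coroutines, so thousands of them are cheap, while each `ThreadPoolExecutor` worker is a real OS thread.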
Platform: UNKNOWN Classifier: License :: OSI Approved :: MIT License Classifier: Programming Language :: Python Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: POSIX Classifier: Operating System :: Microsoft :: Windows Classifier: Programming Language :: Python :: 2.4 Classifier: Programming Language :: Python :: 2.5 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Topic :: Internet Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Intended Audience :: Developers Classifier: Development Status :: 4 - Beta eventlet-0.13.0/eventlet.egg-info/requires.txt0000644000175000017500000000001712164600737022205 0ustar temototemoto00000000000000greenlet >= 0.3eventlet-0.13.0/eventlet.egg-info/dependency_links.txt0000644000175000017500000000000112164600737023655 0ustar temototemoto00000000000000 eventlet-0.13.0/eventlet.egg-info/top_level.txt0000644000175000017500000000001112164600737022331 0ustar temototemoto00000000000000eventlet eventlet-0.13.0/eventlet.egg-info/SOURCES.txt0000644000175000017500000001236212164600737021477 0ustar temototemoto00000000000000AUTHORS LICENSE MANIFEST.in NEWS README README.twisted setup.py doc/Makefile doc/authors.rst doc/basic_usage.rst doc/common.txt doc/conf.py doc/design_patterns.rst doc/environment.rst doc/examples.rst doc/history.rst doc/hubs.rst doc/index.rst doc/modules.rst doc/patching.rst doc/ssl.rst doc/testing.rst doc/threading.rst doc/zeromq.rst doc/images/threading_illustration.png doc/modules/backdoor.rst doc/modules/corolocal.rst doc/modules/db_pool.rst doc/modules/debug.rst doc/modules/event.rst doc/modules/greenpool.rst doc/modules/greenthread.rst doc/modules/pools.rst doc/modules/queue.rst doc/modules/semaphore.rst doc/modules/timeout.rst doc/modules/websocket.rst doc/modules/wsgi.rst doc/modules/zmq.rst eventlet/__init__.py eventlet/api.py eventlet/backdoor.py eventlet/convenience.py 
eventlet/corolocal.py eventlet/coros.py eventlet/db_pool.py eventlet/debug.py eventlet/event.py eventlet/greenio.py eventlet/greenpool.py eventlet/greenthread.py eventlet/patcher.py eventlet/pool.py eventlet/pools.py eventlet/proc.py eventlet/processes.py eventlet/queue.py eventlet/saranwrap.py eventlet/semaphore.py eventlet/timeout.py eventlet/tpool.py eventlet/util.py eventlet/websocket.py eventlet/wsgi.py eventlet.egg-info/PKG-INFO eventlet.egg-info/SOURCES.txt eventlet.egg-info/dependency_links.txt eventlet.egg-info/not-zip-safe eventlet.egg-info/requires.txt eventlet.egg-info/top_level.txt eventlet/green/BaseHTTPServer.py eventlet/green/CGIHTTPServer.py eventlet/green/MySQLdb.py eventlet/green/Queue.py eventlet/green/SimpleHTTPServer.py eventlet/green/SocketServer.py eventlet/green/__init__.py eventlet/green/_socket_nodns.py eventlet/green/asynchat.py eventlet/green/asyncore.py eventlet/green/ftplib.py eventlet/green/httplib.py eventlet/green/os.py eventlet/green/profile.py eventlet/green/select.py eventlet/green/socket.py eventlet/green/ssl.py eventlet/green/subprocess.py eventlet/green/thread.py eventlet/green/threading.py eventlet/green/time.py eventlet/green/urllib.py eventlet/green/urllib2.py eventlet/green/zmq.py eventlet/green/OpenSSL/SSL.py eventlet/green/OpenSSL/__init__.py eventlet/green/OpenSSL/crypto.py eventlet/green/OpenSSL/rand.py eventlet/green/OpenSSL/tsafe.py eventlet/green/OpenSSL/version.py eventlet/hubs/__init__.py eventlet/hubs/epolls.py eventlet/hubs/hub.py eventlet/hubs/kqueue.py eventlet/hubs/poll.py eventlet/hubs/pyevent.py eventlet/hubs/selects.py eventlet/hubs/timer.py eventlet/hubs/twistedr.py eventlet/support/__init__.py eventlet/support/greendns.py eventlet/support/greenlets.py eventlet/support/psycopg2_patcher.py eventlet/support/pylib.py eventlet/support/stacklesspypys.py eventlet/support/stacklesss.py eventlet/twistedutil/__init__.py eventlet/twistedutil/join_reactor.py eventlet/twistedutil/protocol.py 
eventlet/twistedutil/protocols/__init__.py eventlet/twistedutil/protocols/basic.py examples/chat_bridge.py examples/chat_server.py examples/connect.py examples/distributed_websocket_chat.py examples/echoserver.py examples/feedscraper-testclient.py examples/feedscraper.py examples/forwarder.py examples/producer_consumer.py examples/recursive_crawler.py examples/webcrawler.py examples/websocket.html examples/websocket.py examples/websocket_chat.html examples/websocket_chat.py examples/wsgi.py examples/zmq_chat.py examples/zmq_simple.py examples/twisted/twisted_client.py examples/twisted/twisted_http_proxy.py examples/twisted/twisted_portforward.py examples/twisted/twisted_server.py examples/twisted/twisted_srvconnector.py examples/twisted/twisted_xcap_proxy.py tests/__init__.py tests/api_test.py tests/backdoor_test.py tests/convenience_test.py tests/coros_test.py tests/db_pool_test.py tests/debug_test.py tests/env_test.py tests/event_test.py tests/fork_test.py tests/greenio_test.py tests/greenpipe_test_with_statement.py tests/greenpool_test.py tests/greenthread_test.py tests/hub_test.py tests/mock.py tests/mysqldb_test.py tests/nosewrapper.py tests/parse_results.py tests/patcher_psycopg_test.py tests/patcher_test.py tests/pools_test.py tests/processes_test.py tests/queue_test.py tests/saranwrap_test.py tests/semaphore_test.py tests/ssl_test.py tests/subprocess_test.py tests/test__coros_queue.py tests/test__event.py tests/test__greenness.py tests/test__pool.py tests/test__proc.py tests/test__refcount.py tests/test__socket_errors.py tests/test__twistedutil.py tests/test__twistedutil_protocol.py tests/test_server.crt tests/test_server.key tests/thread_test.py tests/timeout_test.py tests/timeout_test_with_statement.py tests/timer_test.py tests/tpool_test.py tests/websocket_test.py tests/wsgi_test.py tests/zmq_test.py tests/stdlib/all.py tests/stdlib/all_modules.py tests/stdlib/all_monkey.py tests/stdlib/test_SimpleHTTPServer.py tests/stdlib/test_asynchat.py 
tests/stdlib/test_asyncore.py tests/stdlib/test_ftplib.py tests/stdlib/test_httplib.py tests/stdlib/test_httpservers.py tests/stdlib/test_os.py tests/stdlib/test_queue.py tests/stdlib/test_select.py tests/stdlib/test_socket.py tests/stdlib/test_socket_ssl.py tests/stdlib/test_socketserver.py tests/stdlib/test_ssl.py tests/stdlib/test_subprocess.py tests/stdlib/test_thread.py tests/stdlib/test_thread__boundedsem.py tests/stdlib/test_threading.py tests/stdlib/test_threading_local.py tests/stdlib/test_timeout.py tests/stdlib/test_urllib.py tests/stdlib/test_urllib2.py tests/stdlib/test_urllib2_localnet.pyeventlet-0.13.0/MANIFEST.in0000644000175000017500000000030512164577340016026 0ustar temototemoto00000000000000recursive-include tests *.py *.crt *.key recursive-include doc *.rst *.txt *.py Makefile *.png recursive-include examples *.py *.html include MANIFEST.in README.twisted NEWS AUTHORS LICENSE README eventlet-0.13.0/AUTHORS0000644000175000017500000000650512164577340015350 0ustar temototemoto00000000000000Maintainer (i.e., Who To Hassle If You Find Bugs) ------------------------------------------------- Sergey Shepelev, temoto on Freenode, temotor@gmail.com Original Authors ---------------- * Bob Ippolito * Donovan Preston Contributors ------------ * AG Projects * Chris AtLee * R\. 
Tyler Ballance * Denis Bilenko * Mike Barton * Patrick Carlisle * Ben Ford * Andrew Godwin * Brantley Harris * Gregory Holt * Joe Malicki * Chet Murthy * Eugene Oden * radix * Scott Robinson * Tavis Rudd * Sergey Shepelev * Chuck Thier * Nick V * Daniele Varrazzo * Ryan Williams * Geoff Salmon * Edward George * Floris Bruynooghe Linden Lab Contributors ----------------------- * John Beisley * Tess Chu * Nat Goodspeed * Dave Kaprielian * Kartic Krishnamurthy * Bryan O'Sullivan * Kent Quirk * Ryan Williams Thanks To --------- * AdamKG, giving the hint that invalid argument errors were introduced post-0.9.0 * Luke Tucker, bug report regarding wsgi + webob * Taso Du Val, reproing an exception squelching bug, saving children's lives ;-) * Luci Stanescu, for reporting twisted hub bug * Marcus Cavanaugh, for test case code that has been incredibly useful in tracking down bugs * Brian Brunswick, for many helpful questions and suggestions on the mailing list * Cesar Alaniz, for uncovering bugs of great import * the grugq, for contributing patches, suggestions, and use cases * Ralf Schmitt, for wsgi/webob incompatibility bug report and suggested fix * Benoit Chesneau, bug report on green.os and patch to fix it * Slant, better iterator implementation in tpool * Ambroff, nice pygtk hub example * Michael Carter, websocket patch to improve location handling * Marcin Bachry, nice repro of a bug and good diagnosis leading to the fix * David Ziegler, reporting issue #53 * Favo Yang, twisted hub patch * Schmir, patch that fixes readline method with chunked encoding in wsgi.py, advice on patcher * Slide, for open-sourcing gogreen * Holger Krekel, websocket example small fix * mikepk, debugging MySQLdb/tpool issues * Malcolm Cleaton, patch for Event exception handling * Alexey Borzenkov, for finding and fixing issues with Windows error detection (#66, #69), reducing dependencies in zeromq hub (#71) * Anonymous, finding and fixing error in websocket chat example (#70) * Edward George, 
finding and fixing an issue in the [e]poll hubs (#74), and in convenience (#86) * Ruijun Luo, figuring out incorrect openssl import for wrap_ssl (#73) * rfk, patch to get green zmq to respect noblock flag. * Soren Hansen, finding and fixing issue in subprocess (#77) * Stefano Rivera, making tests pass in absence of postgres (#78) * Joshua Kwan, fixing busy-wait in eventlet.green.ssl. * Nick Vatamaniuc, Windows SO_REUSEADDR patch (#83) * Clay Gerrard, wsgi handle socket closed by client (#95) * Eric Windisch, zmq getsockopt(EVENTS) wake correct threads (pull request 22) * Raymond Lu, fixing busy-wait in eventlet.green.ssl.socket.sendall() * Thomas Grainger, webcrawler example small fix, "requests" library import bug report, Travis integration * Peter Portante, save syscalls in socket.dup(), environ[REMOTE_PORT] in wsgi * Peter Skirko, fixing socket.settimeout(0) bug * Derk Tegeler, Pre-cache proxied GreenSocket methods (Bitbucket #136) * Jakub Stasiak, Travis integration, wsgi fix * Paul Oppenheim, bug reports * David Malcolm, optional "timeout" argument to the subprocess module (Bitbucket #89) eventlet-0.13.0/setup.cfg0000644000175000017500000000007312164600754016107 0ustar temototemoto00000000000000[egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 eventlet-0.13.0/examples/0000755000175000017500000000000012164600754016104 5ustar temototemoto00000000000000eventlet-0.13.0/examples/websocket.py0000644000175000017500000000236612164577340020457 0ustar temototemoto00000000000000import eventlet from eventlet import wsgi from eventlet import websocket # demo app import os import random @websocket.WebSocketWSGI def handle(ws): """ This is the websocket handler function. 
Note that we can dispatch based on path in here, too.""" if ws.path == '/echo': while True: m = ws.wait() if m is None: break ws.send(m) elif ws.path == '/data': for i in xrange(10000): ws.send("0 %s %s\n" % (i, random.random())) eventlet.sleep(0.1) def dispatch(environ, start_response): """ This resolves to the web page or the websocket depending on the path.""" if environ['PATH_INFO'] == '/data': return handle(environ, start_response) else: start_response('200 OK', [('content-type', 'text/html')]) return [open(os.path.join( os.path.dirname(__file__), 'websocket.html')).read()] if __name__ == "__main__": # run an example app from the command line listener = eventlet.listen(('127.0.0.1', 7000)) print "\nVisit http://localhost:7000/ in your websocket-capable browser.\n" wsgi.server(listener, dispatch) eventlet-0.13.0/examples/websocket.html0000644000175000017500000000251212164577340020764 0ustar temototemoto00000000000000
[websocket.html: page markup lost in extraction; recoverable text: "Plot" and "(Only tested in Chrome)"]
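The `handle()` function in websocket.py above is built around a `wait()`/`send()` loop, where `wait()` returning `None` signals that the client disconnected. This sketch exercises that loop with a hypothetical `FakeSocket` stand-in (the real object is the WebSocket passed in by `eventlet.websocket`), so the control flow is visible without a browser:

```python
# Sketch of the echo loop from websocket.py's handle() function.
# FakeSocket is hypothetical; it mimics WebSocket.wait() returning None
# on disconnect and records what send() was given.
class FakeSocket:
    def __init__(self, incoming):
        self.incoming = list(incoming)
        self.sent = []

    def wait(self):
        # returns the next message, or None once the "client" is gone
        return self.incoming.pop(0) if self.incoming else None

    def send(self, m):
        self.sent.append(m)

def echo(ws):
    while True:
        m = ws.wait()
        if m is None:  # client disconnected
            break
        ws.send(m)

ws = FakeSocket(['hello', 'world'])
echo(ws)
print(ws.sent)  # ['hello', 'world']
```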
eventlet-0.13.0/examples/websocket_chat.py0000644000175000017500000000210612164577340021446 0ustar temototemoto00000000000000import os import eventlet from eventlet import wsgi from eventlet import websocket PORT = 7000 participants = set() @websocket.WebSocketWSGI def handle(ws): participants.add(ws) try: while True: m = ws.wait() if m is None: break for p in participants: p.send(m) finally: participants.remove(ws) def dispatch(environ, start_response): """Resolves to the web page or the websocket depending on the path.""" if environ['PATH_INFO'] == '/chat': return handle(environ, start_response) else: start_response('200 OK', [('content-type', 'text/html')]) html_path = os.path.join(os.path.dirname(__file__), 'websocket_chat.html') return [open(html_path).read() % {'port': PORT}] if __name__ == "__main__": # run an example app from the command line listener = eventlet.listen(('127.0.0.1', PORT)) print "\nVisit http://localhost:7000/ in your websocket-capable browser.\n" wsgi.server(listener, dispatch) eventlet-0.13.0/examples/wsgi.py0000644000175000017500000000115012164577340017430 0ustar temototemoto00000000000000"""This is a simple example of running a wsgi application with eventlet. 
For a more fully-featured server which supports multiple processes,
multiple threads, and graceful code reloading, see:

http://pypi.python.org/pypi/Spawning/
"""

import eventlet
from eventlet import wsgi


def hello_world(env, start_response):
    if env['PATH_INFO'] != '/':
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return ['Not Found\r\n']
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello, World!\r\n']

wsgi.server(eventlet.listen(('', 8090)), hello_world)
eventlet-0.13.0/examples/zmq_simple.py0000644000175000017500000000130612164577340020642 0ustar temototemoto00000000000000
from eventlet.green import zmq
import eventlet

CTX = zmq.Context(1)

def bob_client(ctx, count):
    print "STARTING BOB"
    bob = zmq.Socket(CTX, zmq.REQ)
    bob.connect("ipc:///tmp/test")

    for i in range(0, count):
        print "BOB SENDING"
        bob.send("HI")
        print "BOB GOT:", bob.recv()

def alice_server(ctx, count):
    print "STARTING ALICE"
    alice = zmq.Socket(CTX, zmq.REP)
    alice.bind("ipc:///tmp/test")

    print "ALICE READY"
    for i in range(0, count):
        print "ALICE GOT:", alice.recv()
        print "ALICE SENDING"
        alice.send("HI BACK")

alice = eventlet.spawn(alice_server, CTX, 10)
bob = eventlet.spawn(bob_client, CTX, 10)

bob.wait()
alice.wait()
eventlet-0.13.0/examples/webcrawler.py0000644000175000017500000000147412164577340020625 0ustar temototemoto00000000000000
#!/usr/bin/env python
"""
This is a simple web "crawler" that fetches a bunch of urls using a pool to
control the number of outbound connections. It has as many simultaneously open
connections as coroutines in the pool.

The prints in the body of the fetch function are there to demonstrate that the
requests are truly made in parallel.
""" import eventlet from eventlet.green import urllib2 urls = [ "https://www.google.com/intl/en_ALL/images/logo.gif", "http://python.org/images/python-logo.gif", "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif", ] def fetch(url): print "opening", url body = urllib2.urlopen(url).read() print "done with", url return url, body pool = eventlet.GreenPool(200) for url, body in pool.imap(fetch, urls): print "got body from", url, "of length", len(body) eventlet-0.13.0/examples/feedscraper.py0000644000175000017500000000214712164577340020751 0ustar temototemoto00000000000000"""A simple web server that accepts POSTS containing a list of feed urls, and returns the titles of those feeds. """ import eventlet feedparser = eventlet.import_patched('feedparser') # the pool provides a safety limit on our concurrency pool = eventlet.GreenPool() def fetch_title(url): d = feedparser.parse(url) return d.feed.get('title', '') def app(environ, start_response): if environ['REQUEST_METHOD'] != 'POST': start_response('403 Forbidden', []) return [] # the pile collects the result of a concurrent operation -- in this case, # the collection of feed titles pile = eventlet.GreenPile(pool) for line in environ['wsgi.input'].readlines(): url = line.strip() if url: pile.spawn(fetch_title, url) # since the pile is an iterator over the results, # you can use it in all sorts of great Pythonic ways titles = '\n'.join(pile) start_response('200 OK', [('Content-type', 'text/plain')]) return [titles] if __name__ == '__main__': from eventlet import wsgi wsgi.server(eventlet.listen(('localhost', 9010)), app)eventlet-0.13.0/examples/zmq_chat.py0000644000175000017500000000317712164577340020300 0ustar temototemoto00000000000000import eventlet, sys from eventlet.green import socket, zmq from eventlet.hubs import use_hub use_hub('zeromq') ADDR = 'ipc:///tmp/chat' ctx = zmq.Context() def publish(writer): print "connected" socket = ctx.socket(zmq.SUB) socket.setsockopt(zmq.SUBSCRIBE, "") socket.connect(ADDR) 
eventlet.sleep(0.1) while True: msg = socket.recv_pyobj() str_msg = "%s: %s" % msg writer.write(str_msg) writer.flush() PORT=3001 def read_chat_forever(reader, pub_socket): line = reader.readline() who = 'someone' while line: print "Chat:", line.strip() if line.startswith('name:'): who = line.split(':')[-1].strip() try: pub_socket.send_pyobj((who, line)) except socket.error, e: # ignore broken pipes, they just mean the participant # closed its connection already if e[0] != 32: raise line = reader.readline() print "Participant left chat." try: print "ChatServer starting up on port %s" % PORT server = eventlet.listen(('0.0.0.0', PORT)) pub_socket = ctx.socket(zmq.PUB) pub_socket.bind(ADDR) eventlet.spawn_n(publish, sys.stdout) while True: new_connection, address = server.accept() print "Participant joined chat." eventlet.spawn_n(publish, new_connection.makefile('w')) eventlet.spawn_n(read_chat_forever, new_connection.makefile('r'), pub_socket) except (KeyboardInterrupt, SystemExit): print "ChatServer exiting."eventlet-0.13.0/examples/websocket_chat.html0000644000175000017500000000146212164577340021766 0ustar temototemoto00000000000000
[websocket_chat.html: page markup lost in extraction; recoverable text: "Chat!" and "(Only tested in Chrome)"]
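The chat example above keeps a `participants` set and relays every received message to every member of the set, removing a socket from the set in a `finally:` block when its client disconnects. A stdlib-only sketch of that broadcast bookkeeping (again using a hypothetical `FakeSocket` in place of eventlet's WebSocket object):

```python
# Sketch of websocket_chat.py's broadcast pattern: every message is sent
# to all current participants, and departed participants stop receiving.
# FakeSocket is hypothetical; it just records what was sent to it.
class FakeSocket:
    def __init__(self):
        self.received = []

    def send(self, m):
        self.received.append(m)

participants = set()

def broadcast(message):
    # mirrors: for p in participants: p.send(m)
    for p in participants:
        p.send(message)

a, b = FakeSocket(), FakeSocket()
participants.update([a, b])
broadcast('hi all')
participants.discard(b)   # like the finally: participants.remove(ws)
broadcast('b left')
print(a.received)  # ['hi all', 'b left']
print(b.received)  # ['hi all']
```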
eventlet-0.13.0/examples/producer_consumer.py0000644000175000017500000000361212164577340022222 0ustar temototemoto00000000000000"""This is a recursive web crawler. Don't go pointing this at random sites; it doesn't respect robots.txt and it is pretty brutal about how quickly it fetches pages. This is a kind of "producer/consumer" example; the fetch function produces jobs, and the GreenPool itself is the consumer, farming out work concurrently. It's easier to write it this way rather than writing a standard consumer loop; GreenPool handles any exceptions raised and arranges so that there's a set number of "workers", so you don't have to write that tedious management code yourself. """ from __future__ import with_statement from eventlet.green import urllib2 import eventlet import re # http://daringfireball.net/2009/11/liberal_regex_for_matching_urls url_regex = re.compile(r'\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))') def fetch(url, outq): """Fetch a url and push any urls found into a queue.""" print "fetching", url data = '' with eventlet.Timeout(5, False): data = urllib2.urlopen(url).read() for url_match in url_regex.finditer(data): new_url = url_match.group(0) outq.put(new_url) def producer(start_url): """Recursively crawl starting from *start_url*. 
Returns a set of urls that were found.""" pool = eventlet.GreenPool() seen = set() q = eventlet.Queue() q.put(start_url) # keep looping if there are new urls, or workers that may produce more urls while True: while not q.empty(): url = q.get() # limit requests to eventlet.net so we don't crash all over the internet if url not in seen and 'eventlet.net' in url: seen.add(url) pool.spawn_n(fetch, url, q) pool.waitall() if q.empty(): break return seen seen = producer("http://eventlet.net") print "I saw these urls:" print "\n".join(seen) eventlet-0.13.0/examples/distributed_websocket_chat.py0000644000175000017500000000747312164577340024064 0ustar temototemoto00000000000000"""This is a websocket chat example with many servers. A client can connect to any of the servers and their messages will be received by all clients connected to any of the servers. Run the examples like this: $ python examples/chat_bridge.py tcp://127.0.0.1:12345 tcp://127.0.0.1:12346 and the servers like this (changing the port for each one obviously): $ python examples/distributed_websocket_chat.py -p tcp://127.0.0.1:12345 -s tcp://127.0.0.1:12346 7000 So all messages are published to port 12345 and the device forwards all the messages to 12346 where they are subscribed to """ import os, sys import eventlet from collections import defaultdict from eventlet import spawn_n, sleep from eventlet import wsgi from eventlet import websocket from eventlet.green import zmq from eventlet.hubs import get_hub, use_hub from uuid import uuid1 use_hub('zeromq') ctx = zmq.Context() class IDName(object): def __init__(self): self.id = uuid1() self.name = None def __str__(self): if self.name: return self.name else: return str(self.id) def pack_message(self, msg): return self, msg def unpack_message(self, msg): sender, message = msg sender_name = 'you said' if sender.id == self.id \ else '%s says' % sender return "%s: %s" % (sender_name, message) participants = defaultdict(IDName) def 
subscribe_and_distribute(sub_socket): global participants while True: msg = sub_socket.recv_pyobj() for ws, name_id in participants.items(): to_send = name_id.unpack_message(msg) if to_send: try: ws.send(to_send) except: del participants[ws] @websocket.WebSocketWSGI def handle(ws): global pub_socket name_id = participants[ws] ws.send("Connected as %s, change name with 'name: new_name'" % name_id) try: while True: m = ws.wait() if m is None: break if m.startswith('name:'): old_name = str(name_id) new_name = m.split(':', 1)[1].strip() name_id.name = new_name m = 'Changed name from %s' % old_name pub_socket.send_pyobj(name_id.pack_message(m)) sleep() finally: del participants[ws] def dispatch(environ, start_response): """Resolves to the web page or the websocket depending on the path.""" global port if environ['PATH_INFO'] == '/chat': return handle(environ, start_response) else: start_response('200 OK', [('content-type', 'text/html')]) return [open(os.path.join( os.path.dirname(__file__), 'websocket_chat.html')).read() % dict(port=port)] port = None if __name__ == "__main__": usage = 'usage: websocket_chat -p pub address -s sub address port number' if len (sys.argv) != 6: print usage sys.exit(1) pub_addr = sys.argv[2] sub_addr = sys.argv[4] try: port = int(sys.argv[5]) except ValueError: print "Error port supplied couldn't be converted to int\n", usage sys.exit(1) try: pub_socket = ctx.socket(zmq.PUB) pub_socket.connect(pub_addr) print "Publishing to %s" % pub_addr sub_socket = ctx.socket(zmq.SUB) sub_socket.connect(sub_addr) sub_socket.setsockopt(zmq.SUBSCRIBE, "") print "Subscribing to %s" % sub_addr except: print "Couldn't create sockets\n", usage sys.exit(1) spawn_n(subscribe_and_distribute, sub_socket) listener = eventlet.listen(('127.0.0.1', port)) print "\nVisit http://localhost:%s/ in your websocket-capable browser.\n" % port wsgi.server(listener, dispatch) eventlet-0.13.0/examples/recursive_crawler.py0000644000175000017500000000341312164577340022211 0ustar 
temototemoto00000000000000"""This is a recursive web crawler.  Don't go pointing this at random
sites; it doesn't respect robots.txt and it is pretty brutal about how
quickly it fetches pages.

The code for this is very short; this is perhaps a good indication that
this is making the most effective use of the primitives at hand.  The
fetch function does all the work of making http requests, searching for
new urls, and dispatching new fetches.  The GreenPool acts as sort of a
job coordinator (and concurrency controller of course).
"""
from __future__ import with_statement

from eventlet.green import urllib2
import eventlet
import re

# http://daringfireball.net/2009/11/liberal_regex_for_matching_urls
url_regex = re.compile(r'\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))')

def fetch(url, seen, pool):
    """Fetch a url, stick any found urls into the seen set, and
    dispatch any new ones to the pool."""
    print "fetching", url
    data = ''
    with eventlet.Timeout(5, False):
        data = urllib2.urlopen(url).read()
    for url_match in url_regex.finditer(data):
        new_url = url_match.group(0)
        # only send requests to eventlet.net so as not to destroy the internet
        if new_url not in seen and 'eventlet.net' in new_url:
            seen.add(new_url)
            # while this seems stack-recursive, it's actually not:
            # spawned greenthreads start their own stacks
            pool.spawn_n(fetch, new_url, seen, pool)

def crawl(start_url):
    """Recursively crawl starting from *start_url*.  Returns a set of
    urls that were found."""
    pool = eventlet.GreenPool()
    seen = set()
    fetch(start_url, seen, pool)
    pool.waitall()
    return seen

seen = crawl("http://eventlet.net")
print "I saw these urls:"
print "\n".join(seen)
eventlet-0.13.0/examples/forwarder.py0000644000175000017500000000155612164577340020464 0ustar temototemoto00000000000000
"""
This is an incredibly simple port forwarder from port 7000 to 22 on
localhost.
It calls a callback function when the socket is closed, to demonstrate
one way that you could start to do interesting things by starting from
a simple framework like this.
"""

import eventlet

def closed_callback():
    print "called back"

def forward(source, dest, cb = lambda: None):
    """Forwards bytes unidirectionally from source to dest"""
    while True:
        d = source.recv(32384)
        if d == '':
            cb()
            break
        dest.sendall(d)

listener = eventlet.listen(('localhost', 7000))
while True:
    client, addr = listener.accept()
    server = eventlet.connect(('localhost', 22))
    # two unidirectional forwarders make a bidirectional one
    eventlet.spawn_n(forward, client, server, closed_callback)
    eventlet.spawn_n(forward, server, client)
eventlet-0.13.0/examples/echoserver.py0000644000175000017500000000160712164577340020633 0ustar temototemoto00000000000000
#! /usr/bin/env python
"""\
Simple server that listens on port 6000 and echos back every input to
the client.  To try out the server, start it up by running this file.

Connect to it with:
  telnet localhost 6000

You terminate your connection by terminating telnet (typically Ctrl-]
and then 'quit')
"""

import eventlet

def handle(fd):
    print "client connected"
    while True:
        # pass through every non-eof line
        x = fd.readline()
        if not x:
            break
        fd.write(x)
        fd.flush()
        print "echoed", x,
    print "client disconnected"

print "server socket listening on port 6000"
server = eventlet.listen(('0.0.0.0', 6000))
pool = eventlet.GreenPool()
while True:
    try:
        new_sock, address = server.accept()
        print "accepted", address
        pool.spawn_n(handle, new_sock.makefile('rw'))
    except (SystemExit, KeyboardInterrupt):
        break
eventlet-0.13.0/examples/chat_bridge.py0000644000175000017500000000103512164577340020714 0ustar temototemoto00000000000000
import sys

from zmq import FORWARDER, PUB, SUB, SUBSCRIBE
from zmq.devices import Device

if __name__ == "__main__":
    usage = 'usage: chat_bridge sub_address pub_address'
    if len (sys.argv) != 3:
        print usage
        sys.exit(1)
    sub_addr = sys.argv[1]
    pub_addr = sys.argv[2]
    print "Receiving on %s" % sub_addr
    print "Sending on %s" % pub_addr
    device = Device(FORWARDER, SUB, PUB)
    device.bind_in(sub_addr)
    device.setsockopt_in(SUBSCRIBE, "")
    device.bind_out(pub_addr)
    device.start()
eventlet-0.13.0/examples/chat_server.py0000644000175000017500000000224212164577340020767 0ustar temototemoto00000000000000
import eventlet
from eventlet.green import socket

PORT=3001
participants = set()

def read_chat_forever(writer, reader):
    line = reader.readline()
    while line:
        print "Chat:", line.strip()
        for p in participants:
            try:
                if p is not writer: # Don't echo
                    p.write(line)
                    p.flush()
            except socket.error, e:
                # ignore broken pipes, they just mean the participant
                # closed its connection already
                if e[0] != 32:
                    raise
        line = reader.readline()
    participants.remove(writer)
    print "Participant left chat."

try:
    print "ChatServer starting up on port %s" % PORT
    server = eventlet.listen(('0.0.0.0', PORT))
    while True:
        new_connection, address = server.accept()
        print "Participant joined chat."
        new_writer = new_connection.makefile('w')
        participants.add(new_writer)
        eventlet.spawn_n(read_chat_forever, new_writer,
                         new_connection.makefile('r'))
except (KeyboardInterrupt, SystemExit):
    print "ChatServer exiting."
eventlet-0.13.0/examples/twisted/0000755000175000017500000000000012164600754017567 5ustar temototemoto00000000000000eventlet-0.13.0/examples/twisted/twisted_http_proxy.py0000644000175000017500000000434312164577340024134 0ustar temototemoto00000000000000
"""Listen on port 8888 and pretend to be an HTTP proxy.
It even works for some pages.

Demonstrates how to
 * plug in eventlet into a twisted application (join_reactor)
 * call green functions from places where blocking calls
   are not allowed (deferToGreenThread)
 * use eventlet.green package which provides [some of] the standard
   library modules that don't block other greenlets.
""" import re from twisted.internet.protocol import Factory from twisted.internet import reactor from twisted.protocols import basic from eventlet.twistedutil import deferToGreenThread from eventlet.twistedutil import join_reactor from eventlet.green import httplib class LineOnlyReceiver(basic.LineOnlyReceiver): def connectionMade(self): self.lines = [] def lineReceived(self, line): if line: self.lines.append(line) elif self.lines: self.requestReceived(self.lines) self.lines = [] def requestReceived(self, lines): request = re.match('^(\w+) http://(.*?)(/.*?) HTTP/1..$', lines[0]) #print request.groups() method, host, path = request.groups() headers = dict(x.split(': ', 1) for x in lines[1:]) def callback(result): self.transport.write(str(result)) self.transport.loseConnection() def errback(err): err.printTraceback() self.transport.loseConnection() d = deferToGreenThread(http_request, method, host, path, headers=headers) d.addCallbacks(callback, errback) def http_request(method, host, path, headers): conn = httplib.HTTPConnection(host) conn.request(method, path, headers=headers) response = conn.getresponse() body = response.read() print method, host, path, response.status, response.reason, len(body) return format_response(response, body) def format_response(response, body): result = "HTTP/1.1 %s %s" % (response.status, response.reason) for k, v in response.getheaders(): result += '\r\n%s: %s' % (k, v) if body: result += '\r\n\r\n' result += body result += '\r\n' return result class MyFactory(Factory): protocol = LineOnlyReceiver print __doc__ reactor.listenTCP(8888, MyFactory()) reactor.run() eventlet-0.13.0/examples/twisted/twisted_client.py0000644000175000017500000000160512164577340023170 0ustar temototemoto00000000000000"""Example for GreenTransport and GreenClientCreator. In this example reactor is started implicitly upon the first use of a blocking function. 
""" from twisted.internet import ssl from twisted.internet.error import ConnectionClosed from eventlet.twistedutil.protocol import GreenClientCreator from eventlet.twistedutil.protocols.basic import LineOnlyReceiverTransport from twisted.internet import reactor # read from TCP connection conn = GreenClientCreator(reactor).connectTCP('www.google.com', 80) conn.write('GET / HTTP/1.0\r\n\r\n') conn.loseWriteConnection() print conn.read() # read from SSL connection line by line conn = GreenClientCreator(reactor, LineOnlyReceiverTransport).connectSSL('sf.net', 443, ssl.ClientContextFactory()) conn.write('GET / HTTP/1.0\r\n\r\n') try: for num, line in enumerate(conn): print '%3s %r' % (num, line) except ConnectionClosed, ex: print ex eventlet-0.13.0/examples/twisted/twisted_server.py0000644000175000017500000000262312164577340023221 0ustar temototemoto00000000000000"""Simple chat demo application. Listen on port 8007 and re-send all the data received to other participants. Demonstrates how to * plug in eventlet into a twisted application (join_reactor) * how to use SpawnFactory to start a new greenlet for each new request. """ from eventlet.twistedutil import join_reactor from eventlet.twistedutil.protocol import SpawnFactory from eventlet.twistedutil.protocols.basic import LineOnlyReceiverTransport class Chat: def __init__(self): self.participants = [] def handler(self, conn): peer = conn.getPeer() print 'new connection from %s' % (peer, ) conn.write("Welcome! 
There're %s participants already\n" % (len(self.participants))) self.participants.append(conn) try: for line in conn: if line: print 'received from %s: %s' % (peer, line) for buddy in self.participants: if buddy is not conn: buddy.sendline('from %s: %s' % (peer, line)) except Exception, ex: print peer, ex else: print peer, 'connection done' finally: conn.loseConnection() self.participants.remove(conn) print __doc__ chat = Chat() from twisted.internet import reactor reactor.listenTCP(8007, SpawnFactory(chat.handler, LineOnlyReceiverTransport)) reactor.run() eventlet-0.13.0/examples/twisted/twisted_srvconnector.py0000644000175000017500000000214612164577340024440 0ustar temototemoto00000000000000from twisted.internet import reactor from twisted.names.srvconnect import SRVConnector from gnutls.interfaces.twisted import X509Credentials from eventlet.twistedutil.protocol import GreenClientCreator from eventlet.twistedutil.protocols.basic import LineOnlyReceiverTransport class NoisySRVConnector(SRVConnector): def pickServer(self): host, port = SRVConnector.pickServer(self) print 'Resolved _%s._%s.%s --> %s:%s' % (self.service, self.protocol, self.domain, host, port) return host, port cred = X509Credentials(None, None) creator = GreenClientCreator(reactor, LineOnlyReceiverTransport) conn = creator.connectSRV('msrps', 'ag-projects.com', connectFuncName='connectTLS', connectFuncArgs=(cred,), ConnectorClass=NoisySRVConnector) request = """MSRP 49fh AUTH To-Path: msrps://alice@intra.example.com;tcp From-Path: msrps://alice.example.com:9892/98cjs;tcp -------49fh$ """.replace('\n', '\r\n') print 'Sending:\n%s' % request conn.write(request) print 'Received:' for x in conn: print repr(x) if '-------' in x: break eventlet-0.13.0/examples/twisted/twisted_xcap_proxy.py0000644000175000017500000000202712164577340024105 0ustar temototemoto00000000000000from twisted.internet.protocol import Factory from twisted.internet import reactor from twisted.protocols import basic from 
xcaplib.green import XCAPClient from eventlet.twistedutil import deferToGreenThread from eventlet.twistedutil import join_reactor class LineOnlyReceiver(basic.LineOnlyReceiver): def lineReceived(self, line): print 'received: %r' % line if not line: return app, context, node = (line + ' ').split(' ', 3) context = {'u' : 'users', 'g': 'global'}.get(context, context) d = deferToGreenThread(client._get, app, node, globaltree=context=='global') def callback(result): self.transport.write(str(result)) def errback(error): self.transport.write(error.getTraceback()) d.addCallback(callback) d.addErrback(errback) class MyFactory(Factory): protocol = LineOnlyReceiver client = XCAPClient('https://xcap.sipthor.net/xcap-root', 'alice@example.com', '123') reactor.listenTCP(8007, MyFactory()) reactor.run() eventlet-0.13.0/examples/twisted/twisted_portforward.py0000644000175000017500000000220612164577340024261 0ustar temototemoto00000000000000"""Port forwarder USAGE: twisted_portforward.py local_port remote_host remote_port""" import sys from twisted.internet import reactor from eventlet.twistedutil import join_reactor from eventlet.twistedutil.protocol import GreenClientCreator, SpawnFactory, UnbufferedTransport from eventlet import proc def forward(source, dest): try: while True: x = source.recv() if not x: break print 'forwarding %s bytes' % len(x) dest.write(x) finally: dest.loseConnection() def handler(local): client = str(local.getHost()) print 'accepted connection from %s' % client remote = GreenClientCreator(reactor, UnbufferedTransport).connectTCP(remote_host, remote_port) a = proc.spawn(forward, remote, local) b = proc.spawn(forward, local, remote) proc.waitall([a, b], trap_errors=True) print 'closed connection to %s' % client try: local_port, remote_host, remote_port = sys.argv[1:] except ValueError: sys.exit(__doc__) local_port = int(local_port) remote_port = int(remote_port) reactor.listenTCP(local_port, SpawnFactory(handler)) reactor.run() 
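# The forward() loop in twisted_portforward.py above is just "pump bytes one
# way until EOF, then propagate the close". Here is a blocking, stdlib-only
# sketch of that loop in Python 3 syntax, exercised with socket pairs standing
# in for the local and remote peers (the names are illustrative, not
# eventlet's API).

```python
import socket

def forward(source, dest, bufsize=4096):
    """Copy bytes from source to dest until source reports EOF."""
    total = 0
    try:
        while True:
            chunk = source.recv(bufsize)
            if not chunk:                    # empty read: peer closed
                break
            dest.sendall(chunk)
            total += len(chunk)
    finally:
        dest.shutdown(socket.SHUT_WR)        # propagate EOF downstream
    return total

# A "client" (a) sends through the forwarder (b -> c) to a "server" (d).
a, b = socket.socketpair()
c, d = socket.socketpair()
a.sendall(b"hello, portforward")
a.shutdown(socket.SHUT_WR)
moved = forward(b, c)

received = b""
while True:
    part = d.recv(4096)
    if not part:                             # forward() shut down c's write side
        break
    received += part
for s in (a, b, c, d):
    s.close()
```

twisted_portforward.py runs two of these loops concurrently (one per
direction) via proc.spawn; the sketch shows a single direction.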
eventlet-0.13.0/examples/connect.py0000644000175000017500000000131512164577340020113 0ustar temototemoto00000000000000"""Spawn multiple workers and collect their results. Demonstrates how to use the eventlet.green.socket module. """ import eventlet from eventlet.green import socket def geturl(url): c = socket.socket() ip = socket.gethostbyname(url) c.connect((ip, 80)) print '%s connected' % url c.sendall('GET /\r\n\r\n') return c.recv(1024) urls = ['www.google.com', 'www.yandex.ru', 'www.python.org'] pile = eventlet.GreenPile() for x in urls: pile.spawn(geturl, x) # note that the pile acts as a collection of return values from the functions # if any exceptions are raised by the function they'll get raised here for url, result in zip(urls, pile): print '%s: %s' % (url, repr(result)[:50]) eventlet-0.13.0/examples/feedscraper-testclient.py0000644000175000017500000000145212164577340023123 0ustar temototemoto00000000000000from eventlet.green import urllib2 big_list_of_feeds = """ http://blog.eventlet.net/feed/ http://rss.slashdot.org/Slashdot/slashdot http://feeds.boingboing.net/boingboing/iBag http://feeds.feedburner.com/RockPaperShotgun http://feeds.penny-arcade.com/pa-mainsite http://achewood.com/rss.php http://raysmuckles.blogspot.com/atom.xml http://rbeef.blogspot.com/atom.xml http://journeyintoreason.blogspot.com/atom.xml http://orezscu.blogspot.com/atom.xml http://feeds2.feedburner.com/AskMetafilter http://feeds2.feedburner.com/Metafilter http://stackoverflow.com/feeds http://feeds.feedburner.com/codinghorror http://www.tbray.org/ongoing/ongoing.atom http://www.zeldman.com/feed/ http://ln.hixie.ch/rss/html """ url = 'http://localhost:9010/' result = urllib2.urlopen(url, big_list_of_feeds) print result.read()eventlet-0.13.0/PKG-INFO0000644000175000017500000000611712164600754015370 0ustar temototemoto00000000000000Metadata-Version: 1.1 Name: eventlet Version: 0.13.0 Summary: Highly concurrent networking library Home-page: http://eventlet.net Author: Linden Lab 
Author-email: eventletdev@lists.secondlife.com License: UNKNOWN Description: Eventlet is a concurrent networking library for Python that allows you to change how you run your code, not how you write it. It uses epoll or libevent for highly scalable non-blocking I/O. Coroutines ensure that the developer uses a blocking style of programming that is similar to threading, but provide the benefits of non-blocking I/O. The event dispatch is implicit, which means you can easily use Eventlet from the Python interpreter, or as a small part of a larger application. It's easy to get started using Eventlet, and easy to convert existing applications to use it. Start off by looking at the `examples`_, `common design patterns`_, and the list of `basic API primitives`_. .. _examples: http://eventlet.net/doc/examples.html .. _common design patterns: http://eventlet.net/doc/design_patterns.html .. _basic API primitives: http://eventlet.net/doc/basic_usage.html Quick Example =============== Here's something you can try right on the command line:: % python >>> import eventlet >>> from eventlet.green import urllib2 >>> gt = eventlet.spawn(urllib2.urlopen, 'http://eventlet.net') >>> gt2 = eventlet.spawn(urllib2.urlopen, 'http://secondlife.com') >>> gt2.wait() >>> gt.wait() Getting Eventlet ================== The easiest way to get Eventlet is to use easy_install or pip:: easy_install eventlet pip install eventlet The development `tip`_ is available via easy_install as well:: easy_install 'eventlet==dev' pip install 'eventlet==dev' .. _tip: http://bitbucket.org/eventlet/eventlet/get/tip.zip#egg=eventlet-dev Building the Docs Locally ========================= To build a complete set of HTML documentation, you must have Sphinx, which can be found at http://sphinx.pocoo.org/ (or installed with `easy_install sphinx`) cd doc make html The built html files can be found in doc/_build/html afterward. 
Platform: UNKNOWN Classifier: License :: OSI Approved :: MIT License Classifier: Programming Language :: Python Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: POSIX Classifier: Operating System :: Microsoft :: Windows Classifier: Programming Language :: Python :: 2.4 Classifier: Programming Language :: Python :: 2.5 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Topic :: Internet Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Intended Audience :: Developers Classifier: Development Status :: 4 - Beta eventlet-0.13.0/eventlet/0000755000175000017500000000000012164600754016114 5ustar temototemoto00000000000000eventlet-0.13.0/eventlet/websocket.py0000644000175000017500000002366612164577340020475 0ustar temototemoto00000000000000import collections import errno import string import struct from socket import error as SocketError try: from hashlib import md5 except ImportError: #pragma NO COVER from md5 import md5 import eventlet from eventlet import semaphore from eventlet import wsgi from eventlet.green import socket from eventlet.support import get_errno ACCEPTABLE_CLIENT_ERRORS = set((errno.ECONNRESET, errno.EPIPE)) __all__ = ["WebSocketWSGI", "WebSocket"] class WebSocketWSGI(object): """Wraps a websocket handler function in a WSGI application. Use it like this:: @websocket.WebSocketWSGI def my_handler(ws): from_browser = ws.wait() ws.send("from server") The single argument to the function will be an instance of :class:`WebSocket`. To close the socket, simply return from the function. Note that the server will log the websocket request at the time of closure. 
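# The gatekeeping that WebSocketWSGI.__call__ performs on the WSGI environ
# (in the handler code that follows) can be sketched as a pure function: the
# request must carry the Upgrade headers, and the presence of the
# Sec-WebSocket-Key1/Key2 pair selects the hixie-76 handshake over the older
# version 75 one. Stdlib-only sketch in Python 3 syntax; returning None stands
# in for the 400 Bad Request path.

```python
def websocket_version(environ):
    """Return 75, 76, or None (reject) for a WSGI environ dict."""
    if not (environ.get('HTTP_CONNECTION') == 'Upgrade'
            and environ.get('HTTP_UPGRADE') == 'WebSocket'):
        return None                      # not a websocket upgrade request
    if 'HTTP_SEC_WEBSOCKET_KEY1' in environ:
        if 'HTTP_SEC_WEBSOCKET_KEY2' not in environ:
            return None                  # half a hixie-76 handshake is an error
        return 76
    return 75

v75 = websocket_version({'HTTP_CONNECTION': 'Upgrade',
                         'HTTP_UPGRADE': 'WebSocket'})
v76 = websocket_version({'HTTP_CONNECTION': 'Upgrade',
                         'HTTP_UPGRADE': 'WebSocket',
                         'HTTP_SEC_WEBSOCKET_KEY1': 'k1',
                         'HTTP_SEC_WEBSOCKET_KEY2': 'k2'})
rejected = websocket_version({'HTTP_CONNECTION': 'keep-alive'})
```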
""" def __init__(self, handler): self.handler = handler self.protocol_version = None def __call__(self, environ, start_response): if not (environ.get('HTTP_CONNECTION') == 'Upgrade' and environ.get('HTTP_UPGRADE') == 'WebSocket'): # need to check a few more things here for true compliance start_response('400 Bad Request', [('Connection','close')]) return [] # See if they sent the new-format headers if 'HTTP_SEC_WEBSOCKET_KEY1' in environ: self.protocol_version = 76 if 'HTTP_SEC_WEBSOCKET_KEY2' not in environ: # That's bad. start_response('400 Bad Request', [('Connection','close')]) return [] else: self.protocol_version = 75 # Get the underlying socket and wrap a WebSocket class around it sock = environ['eventlet.input'].get_socket() ws = WebSocket(sock, environ, self.protocol_version) # If it's new-version, we need to work out our challenge response if self.protocol_version == 76: key1 = self._extract_number(environ['HTTP_SEC_WEBSOCKET_KEY1']) key2 = self._extract_number(environ['HTTP_SEC_WEBSOCKET_KEY2']) # There's no content-length header in the request, but it has 8 # bytes of data. environ['wsgi.input'].content_length = 8 key3 = environ['wsgi.input'].read(8) key = struct.pack(">II", key1, key2) + key3 response = md5(key).digest() # Start building the response scheme = 'ws' if environ.get('wsgi.url_scheme') == 'https': scheme = 'wss' location = '%s://%s%s%s' % ( scheme, environ.get('HTTP_HOST'), environ.get('SCRIPT_NAME'), environ.get('PATH_INFO') ) qs = environ.get('QUERY_STRING') if qs is not None: location += '?' 
+ qs if self.protocol_version == 75: handshake_reply = ("HTTP/1.1 101 Web Socket Protocol Handshake\r\n" "Upgrade: WebSocket\r\n" "Connection: Upgrade\r\n" "WebSocket-Origin: %s\r\n" "WebSocket-Location: %s\r\n\r\n" % ( environ.get('HTTP_ORIGIN'), location)) elif self.protocol_version == 76: handshake_reply = ("HTTP/1.1 101 WebSocket Protocol Handshake\r\n" "Upgrade: WebSocket\r\n" "Connection: Upgrade\r\n" "Sec-WebSocket-Origin: %s\r\n" "Sec-WebSocket-Protocol: %s\r\n" "Sec-WebSocket-Location: %s\r\n" "\r\n%s"% ( environ.get('HTTP_ORIGIN'), environ.get('HTTP_SEC_WEBSOCKET_PROTOCOL', 'default'), location, response)) else: #pragma NO COVER raise ValueError("Unknown WebSocket protocol version.") sock.sendall(handshake_reply) try: self.handler(ws) except socket.error, e: if get_errno(e) not in ACCEPTABLE_CLIENT_ERRORS: raise # Make sure we send the closing frame ws._send_closing_frame(True) # use this undocumented feature of eventlet.wsgi to ensure that it # doesn't barf on the fact that we didn't call start_response return wsgi.ALREADY_HANDLED def _extract_number(self, value): """ Utility function which, given a string like 'g98sd 5[]221@1', will return 9852211. Used to parse the Sec-WebSocket-Key headers. """ out = "" spaces = 0 for char in value: if char in string.digits: out += char elif char == " ": spaces += 1 return int(out) / spaces class WebSocket(object): """A websocket object that handles the details of serialization/deserialization to the socket. The primary way to interact with a :class:`WebSocket` object is to call :meth:`send` and :meth:`wait` in order to pass messages back and forth with the browser. Also available are the following properties: path The path value of the request. This is the same as the WSGI PATH_INFO variable, but more convenient. protocol The value of the Websocket-Protocol header. origin The value of the 'Origin' header. environ The full WSGI environment for this request. 
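# The hixie-76 challenge computed in the handshake above (_extract_number plus
# the md5 over the packed key numbers and the 8-byte body) can be reproduced
# standalone. The key strings below are the example values from the draft-76
# specification; // matches the original's Python 2 integer division.

```python
import struct
from hashlib import md5

def extract_number(value):
    """Keep the digits, divide by the number of spaces (hixie-76, step 4-6)."""
    digits = ''.join(c for c in value if c.isdigit())
    spaces = value.count(' ')
    return int(digits) // spaces

key1 = extract_number('18x 6]8vM;54 *(5:  {   U1]8  z [  8')
key2 = extract_number('1_ tx7X d  <  nw  334J702) 7]o}` 0')
key3 = b'Tm[K T2u'                       # the 8 body bytes of the request

# The server must reply with the md5 digest of key1 and key2 packed as
# big-endian 32-bit integers, followed by key3.
challenge_response = md5(struct.pack('>II', key1, key2) + key3).digest()
```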
""" def __init__(self, sock, environ, version=76): """ :param socket: The eventlet socket :type socket: :class:`eventlet.greenio.GreenSocket` :param environ: The wsgi environment :param version: The WebSocket spec version to follow (default is 76) """ self.socket = sock self.origin = environ.get('HTTP_ORIGIN') self.protocol = environ.get('HTTP_WEBSOCKET_PROTOCOL') self.path = environ.get('PATH_INFO') self.environ = environ self.version = version self.websocket_closed = False self._buf = "" self._msgs = collections.deque() self._sendlock = semaphore.Semaphore() @staticmethod def _pack_message(message): """Pack the message inside ``00`` and ``FF`` As per the dataframing section (5.3) for the websocket spec """ if isinstance(message, unicode): message = message.encode('utf-8') elif not isinstance(message, str): message = str(message) packed = "\x00%s\xFF" % message return packed def _parse_messages(self): """ Parses for messages in the buffer *buf*. It is assumed that the buffer contains the start character for a message, but that it may contain only part of the rest of the message. Returns an array of messages, and the buffer remainder that didn't contain any full messages.""" msgs = [] end_idx = 0 buf = self._buf while buf: frame_type = ord(buf[0]) if frame_type == 0: # Normal message. end_idx = buf.find("\xFF") if end_idx == -1: #pragma NO COVER break msgs.append(buf[1:end_idx].decode('utf-8', 'replace')) buf = buf[end_idx+1:] elif frame_type == 255: # Closing handshake. assert ord(buf[1]) == 0, "Unexpected closing handshake: %r" % buf self.websocket_closed = True break else: raise ValueError("Don't understand how to parse this type of message: %r" % buf) self._buf = buf return msgs def send(self, message): """Send a message to the browser. *message* should be convertible to a string; unicode objects should be encodable as utf-8.
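# The 0x00 ... 0xFF framing implemented by _pack_message and _parse_messages
# above round-trips like this. Stdlib-only sketch operating on bytes in
# Python 3 syntax (the original works on Python 2 str); a trailing partial
# frame is kept as the buffer remainder, exactly as _parse_messages does.

```python
def pack_message(message):
    """Frame a text message for the hixie-76 wire format."""
    return b"\x00" + message.encode("utf-8") + b"\xff"

def parse_messages(buf):
    """Split complete 0x00...0xFF frames out of buf; return (messages, rest)."""
    msgs = []
    while buf:
        if buf[0] == 0x00:                   # normal text frame
            end = buf.find(b"\xff")
            if end == -1:                    # partial frame: need more data
                break
            msgs.append(buf[1:end].decode("utf-8", "replace"))
            buf = buf[end + 1:]
        elif buf[0] == 0xff:                 # closing handshake (0xFF 0x00)
            break
        else:
            raise ValueError("unparseable frame: %r" % buf)
    return msgs, buf

stream = pack_message("hello") + pack_message("world") + b"\x00part"
msgs, rest = parse_messages(stream)
```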
Raises socket.error with errno of 32 (broken pipe) if the socket has already been closed by the client.""" packed = self._pack_message(message) # if two greenthreads are trying to send at the same time # on the same socket, sendlock prevents interleaving and corruption self._sendlock.acquire() try: self.socket.sendall(packed) finally: self._sendlock.release() def wait(self): """Waits for and deserializes messages. Returns a single message; the oldest not yet processed. If the client has already closed the connection, returns None. This is different from normal socket behavior because the empty string is a valid websocket message.""" while not self._msgs: # Websocket might be closed already. if self.websocket_closed: return None # no parsed messages, must mean buf needs more data delta = self.socket.recv(8096) if delta == '': return None self._buf += delta msgs = self._parse_messages() self._msgs.extend(msgs) return self._msgs.popleft() def _send_closing_frame(self, ignore_send_errors=False): """Sends the closing frame to the client, if required.""" if self.version == 76 and not self.websocket_closed: try: self.socket.sendall("\xff\x00") except SocketError: # Sometimes, like when the remote side cuts off the connection, # we don't care about this. 
if not ignore_send_errors: #pragma NO COVER raise self.websocket_closed = True def close(self): """Forcibly close the websocket; generally it is preferable to return from the handler method.""" self._send_closing_frame() self.socket.shutdown(True) self.socket.close() eventlet-0.13.0/eventlet/hubs/0000755000175000017500000000000012164600754017055 5ustar temototemoto00000000000000eventlet-0.13.0/eventlet/hubs/pyevent.py0000644000175000017500000001251712164577340021133 0ustar temototemoto00000000000000import sys import traceback import event import types from eventlet.support import greenlets as greenlet from eventlet.hubs.hub import BaseHub, FdListener, READ, WRITE class event_wrapper(object): def __init__(self, impl=None, seconds=None): self.impl = impl self.seconds = seconds def __repr__(self): if self.impl is not None: return repr(self.impl) else: return object.__repr__(self) def __str__(self): if self.impl is not None: return str(self.impl) else: return object.__str__(self) def cancel(self): if self.impl is not None: self.impl.delete() self.impl = None @property def pending(self): return bool(self.impl and self.impl.pending()) class Hub(BaseHub): SYSTEM_EXCEPTIONS = (KeyboardInterrupt, SystemExit) def __init__(self): super(Hub,self).__init__() event.init() self.signal_exc_info = None self.signal( 2, lambda signalnum, frame: self.greenlet.parent.throw(KeyboardInterrupt)) self.events_to_add = [] def dispatch(self): loop = event.loop while True: for e in self.events_to_add: if e is not None and e.impl is not None and e.seconds is not None: e.impl.add(e.seconds) e.seconds = None self.events_to_add = [] result = loop() if getattr(event, '__event_exc', None) is not None: # only have to do this because of bug in event.loop t = getattr(event, '__event_exc') setattr(event, '__event_exc', None) assert getattr(event, '__event_exc') is None raise t[0], t[1], t[2] if result != 0: return result def run(self): while True: try: self.dispatch() except greenlet.GreenletExit: break 
except self.SYSTEM_EXCEPTIONS: raise except: if self.signal_exc_info is not None: self.schedule_call_global( 0, greenlet.getcurrent().parent.throw, *self.signal_exc_info) self.signal_exc_info = None else: self.squelch_timer_exception(None, sys.exc_info()) def abort(self, wait=True): self.schedule_call_global(0, self.greenlet.throw, greenlet.GreenletExit) if wait: assert self.greenlet is not greenlet.getcurrent(), "Can't abort with wait from inside the hub's greenlet." self.switch() def _getrunning(self): return bool(self.greenlet) def _setrunning(self, value): pass # exists for compatibility with BaseHub running = property(_getrunning, _setrunning) def add(self, evtype, fileno, real_cb): # this is stupid: pyevent won't call a callback unless it's a function, # so we have to force it to be one here if isinstance(real_cb, types.BuiltinMethodType): def cb(_d): real_cb(_d) else: cb = real_cb if evtype is READ: evt = event.read(fileno, cb, fileno) elif evtype is WRITE: evt = event.write(fileno, cb, fileno) return super(Hub,self).add(evtype, fileno, evt) def signal(self, signalnum, handler): def wrapper(): try: handler(signalnum, None) except: self.signal_exc_info = sys.exc_info() event.abort() return event_wrapper(event.signal(signalnum, wrapper)) def remove(self, listener): super(Hub, self).remove(listener) listener.cb.delete() def remove_descriptor(self, fileno): for lcontainer in self.listeners.itervalues(): listener = lcontainer.pop(fileno, None) if listener: try: listener.cb.delete() except self.SYSTEM_EXCEPTIONS: raise except: traceback.print_exc() def schedule_call_local(self, seconds, cb, *args, **kwargs): current = greenlet.getcurrent() if current is self.greenlet: return self.schedule_call_global(seconds, cb, *args, **kwargs) event_impl = event.event(_scheduled_call_local, (cb, args, kwargs, current)) wrapper = event_wrapper(event_impl, seconds=seconds) self.events_to_add.append(wrapper) return wrapper schedule_call = schedule_call_local def 
schedule_call_global(self, seconds, cb, *args, **kwargs): event_impl = event.event(_scheduled_call, (cb, args, kwargs)) wrapper = event_wrapper(event_impl, seconds=seconds) self.events_to_add.append(wrapper) return wrapper def _version_info(self): baseversion = event.__version__ return baseversion def _scheduled_call(event_impl, handle, evtype, arg): cb, args, kwargs = arg try: cb(*args, **kwargs) finally: event_impl.delete() def _scheduled_call_local(event_impl, handle, evtype, arg): cb, args, kwargs, caller_greenlet = arg try: if not caller_greenlet.dead: cb(*args, **kwargs) finally: event_impl.delete() eventlet-0.13.0/eventlet/hubs/timer.py0000644000175000017500000000617412164577340020563 0ustar temototemoto00000000000000from eventlet.support import greenlets as greenlet from eventlet.hubs import get_hub """ If true, captures a stack trace for each timer when constructed. This is useful for debugging leaking timers, to find out where the timer was set up. """ _g_debug = False class Timer(object): def __init__(self, seconds, cb, *args, **kw): """Create a timer. seconds: The minimum number of seconds to wait before calling cb: The callback to call when the timer has expired *args: The arguments to pass to cb **kw: The keyword arguments to pass to cb This timer will not be run unless it is scheduled in a runloop by calling timer.schedule() or runloop.add_timer(timer). 
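# A hub keeps pending Timers ordered by deadline and fires everything whose
# time has come (which is also why Timer needs an ordering for heapq, as the
# comment in timer.py notes). A minimal, stdlib-only sketch of that bookkeeping
# follows; MiniHub and its method names are illustrative, not eventlet's
# actual hub internals.

```python
import heapq

class MiniHub:
    def __init__(self):
        self._timers = []
        self._counter = 0            # tie-breaker for equal deadlines

    def schedule(self, when, cb, *args):
        """Register cb to fire once the clock reaches `when`."""
        self._counter += 1
        heapq.heappush(self._timers, (when, self._counter, cb, args))

    def fire_due(self, now):
        """Pop and run every timer whose deadline is <= now, in order."""
        fired = []
        while self._timers and self._timers[0][0] <= now:
            _, _, cb, args = heapq.heappop(self._timers)
            fired.append(cb(*args))
        return fired

hub = MiniHub()
hub.schedule(2.0, lambda: "late")
hub.schedule(0.5, lambda: "early")
first = hub.fire_due(1.0)            # only the 0.5s timer is due yet
second = hub.fire_due(3.0)
```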
""" self.seconds = seconds self.tpl = cb, args, kw self.called = False if _g_debug: import traceback, cStringIO self.traceback = cStringIO.StringIO() traceback.print_stack(file=self.traceback) @property def pending(self): return not self.called def __repr__(self): secs = getattr(self, 'seconds', None) cb, args, kw = getattr(self, 'tpl', (None, None, None)) retval = "Timer(%s, %s, *%s, **%s)" % ( secs, cb, args, kw) if _g_debug and hasattr(self, 'traceback'): retval += '\n' + self.traceback.getvalue() return retval def copy(self): cb, args, kw = self.tpl return self.__class__(self.seconds, cb, *args, **kw) def schedule(self): """Schedule this timer to run in the current runloop. """ self.called = False self.scheduled_time = get_hub().add_timer(self) return self def __call__(self, *args): if not self.called: self.called = True cb, args, kw = self.tpl try: cb(*args, **kw) finally: try: del self.tpl except AttributeError: pass def cancel(self): """Prevent this timer from being called. If the timer has already been called or canceled, has no effect. """ if not self.called: self.called = True get_hub().timer_canceled(self) try: del self.tpl except AttributeError: pass # No default ordering in 3.x. heapq uses < # FIXME should full set be added? 
def __lt__(self, other): return id(self)= _seconds >= 0, \ "%s is not greater than or equal to 0 seconds" % (_seconds,) tple = DelayedCallClass(reactor.seconds() + _seconds, _f, args, kw, reactor._cancelCallLater, reactor._moveCallLaterSooner, seconds=reactor.seconds) reactor._newTimedCalls.append(tple) return tple class socket_rwdescriptor(FdListener): #implements(IReadWriteDescriptor) def __init__(self, evtype, fileno, cb): super(socket_rwdescriptor, self).__init__(evtype, fileno, cb) if not isinstance(fileno, (int,long)): raise TypeError("Expected int or long, got %s" % type(fileno)) # Twisted expects fileno to be a callable, not an attribute def _fileno(): return fileno self.fileno = _fileno # required by glib2reactor disconnected = False def doRead(self): if self.evtype is READ: self.cb(self) def doWrite(self): if self.evtype == WRITE: self.cb(self) def connectionLost(self, reason): self.disconnected = True if self.cb: self.cb(reason) # trampoline() will now switch into the greenlet that owns the socket # leaving the mainloop unscheduled. However, when the next switch # to the mainloop occurs, twisted will not re-evaluate the delayed calls # because it assumes that none were scheduled since no client code was executed # (it has no idea it was switched away). So, we restart the mainloop. # XXX this is not enough, pollreactor prints the traceback for # this and epollreactor times out. see test__hub.TestCloseSocketWhilePolling raise greenlet.GreenletExit logstr = "twistedr" def logPrefix(self): return self.logstr class BaseTwistedHub(object): """This hub does not run a dedicated greenlet for the mainloop (unlike TwistedHub). Instead, it assumes that the mainloop is run in the main greenlet. This makes running "green" functions in the main greenlet impossible but is useful when you want to call reactor.run() yourself. """ # XXX: remove me from here. 
make functions that depend on reactor # XXX: hub's methods uses_twisted_reactor = True WRITE = WRITE READ = READ def __init__(self, mainloop_greenlet): self.greenlet = mainloop_greenlet def switch(self): assert greenlet.getcurrent() is not self.greenlet, \ "Cannot switch from MAINLOOP to MAINLOOP" try: greenlet.getcurrent().parent = self.greenlet except ValueError: pass return self.greenlet.switch() def stop(self): from twisted.internet import reactor reactor.stop() def add(self, evtype, fileno, cb): from twisted.internet import reactor descriptor = socket_rwdescriptor(evtype, fileno, cb) if evtype is READ: reactor.addReader(descriptor) if evtype is WRITE: reactor.addWriter(descriptor) return descriptor def remove(self, descriptor): from twisted.internet import reactor reactor.removeReader(descriptor) reactor.removeWriter(descriptor) def schedule_call_local(self, seconds, func, *args, **kwargs): from twisted.internet import reactor def call_if_greenlet_alive(*args1, **kwargs1): if timer.greenlet.dead: return return func(*args1, **kwargs1) timer = callLater(LocalDelayedCall, reactor, seconds, call_if_greenlet_alive, *args, **kwargs) return timer schedule_call = schedule_call_local def schedule_call_global(self, seconds, func, *args, **kwargs): from twisted.internet import reactor return callLater(DelayedCall, reactor, seconds, func, *args, **kwargs) def abort(self): from twisted.internet import reactor reactor.crash() @property def running(self): from twisted.internet import reactor return reactor.running # for debugging: def get_readers(self): from twisted.internet import reactor readers = reactor.getReaders() readers.remove(getattr(reactor, 'waker')) return readers def get_writers(self): from twisted.internet import reactor return reactor.getWriters() def get_timers_count(self): from twisted.internet import reactor return len(reactor.getDelayedCalls()) class TwistedHub(BaseTwistedHub): # wrapper around reactor that runs reactor's main loop in a separate greenlet. 
# whenever you need to wait, i.e. inside a call that must appear # blocking, call hub.switch() (then your blocking operation should switch back to you # upon completion) # unlike other eventlet hubs, which are created per-thread, # this one cannot be instantiated more than once, because # twisted doesn't allow that # 0-not created # 1-initialized but not started # 2-started # 3-restarted state = 0 installSignalHandlers = False def __init__(self): assert Hub.state==0, ('%s hub can only be instantiated once'%type(self).__name__, Hub.state) Hub.state = 1 make_twisted_threadpool_daemonic() # otherwise the program # would hang after the main # greenlet exited g = greenlet.greenlet(self.run) BaseTwistedHub.__init__(self, g) def switch(self): assert greenlet.getcurrent() is not self.greenlet, \ "Cannot switch from MAINLOOP to MAINLOOP" if self.greenlet.dead: self.greenlet = greenlet.greenlet(self.run) try: greenlet.getcurrent().parent = self.greenlet except ValueError: pass return self.greenlet.switch() def run(self, installSignalHandlers=None): if installSignalHandlers is None: installSignalHandlers = self.installSignalHandlers # main loop, executed in a dedicated greenlet from twisted.internet import reactor assert Hub.state in [1, 3], ('run function is not reentrant', Hub.state) if Hub.state == 1: reactor.startRunning(installSignalHandlers=installSignalHandlers) elif not reactor.running: # if we're here, then reactor was explicitly stopped with reactor.stop() # restarting reactor (like we would do after an exception) in this case # is not an option. raise AssertionError("reactor is not running") try: self.mainLoop(reactor) except: # an exception in the mainLoop is a normal operation (e.g. user's # signal handler could raise an exception). In this case we will re-enter # the main loop at the next switch. Hub.state = 3 raise # clean exit here is needed for abort() method to work # do not raise an exception here. 
def mainLoop(self, reactor): Hub.state = 2 # Unlike reactor's mainLoop, this function does not catch exceptions. # Anything raised goes into the main greenlet (because it is always the # parent of this one) while reactor.running: # Advance simulation time in delayed event processors. reactor.runUntilCurrent() t2 = reactor.timeout() t = reactor.running and t2 reactor.doIteration(t) Hub = TwistedHub class DaemonicThread(threading.Thread): def _set_daemon(self): return True def make_twisted_threadpool_daemonic(): from twisted.python.threadpool import ThreadPool if ThreadPool.threadFactory != DaemonicThread: ThreadPool.threadFactory = DaemonicThread eventlet-0.13.0/eventlet/hubs/__init__.py0000644000175000017500000001201212164577340021166 0ustar temototemoto00000000000000import sys import os from eventlet.support import greenlets as greenlet from eventlet import patcher try: # try and import pkg_resources ... import pkg_resources except ImportError: # ... but do not depend on it pkg_resources = None __all__ = ["use_hub", "get_hub", "get_default_hub", "trampoline"] threading = patcher.original('threading') _threadlocal = threading.local() def get_default_hub(): """Select the default hub implementation based on what multiplexing libraries are installed. The order that the hubs are tried is: * epoll * kqueue * poll * select It won't automatically select the pyevent hub, because it's not python-thread-safe. .. include:: ../doc/common.txt .. 
note :: |internal| """ # pyevent hub disabled for now because it is not thread-safe # try: # import eventlet.hubs.pyevent # return eventlet.hubs.pyevent # except: # pass select = patcher.original('select') try: import eventlet.hubs.epolls return eventlet.hubs.epolls except ImportError: try: import eventlet.hubs.kqueue return eventlet.hubs.kqueue except ImportError: if hasattr(select, 'poll'): import eventlet.hubs.poll return eventlet.hubs.poll else: import eventlet.hubs.selects return eventlet.hubs.selects def use_hub(mod=None): """Use the module *mod*, containing a class called Hub, as the event hub. Usually not required; the default hub is usually fine. Mod can be an actual module, a string, or None. If *mod* is a module, it uses it directly. If *mod* is a string and contains either '.' or ':' use_hub tries to import the hub using the 'package.subpackage.module:Class' convention, otherwise use_hub looks for a matching setuptools entry point in the 'eventlet.hubs' group to load or finally tries to import `eventlet.hubs.mod` and use that as the hub module. If *mod* is None, use_hub uses the default hub. Only call use_hub during application initialization, because it resets the hub's state and any existing timers or listeners will never be resumed. """ if mod is None: mod = os.environ.get('EVENTLET_HUB', None) if mod is None: mod = get_default_hub() if hasattr(_threadlocal, 'hub'): del _threadlocal.hub if isinstance(mod, str): assert mod.strip(), "Need to specify a hub" if '.' in mod or ':' in mod: modulename, _, classname = mod.strip().partition(':') mod = __import__(modulename, globals(), locals(), [classname]) if classname: mod = getattr(mod, classname) else: found = False if pkg_resources is not None: for entry in pkg_resources.iter_entry_points( group='eventlet.hubs', name=mod): mod, found = entry.load(), True break if not found: mod = __import__( 'eventlet.hubs.' 
+ mod, globals(), locals(), ['Hub']) if hasattr(mod, 'Hub'): _threadlocal.Hub = mod.Hub else: _threadlocal.Hub = mod def get_hub(): """Get the current event hub singleton object. .. note :: |internal| """ try: hub = _threadlocal.hub except AttributeError: try: _threadlocal.Hub except AttributeError: use_hub() hub = _threadlocal.hub = _threadlocal.Hub() return hub from eventlet import timeout def trampoline(fd, read=None, write=None, timeout=None, timeout_exc=timeout.Timeout): """Suspend the current coroutine until the given socket object or file descriptor is ready to *read*, ready to *write*, or the specified *timeout* elapses, depending on arguments specified. To wait for *fd* to be ready to read, pass *read* ``=True``; ready to write, pass *write* ``=True``. To specify a timeout, pass the *timeout* argument in seconds. If the specified *timeout* elapses before the socket is ready to read or write, *timeout_exc* will be raised instead of ``trampoline()`` returning normally. .. note :: |internal| """ t = None hub = get_hub() current = greenlet.getcurrent() assert hub.greenlet is not current, 'do not call blocking functions from the mainloop' assert not ( read and write), 'not allowed to trampoline for reading and writing' try: fileno = fd.fileno() except AttributeError: fileno = fd if timeout is not None: t = hub.schedule_call_global(timeout, current.throw, timeout_exc) try: if read: listener = hub.add(hub.READ, fileno, current.switch) elif write: listener = hub.add(hub.WRITE, fileno, current.switch) try: return hub.switch() finally: hub.remove(listener) finally: if t is not None: t.cancel() eventlet-0.13.0/eventlet/hubs/poll.py0000644000175000017500000000717612164577340020414 0ustar temototemoto00000000000000import sys import errno import signal from eventlet import patcher select = patcher.original('select') time = patcher.original('time') sleep = time.sleep from eventlet.support import get_errno, clear_sys_exc_info from eventlet.hubs.hub import BaseHub, READ, 
WRITE, noop, alarm_handler EXC_MASK = select.POLLERR | select.POLLHUP READ_MASK = select.POLLIN | select.POLLPRI WRITE_MASK = select.POLLOUT class Hub(BaseHub): def __init__(self, clock=time.time): super(Hub, self).__init__(clock) self.poll = select.poll() # poll.modify is new to 2.6 try: self.modify = self.poll.modify except AttributeError: self.modify = self.poll.register def add(self, evtype, fileno, cb): listener = super(Hub, self).add(evtype, fileno, cb) self.register(fileno, new=True) return listener def remove(self, listener): super(Hub, self).remove(listener) self.register(listener.fileno) def register(self, fileno, new=False): mask = 0 if self.listeners[READ].get(fileno): mask |= READ_MASK | EXC_MASK if self.listeners[WRITE].get(fileno): mask |= WRITE_MASK | EXC_MASK try: if mask: if new: self.poll.register(fileno, mask) else: try: self.modify(fileno, mask) except (IOError, OSError): self.poll.register(fileno, mask) else: try: self.poll.unregister(fileno) except (KeyError, IOError, OSError): # raised if we try to remove a fileno that was # already removed/invalid pass except ValueError: # fileno is bad, issue 74 self.remove_descriptor(fileno) raise def remove_descriptor(self, fileno): super(Hub, self).remove_descriptor(fileno) try: self.poll.unregister(fileno) except (KeyError, ValueError, IOError, OSError): # raised if we try to remove a fileno that was # already removed/invalid pass def do_poll(self, seconds): # poll.poll expects integral milliseconds return self.poll.poll(int(seconds * 1000.0)) def wait(self, seconds=None): readers = self.listeners[READ] writers = self.listeners[WRITE] if not readers and not writers: if seconds: sleep(seconds) return try: presult = self.do_poll(seconds) except (IOError, select.error), e: if get_errno(e) == errno.EINTR: return raise SYSTEM_EXCEPTIONS = self.SYSTEM_EXCEPTIONS if self.debug_blocking: self.block_detect_pre() for fileno, event in presult: try: if event & READ_MASK: readers.get(fileno, noop).cb(fileno) if 
event & WRITE_MASK: writers.get(fileno, noop).cb(fileno) if event & select.POLLNVAL: self.remove_descriptor(fileno) continue if event & EXC_MASK: readers.get(fileno, noop).cb(fileno) writers.get(fileno, noop).cb(fileno) except SYSTEM_EXCEPTIONS: raise except: self.squelch_exception(fileno, sys.exc_info()) clear_sys_exc_info() if self.debug_blocking: self.block_detect_post() eventlet-0.13.0/eventlet/hubs/kqueue.py0000644000175000017500000000676212164577340020745 0ustar temototemoto00000000000000import os import sys from eventlet import patcher select = patcher.original('select') time = patcher.original('time') sleep = time.sleep from eventlet.support import get_errno, clear_sys_exc_info from eventlet.hubs.hub import BaseHub, READ, WRITE, noop if getattr(select, 'kqueue', None) is None: raise ImportError('No kqueue implementation found in select module') FILTERS = {READ: select.KQ_FILTER_READ, WRITE: select.KQ_FILTER_WRITE} class Hub(BaseHub): MAX_EVENTS = 100 def __init__(self, clock=time.time): super(Hub, self).__init__(clock) self._events = {} self._init_kqueue() def _init_kqueue(self): self.kqueue = select.kqueue() self._pid = os.getpid() def _reinit_kqueue(self): self.kqueue.close() self._init_kqueue() kqueue = self.kqueue events = [e for i in self._events.itervalues() for e in i.itervalues()] kqueue.control(events, 0, 0) def _control(self, events, max_events, timeout): try: return self.kqueue.control(events, max_events, timeout) except OSError: # have we forked? 
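The mask-based dispatch in the poll hub's `wait` above can be exercised on its own with a pipe and a bare `select.poll` object (a minimal stdlib-only sketch, Unix-only; the masks mirror the ones `poll.py` builds from its listener buckets):

```python
import os
import select

# Same masks poll.py derives from the READ/WRITE listener buckets.
READ_MASK = select.POLLIN | select.POLLPRI
WRITE_MASK = select.POLLOUT

r, w = os.pipe()
p = select.poll()
p.register(r, READ_MASK)
p.register(w, WRITE_MASK)

os.write(w, b'x')           # make the read end of the pipe ready
events = dict(p.poll(0))    # list of (fileno, eventmask) pairs -> dict

assert events[r] & READ_MASK    # read end reports POLLIN
assert events[w] & WRITE_MASK   # write end of a pipe is writable
os.close(r)
os.close(w)
```

The hub's dispatch loop does the same bitmask tests per fileno, routing `READ_MASK` hits to reader callbacks and `WRITE_MASK` hits to writer callbacks.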
if os.getpid() != self._pid: self._reinit_kqueue() return self.kqueue.control(events, max_events, timeout) raise def add(self, evtype, fileno, cb): listener = super(Hub, self).add(evtype, fileno, cb) events = self._events.setdefault(fileno, {}) if evtype not in events: try: event = select.kevent(fileno, FILTERS.get(evtype), select.KQ_EV_ADD) self._control([event], 0, 0) events[evtype] = event except ValueError: super(Hub, self).remove(listener) raise return listener def _delete_events(self, events): del_events = map(lambda e: select.kevent(e.ident, e.filter, select.KQ_EV_DELETE), events) self._control(del_events, 0, 0) def remove(self, listener): super(Hub, self).remove(listener) evtype = listener.evtype fileno = listener.fileno if not self.listeners[evtype].get(fileno): event = self._events[fileno].pop(evtype) try: self._delete_events([event]) except OSError, e: pass def remove_descriptor(self, fileno): super(Hub, self).remove_descriptor(fileno) try: events = self._events.pop(fileno).values() self._delete_events(events) except KeyError, e: pass except OSError, e: pass def wait(self, seconds=None): readers = self.listeners[READ] writers = self.listeners[WRITE] if not readers and not writers: if seconds: sleep(seconds) return result = self._control([], self.MAX_EVENTS, seconds) SYSTEM_EXCEPTIONS = self.SYSTEM_EXCEPTIONS for event in result: fileno = event.ident evfilt = event.filter try: if evfilt == FILTERS[READ]: readers.get(fileno, noop).cb(fileno) if evfilt == FILTERS[WRITE]: writers.get(fileno, noop).cb(fileno) except SYSTEM_EXCEPTIONS: raise except: self.squelch_exception(fileno, sys.exc_info()) clear_sys_exc_info() eventlet-0.13.0/eventlet/hubs/hub.py0000644000175000017500000003131012164577340020207 0ustar temototemoto00000000000000import heapq import math import traceback import signal import sys import warnings arm_alarm = None if hasattr(signal, 'setitimer'): def alarm_itimer(seconds): signal.setitimer(signal.ITIMER_REAL, seconds) arm_alarm = alarm_itimer 
else:
    try:
        import itimer
        arm_alarm = itimer.alarm
    except ImportError:
        def alarm_signal(seconds):
            signal.alarm(math.ceil(seconds))
        arm_alarm = alarm_signal

from eventlet.support import greenlets as greenlet, clear_sys_exc_info
from eventlet.hubs import timer
from eventlet import patcher
time = patcher.original('time')

g_prevent_multiple_readers = True

READ = "read"
WRITE = "write"


class FdListener(object):
    def __init__(self, evtype, fileno, cb):
        assert (evtype is READ or evtype is WRITE)
        self.evtype = evtype
        self.fileno = fileno
        self.cb = cb

    def __repr__(self):
        return "%s(%r, %r, %r)" % (type(self).__name__, self.evtype,
                                   self.fileno, self.cb)
    __str__ = __repr__


noop = FdListener(READ, 0, lambda x: None)


# in debug mode, track the call site that created the listener
class DebugListener(FdListener):
    def __init__(self, evtype, fileno, cb):
        self.where_called = traceback.format_stack()
        self.greenlet = greenlet.getcurrent()
        super(DebugListener, self).__init__(evtype, fileno, cb)

    def __repr__(self):
        return "DebugListener(%r, %r, %r, %r)\n%sEndDebugFdListener" % (
            self.evtype, self.fileno, self.cb, self.greenlet,
            ''.join(self.where_called))
    __str__ = __repr__


def alarm_handler(signum, frame):
    import inspect
    raise RuntimeError("Blocking detector ALARMED at " +
                       str(inspect.getframeinfo(frame)))


class BaseHub(object):
    """ Base hub class for easing the implementation of subclasses that are
    specific to a particular underlying event architecture.
    """

    SYSTEM_EXCEPTIONS = (KeyboardInterrupt, SystemExit)

    READ = READ
    WRITE = WRITE

    def __init__(self, clock=time.time):
        self.listeners = {READ: {}, WRITE: {}}
        self.secondaries = {READ: {}, WRITE: {}}

        self.clock = clock
        self.greenlet = greenlet.greenlet(self.run)
        self.stopping = False
        self.running = False
        self.timers = []
        self.next_timers = []
        self.lclass = FdListener
        self.timers_canceled = 0
        self.debug_exceptions = True
        self.debug_blocking = False
        self.debug_blocking_resolution = 1

    def block_detect_pre(self):
        # shortest alarm we can possibly raise is one second
        tmp = signal.signal(signal.SIGALRM, alarm_handler)
        if tmp != alarm_handler:
            self._old_signal_handler = tmp
        arm_alarm(self.debug_blocking_resolution)

    def block_detect_post(self):
        if (hasattr(self, "_old_signal_handler") and
                self._old_signal_handler):
            signal.signal(signal.SIGALRM, self._old_signal_handler)
        signal.alarm(0)

    def add(self, evtype, fileno, cb):
        """ Signals an intent to read or write a particular file descriptor.

        The *evtype* argument is either the constant READ or WRITE.

        The *fileno* argument is the file number of the file of interest.

        The *cb* argument is the callback which will be called when the file
        is ready for reading/writing.
        """
        listener = self.lclass(evtype, fileno, cb)
        bucket = self.listeners[evtype]
        if fileno in bucket:
            if g_prevent_multiple_readers:
                raise RuntimeError(
                    "Second simultaneous %s on fileno %s "
                    "detected. Unless you really know what you're doing, "
                    "make sure that only one greenthread can %s any "
                    "particular socket. Consider using a pools.Pool. "
                    "If you do know what you're doing and want to disable "
                    "this error, call "
                    "eventlet.debug.hub_prevent_multiple_readers(False)" % (
                        evtype, fileno, evtype))
            # store off the second listener in another structure
            self.secondaries[evtype].setdefault(fileno, []).append(listener)
        else:
            bucket[fileno] = listener
        return listener

    def remove(self, listener):
        fileno = listener.fileno
        evtype = listener.evtype
        self.listeners[evtype].pop(fileno, None)
        # migrate a secondary listener to be the primary listener
        if fileno in self.secondaries[evtype]:
            sec = self.secondaries[evtype].get(fileno, None)
            if not sec:
                return
            self.listeners[evtype][fileno] = sec.pop(0)
            if not sec:
                del self.secondaries[evtype][fileno]

    def remove_descriptor(self, fileno):
        """ Completely remove all listeners for this fileno.  For internal use
        only."""
        listeners = []
        listeners.append(self.listeners[READ].pop(fileno, noop))
        listeners.append(self.listeners[WRITE].pop(fileno, noop))
        listeners.extend(self.secondaries[READ].pop(fileno, ()))
        listeners.extend(self.secondaries[WRITE].pop(fileno, ()))
        for listener in listeners:
            try:
                listener.cb(fileno)
            except Exception, e:
                self.squelch_generic_exception(sys.exc_info())

    def ensure_greenlet(self):
        if self.greenlet.dead:
            # create new greenlet sharing same parent as original
            new = greenlet.greenlet(self.run, self.greenlet.parent)
            # need to assign as parent of old greenlet
            # for those greenlets that are currently
            # children of the dead hub and may subsequently
            # exit without further switching to hub.
self.greenlet.parent = new self.greenlet = new def switch(self): cur = greenlet.getcurrent() assert cur is not self.greenlet, 'Cannot switch to MAINLOOP from MAINLOOP' switch_out = getattr(cur, 'switch_out', None) if switch_out is not None: try: switch_out() except: self.squelch_generic_exception(sys.exc_info()) self.ensure_greenlet() try: if self.greenlet.parent is not cur: cur.parent = self.greenlet except ValueError: pass # gets raised if there is a greenlet parent cycle clear_sys_exc_info() return self.greenlet.switch() def squelch_exception(self, fileno, exc_info): traceback.print_exception(*exc_info) sys.stderr.write("Removing descriptor: %r\n" % (fileno,)) sys.stderr.flush() try: self.remove_descriptor(fileno) except Exception, e: sys.stderr.write("Exception while removing descriptor! %r\n" % (e,)) sys.stderr.flush() def wait(self, seconds=None): raise NotImplementedError("Implement this in a subclass") def default_sleep(self): return 60.0 def sleep_until(self): t = self.timers if not t: return None return t[0][0] def run(self, *a, **kw): """Run the runloop until abort is called. """ # accept and discard variable arguments because they will be # supplied if other greenlets have run and exited before the # hub's greenlet gets a chance to run if self.running: raise RuntimeError("Already running!") try: self.running = True self.stopping = False while not self.stopping: self.prepare_timers() if self.debug_blocking: self.block_detect_pre() self.fire_timers(self.clock()) if self.debug_blocking: self.block_detect_post() self.prepare_timers() wakeup_when = self.sleep_until() if wakeup_when is None: sleep_time = self.default_sleep() else: sleep_time = wakeup_when - self.clock() if sleep_time > 0: self.wait(sleep_time) else: self.wait(0) else: self.timers_canceled = 0 del self.timers[:] del self.next_timers[:] finally: self.running = False self.stopping = False def abort(self, wait=False): """Stop the runloop. 
If run is executing, it will exit after completing the next runloop iteration. Set *wait* to True to cause abort to switch to the hub immediately and wait until it's finished processing. Waiting for the hub will only work from the main greenthread; all other greenthreads will become unreachable. """ if self.running: self.stopping = True if wait: assert self.greenlet is not greenlet.getcurrent(), "Can't abort with wait from inside the hub's greenlet." # schedule an immediate timer just so the hub doesn't sleep self.schedule_call_global(0, lambda: None) # switch to it; when done the hub will switch back to its parent, # the main greenlet self.switch() def squelch_generic_exception(self, exc_info): if self.debug_exceptions: traceback.print_exception(*exc_info) sys.stderr.flush() clear_sys_exc_info() def squelch_timer_exception(self, timer, exc_info): if self.debug_exceptions: traceback.print_exception(*exc_info) sys.stderr.flush() clear_sys_exc_info() def add_timer(self, timer): scheduled_time = self.clock() + timer.seconds self.next_timers.append((scheduled_time, timer)) return scheduled_time def timer_canceled(self, timer): self.timers_canceled += 1 len_timers = len(self.timers) + len(self.next_timers) if len_timers > 1000 and len_timers/2 <= self.timers_canceled: self.timers_canceled = 0 self.timers = [t for t in self.timers if not t[1].called] self.next_timers = [t for t in self.next_timers if not t[1].called] heapq.heapify(self.timers) def prepare_timers(self): heappush = heapq.heappush t = self.timers for item in self.next_timers: if item[1].called: self.timers_canceled -= 1 else: heappush(t, item) del self.next_timers[:] def schedule_call_local(self, seconds, cb, *args, **kw): """Schedule a callable to be called after 'seconds' seconds have elapsed. Cancel the timer if greenlet has exited. seconds: The number of seconds to wait. cb: The callable to call after the given time. *args: Arguments to pass to the callable when called. 
**kw: Keyword arguments to pass to the callable when called. """ t = timer.LocalTimer(seconds, cb, *args, **kw) self.add_timer(t) return t def schedule_call_global(self, seconds, cb, *args, **kw): """Schedule a callable to be called after 'seconds' seconds have elapsed. The timer will NOT be canceled if the current greenlet has exited before the timer fires. seconds: The number of seconds to wait. cb: The callable to call after the given time. *args: Arguments to pass to the callable when called. **kw: Keyword arguments to pass to the callable when called. """ t = timer.Timer(seconds, cb, *args, **kw) self.add_timer(t) return t def fire_timers(self, when): t = self.timers heappop = heapq.heappop while t: next = t[0] exp = next[0] timer = next[1] if when < exp: break heappop(t) try: if timer.called: self.timers_canceled -= 1 else: timer() except self.SYSTEM_EXCEPTIONS: raise except: self.squelch_timer_exception(timer, sys.exc_info()) clear_sys_exc_info() # for debugging: def get_readers(self): return self.listeners[READ].values() def get_writers(self): return self.listeners[WRITE].values() def get_timers_count(hub): return len(hub.timers) + len(hub.next_timers) def set_debug_listeners(self, value): if value: self.lclass = DebugListener else: self.lclass = FdListener def set_timer_exceptions(self, value): self.debug_exceptions = value eventlet-0.13.0/eventlet/hubs/epolls.py0000644000175000017500000000412512164577340020733 0ustar temototemoto00000000000000import errno from eventlet.support import get_errno from eventlet import patcher time = patcher.original('time') select = patcher.original("select") if hasattr(select, 'epoll'): epoll = select.epoll else: try: # http://pypi.python.org/pypi/select26/ from select26 import epoll except ImportError: try: import epoll as _epoll_mod except ImportError: raise ImportError( "No epoll implementation found in select module or PYTHONPATH") else: if hasattr(_epoll_mod, 'poll'): epoll = _epoll_mod.poll else: raise ImportError( 
"You have an old, buggy epoll module in PYTHONPATH." " Install http://pypi.python.org/pypi/python-epoll/" " NOT http://pypi.python.org/pypi/pyepoll/. " " easy_install pyepoll installs the wrong version.") from eventlet.hubs.hub import BaseHub from eventlet.hubs import poll from eventlet.hubs.poll import READ, WRITE # NOTE: we rely on the fact that the epoll flag constants # are identical in value to the poll constants class Hub(poll.Hub): def __init__(self, clock=time.time): BaseHub.__init__(self, clock) self.poll = epoll() try: # modify is required by select.epoll self.modify = self.poll.modify except AttributeError: self.modify = self.poll.register def add(self, evtype, fileno, cb): oldlisteners = bool(self.listeners[READ].get(fileno) or self.listeners[WRITE].get(fileno)) listener = BaseHub.add(self, evtype, fileno, cb) try: if not oldlisteners: # Means we've added a new listener self.register(fileno, new=True) else: self.register(fileno, new=False) except IOError, ex: # ignore EEXIST, #80 if get_errno(ex) != errno.EEXIST: raise return listener def do_poll(self, seconds): return self.poll.poll(seconds) eventlet-0.13.0/eventlet/hubs/selects.py0000644000175000017500000000356412164577340021105 0ustar temototemoto00000000000000import sys import errno from eventlet import patcher from eventlet.support import get_errno, clear_sys_exc_info select = patcher.original('select') time = patcher.original('time') from eventlet.hubs.hub import BaseHub, READ, WRITE, noop try: BAD_SOCK = set((errno.EBADF, errno.WSAENOTSOCK)) except AttributeError: BAD_SOCK = set((errno.EBADF,)) class Hub(BaseHub): def _remove_bad_fds(self): """ Iterate through fds, removing the ones that are bad per the operating system. 
""" for fd in self.listeners[READ].keys() + self.listeners[WRITE].keys(): try: select.select([fd], [], [], 0) except select.error, e: if get_errno(e) in BAD_SOCK: self.remove_descriptor(fd) def wait(self, seconds=None): readers = self.listeners[READ] writers = self.listeners[WRITE] if not readers and not writers: if seconds: time.sleep(seconds) return try: r, w, er = select.select(readers.keys(), writers.keys(), readers.keys() + writers.keys(), seconds) except select.error, e: if get_errno(e) == errno.EINTR: return elif get_errno(e) in BAD_SOCK: self._remove_bad_fds() return else: raise for fileno in er: readers.get(fileno, noop).cb(fileno) writers.get(fileno, noop).cb(fileno) for listeners, events in ((readers, r), (writers, w)): for fileno in events: try: listeners.get(fileno, noop).cb(fileno) except self.SYSTEM_EXCEPTIONS: raise except: self.squelch_exception(fileno, sys.exc_info()) clear_sys_exc_info() eventlet-0.13.0/eventlet/wsgi.py0000644000175000017500000006517712164577340017463 0ustar temototemoto00000000000000import errno import os import sys import time import traceback import types import warnings from eventlet.green import urllib from eventlet.green import socket from eventlet.green import BaseHTTPServer from eventlet import greenpool from eventlet import greenio from eventlet.support import get_errno DEFAULT_MAX_SIMULTANEOUS_REQUESTS = 1024 DEFAULT_MAX_HTTP_VERSION = 'HTTP/1.1' MAX_REQUEST_LINE = 8192 MAX_HEADER_LINE = 8192 MAX_TOTAL_HEADER_SIZE = 65536 MINIMUM_CHUNK_SIZE = 4096 # %(client_port)s is also available DEFAULT_LOG_FORMAT= ('%(client_ip)s - - [%(date_time)s] "%(request_line)s"' ' %(status_code)s %(body_length)s %(wall_seconds).6f') __all__ = ['server', 'format_date_time'] # Weekday and month names for HTTP date/time formatting; always English! 
_weekdayname = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"] _monthname = [None, # Dummy so we can use 1-based month numbers "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"] def format_date_time(timestamp): """Formats a unix timestamp into an HTTP standard string.""" year, month, day, hh, mm, ss, wd, _y, _z = time.gmtime(timestamp) return "%s, %02d %3s %4d %02d:%02d:%02d GMT" % ( _weekdayname[wd], day, _monthname[month], year, hh, mm, ss ) # Collections of error codes to compare against. Not all attributes are set # on errno module on all platforms, so some are literals :( BAD_SOCK = set((errno.EBADF, 10053)) BROKEN_SOCK = set((errno.EPIPE, errno.ECONNRESET)) # special flag return value for apps class _AlreadyHandled(object): def __iter__(self): return self def next(self): raise StopIteration ALREADY_HANDLED = _AlreadyHandled() class Input(object): def __init__(self, rfile, content_length, wfile=None, wfile_line=None, chunked_input=False): self.rfile = rfile if content_length is not None: content_length = int(content_length) self.content_length = content_length self.wfile = wfile self.wfile_line = wfile_line self.position = 0 self.chunked_input = chunked_input self.chunk_length = -1 def _do_read(self, reader, length=None): if self.wfile is not None: ## 100 Continue self.wfile.write(self.wfile_line) self.wfile = None self.wfile_line = None if length is None and self.content_length is not None: length = self.content_length - self.position if length and length > self.content_length - self.position: length = self.content_length - self.position if not length: return '' try: read = reader(length) except greenio.SSL.ZeroReturnError: read = '' self.position += len(read) return read def _chunked_read(self, rfile, length=None, use_readline=False): if self.wfile is not None: ## 100 Continue self.wfile.write(self.wfile_line) self.wfile = None self.wfile_line = None try: if length == 0: return "" if length < 0: length = None if 
use_readline: reader = self.rfile.readline else: reader = self.rfile.read response = [] while self.chunk_length != 0: maxreadlen = self.chunk_length - self.position if length is not None and length < maxreadlen: maxreadlen = length if maxreadlen > 0: data = reader(maxreadlen) if not data: self.chunk_length = 0 raise IOError("unexpected end of file while parsing chunked data") datalen = len(data) response.append(data) self.position += datalen if self.chunk_length == self.position: rfile.readline() if length is not None: length -= datalen if length == 0: break if use_readline and data[-1] == "\n": break else: self.chunk_length = int(rfile.readline().split(";", 1)[0], 16) self.position = 0 if self.chunk_length == 0: rfile.readline() except greenio.SSL.ZeroReturnError: pass return ''.join(response) def read(self, length=None): if self.chunked_input: return self._chunked_read(self.rfile, length) return self._do_read(self.rfile.read, length) def readline(self, size=None): if self.chunked_input: return self._chunked_read(self.rfile, size, True) else: return self._do_read(self.rfile.readline, size) def readlines(self, hint=None): return self._do_read(self.rfile.readlines, hint) def __iter__(self): return iter(self.read()) def get_socket(self): return self.rfile._sock class HeaderLineTooLong(Exception): pass class HeadersTooLarge(Exception): pass class FileObjectForHeaders(object): def __init__(self, fp): self.fp = fp self.total_header_size = 0 def readline(self, size=-1): sz = size if size < 0: sz = MAX_HEADER_LINE rv = self.fp.readline(sz) if size < 0 and len(rv) >= MAX_HEADER_LINE: raise HeaderLineTooLong() self.total_header_size += len(rv) if self.total_header_size > MAX_TOTAL_HEADER_SIZE: raise HeadersTooLarge() return rv class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler): protocol_version = 'HTTP/1.1' minimum_chunk_size = MINIMUM_CHUNK_SIZE def setup(self): # overriding SocketServer.setup to correctly handle SSL.Connection objects conn = self.connection = 
self.request try: self.rfile = conn.makefile('rb', self.rbufsize) self.wfile = conn.makefile('wb', self.wbufsize) except (AttributeError, NotImplementedError): if hasattr(conn, 'send') and hasattr(conn, 'recv'): # it's an SSL.Connection self.rfile = socket._fileobject(conn, "rb", self.rbufsize) self.wfile = socket._fileobject(conn, "wb", self.wbufsize) else: # it's a SSLObject, or a martian raise NotImplementedError("wsgi.py doesn't support sockets "\ "of type %s" % type(conn)) def handle_one_request(self): if self.server.max_http_version: self.protocol_version = self.server.max_http_version if self.rfile.closed: self.close_connection = 1 return try: self.raw_requestline = self.rfile.readline(self.server.url_length_limit) if len(self.raw_requestline) == self.server.url_length_limit: self.wfile.write( "HTTP/1.0 414 Request URI Too Long\r\n" "Connection: close\r\nContent-length: 0\r\n\r\n") self.close_connection = 1 return except greenio.SSL.ZeroReturnError: self.raw_requestline = '' except socket.error, e: if get_errno(e) not in BAD_SOCK: raise self.raw_requestline = '' if not self.raw_requestline: self.close_connection = 1 return orig_rfile = self.rfile try: self.rfile = FileObjectForHeaders(self.rfile) if not self.parse_request(): return except HeaderLineTooLong: self.wfile.write( "HTTP/1.0 400 Header Line Too Long\r\n" "Connection: close\r\nContent-length: 0\r\n\r\n") self.close_connection = 1 return except HeadersTooLarge: self.wfile.write( "HTTP/1.0 400 Headers Too Large\r\n" "Connection: close\r\nContent-length: 0\r\n\r\n") self.close_connection = 1 return finally: self.rfile = orig_rfile content_length = self.headers.getheader('content-length') if content_length: try: int(content_length) except ValueError: self.wfile.write( "HTTP/1.0 400 Bad Request\r\n" "Connection: close\r\nContent-length: 0\r\n\r\n") self.close_connection = 1 return self.environ = self.get_environ() self.application = self.server.app try: self.server.outstanding_requests += 1 try: 
self.handle_one_response() except socket.error, e: # Broken pipe, connection reset by peer if get_errno(e) not in BROKEN_SOCK: raise finally: self.server.outstanding_requests -= 1 def handle_one_response(self): start = time.time() headers_set = [] headers_sent = [] wfile = self.wfile result = None use_chunked = [False] length = [0] status_code = [200] def write(data, _writelines=wfile.writelines): towrite = [] if not headers_set: raise AssertionError("write() before start_response()") elif not headers_sent: status, response_headers = headers_set headers_sent.append(1) header_list = [header[0].lower() for header in response_headers] towrite.append('%s %s\r\n' % (self.protocol_version, status)) for header in response_headers: towrite.append('%s: %s\r\n' % header) # send Date header? if 'date' not in header_list: towrite.append('Date: %s\r\n' % (format_date_time(time.time()),)) client_conn = self.headers.get('Connection', '').lower() send_keep_alive = False if self.close_connection == 0 and \ self.server.keepalive and (client_conn == 'keep-alive' or \ (self.request_version == 'HTTP/1.1' and not client_conn == 'close')): # only send keep-alives back to clients that sent them, # it's redundant for 1.1 connections send_keep_alive = (client_conn == 'keep-alive') self.close_connection = 0 else: self.close_connection = 1 if 'content-length' not in header_list: if self.request_version == 'HTTP/1.1': use_chunked[0] = True towrite.append('Transfer-Encoding: chunked\r\n') elif 'content-length' not in header_list: # client is 1.0 and therefore must read to EOF self.close_connection = 1 if self.close_connection: towrite.append('Connection: close\r\n') elif send_keep_alive: towrite.append('Connection: keep-alive\r\n') towrite.append('\r\n') # end of header writing if use_chunked[0]: ## Write the chunked encoding towrite.append("%x\r\n%s\r\n" % (len(data), data)) else: towrite.append(data) try: _writelines(towrite) length[0] = length[0] + sum(map(len, towrite)) except 
UnicodeEncodeError: self.server.log_message("Encountered non-ascii unicode while attempting to write wsgi response: %r" % [x for x in towrite if isinstance(x, unicode)]) self.server.log_message(traceback.format_exc()) _writelines( ["HTTP/1.1 500 Internal Server Error\r\n", "Connection: close\r\n", "Content-type: text/plain\r\n", "Content-length: 98\r\n", "Date: %s\r\n" % format_date_time(time.time()), "\r\n", ("Internal Server Error: wsgi application passed " "a unicode object to the server instead of a string.")]) def start_response(status, response_headers, exc_info=None): status_code[0] = status.split()[0] if exc_info: try: if headers_sent: # Re-raise original exception if headers sent raise exc_info[0], exc_info[1], exc_info[2] finally: # Avoid dangling circular ref exc_info = None capitalized_headers = [('-'.join([x.capitalize() for x in key.split('-')]), value) for key, value in response_headers] headers_set[:] = [status, capitalized_headers] return write try: try: result = self.application(self.environ, start_response) if (isinstance(result, _AlreadyHandled) or isinstance(getattr(result, '_obj', None), _AlreadyHandled)): self.close_connection = 1 return if not headers_sent and hasattr(result, '__len__') and \ 'Content-Length' not in [h for h, _v in headers_set[1]]: headers_set[1].append(('Content-Length', str(sum(map(len, result))))) towrite = [] towrite_size = 0 just_written_size = 0 for data in result: towrite.append(data) towrite_size += len(data) if towrite_size >= self.minimum_chunk_size: write(''.join(towrite)) towrite = [] just_written_size = towrite_size towrite_size = 0 if towrite: just_written_size = towrite_size write(''.join(towrite)) if not headers_sent or (use_chunked[0] and just_written_size): write('') except Exception: self.close_connection = 1 tb = traceback.format_exc() self.server.log_message(tb) if not headers_set: err_body = "" if(self.server.debug): err_body = tb start_response("500 Internal Server Error", [('Content-type', 
'text/plain'), ('Content-length', len(err_body))]) write(err_body) finally: if hasattr(result, 'close'): result.close() if (self.environ['eventlet.input'].chunked_input or self.environ['eventlet.input'].position \ < self.environ['eventlet.input'].content_length): ## Read and discard body if there was no pending 100-continue if not self.environ['eventlet.input'].wfile: # NOTE: MINIMUM_CHUNK_SIZE is used here for purpose different than chunking. # We use it only cause it's at hand and has reasonable value in terms of # emptying the buffer. while self.environ['eventlet.input'].read(MINIMUM_CHUNK_SIZE): pass finish = time.time() for hook, args, kwargs in self.environ['eventlet.posthooks']: hook(self.environ, *args, **kwargs) if self.server.log_output: self.server.log_message(self.server.log_format % { 'client_ip': self.get_client_ip(), 'client_port': self.client_address[1], 'date_time': self.log_date_time_string(), 'request_line': self.requestline, 'status_code': status_code[0], 'body_length': length[0], 'wall_seconds': finish - start, }) def get_client_ip(self): client_ip = self.client_address[0] if self.server.log_x_forwarded_for: forward = self.headers.get('X-Forwarded-For', '').replace(' ', '') if forward: client_ip = "%s,%s" % (forward, client_ip) return client_ip def get_environ(self): env = self.server.get_environ() env['REQUEST_METHOD'] = self.command env['SCRIPT_NAME'] = '' pq = self.path.split('?', 1) env['RAW_PATH_INFO'] = pq[0] env['PATH_INFO'] = urllib.unquote(pq[0]) if len(pq) > 1: env['QUERY_STRING'] = pq[1] if self.headers.typeheader is None: env['CONTENT_TYPE'] = self.headers.type else: env['CONTENT_TYPE'] = self.headers.typeheader length = self.headers.getheader('content-length') if length: env['CONTENT_LENGTH'] = length env['SERVER_PROTOCOL'] = 'HTTP/1.0' host, port = self.request.getsockname()[:2] env['SERVER_NAME'] = host env['SERVER_PORT'] = str(port) env['REMOTE_ADDR'] = self.client_address[0] env['REMOTE_PORT'] = str(self.client_address[1]) 
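The CGI-style header translation that `get_environ` performs (each `Header-Name` becomes `HTTP_HEADER_NAME`, with duplicate headers joined by commas) can be sketched as a stand-alone helper. This is a hypothetical simplification: the real loop also skips keys such as `CONTENT_TYPE` that were already placed in the environ.

```python
def headers_to_environ(raw_headers):
    # Translate raw "Name: value" header lines into WSGI/CGI environ keys,
    # as done in HttpProtocol.get_environ (simplified sketch).
    env = {}
    for line in raw_headers:
        k, v = line.split(':', 1)
        k = k.replace('-', '_').upper()
        envk = 'HTTP_' + k
        if envk in env:
            env[envk] += ',' + v.strip()   # repeated header: comma-join
        else:
            env[envk] = v.strip()
    return env

env = headers_to_environ(['X-Trace: a', 'x-trace: b', 'Host: example.org'])
assert env['HTTP_X_TRACE'] == 'a,b'
assert env['HTTP_HOST'] == 'example.org'
```

The comma-joining matches how HTTP defines repeated header fields: multiple occurrences are semantically equivalent to one field with a comma-separated value list.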
env['GATEWAY_INTERFACE'] = 'CGI/1.1' for h in self.headers.headers: k, v = h.split(':', 1) k = k.replace('-', '_').upper() v = v.strip() if k in env: continue envk = 'HTTP_' + k if envk in env: env[envk] += ',' + v else: env[envk] = v if env.get('HTTP_EXPECT') == '100-continue': wfile = self.wfile wfile_line = 'HTTP/1.1 100 Continue\r\n\r\n' else: wfile = None wfile_line = None chunked = env.get('HTTP_TRANSFER_ENCODING', '').lower() == 'chunked' env['wsgi.input'] = env['eventlet.input'] = Input( self.rfile, length, wfile=wfile, wfile_line=wfile_line, chunked_input=chunked) env['eventlet.posthooks'] = [] return env def finish(self): try: BaseHTTPServer.BaseHTTPRequestHandler.finish(self) except socket.error, e: # Broken pipe, connection reset by peer if get_errno(e) not in BROKEN_SOCK: raise greenio.shutdown_safe(self.connection) self.connection.close() class Server(BaseHTTPServer.HTTPServer): def __init__(self, socket, address, app, log=None, environ=None, max_http_version=None, protocol=HttpProtocol, minimum_chunk_size=None, log_x_forwarded_for=True, keepalive=True, log_output=True, log_format=DEFAULT_LOG_FORMAT, url_length_limit=MAX_REQUEST_LINE, debug=True): self.outstanding_requests = 0 self.socket = socket self.address = address if log: self.log = log else: self.log = sys.stderr self.app = app self.keepalive = keepalive self.environ = environ self.max_http_version = max_http_version self.protocol = protocol self.pid = os.getpid() self.minimum_chunk_size = minimum_chunk_size self.log_x_forwarded_for = log_x_forwarded_for self.log_output = log_output self.log_format = log_format self.url_length_limit = url_length_limit self.debug = debug def get_environ(self): d = { 'wsgi.errors': sys.stderr, 'wsgi.version': (1, 0), 'wsgi.multithread': True, 'wsgi.multiprocess': False, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', } # detect secure socket if hasattr(self.socket, 'do_handshake'): d['wsgi.url_scheme'] = 'https' d['HTTPS'] = 'on' if self.environ is not None: 
d.update(self.environ) return d def process_request(self, (socket, address)): # The actual request handling takes place in __init__, so we need to # set minimum_chunk_size before __init__ executes and we don't want to modify # class variable proto = types.InstanceType(self.protocol) if self.minimum_chunk_size is not None: proto.minimum_chunk_size = self.minimum_chunk_size proto.__init__(socket, address, self) def log_message(self, message): self.log.write(message + '\n') try: import ssl ACCEPT_EXCEPTIONS = (socket.error, ssl.SSLError) ACCEPT_ERRNO = set((errno.EPIPE, errno.EBADF, errno.ECONNRESET, ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_SSL)) except ImportError: ACCEPT_EXCEPTIONS = (socket.error,) ACCEPT_ERRNO = set((errno.EPIPE, errno.EBADF, errno.ECONNRESET)) def server(sock, site, log=None, environ=None, max_size=None, max_http_version=DEFAULT_MAX_HTTP_VERSION, protocol=HttpProtocol, server_event=None, minimum_chunk_size=None, log_x_forwarded_for=True, custom_pool=None, keepalive=True, log_output=True, log_format=DEFAULT_LOG_FORMAT, url_length_limit=MAX_REQUEST_LINE, debug=True): """ Start up a wsgi server handling requests from the supplied server socket. This function loops forever. The *sock* object will be closed after server exits, but the underlying file descriptor will remain open, so if you have a dup() of *sock*, it will remain usable. :param sock: Server socket, must be already bound to a port and listening. :param site: WSGI application function. :param log: File-like object that logs should be written to. If not specified, sys.stderr is used. :param environ: Additional parameters that go into the environ dictionary of every request. :param max_size: Maximum number of client connections opened at any time by this server. :param max_http_version: Set to "HTTP/1.0" to make the server pretend it only supports HTTP 1.0. This can help with applications or clients that don't behave properly using HTTP 1.1. :param protocol: Protocol class. Deprecated. 
:param server_event: Used to collect the Server object. Deprecated.
:param minimum_chunk_size: Minimum size in bytes for http chunks. This can be used to improve performance of applications which yield many small strings, though using it technically violates the WSGI spec.
:param log_x_forwarded_for: If True (the default), logs the contents of the x-forwarded-for header in addition to the actual client ip address in the 'client_ip' field of the log line.
:param custom_pool: A custom GreenPool instance which is used to spawn client green threads. If this is supplied, max_size is ignored.
:param keepalive: If set to False, disables keepalives on the server; all connections will be closed after serving one request.
:param log_output: A Boolean indicating if the server will log data or not.
:param log_format: A Python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. The default is a good example of how to use it.
:param url_length_limit: A maximum allowed length of the request url. If exceeded, a 414 error is returned.
:param debug: True if the server should send exception tracebacks to the clients on 500 errors. If False, the server will respond with empty bodies.
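The `log_format` parameter is an ordinary %-style format string filled from a dict, so a custom format only needs to use the documented keys. A sketch, reproducing what I understand the module's default format to be (the exact `DEFAULT_LOG_FORMAT` string is an assumption here; check the wsgi module source):

```python
# Assumed to match eventlet.wsgi.DEFAULT_LOG_FORMAT; the values below are
# made-up sample data for one request.
DEFAULT_LOG_FORMAT = ('%(client_ip)s - - [%(date_time)s] "%(request_line)s"'
                      ' %(status_code)s %(body_length)s %(wall_seconds).6f')

line = DEFAULT_LOG_FORMAT % {
    'client_ip': '127.0.0.1',
    'date_time': '01/Jan/2013 00:00:00',
    'request_line': 'GET / HTTP/1.1',
    'status_code': '200',
    'body_length': 42,
    'wall_seconds': 0.001234,
}
print(line)  # 127.0.0.1 - - [01/Jan/2013 00:00:00] "GET / HTTP/1.1" 200 42 0.001234
```

Any format string using this same set of keys can be passed as `log_format` to `server()`.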
""" serv = Server(sock, sock.getsockname(), site, log, environ=environ, max_http_version=max_http_version, protocol=protocol, minimum_chunk_size=minimum_chunk_size, log_x_forwarded_for=log_x_forwarded_for, keepalive=keepalive, log_output=log_output, log_format=log_format, url_length_limit=url_length_limit, debug=debug) if server_event is not None: server_event.send(serv) if max_size is None: max_size = DEFAULT_MAX_SIMULTANEOUS_REQUESTS if custom_pool is not None: pool = custom_pool else: pool = greenpool.GreenPool(max_size) try: host, port = sock.getsockname()[:2] port = ':%s' % (port, ) if hasattr(sock, 'do_handshake'): scheme = 'https' if port == ':443': port = '' else: scheme = 'http' if port == ':80': port = '' serv.log.write("(%s) wsgi starting up on %s://%s%s/\n" % ( serv.pid, scheme, host, port)) while True: try: client_socket = sock.accept() if debug: serv.log.write("(%s) accepted %r\n" % ( serv.pid, client_socket[1])) try: pool.spawn_n(serv.process_request, client_socket) except AttributeError: warnings.warn("wsgi's pool should be an instance of " \ "eventlet.greenpool.GreenPool, is %s. Please convert your"\ " call site to use GreenPool instead" % type(pool), DeprecationWarning, stacklevel=2) pool.execute_async(serv.process_request, client_socket) except ACCEPT_EXCEPTIONS, e: if get_errno(e) not in ACCEPT_ERRNO: raise except (KeyboardInterrupt, SystemExit): serv.log.write("wsgi exiting\n") break finally: try: # NOTE: It's not clear whether we want this to leave the # socket open or close it. Use cases like Spawning want # the underlying fd to remain open, but if we're going # that far we might as well not bother closing sock at # all. 
sock.close() except socket.error, e: if get_errno(e) not in BROKEN_SOCK: traceback.print_exc() eventlet-0.13.0/eventlet/twistedutil/0000755000175000017500000000000012164600754020475 5ustar temototemoto00000000000000eventlet-0.13.0/eventlet/twistedutil/__init__.py0000644000175000017500000000420112164577340022607 0ustar temototemoto00000000000000from eventlet.hubs import get_hub from eventlet import spawn, getcurrent def block_on(deferred): cur = [getcurrent()] synchronous = [] def cb(value): if cur: if getcurrent() is cur[0]: synchronous.append((value, None)) else: cur[0].switch(value) return value def eb(fail): if cur: if getcurrent() is cur[0]: synchronous.append((None, fail)) else: fail.throwExceptionIntoGenerator(cur[0]) deferred.addCallbacks(cb, eb) if synchronous: result, fail = synchronous[0] if fail is not None: fail.raiseException() return result try: return get_hub().switch() finally: del cur[0] def _putResultInDeferred(deferred, f, args, kwargs): try: result = f(*args, **kwargs) except: from twisted.python import failure f = failure.Failure() deferred.errback(f) else: deferred.callback(result) def deferToGreenThread(func, *args, **kwargs): from twisted.internet import defer d = defer.Deferred() spawn(_putResultInDeferred, d, func, args, kwargs) return d def callInGreenThread(func, *args, **kwargs): return spawn(func, *args, **kwargs) if __name__=='__main__': import sys try: num = int(sys.argv[1]) except: sys.exit('Supply number of test as an argument, 0, 1, 2 or 3') from twisted.internet import reactor def test(): print block_on(reactor.resolver.getHostByName('www.google.com')) print block_on(reactor.resolver.getHostByName('###')) if num==0: test() elif num==1: spawn(test) from eventlet.api import sleep print 'sleeping..' sleep(5) print 'done sleeping..' 
elif num==2: from eventlet.twistedutil import join_reactor spawn(test) reactor.run() elif num==3: from eventlet.twistedutil import join_reactor print "fails because it's impossible to use block_on from the mainloop" reactor.callLater(0, test) reactor.run() eventlet-0.13.0/eventlet/twistedutil/protocol.py0000644000175000017500000003153512164577340022723 0ustar temototemoto00000000000000"""Basic twisted protocols converted to synchronous mode""" import sys from twisted.internet.protocol import Protocol as twistedProtocol from twisted.internet.error import ConnectionDone from twisted.internet.protocol import Factory, ClientFactory from twisted.internet import main from twisted.python import failure from eventlet import greenthread from eventlet import getcurrent from eventlet.coros import Queue from eventlet.event import Event as BaseEvent class ValueQueue(Queue): """Queue that keeps the last item forever in the queue if it's an exception. Useful if you send an exception over queue only once, and once sent it must be always available. """ def send(self, value=None, exc=None): if exc is not None or not self.has_error(): Queue.send(self, value, exc) def wait(self): """The difference from Queue.wait: if there is an only item in the Queue and it is an exception, raise it, but keep it in the Queue, so that future calls to wait() will raise it again. 
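The `ValueQueue` invariant described above — an exception, once queued, is never consumed, so every later `wait()` re-raises it — can be shown without greenthreads. A pure-Python sketch (the `StickyErrorQueue` class is hypothetical; the real class switches into waiting greenlets instead of raising directly):

```python
from collections import deque

class StickyErrorQueue(object):
    def __init__(self):
        # (value, exc) pairs, mirroring the item layout of eventlet.coros.Queue
        self.items = deque()

    def has_error(self):
        return bool(self.items) and self.items[-1][1] is not None

    def send(self, value=None, exc=None):
        # as in ValueQueue.send: nothing may be queued after an error
        if exc is not None or not self.has_error():
            self.items.append((value, exc))

    def wait(self):
        if self.has_error() and len(self.items) == 1:
            # the last item is an exception: raise it but keep it queued
            raise self.items[0][1]
        value, exc = self.items.popleft()
        if exc is not None:
            raise exc
        return value

q = StickyErrorQueue()
q.send('data')
q.send(exc=IOError('connection closed'))
assert q.wait() == 'data'      # ordinary items drain first
for _ in range(2):             # the error then raises on every call
    try:
        q.wait()
    except IOError:
        pass
assert q.has_error()           # ...and is still queued afterwards
```

This is the property the docstring calls out: a connection-lost error sent once remains permanently observable to all future readers.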
""" if self.has_error() and len(self.items)==1: # the last item, which is an exception, raise without emptying the Queue getcurrent().throw(*self.items[0][1]) else: return Queue.wait(self) def has_error(self): return self.items and self.items[-1][1] is not None class Event(BaseEvent): def send(self, value, exc=None): if self.ready(): self.reset() return BaseEvent.send(self, value, exc) def send_exception(self, *throw_args): if self.ready(): self.reset() return BaseEvent.send_exception(self, *throw_args) class Producer2Event(object): # implements IPullProducer def __init__(self, event): self.event = event def resumeProducing(self): self.event.send(1) def stopProducing(self): del self.event class GreenTransportBase(object): transportBufferSize = None def __init__(self, transportBufferSize=None): if transportBufferSize is not None: self.transportBufferSize = transportBufferSize self._queue = ValueQueue() self._write_event = Event() self._disconnected_event = Event() def build_protocol(self): return self.protocol_class(self) def _got_transport(self, transport): self._queue.send(transport) def _got_data(self, data): self._queue.send(data) def _connectionLost(self, reason): self._disconnected_event.send(reason.value) self._queue.send_exception(reason.value) self._write_event.send_exception(reason.value) def _wait(self): if self.disconnecting or self._disconnected_event.ready(): if self._queue: return self._queue.wait() else: raise self._disconnected_event.wait() self.resumeProducing() try: return self._queue.wait() finally: self.pauseProducing() def write(self, data, wait=True): if self._disconnected_event.ready(): raise self._disconnected_event.wait() if wait: self._write_event.reset() self.transport.write(data) self._write_event.wait() else: self.transport.write(data) def loseConnection(self, connDone=failure.Failure(main.CONNECTION_DONE), wait=True): self.transport.unregisterProducer() self.transport.loseConnection(connDone) if wait: self._disconnected_event.wait() 
def __getattr__(self, item): if item=='transport': raise AttributeError(item) if hasattr(self, 'transport'): try: return getattr(self.transport, item) except AttributeError: me = type(self).__name__ trans = type(self.transport).__name__ raise AttributeError("Neither %r nor %r has attribute %r" % (me, trans, item)) else: raise AttributeError(item) def resumeProducing(self): self.paused -= 1 if self.paused==0: self.transport.resumeProducing() def pauseProducing(self): self.paused += 1 if self.paused==1: self.transport.pauseProducing() def _init_transport_producer(self): self.transport.pauseProducing() self.paused = 1 def _init_transport(self): transport = self._queue.wait() self.transport = transport if self.transportBufferSize is not None: transport.bufferSize = self.transportBufferSize self._init_transport_producer() transport.registerProducer(Producer2Event(self._write_event), False) class Protocol(twistedProtocol): def __init__(self, recepient): self._recepient = recepient def connectionMade(self): self._recepient._got_transport(self.transport) def dataReceived(self, data): self._recepient._got_data(data) def connectionLost(self, reason): self._recepient._connectionLost(reason) class UnbufferedTransport(GreenTransportBase): """A very simple implementation of a green transport without an additional buffer""" protocol_class = Protocol def recv(self): """Receive a single chunk of undefined size. Return '' if connection was closed cleanly, raise the exception if it was closed in a non clean fashion. After that all successive calls return ''. """ if self._disconnected_event.ready(): return '' try: return self._wait() except ConnectionDone: return '' def read(self): """Read the data from the socket until the connection is closed cleanly. If connection was closed in a non-clean fashion, the appropriate exception is raised. In that case already received data is lost. Next time read() is called it returns ''. 
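`GreenTransportBase` counts nested pause/resume calls and only toggles the real transport at the 0↔1 edges, so balanced nested pauses never resume production early. A minimal sketch of that counter in isolation (`FakeTransport` and `PauseCounter` are hypothetical stand-ins for the Twisted transport and the mixin above):

```python
class FakeTransport(object):
    """Stand-in for a Twisted transport; records producing state only."""
    def __init__(self):
        self.producing = False
    def resumeProducing(self):
        self.producing = True
    def pauseProducing(self):
        self.producing = False

class PauseCounter(object):
    def __init__(self, transport):
        self.transport = transport
        self.transport.pauseProducing()
        self.paused = 1                    # mirrors _init_transport_producer()
    def resumeProducing(self):
        self.paused -= 1
        if self.paused == 0:               # only the outermost resume acts
            self.transport.resumeProducing()
    def pauseProducing(self):
        self.paused += 1
        if self.paused == 1:               # only the first pause acts
            self.transport.pauseProducing()

t = FakeTransport()
pc = PauseCounter(t)
pc.resumeProducing()        # 1 -> 0: transport actually resumes
assert t.producing
pc.pauseProducing()         # 0 -> 1: transport actually pauses
assert not t.producing
```

This is what lets `_wait()` bracket every blocking read with resume/pause without fighting concurrent callers.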
""" result = '' while True: recvd = self.recv() if not recvd: break result += recvd return result # iterator protocol: def __iter__(self): return self def next(self): result = self.recv() if not result: raise StopIteration return result class GreenTransport(GreenTransportBase): protocol_class = Protocol _buffer = '' _error = None def read(self, size=-1): """Read size bytes or until EOF""" if not self._disconnected_event.ready(): try: while len(self._buffer) < size or size < 0: self._buffer += self._wait() except ConnectionDone: pass except: if not self._disconnected_event.has_exception(): raise if size>=0: result, self._buffer = self._buffer[:size], self._buffer[size:] else: result, self._buffer = self._buffer, '' if not result and self._disconnected_event.has_exception(): try: self._disconnected_event.wait() except ConnectionDone: pass return result def recv(self, buflen=None): """Receive a single chunk of undefined size but no bigger than buflen""" if not self._disconnected_event.ready(): self.resumeProducing() try: try: recvd = self._wait() #print 'received %r' % recvd self._buffer += recvd except ConnectionDone: pass except: if not self._disconnected_event.has_exception(): raise finally: self.pauseProducing() if buflen is None: result, self._buffer = self._buffer, '' else: result, self._buffer = self._buffer[:buflen], self._buffer[buflen:] if not result and self._disconnected_event.has_exception(): try: self._disconnected_event.wait() except ConnectionDone: pass return result # iterator protocol: def __iter__(self): return self def next(self): res = self.recv() if not res: raise StopIteration return res class GreenInstanceFactory(ClientFactory): def __init__(self, instance, event): self.instance = instance self.event = event def buildProtocol(self, addr): return self.instance def clientConnectionFailed(self, connector, reason): self.event.send_exception(reason.type, reason.value, reason.tb) class GreenClientCreator(object): """Connect to a remote host and 
return a connected green transport instance. """ gtransport_class = GreenTransport def __init__(self, reactor=None, gtransport_class=None, *args, **kwargs): if reactor is None: from twisted.internet import reactor self.reactor = reactor if gtransport_class is not None: self.gtransport_class = gtransport_class self.args = args self.kwargs = kwargs def _make_transport_and_factory(self): gtransport = self.gtransport_class(*self.args, **self.kwargs) protocol = gtransport.build_protocol() factory = GreenInstanceFactory(protocol, gtransport._queue) return gtransport, factory def connectTCP(self, host, port, *args, **kwargs): gtransport, factory = self._make_transport_and_factory() self.reactor.connectTCP(host, port, factory, *args, **kwargs) gtransport._init_transport() return gtransport def connectSSL(self, host, port, *args, **kwargs): gtransport, factory = self._make_transport_and_factory() self.reactor.connectSSL(host, port, factory, *args, **kwargs) gtransport._init_transport() return gtransport def connectTLS(self, host, port, *args, **kwargs): gtransport, factory = self._make_transport_and_factory() self.reactor.connectTLS(host, port, factory, *args, **kwargs) gtransport._init_transport() return gtransport def connectUNIX(self, address, *args, **kwargs): gtransport, factory = self._make_transport_and_factory() self.reactor.connectUNIX(address, factory, *args, **kwargs) gtransport._init_transport() return gtransport def connectSRV(self, service, domain, *args, **kwargs): SRVConnector = kwargs.pop('ConnectorClass', None) if SRVConnector is None: from twisted.names.srvconnect import SRVConnector gtransport, factory = self._make_transport_and_factory() c = SRVConnector(self.reactor, service, domain, factory, *args, **kwargs) c.connect() gtransport._init_transport() return gtransport class SimpleSpawnFactory(Factory): """Factory that spawns a new greenlet for each incoming connection. 
For an incoming connection a new greenlet is created using the provided callback as a function and a connected green transport instance as an argument. """ gtransport_class = GreenTransport def __init__(self, handler, gtransport_class=None, *args, **kwargs): if callable(handler): self.handler = handler else: self.handler = handler.send if hasattr(handler, 'send_exception'): self.exc_handler = handler.send_exception if gtransport_class is not None: self.gtransport_class = gtransport_class self.args = args self.kwargs = kwargs def exc_handler(self, *args): pass def buildProtocol(self, addr): gtransport = self.gtransport_class(*self.args, **self.kwargs) protocol = gtransport.build_protocol() protocol.factory = self self._do_spawn(gtransport, protocol) return protocol def _do_spawn(self, gtransport, protocol): greenthread.spawn(self._run_handler, gtransport, protocol) def _run_handler(self, gtransport, protocol): try: gtransport._init_transport() except Exception: self.exc_handler(*sys.exc_info()) else: self.handler(gtransport) class SpawnFactory(SimpleSpawnFactory): """An extension to SimpleSpawnFactory that provides some control over the greenlets it has spawned. 
""" def __init__(self, handler, gtransport_class=None, *args, **kwargs): self.greenlets = set() SimpleSpawnFactory.__init__(self, handler, gtransport_class, *args, **kwargs) def _do_spawn(self, gtransport, protocol): g = greenthread.spawn(self._run_handler, gtransport, protocol) self.greenlets.add(g) g.link(lambda *_: self.greenlets.remove(g)) def waitall(self): results = [] for g in self.greenlets: results.append(g.wait()) return results eventlet-0.13.0/eventlet/twistedutil/protocols/0000755000175000017500000000000012164600754022521 5ustar temototemoto00000000000000eventlet-0.13.0/eventlet/twistedutil/protocols/__init__.py0000644000175000017500000000000012164577340024624 0ustar temototemoto00000000000000eventlet-0.13.0/eventlet/twistedutil/protocols/basic.py0000644000175000017500000000165012164577340024162 0ustar temototemoto00000000000000from twisted.protocols import basic from twisted.internet.error import ConnectionDone from eventlet.twistedutil.protocol import GreenTransportBase class LineOnlyReceiver(basic.LineOnlyReceiver): def __init__(self, recepient): self._recepient = recepient def connectionMade(self): self._recepient._got_transport(self.transport) def connectionLost(self, reason): self._recepient._connectionLost(reason) def lineReceived(self, line): self._recepient._got_data(line) class LineOnlyReceiverTransport(GreenTransportBase): protocol_class = LineOnlyReceiver def readline(self): return self._wait() def sendline(self, line): self.protocol.sendLine(line) # iterator protocol: def __iter__(self): return self def next(self): try: return self.readline() except ConnectionDone: raise StopIteration eventlet-0.13.0/eventlet/twistedutil/join_reactor.py0000644000175000017500000000064612164577340023537 0ustar temototemoto00000000000000"""Integrate eventlet with twisted's reactor mainloop. You generally don't have to use it unless you need to call reactor.run() yourself. 
""" from eventlet.hubs.twistedr import BaseTwistedHub from eventlet.support import greenlets as greenlet from eventlet.hubs import _threadlocal, use_hub use_hub(BaseTwistedHub) assert not hasattr(_threadlocal, 'hub') hub = _threadlocal.hub = _threadlocal.Hub(greenlet.getcurrent()) eventlet-0.13.0/eventlet/queue.py0000644000175000017500000004117112164577340017622 0ustar temototemoto00000000000000# Copyright (c) 2009 Denis Bilenko, denis.bilenko at gmail com # Copyright (c) 2010 Eventlet Contributors (see AUTHORS) # and licensed under the MIT license: # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """Synchronized queues. The :mod:`eventlet.queue` module implements multi-producer, multi-consumer queues that work across greenlets, with the API similar to the classes found in the standard :mod:`Queue` and :class:`multiprocessing ` modules. A major difference is that queues in this module operate as channels when initialized with *maxsize* of zero. 
In such case, both :meth:`Queue.empty` and :meth:`Queue.full` return ``True`` and :meth:`Queue.put` always blocks until a call to :meth:`Queue.get` retrieves the item. An interesting difference, made possible because of greenthreads, is that :meth:`Queue.qsize`, :meth:`Queue.empty`, and :meth:`Queue.full` *can* be used as indicators of whether the subsequent :meth:`Queue.get` or :meth:`Queue.put` will not block. The new methods :meth:`Queue.getting` and :meth:`Queue.putting` report on the number of greenthreads blocking in :meth:`put ` or :meth:`get ` respectively. """ import sys import heapq import collections import traceback from Queue import Full, Empty _NONE = object() from eventlet.hubs import get_hub from eventlet.greenthread import getcurrent from eventlet.event import Event from eventlet.timeout import Timeout __all__ = ['Queue', 'PriorityQueue', 'LifoQueue', 'LightQueue', 'Full', 'Empty'] class Waiter(object): """A low level synchronization class. Wrapper around greenlet's ``switch()`` and ``throw()`` calls that makes them safe: * switching will occur only if the waiting greenlet is executing :meth:`wait` method currently. Otherwise, :meth:`switch` and :meth:`throw` are no-ops. * any error raised in the greenlet is handled inside :meth:`switch` and :meth:`throw` The :meth:`switch` and :meth:`throw` methods must only be called from the :class:`Hub` greenlet. The :meth:`wait` method must be called from a greenlet other than :class:`Hub`. 
""" __slots__ = ['greenlet'] def __init__(self): self.greenlet = None def __repr__(self): if self.waiting: waiting = ' waiting' else: waiting = '' return '<%s at %s%s greenlet=%r>' % (type(self).__name__, hex(id(self)), waiting, self.greenlet) def __str__(self): """ >>> print Waiter() """ if self.waiting: waiting = ' waiting' else: waiting = '' return '<%s%s greenlet=%s>' % (type(self).__name__, waiting, self.greenlet) def __nonzero__(self): return self.greenlet is not None @property def waiting(self): return self.greenlet is not None def switch(self, value=None): """Wake up the greenlet that is calling wait() currently (if there is one). Can only be called from Hub's greenlet. """ assert getcurrent() is get_hub().greenlet, "Can only use Waiter.switch method from the mainloop" if self.greenlet is not None: try: self.greenlet.switch(value) except: traceback.print_exc() def throw(self, *throw_args): """Make greenlet calling wait() wake up (if there is a wait()). Can only be called from Hub's greenlet. """ assert getcurrent() is get_hub().greenlet, "Can only use Waiter.switch method from the mainloop" if self.greenlet is not None: try: self.greenlet.throw(*throw_args) except: traceback.print_exc() # XXX should be renamed to get() ? and the whole class is called Receiver? def wait(self): """Wait until switch() or throw() is called. """ assert self.greenlet is None, 'This Waiter is already used by %r' % (self.greenlet, ) self.greenlet = getcurrent() try: return get_hub().switch() finally: self.greenlet = None class LightQueue(object): """ This is a variant of Queue that behaves mostly like the standard :class:`Queue`. It differs by not supporting the :meth:`task_done ` or :meth:`join ` methods, and is a little faster for not having that overhead. 
""" def __init__(self, maxsize=None): if maxsize is None or maxsize < 0: #None is not comparable in 3.x self.maxsize = None else: self.maxsize = maxsize self.getters = set() self.putters = set() self._event_unlock = None self._init(maxsize) # QQQ make maxsize into a property with setter that schedules unlock if necessary def _init(self, maxsize): self.queue = collections.deque() def _get(self): return self.queue.popleft() def _put(self, item): self.queue.append(item) def __repr__(self): return '<%s at %s %s>' % (type(self).__name__, hex(id(self)), self._format()) def __str__(self): return '<%s %s>' % (type(self).__name__, self._format()) def _format(self): result = 'maxsize=%r' % (self.maxsize, ) if getattr(self, 'queue', None): result += ' queue=%r' % self.queue if self.getters: result += ' getters[%s]' % len(self.getters) if self.putters: result += ' putters[%s]' % len(self.putters) if self._event_unlock is not None: result += ' unlocking' return result def qsize(self): """Return the size of the queue.""" return len(self.queue) def resize(self, size): """Resizes the queue's maximum size. If the size is increased, and there are putters waiting, they may be woken up.""" if self.maxsize is not None and (size is None or size > self.maxsize): # None is not comparable in 3.x # Maybe wake some stuff up self._schedule_unlock() self.maxsize = size def putting(self): """Returns the number of greenthreads that are blocked waiting to put items into the queue.""" return len(self.putters) def getting(self): """Returns the number of greenthreads that are blocked waiting on an empty queue.""" return len(self.getters) def empty(self): """Return ``True`` if the queue is empty, ``False`` otherwise.""" return not self.qsize() def full(self): """Return ``True`` if the queue is full, ``False`` otherwise. ``Queue(None)`` is never full. 
""" return self.maxsize is not None and self.qsize() >= self.maxsize # None is not comparable in 3.x def put(self, item, block=True, timeout=None): """Put an item into the queue. If optional arg *block* is true and *timeout* is ``None`` (the default), block if necessary until a free slot is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :class:`Full` exception if no free slot was available within that time. Otherwise (*block* is false), put an item on the queue if a free slot is immediately available, else raise the :class:`Full` exception (*timeout* is ignored in that case). """ if self.maxsize is None or self.qsize() < self.maxsize: # there's a free slot, put an item right away self._put(item) if self.getters: self._schedule_unlock() elif not block and get_hub().greenlet is getcurrent(): # we're in the mainloop, so we cannot wait; we can switch() to other greenlets though # find a getter and deliver an item to it while self.getters: getter = self.getters.pop() if getter: self._put(item) item = self._get() getter.switch(item) return raise Full elif block: waiter = ItemWaiter(item) self.putters.add(waiter) timeout = Timeout(timeout, Full) try: if self.getters: self._schedule_unlock() result = waiter.wait() assert result is waiter, "Invalid switch into Queue.put: %r" % (result, ) if waiter.item is not _NONE: self._put(item) finally: timeout.cancel() self.putters.discard(waiter) else: raise Full def put_nowait(self, item): """Put an item into the queue without blocking. Only enqueue the item if a free slot is immediately available. Otherwise raise the :class:`Full` exception. """ self.put(item, False) def get(self, block=True, timeout=None): """Remove and return an item from the queue. If optional args *block* is true and *timeout* is ``None`` (the default), block if necessary until an item is available. 
If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :class:`Empty` exception if no item was available within that time. Otherwise (*block* is false), return an item if one is immediately available, else raise the :class:`Empty` exception (*timeout* is ignored in that case). """ if self.qsize(): if self.putters: self._schedule_unlock() return self._get() elif not block and get_hub().greenlet is getcurrent(): # special case to make get_nowait() runnable in the mainloop greenlet # there are no items in the queue; try to fix the situation by unlocking putters while self.putters: putter = self.putters.pop() if putter: putter.switch(putter) if self.qsize(): return self._get() raise Empty elif block: waiter = Waiter() timeout = Timeout(timeout, Empty) try: self.getters.add(waiter) if self.putters: self._schedule_unlock() return waiter.wait() finally: self.getters.discard(waiter) timeout.cancel() else: raise Empty def get_nowait(self): """Remove and return an item from the queue without blocking. Only get an item if one is immediately available. Otherwise raise the :class:`Empty` exception. """ return self.get(False) def _unlock(self): try: while True: if self.qsize() and self.getters: getter = self.getters.pop() if getter: try: item = self._get() except: getter.throw(*sys.exc_info()) else: getter.switch(item) elif self.putters and self.getters: putter = self.putters.pop() if putter: getter = self.getters.pop() if getter: item = putter.item putter.item = _NONE # this makes greenlet calling put() not to call _put() again self._put(item) item = self._get() getter.switch(item) putter.switch(putter) else: self.putters.add(putter) elif self.putters and (self.getters or self.maxsize is None or self.qsize() < self.maxsize): putter = self.putters.pop() putter.switch(putter) else: break finally: self._event_unlock = None # QQQ maybe it's possible to obtain this info from libevent? # i.e. 
whether this event is pending _OR_ currently executing # testcase: 2 greenlets: while True: q.put(q.get()) - nothing else has a change to execute # to avoid this, schedule unlock with timer(0, ...) once in a while def _schedule_unlock(self): if self._event_unlock is None: self._event_unlock = get_hub().schedule_call_global(0, self._unlock) class ItemWaiter(Waiter): __slots__ = ['item'] def __init__(self, item): Waiter.__init__(self) self.item = item class Queue(LightQueue): '''Create a queue object with a given maximum size. If *maxsize* is less than zero or ``None``, the queue size is infinite. ``Queue(0)`` is a channel, that is, its :meth:`put` method always blocks until the item is delivered. (This is unlike the standard :class:`Queue`, where 0 means infinite size). In all other respects, this Queue class resembled the standard library, :class:`Queue`. ''' def __init__(self, maxsize=None): LightQueue.__init__(self, maxsize) self.unfinished_tasks = 0 self._cond = Event() def _format(self): result = LightQueue._format(self) if self.unfinished_tasks: result += ' tasks=%s _cond=%s' % (self.unfinished_tasks, self._cond) return result def _put(self, item): LightQueue._put(self, item) self._put_bookkeeping() def _put_bookkeeping(self): self.unfinished_tasks += 1 if self._cond.ready(): self._cond.reset() def task_done(self): '''Indicate that a formerly enqueued task is complete. Used by queue consumer threads. For each :meth:`get ` used to fetch a task, a subsequent call to :meth:`task_done` tells the queue that the processing on the task is complete. If a :meth:`join` is currently blocking, it will resume when all items have been processed (meaning that a :meth:`task_done` call was received for every item that had been :meth:`put ` into the queue). Raises a :exc:`ValueError` if called more times than there were items placed in the queue. 
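Since `task_done()` and `join()` follow the stdlib `Queue` contract, their accounting can be shown with OS threads in place of greenthreads — a sketch with a single daemon worker:

```python
import threading
from queue import Queue   # Python 3; the module is named `Queue` on Python 2

q = Queue()
done = []

def worker():
    while True:
        item = q.get()
        done.append(item * 2)
        q.task_done()          # one call per fetched item

threading.Thread(target=worker, daemon=True).start()
for i in range(3):
    q.put(i)                   # each put increments unfinished_tasks
q.join()                       # unblocks once every item got a task_done()
assert sorted(done) == [0, 2, 4]
```

In eventlet's version the counter is `unfinished_tasks` and the blocking is done on an `Event` instead of a condition variable, but the caller-visible behavior is the same.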
''' if self.unfinished_tasks <= 0: raise ValueError('task_done() called too many times') self.unfinished_tasks -= 1 if self.unfinished_tasks == 0: self._cond.send(None) def join(self): '''Block until all items in the queue have been gotten and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls :meth:`task_done` to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, :meth:`join` unblocks. ''' self._cond.wait() class PriorityQueue(Queue): '''A subclass of :class:`Queue` that retrieves entries in priority order (lowest first). Entries are typically tuples of the form: ``(priority number, data)``. ''' def _init(self, maxsize): self.queue = [] def _put(self, item, heappush=heapq.heappush): heappush(self.queue, item) self._put_bookkeeping() def _get(self, heappop=heapq.heappop): return heappop(self.queue) class LifoQueue(Queue): '''A subclass of :class:`Queue` that retrieves most recently added entries first.''' def _init(self, maxsize): self.queue = [] def _put(self, item): self.queue.append(item) self._put_bookkeeping() def _get(self): return self.queue.pop() eventlet-0.13.0/eventlet/greenthread.py0000644000175000017500000002543112164577340020767 0ustar temototemoto00000000000000from collections import deque import sys from eventlet import event from eventlet import hubs from eventlet import timeout from eventlet.hubs import timer from eventlet.support import greenlets as greenlet import warnings __all__ = ['getcurrent', 'sleep', 'spawn', 'spawn_n', 'spawn_after', 'spawn_after_local', 'GreenThread'] getcurrent = greenlet.getcurrent def sleep(seconds=0): """Yield control to another eligible coroutine until at least *seconds* have elapsed. *seconds* may be specified as an integer, or a float if fractional seconds are desired. 
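The PriorityQueue and LifoQueue subclasses shown earlier differ only in their storage discipline: a ``heapq`` heap (lowest entry first) versus a plain list used as a stack. A stdlib-only sketch of the same ordering (class names are illustrative, not eventlet's):

```python
import heapq

class PrioritySketch(object):
    """Storage discipline of PriorityQueue._put/_get: a binary heap."""
    def __init__(self):
        self.queue = []
    def put(self, item):
        heapq.heappush(self.queue, item)
    def get(self):
        return heapq.heappop(self.queue)  # always the smallest entry

class LifoSketch(object):
    """Storage discipline of LifoQueue._put/_get: a list as a stack."""
    def __init__(self):
        self.queue = []
    def put(self, item):
        self.queue.append(item)
    def get(self):
        return self.queue.pop()           # most recently added entry

pq = PrioritySketch()
for entry in [(3, 'low'), (1, 'high'), (2, 'mid')]:
    pq.put(entry)
order = [pq.get() for _ in range(3)]      # lowest priority number first

lq = LifoSketch()
for x in 'abc':
    lq.put(x)
lifo_order = [lq.get() for _ in range(3)]  # newest first
```

Entries are typically ``(priority number, data)`` tuples, so heap ordering compares the priority number first.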
Calling :func:`~greenthread.sleep` with *seconds* of 0 is the canonical way of expressing a cooperative yield. For example, if one is looping over a large list performing an expensive calculation without calling any socket methods, it's a good idea to call ``sleep(0)`` occasionally; otherwise nothing else will run. """ hub = hubs.get_hub() current = getcurrent() assert hub.greenlet is not current, 'do not call blocking functions from the mainloop' timer = hub.schedule_call_global(seconds, current.switch) try: hub.switch() finally: timer.cancel() def spawn(func, *args, **kwargs): """Create a greenthread to run ``func(*args, **kwargs)``. Returns a :class:`GreenThread` object which you can use to get the results of the call. Execution control returns immediately to the caller; the created greenthread is merely scheduled to be run at the next available opportunity. Use :func:`spawn_after` to arrange for greenthreads to be spawned after a finite delay. """ hub = hubs.get_hub() g = GreenThread(hub.greenlet) hub.schedule_call_global(0, g.switch, func, args, kwargs) return g def spawn_n(func, *args, **kwargs): """Same as :func:`spawn`, but returns a ``greenlet`` object from which it is not possible to retrieve either a return value or whether it raised any exceptions. This is faster than :func:`spawn`; it is fastest if there are no keyword arguments. If an exception is raised in the function, spawn_n prints a stack trace; the print can be disabled by calling :func:`eventlet.debug.hub_exceptions` with False. """ return _spawn_n(0, func, args, kwargs)[1] def spawn_after(seconds, func, *args, **kwargs): """Spawns *func* after *seconds* have elapsed. It runs as scheduled even if the current greenthread has completed. *seconds* may be specified as an integer, or a float if fractional seconds are desired. The *func* will be called with the given *args* and keyword arguments *kwargs*, and will be executed within its own greenthread. 
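The ``sleep(0)`` cooperative yield described above can be illustrated without eventlet at all. The toy scheduler below round-robins plain generators, with ``yield`` playing the role of ``sleep(0)``; this is only an analogy for why long-running code must yield periodically, not eventlet's hub:

```python
from collections import deque

def toy_scheduler(tasks):
    """Round-robin over generators; each yield is like a sleep(0)."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))  # run the task until its next yield
            ready.append(task)        # then reschedule it at the back
        except StopIteration:
            pass                      # task finished; drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield '%s%d' % (name, i)      # cooperative yield point

trace = toy_scheduler([worker('a', 2), worker('b', 2)])
# the two workers interleave because each yields: a0, b0, a1, b1
```

A worker that looped without yielding would hog the scheduler forever, which is exactly the situation ``sleep(0)`` avoids.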
The return value of :func:`spawn_after` is a :class:`GreenThread` object, which can be used to retrieve the results of the call. To cancel the spawn and prevent *func* from being called, call :meth:`GreenThread.cancel` on the return value of :func:`spawn_after`. This will not abort the function if it's already started running, which is generally the desired behavior. If terminating *func* regardless of whether it's started or not is the desired behavior, call :meth:`GreenThread.kill`. """ hub = hubs.get_hub() g = GreenThread(hub.greenlet) hub.schedule_call_global(seconds, g.switch, func, args, kwargs) return g def spawn_after_local(seconds, func, *args, **kwargs): """Spawns *func* after *seconds* have elapsed. The function will NOT be called if the current greenthread has exited. *seconds* may be specified as an integer, or a float if fractional seconds are desired. The *func* will be called with the given *args* and keyword arguments *kwargs*, and will be executed within its own greenthread. The return value of :func:`spawn_after_local` is a :class:`GreenThread` object, which can be used to retrieve the results of the call. To cancel the spawn and prevent *func* from being called, call :meth:`GreenThread.cancel` on the return value. This will not abort the function if it's already started running. If terminating *func* regardless of whether it's started or not is the desired behavior, call :meth:`GreenThread.kill`. """ hub = hubs.get_hub() g = GreenThread(hub.greenlet) hub.schedule_call_local(seconds, g.switch, func, args, kwargs) return g def call_after_global(seconds, func, *args, **kwargs): warnings.warn("call_after_global is renamed to spawn_after, which " "has the same signature and semantics (plus a bit extra).
Please do a" " quick search-and-replace on your codebase, thanks!", DeprecationWarning, stacklevel=2) return _spawn_n(seconds, func, args, kwargs)[0] def call_after_local(seconds, function, *args, **kwargs): warnings.warn("call_after_local is renamed to spawn_after_local, which" "has the same signature and semantics (plus a bit extra).", DeprecationWarning, stacklevel=2) hub = hubs.get_hub() g = greenlet.greenlet(function, parent=hub.greenlet) t = hub.schedule_call_local(seconds, g.switch, *args, **kwargs) return t call_after = call_after_local def exc_after(seconds, *throw_args): warnings.warn("Instead of exc_after, which is deprecated, use " "Timeout(seconds, exception)", DeprecationWarning, stacklevel=2) if seconds is None: # dummy argument, do nothing return timer.Timer(seconds, lambda: None) hub = hubs.get_hub() return hub.schedule_call_local(seconds, getcurrent().throw, *throw_args) # deprecate, remove TimeoutError = timeout.Timeout with_timeout = timeout.with_timeout def _spawn_n(seconds, func, args, kwargs): hub = hubs.get_hub() g = greenlet.greenlet(func, parent=hub.greenlet) t = hub.schedule_call_global(seconds, g.switch, *args, **kwargs) return t, g class GreenThread(greenlet.greenlet): """The GreenThread class is a type of Greenlet which has the additional property of being able to retrieve the return value of the main function. Do not construct GreenThread objects directly; call :func:`spawn` to get one. """ def __init__(self, parent): greenlet.greenlet.__init__(self, self.main, parent) self._exit_event = event.Event() self._resolving_links = False def wait(self): """ Returns the result of the main function of this GreenThread. If the result is a normal return value, :meth:`wait` returns it. 
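The deprecation shims above (``call_after_global``, ``call_after_local``, ``exc_after``) all follow one pattern: warn once with ``DeprecationWarning`` at ``stacklevel=2`` so the warning points at the caller, then delegate to the renamed API. A self-contained sketch of that pattern, with hypothetical function names:

```python
import warnings

def spawn_later(seconds, func):
    """Stand-in for the renamed API (hypothetical, for illustration)."""
    return (seconds, func)

def call_later(seconds, func):
    """Deprecated alias kept for backward compatibility."""
    warnings.warn("call_later is renamed to spawn_later; please do a "
                  "quick search-and-replace on your codebase.",
                  DeprecationWarning, stacklevel=2)  # blame the caller's line
    return spawn_later(seconds, func)              # delegate to the new name

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = call_later(5, len)   # still works, but warns
```

``stacklevel=2`` is what makes the warning report the user's call site rather than the shim itself.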
If it raised an exception, :meth:`wait` will raise the same exception (though the stack trace will unavoidably contain some frames from within the greenthread module).""" return self._exit_event.wait() def link(self, func, *curried_args, **curried_kwargs): """ Set up a function to be called with the results of the GreenThread. The function must have the following signature:: def func(gt, [curried args/kwargs]): When the GreenThread finishes its run, it calls *func* with itself and with the `curried arguments `_ supplied at link-time. If the function wants to retrieve the result of the GreenThread, it should call wait() on its first argument. Note that *func* is called within execution context of the GreenThread, so it is possible to interfere with other linked functions by doing things like switching explicitly to another greenthread. """ self._exit_funcs = getattr(self, '_exit_funcs', deque()) self._exit_funcs.append((func, curried_args, curried_kwargs)) if self._exit_event.ready(): self._resolve_links() def main(self, function, args, kwargs): try: result = function(*args, **kwargs) except: self._exit_event.send_exception(*sys.exc_info()) self._resolve_links() raise else: self._exit_event.send(result) self._resolve_links() def _resolve_links(self): # ca and ckw are the curried function arguments if self._resolving_links: return self._resolving_links = True try: exit_funcs = getattr(self, '_exit_funcs', deque()) while exit_funcs: f, ca, ckw = exit_funcs.popleft() f(self, *ca, **ckw) finally: self._resolving_links = False def kill(self, *throw_args): """Kills the greenthread using :func:`kill`. After being killed all calls to :meth:`wait` will raise *throw_args* (which default to :class:`greenlet.GreenletExit`).""" return kill(self, *throw_args) def cancel(self, *throw_args): """Kills the greenthread using :func:`kill`, but only if it hasn't already started running. 
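The link machinery described above is a small amount of bookkeeping: callbacks accumulate in an ``_exit_funcs`` deque, ``_resolve_links`` drains it when the thread finishes (or immediately, if linking after the fact), and a flag guards against re-entrant resolution. A sketch of that bookkeeping without greenlets (names are illustrative):

```python
from collections import deque

class LinkSketch(object):
    """Mimics GreenThread.link/_resolve_links bookkeeping, no greenlets."""
    def __init__(self):
        self._exit_funcs = deque()
        self._resolving = False
        self._done = False
        self._result = None

    def link(self, func, *curried_args):
        self._exit_funcs.append((func, curried_args))
        if self._done:               # already finished: fire right away
            self._resolve_links()

    def finish(self, result):
        self._done = True
        self._result = result
        self._resolve_links()

    def _resolve_links(self):
        if self._resolving:          # guard against re-entrant resolution
            return
        self._resolving = True
        try:
            while self._exit_funcs:
                func, curried = self._exit_funcs.popleft()
                func(self, *curried)  # callback gets the thread-like object
        finally:
            self._resolving = False

seen = []
g = LinkSketch()
g.link(lambda gt, tag: seen.append((tag, gt._result)), 'early')
g.finish(42)
g.link(lambda gt, tag: seen.append((tag, gt._result)), 'late')
# both callbacks fire: 'early' at finish time, 'late' immediately on link
```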
After being canceled, all calls to :meth:`wait` will raise *throw_args* (which default to :class:`greenlet.GreenletExit`).""" return cancel(self, *throw_args) def cancel(g, *throw_args): """Like :func:`kill`, but only terminates the greenthread if it hasn't already started execution. If the greenthread has already started execution, :func:`cancel` has no effect.""" if not g: kill(g, *throw_args) def kill(g, *throw_args): """Terminates the target greenthread by raising an exception into it. Whatever that greenthread might be doing, be it waiting for I/O or another primitive, it sees an exception right away. By default, this exception is GreenletExit, but a specific exception may be specified. *throw_args* should be the same as the arguments to raise; either an exception instance or an exc_info tuple. Calling :func:`kill` causes the calling greenthread to cooperatively yield. """ if g.dead: return hub = hubs.get_hub() if not g: # greenlet hasn't started yet and therefore throw won't work # on its own; semantically we want it to be as though the main # method never got called def just_raise(*a, **kw): if throw_args: raise throw_args[0], throw_args[1], throw_args[2] else: raise greenlet.GreenletExit() g.run = just_raise if isinstance(g, GreenThread): # it's a GreenThread object, so we want to call its main # method to take advantage of the notification try: g.main(just_raise, (), {}) except: pass current = getcurrent() if current is not hub.greenlet: # arrange to wake the caller back up immediately hub.ensure_greenlet() hub.schedule_call_global(0, current.switch) g.throw(*throw_args) eventlet-0.13.0/eventlet/event.py0000644000175000017500000001563212164577340017612 0ustar temototemoto00000000000000from eventlet import hubs from eventlet.support import greenlets as greenlet __all__ = ['Event'] class NOT_USED: def __repr__(self): return 'NOT_USED' NOT_USED = NOT_USED() class Event(object): """An abstraction where an arbitrary number of coroutines can wait for one event
from another. Events are similar to a Queue that can only hold one item, but differ in two important ways: 1. calling :meth:`send` never unschedules the current greenthread 2. :meth:`send` can only be called once; create a new event to send again. They are good for communicating results between coroutines, and are the basis for how :meth:`GreenThread.wait() ` is implemented. >>> from eventlet import event >>> import eventlet >>> evt = event.Event() >>> def baz(b): ... evt.send(b + 1) ... >>> _ = eventlet.spawn_n(baz, 3) >>> evt.wait() 4 """ _result = None _exc = None def __init__(self): self._waiters = set() self.reset() def __str__(self): params = (self.__class__.__name__, hex(id(self)), self._result, self._exc, len(self._waiters)) return '<%s at %s result=%r _exc=%r _waiters[%d]>' % params def reset(self): # this is kind of a misfeature and doesn't work perfectly well, # it's better to create a new event rather than reset an old one # removing documentation so that we don't get new use cases for it assert self._result is not NOT_USED, 'Trying to re-reset() a fresh event.' self._result = NOT_USED self._exc = None def ready(self): """ Return true if the :meth:`wait` call will return immediately. Used to avoid waiting for things that might take a while to time out. For example, you can put a bunch of events into a list, and then visit them all repeatedly, calling :meth:`ready` until one returns ``True``, and then you can :meth:`wait` on that one.""" return self._result is not NOT_USED def has_exception(self): return self._exc is not None def has_result(self): return self._result is not NOT_USED and self._exc is None def poll(self, notready=None): if self.ready(): return self.wait() return notready # QQQ make it return tuple (type, value, tb) instead of raising # because # 1) "poll" does not imply raising # 2) it's better not to screw up caller's sys.exc_info() by default # (e.g. 
if caller wants to call the function in except or finally) def poll_exception(self, notready=None): if self.has_exception(): return self.wait() return notready def poll_result(self, notready=None): if self.has_result(): return self.wait() return notready def wait(self): """Wait until another coroutine calls :meth:`send`. Returns the value the other coroutine passed to :meth:`send`. >>> from eventlet import event >>> import eventlet >>> evt = event.Event() >>> def wait_on(): ... retval = evt.wait() ... print "waited for", retval >>> _ = eventlet.spawn(wait_on) >>> evt.send('result') >>> eventlet.sleep(0) waited for result Returns immediately if the event has already occurred. >>> evt.wait() 'result' """ current = greenlet.getcurrent() if self._result is NOT_USED: self._waiters.add(current) try: return hubs.get_hub().switch() finally: self._waiters.discard(current) if self._exc is not None: current.throw(*self._exc) return self._result def send(self, result=None, exc=None): """Makes arrangements for the waiters to be woken with the result and then returns immediately to the parent. >>> from eventlet import event >>> import eventlet >>> evt = event.Event() >>> def waiter(): ... print 'about to wait' ... result = evt.wait() ... print 'waited for', result >>> _ = eventlet.spawn(waiter) >>> eventlet.sleep(0) about to wait >>> evt.send('a') >>> eventlet.sleep(0) waited for a It is an error to call :meth:`send` multiple times on the same event. >>> evt.send('whoops') Traceback (most recent call last): ... AssertionError: Trying to re-send() an already-triggered event. Use :meth:`reset` between :meth:`send` s to reuse an event object. """ assert self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
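The ``poll``/``poll_exception``/``poll_result`` trio above is a non-blocking read with a caller-supplied default, built on the ``NOT_USED`` sentinel. The sentinel logic can be sketched without any hub or greenlets (a stand-in class, not eventlet's ``Event``):

```python
class _NotUsed(object):
    def __repr__(self):
        return 'NOT_USED'

NOT_USED = _NotUsed()   # identity-compared sentinel, as in event.py

class PollSketch(object):
    """Sentinel-based ready/poll logic, modeled on event.Event."""
    def __init__(self):
        self._result = NOT_USED
    def send(self, result):
        self._result = result
    def ready(self):
        return self._result is not NOT_USED
    def poll(self, notready=None):
        # never blocks: hand back the default while nothing has been sent
        if self.ready():
            return self._result
        return notready

evt = PollSketch()
first = evt.poll('pending')    # nothing sent yet -> the default
evt.send('done')
second = evt.poll('pending')   # result is set -> the result
```

Using ``is not NOT_USED`` rather than a truth test is what lets ``None``, ``0``, or ``''`` be legitimate results.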
self._result = result if exc is not None and not isinstance(exc, tuple): exc = (exc, ) self._exc = exc hub = hubs.get_hub() for waiter in self._waiters: hub.schedule_call_global( 0, self._do_send, self._result, self._exc, waiter) def _do_send(self, result, exc, waiter): if waiter in self._waiters: if exc is None: waiter.switch(result) else: waiter.throw(*exc) def send_exception(self, *args): """Same as :meth:`send`, but sends an exception to waiters. The arguments to send_exception are the same as the arguments to ``raise``. If a single exception object is passed in, it will be re-raised when :meth:`wait` is called, generating a new stacktrace. >>> from eventlet import event >>> evt = event.Event() >>> evt.send_exception(RuntimeError()) >>> evt.wait() Traceback (most recent call last): File "", line 1, in File "eventlet/event.py", line 120, in wait current.throw(*self._exc) RuntimeError If it's important to preserve the entire original stack trace, you must pass in the entire :func:`sys.exc_info` tuple. >>> import sys >>> evt = event.Event() >>> try: ... raise RuntimeError() ... except RuntimeError: ... evt.send_exception(*sys.exc_info()) ... >>> evt.wait() Traceback (most recent call last): File "", line 1, in File "eventlet/event.py", line 120, in wait current.throw(*self._exc) File "", line 2, in RuntimeError Note that doing so stores a traceback object directly on the Event object, which may cause reference cycles. See the :func:`sys.exc_info` documentation. 
""" # the arguments and the same as for greenlet.throw return self.send(None, args) eventlet-0.13.0/eventlet/debug.py0000644000175000017500000001433312164577340017564 0ustar temototemoto00000000000000"""The debug module contains utilities and functions for better debugging Eventlet-powered applications.""" import os import sys import linecache import re import inspect __all__ = ['spew', 'unspew', 'format_hub_listeners', 'format_hub_timers', 'hub_listener_stacks', 'hub_exceptions', 'tpool_exceptions', 'hub_prevent_multiple_readers', 'hub_timer_stacks', 'hub_blocking_detection'] _token_splitter = re.compile('\W+') class Spew(object): def __init__(self, trace_names=None, show_values=True): self.trace_names = trace_names self.show_values = show_values def __call__(self, frame, event, arg): if event == 'line': lineno = frame.f_lineno if '__file__' in frame.f_globals: filename = frame.f_globals['__file__'] if (filename.endswith('.pyc') or filename.endswith('.pyo')): filename = filename[:-1] name = frame.f_globals['__name__'] line = linecache.getline(filename, lineno) else: name = '[unknown]' try: src = inspect.getsourcelines(frame) line = src[lineno] except IOError: line = 'Unknown code named [%s]. VM instruction #%d' % ( frame.f_code.co_name, frame.f_lasti) if self.trace_names is None or name in self.trace_names: print '%s:%s: %s' % (name, lineno, line.rstrip()) if not self.show_values: return self details = [] tokens = _token_splitter.split(line) for tok in tokens: if tok in frame.f_globals: details.append('%s=%r' % (tok, frame.f_globals[tok])) if tok in frame.f_locals: details.append('%s=%r' % (tok, frame.f_locals[tok])) if details: print "\t%s" % ' '.join(details) return self def spew(trace_names=None, show_values=False): """Install a trace hook which writes incredibly detailed logs about what code is being executed to stdout. """ sys.settrace(Spew(trace_names, show_values)) def unspew(): """Remove the trace hook installed by spew. 
""" sys.settrace(None) def format_hub_listeners(): """ Returns a formatted string of the current listeners on the current hub. This can be useful in determining what's going on in the event system, especially when used in conjunction with :func:`hub_listener_stacks`. """ from eventlet import hubs hub = hubs.get_hub() result = ['READERS:'] for l in hub.get_readers(): result.append(repr(l)) result.append('WRITERS:') for l in hub.get_writers(): result.append(repr(l)) return os.linesep.join(result) def format_hub_timers(): """ Returns a formatted string of the current timers on the current hub. This can be useful in determining what's going on in the event system, especially when used in conjunction with :func:`hub_timer_stacks`. """ from eventlet import hubs hub = hubs.get_hub() result = ['TIMERS:'] for l in hub.timers: result.append(repr(l)) return os.linesep.join(result) def hub_listener_stacks(state=False): """Toggles whether or not the hub records the stack when clients register listeners on file descriptors. This can be useful when trying to figure out what the hub is up to at any given moment. To inspect the stacks of the current listeners, call :func:`format_hub_listeners` at critical junctures in the application logic. """ from eventlet import hubs hubs.get_hub().set_debug_listeners(state) def hub_timer_stacks(state=False): """Toggles whether or not the hub records the stack when timers are set. To inspect the stacks of the current timers, call :func:`format_hub_timers` at critical junctures in the application logic. """ from eventlet.hubs import timer timer._g_debug = state def hub_prevent_multiple_readers(state=True): """Toggle prevention of multiple greenlets reading from a socket When multiple greenlets read from the same socket it is often hard to predict which greenlet will receive what data. To achieve resource sharing consider using ``eventlet.pools.Pool`` instead. 
But if you really know what you are doing you can change the state to ``False`` to stop the hub from protecting against this mistake. """ from eventlet.hubs import hub hub.g_prevent_multiple_readers = state def hub_exceptions(state=True): """Toggles whether the hub prints exceptions that are raised from its timers. This can be useful to see how greenthreads are terminating. """ from eventlet import hubs hubs.get_hub().set_timer_exceptions(state) from eventlet import greenpool greenpool.DEBUG = state def tpool_exceptions(state=False): """Toggles whether tpool itself prints exceptions that are raised from functions that are executed in it, in addition to raising them like it normally does.""" from eventlet import tpool tpool.QUIET = not state def hub_blocking_detection(state=False, resolution=1): """Toggles whether Eventlet makes an effort to detect blocking behavior in an application. It does this by telling the kernel to raise a SIGALARM after a short timeout, and clearing the timeout every time the hub greenlet is resumed. Therefore, any code that runs for a long time without yielding to the hub will get interrupted by the blocking detector (don't use it in production!). The *resolution* argument governs how long the SIGALARM timeout waits in seconds. If on Python 2.6 or later, the implementation uses :func:`signal.setitimer` and can be specified as a floating-point value. On 2.5 or earlier, 1 second is the minimum. The shorter the resolution, the greater the chance of false positives. 
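The SIGALRM mechanism that ``hub_blocking_detection`` relies on can be seen in isolation with the stdlib (Unix only; a minimal sketch of the idea, not eventlet's implementation):

```python
import signal
import time

fired = {'blocking_detected': False}

def _alarm(signum, frame):
    # eventlet would report the blocking greenlet's stack here;
    # this sketch just records that the timer went off
    fired['blocking_detected'] = True

signal.signal(signal.SIGALRM, _alarm)
signal.setitimer(signal.ITIMER_REAL, 0.05)   # float resolution, per the docs

deadline = time.time() + 1.0
while not fired['blocking_detected'] and time.time() < deadline:
    pass  # busy loop that never yields -- the "blocking" code being caught

signal.setitimer(signal.ITIMER_REAL, 0)      # disarm, as the hub does on resume
```

The hub re-arms the timer every time its greenlet resumes, so only code that monopolizes the CPU longer than *resolution* triggers the handler.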
""" from eventlet import hubs assert resolution > 0 hubs.get_hub().debug_blocking = state hubs.get_hub().debug_blocking_resolution = resolution if(not state): hubs.get_hub().block_detect_post() eventlet-0.13.0/eventlet/__init__.py0000644000175000017500000000265112164577734020244 0ustar temototemoto00000000000000version_info = (0, 13, 0) __version__ = ".".join(map(str, version_info)) try: from eventlet import greenthread from eventlet import greenpool from eventlet import queue from eventlet import timeout from eventlet import patcher from eventlet import convenience import greenlet sleep = greenthread.sleep spawn = greenthread.spawn spawn_n = greenthread.spawn_n spawn_after = greenthread.spawn_after kill = greenthread.kill Timeout = timeout.Timeout with_timeout = timeout.with_timeout GreenPool = greenpool.GreenPool GreenPile = greenpool.GreenPile Queue = queue.Queue import_patched = patcher.import_patched monkey_patch = patcher.monkey_patch connect = convenience.connect listen = convenience.listen serve = convenience.serve StopServe = convenience.StopServe wrap_ssl = convenience.wrap_ssl getcurrent = greenlet.greenlet.getcurrent # deprecated TimeoutError = timeout.Timeout exc_after = greenthread.exc_after call_after_global = greenthread.call_after_global except ImportError, e: # This is to make Debian packaging easier, it ignores import # errors of greenlet so that the packager can still at least # access the version. Also this makes easy_install a little quieter if 'greenlet' not in str(e): # any other exception should be printed import traceback traceback.print_exc() eventlet-0.13.0/eventlet/semaphore.py0000644000175000017500000002617112164577340020464 0ustar temototemoto00000000000000from __future__ import with_statement from eventlet import greenthread from eventlet import hubs from eventlet.timeout import Timeout class Semaphore(object): """An unbounded semaphore. 
Optionally initialize with a resource *count*, then :meth:`acquire` and :meth:`release` resources as needed. Attempting to :meth:`acquire` when *count* is zero suspends the calling greenthread until *count* becomes nonzero again. This is API-compatible with :class:`threading.Semaphore`. It is a context manager, and thus can be used in a with block:: sem = Semaphore(2) with sem: do_some_stuff() If not specified, *value* defaults to 1. It is possible to limit acquire time:: sem = Semaphore() ok = sem.acquire(timeout=0.1) # True if acquired, False if timed out. """ def __init__(self, value=1): self.counter = value if value < 0: raise ValueError("Semaphore must be initialized with a positive " "number, got %s" % value) self._waiters = set() def __repr__(self): params = (self.__class__.__name__, hex(id(self)), self.counter, len(self._waiters)) return '<%s at %s c=%s _w[%s]>' % params def __str__(self): params = (self.__class__.__name__, self.counter, len(self._waiters)) return '<%s c=%s _w[%s]>' % params def locked(self): """Returns true if a call to acquire would block. """ return self.counter <= 0 def bounded(self): """Returns False; for consistency with :class:`~eventlet.semaphore.CappedSemaphore`. """ return False def acquire(self, blocking=True, timeout=None): """Acquire a semaphore. When invoked without arguments: if the internal counter is larger than zero on entry, decrement it by one and return immediately. If it is zero on entry, block, waiting until some other thread has called release() to make it larger than zero. This is done with proper interlocking so that if multiple acquire() calls are blocked, release() will wake exactly one of them up. The implementation may pick one at random, so the order in which blocked threads are awakened should not be relied on. There is no return value in this case. When invoked with blocking set to true, do the same thing as when called without arguments, and return true. 
When invoked with blocking set to false, do not block. If a call without an argument would block, return false immediately; otherwise, do the same thing as when called without arguments, and return true. """ if not blocking and timeout is not None: raise ValueError("can't specify timeout for non-blocking acquire") if not blocking and self.locked(): return False if self.counter <= 0: self._waiters.add(greenthread.getcurrent()) try: if timeout is not None: ok = False with Timeout(timeout, False): while self.counter <= 0: hubs.get_hub().switch() ok = True if not ok: return False else: while self.counter <= 0: hubs.get_hub().switch() finally: self._waiters.discard(greenthread.getcurrent()) self.counter -= 1 return True def __enter__(self): self.acquire() def release(self, blocking=True): """Release a semaphore, incrementing the internal counter by one. When it was zero on entry and another thread is waiting for it to become larger than zero again, wake up that thread. The *blocking* argument is for consistency with CappedSemaphore and is ignored """ self.counter += 1 if self._waiters: hubs.get_hub().schedule_call_global(0, self._do_acquire) return True def _do_acquire(self): if self._waiters and self.counter > 0: waiter = self._waiters.pop() waiter.switch() def __exit__(self, typ, val, tb): self.release() @property def balance(self): """An integer value that represents how many new calls to :meth:`acquire` or :meth:`release` would be needed to get the counter to 0. If it is positive, then its value is the number of acquires that can happen before the next acquire would block. If it is negative, it is the negative of the number of releases that would be required in order to make the counter 0 again (one more release would push the counter to 1 and unblock acquirers). It takes into account how many greenthreads are currently blocking in :meth:`acquire`. 
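Since the class above is documented as API-compatible with :class:`threading.Semaphore`, the ``acquire`` semantics (blocking, non-blocking, and timed) can be tried against the stdlib twin; the return values mirror what eventlet's version is documented to do:

```python
import threading

sem = threading.Semaphore(1)   # stdlib counterpart of semaphore.Semaphore(1)

first = sem.acquire(timeout=0.1)     # counter 1 -> 0: succeeds immediately
second = sem.acquire(timeout=0.1)    # counter is 0: blocks, then times out

sem.release()                        # counter 0 -> 1 again
third = sem.acquire(blocking=False)  # non-blocking path: succeeds
```

As with eventlet, a timed or non-blocking ``acquire`` returns ``True`` on success and ``False`` when it could not decrement the counter.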
""" # positive means there are free items # zero means there are no free items but nobody has requested one # negative means there are requests for items, but no items return self.counter - len(self._waiters) class BoundedSemaphore(Semaphore): """A bounded semaphore checks to make sure its current value doesn't exceed its initial value. If it does, ValueError is raised. In most situations semaphores are used to guard resources with limited capacity. If the semaphore is released too many times it's a sign of a bug. If not given, *value* defaults to 1. """ def __init__(self, value=1): super(BoundedSemaphore, self).__init__(value) self.original_counter = value def release(self, blocking=True): """Release a semaphore, incrementing the internal counter by one. If the counter would exceed the initial value, raises ValueError. When it was zero on entry and another thread is waiting for it to become larger than zero again, wake up that thread. The *blocking* argument is for consistency with :class:`CappedSemaphore` and is ignored """ if self.counter >= self.original_counter: raise ValueError, "Semaphore released too many times" return super(BoundedSemaphore, self).release(blocking) class CappedSemaphore(object): """A blockingly bounded semaphore. Optionally initialize with a resource *count*, then :meth:`acquire` and :meth:`release` resources as needed. Attempting to :meth:`acquire` when *count* is zero suspends the calling greenthread until count becomes nonzero again. Attempting to :meth:`release` after *count* has reached *limit* suspends the calling greenthread until *count* becomes less than *limit* again. This has the same API as :class:`threading.Semaphore`, though its semantics and behavior differ subtly due to the upper limit on calls to :meth:`release`. It is **not** compatible with :class:`threading.BoundedSemaphore` because it blocks when reaching *limit* instead of raising a ValueError. 
It is a context manager, and thus can be used in a with block:: sem = CappedSemaphore(2) with sem: do_some_stuff() """ def __init__(self, count, limit): if count < 0: raise ValueError("CappedSemaphore must be initialized with a " "positive number, got %s" % count) if count > limit: # accidentally, this also catches the case when limit is None raise ValueError("'count' cannot be more than 'limit'") self.lower_bound = Semaphore(count) self.upper_bound = Semaphore(limit - count) def __repr__(self): params = (self.__class__.__name__, hex(id(self)), self.balance, self.lower_bound, self.upper_bound) return '<%s at %s b=%s l=%s u=%s>' % params def __str__(self): params = (self.__class__.__name__, self.balance, self.lower_bound, self.upper_bound) return '<%s b=%s l=%s u=%s>' % params def locked(self): """Returns true if a call to acquire would block. """ return self.lower_bound.locked() def bounded(self): """Returns true if a call to release would block. """ return self.upper_bound.locked() def acquire(self, blocking=True): """Acquire a semaphore. When invoked without arguments: if the internal counter is larger than zero on entry, decrement it by one and return immediately. If it is zero on entry, block, waiting until some other thread has called release() to make it larger than zero. This is done with proper interlocking so that if multiple acquire() calls are blocked, release() will wake exactly one of them up. The implementation may pick one at random, so the order in which blocked threads are awakened should not be relied on. There is no return value in this case. When invoked with blocking set to true, do the same thing as when called without arguments, and return true. When invoked with blocking set to false, do not block. If a call without an argument would block, return false immediately; otherwise, do the same thing as when called without arguments, and return true. 
""" if not blocking and self.locked(): return False self.upper_bound.release() try: return self.lower_bound.acquire() except: self.upper_bound.counter -= 1 # using counter directly means that it can be less than zero. # however I certainly don't need to wait here and I don't seem to have # a need to care about such inconsistency raise def __enter__(self): self.acquire() def release(self, blocking=True): """Release a semaphore. In this class, this behaves very much like an :meth:`acquire` but in the opposite direction. Imagine the docs of :meth:`acquire` here, but with every direction reversed. When calling this method, it will block if the internal counter is greater than or equal to *limit*. """ if not blocking and self.bounded(): return False self.lower_bound.release() try: return self.upper_bound.acquire() except: self.lower_bound.counter -= 1 raise def __exit__(self, typ, val, tb): self.release() @property def balance(self): """An integer value that represents how many new calls to :meth:`acquire` or :meth:`release` would be needed to get the counter to 0. If it is positive, then its value is the number of acquires that can happen before the next acquire would block. If it is negative, it is the negative of the number of releases that would be required in order to make the counter 0 again (one more release would push the counter to 1 and unblock acquirers). It takes into account how many greenthreads are currently blocking in :meth:`acquire` and :meth:`release`. 
""" return self.lower_bound.balance - self.upper_bound.balance eventlet-0.13.0/eventlet/coros.py0000644000175000017500000003241712164577340017626 0ustar temototemoto00000000000000import collections import traceback import warnings import eventlet from eventlet import event as _event from eventlet import hubs from eventlet import greenthread from eventlet import semaphore as semaphoremod class NOT_USED: def __repr__(self): return 'NOT_USED' NOT_USED = NOT_USED() def Event(*a, **kw): warnings.warn("The Event class has been moved to the event module! " "Please construct event.Event objects instead.", DeprecationWarning, stacklevel=2) return _event.Event(*a, **kw) def event(*a, **kw): warnings.warn("The event class has been capitalized and moved! Please " "construct event.Event objects instead.", DeprecationWarning, stacklevel=2) return _event.Event(*a, **kw) def Semaphore(count): warnings.warn("The Semaphore class has moved! Please " "use semaphore.Semaphore instead.", DeprecationWarning, stacklevel=2) return semaphoremod.Semaphore(count) def BoundedSemaphore(count): warnings.warn("The BoundedSemaphore class has moved! Please " "use semaphore.BoundedSemaphore instead.", DeprecationWarning, stacklevel=2) return semaphoremod.BoundedSemaphore(count) def semaphore(count=0, limit=None): warnings.warn("coros.semaphore is deprecated. Please use either " "semaphore.Semaphore or semaphore.BoundedSemaphore instead.", DeprecationWarning, stacklevel=2) if limit is None: return Semaphore(count) else: return BoundedSemaphore(count) class metaphore(object): """This is sort of an inverse semaphore: a counter that starts at 0 and waits only if nonzero. It's used to implement a "wait for all" scenario. >>> from eventlet import api, coros >>> count = coros.metaphore() >>> count.wait() >>> def decrementer(count, id): ... print "%s decrementing" % id ... count.dec() ... 
>>> _ = eventlet.spawn(decrementer, count, 'A') >>> _ = eventlet.spawn(decrementer, count, 'B') >>> count.inc(2) >>> count.wait() A decrementing B decrementing """ def __init__(self): self.counter = 0 self.event = _event.Event() # send() right away, else we'd wait on the default 0 count! self.event.send() def inc(self, by=1): """Increment our counter. If this transitions the counter from zero to nonzero, make any subsequent :meth:`wait` call wait. """ assert by > 0 self.counter += by if self.counter == by: # If we just incremented self.counter by 'by', and the new count # equals 'by', then the old value of self.counter was 0. # Transitioning from 0 to a nonzero value means wait() must # actually wait. self.event.reset() def dec(self, by=1): """Decrement our counter. If this transitions the counter from nonzero to zero, a current or subsequent wait() call need no longer wait. """ assert by > 0 self.counter -= by if self.counter <= 0: # Don't leave self.counter < 0, that will screw things up in # future calls. self.counter = 0 # Transitioning from nonzero to 0 means wait() need no longer wait. self.event.send() def wait(self): """Suspend the caller only if our count is nonzero. In that case, resume the caller once the count decrements to zero again. """ self.event.wait() def execute(func, *args, **kw): """ Executes an operation asynchronously in a new coroutine, returning an event to retrieve the return value. This has the same api as the :meth:`eventlet.coros.CoroutinePool.execute` method; the only difference is that this one creates a new coroutine instead of drawing from a pool. >>> from eventlet import coros >>> evt = coros.execute(lambda a: ('foo', a), 1) >>> evt.wait() ('foo', 1) """ warnings.warn("Coros.execute is deprecated. Please use eventlet.spawn " "instead.", DeprecationWarning, stacklevel=2) return greenthread.spawn(func, *args, **kw) def CoroutinePool(*args, **kwargs): warnings.warn("CoroutinePool is deprecated. 
Please use " "eventlet.GreenPool instead.", DeprecationWarning, stacklevel=2) from eventlet.pool import Pool return Pool(*args, **kwargs) class Queue(object): def __init__(self): warnings.warn("coros.Queue is deprecated. Please use " "eventlet.queue.Queue instead.", DeprecationWarning, stacklevel=2) self.items = collections.deque() self._waiters = set() def __nonzero__(self): return len(self.items)>0 def __len__(self): return len(self.items) def __repr__(self): params = (self.__class__.__name__, hex(id(self)), len(self.items), len(self._waiters)) return '<%s at %s items[%d] _waiters[%s]>' % params def send(self, result=None, exc=None): if exc is not None and not isinstance(exc, tuple): exc = (exc, ) self.items.append((result, exc)) if self._waiters: hubs.get_hub().schedule_call_global(0, self._do_send) def send_exception(self, *args): # the arguments are the same as for greenlet.throw return self.send(exc=args) def _do_send(self): if self._waiters and self.items: waiter = self._waiters.pop() result, exc = self.items.popleft() waiter.switch((result, exc)) def wait(self): if self.items: result, exc = self.items.popleft() if exc is None: return result else: eventlet.getcurrent().throw(*exc) else: self._waiters.add(eventlet.getcurrent()) try: result, exc = hubs.get_hub().switch() if exc is None: return result else: eventlet.getcurrent().throw(*exc) finally: self._waiters.discard(eventlet.getcurrent()) def ready(self): return len(self.items) > 0 def full(self): # for consistency with Channel return False def waiting(self): return len(self._waiters) def __iter__(self): return self def next(self): return self.wait() class Channel(object): def __init__(self, max_size=0): warnings.warn("coros.Channel is deprecated. 
Please use " "eventlet.queue.Queue(0) instead.", DeprecationWarning, stacklevel=2) self.max_size = max_size self.items = collections.deque() self._waiters = set() self._senders = set() def __nonzero__(self): return len(self.items)>0 def __len__(self): return len(self.items) def __repr__(self): params = (self.__class__.__name__, hex(id(self)), self.max_size, len(self.items), len(self._waiters), len(self._senders)) return '<%s at %s max=%s items[%d] _w[%s] _s[%s]>' % params def send(self, result=None, exc=None): if exc is not None and not isinstance(exc, tuple): exc = (exc, ) if eventlet.getcurrent() is hubs.get_hub().greenlet: self.items.append((result, exc)) if self._waiters: hubs.get_hub().schedule_call_global(0, self._do_switch) else: self.items.append((result, exc)) # note that send() does not work well with timeouts. if your timeout fires # after this point, the item will remain in the queue if self._waiters: hubs.get_hub().schedule_call_global(0, self._do_switch) if len(self.items) > self.max_size: self._senders.add(eventlet.getcurrent()) try: hubs.get_hub().switch() finally: self._senders.discard(eventlet.getcurrent()) def send_exception(self, *args): # the arguments are the same as for greenlet.throw return self.send(exc=args) def _do_switch(self): while True: if self._waiters and self.items: waiter = self._waiters.pop() result, exc = self.items.popleft() try: waiter.switch((result, exc)) except: traceback.print_exc() elif self._senders and len(self.items) <= self.max_size: sender = self._senders.pop() try: sender.switch() except: traceback.print_exc() else: break def wait(self): if self.items: result, exc = self.items.popleft() if len(self.items) <= self.max_size: hubs.get_hub().schedule_call_global(0, self._do_switch) if exc is None: return result else: eventlet.getcurrent().throw(*exc) else: if self._senders: hubs.get_hub().schedule_call_global(0, self._do_switch) self._waiters.add(eventlet.getcurrent()) try: result, exc = hubs.get_hub().switch() if exc 
is None: return result else: eventlet.getcurrent().throw(*exc) finally: self._waiters.discard(eventlet.getcurrent()) def ready(self): return len(self.items) > 0 def full(self): return len(self.items) >= self.max_size def waiting(self): return max(0, len(self._waiters) - len(self.items)) def queue(max_size=None): if max_size is None: return Queue() else: return Channel(max_size) class Actor(object): """ A free-running coroutine that accepts and processes messages. Kind of the equivalent of an Erlang process, really. It processes a queue of messages in the order that they were sent. You must subclass this and implement your own version of :meth:`received`. The actor's reference count will never drop to zero while the coroutine exists; if you lose all references to the actor object it will never be freed. """ def __init__(self, concurrency = 1): """ Constructs an Actor, kicking off a new coroutine to process the messages. The concurrency argument specifies how many messages the actor will try to process concurrently. If it is 1, the actor will process messages serially. """ warnings.warn("We're phasing out the Actor class, so as to get rid of" "the coros module. If you use Actor, please speak up on " "eventletdev@lists.secondlife.com, and we'll come up with a " "transition plan. If no one speaks up, we'll remove Actor " "in a future release of Eventlet.", DeprecationWarning, stacklevel=2) self._mailbox = collections.deque() self._event = _event.Event() self._killer = eventlet.spawn(self.run_forever) from eventlet import greenpool self._pool = greenpool.GreenPool(concurrency) def run_forever(self): """ Loops forever, continually checking the mailbox. 
""" while True: if not self._mailbox: self._event.wait() self._event = _event.Event() else: # leave the message in the mailbox until after it's # been processed so the event doesn't get triggered # while in the received method self._pool.spawn_n( self.received, self._mailbox[0]) self._mailbox.popleft() def cast(self, message): """ Send a message to the actor. If the actor is busy, the message will be enqueued for later consumption. There is no return value. >>> a = Actor() >>> a.received = lambda msg: msg >>> a.cast("hello") """ self._mailbox.append(message) # if this is the only message, the coro could be waiting if len(self._mailbox) == 1: self._event.send() def received(self, message): """ Called to process each incoming message. The default implementation just raises an exception, so replace it with something useful! >>> class Greeter(Actor): ... def received(self, (message, evt) ): ... print "received", message ... if evt: evt.send() ... >>> a = Greeter() This example uses Events to synchronize between the actor and the main coroutine in a predictable manner, but this kinda defeats the point of the :class:`Actor`, so don't do it in a real application. 
>>> from eventlet.event import Event >>> evt = Event() >>> a.cast( ("message 1", evt) ) >>> evt.wait() # force it to run at this exact moment received message 1 >>> evt.reset() >>> a.cast( ("message 2", None) ) >>> a.cast( ("message 3", evt) ) >>> evt.wait() received message 2 received message 3 >>> eventlet.kill(a._killer) # test cleanup """ raise NotImplementedError() eventlet-0.13.0/eventlet/greenio.py0000644000175000017500000004504412164577340020131 0ustar temototemoto00000000000000from eventlet.support import get_errno from eventlet.hubs import trampoline BUFFER_SIZE = 4096 import array import errno import os import socket from socket import socket as _original_socket import sys import time import warnings __all__ = ['GreenSocket', 'GreenPipe', 'shutdown_safe'] CONNECT_ERR = set((errno.EINPROGRESS, errno.EALREADY, errno.EWOULDBLOCK)) CONNECT_SUCCESS = set((0, errno.EISCONN)) if sys.platform[:3] == "win": CONNECT_ERR.add(errno.WSAEINVAL) # Bug 67 # Emulate _fileobject class in 3.x implementation # Eventually this internal socket structure could be replaced with makefile calls. try: _fileobject = socket._fileobject except AttributeError: def _fileobject(sock, *args, **kwargs): return _original_socket.makefile(sock, *args, **kwargs) def socket_connect(descriptor, address): """ Attempts to connect to the address, returns the descriptor if it succeeds, returns None if it needs to trampoline, and raises any exceptions. """ err = descriptor.connect_ex(address) if err in CONNECT_ERR: return None if err not in CONNECT_SUCCESS: raise socket.error(err, errno.errorcode[err]) return descriptor def socket_checkerr(descriptor): err = descriptor.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) if err not in CONNECT_SUCCESS: raise socket.error(err, errno.errorcode[err]) def socket_accept(descriptor): """ Attempts to accept() on the descriptor, returns a client,address tuple if it succeeds; returns None if it needs to trampoline, and raises any exceptions. 
""" try: return descriptor.accept() except socket.error, e: if get_errno(e) == errno.EWOULDBLOCK: return None raise if sys.platform[:3] == "win": # winsock sometimes throws ENOTCONN SOCKET_BLOCKING = set((errno.EWOULDBLOCK,)) SOCKET_CLOSED = set((errno.ECONNRESET, errno.ENOTCONN, errno.ESHUTDOWN)) else: # oddly, on linux/darwin, an unconnected socket is expected to block, # so we treat ENOTCONN the same as EWOULDBLOCK SOCKET_BLOCKING = set((errno.EWOULDBLOCK, errno.ENOTCONN)) SOCKET_CLOSED = set((errno.ECONNRESET, errno.ESHUTDOWN, errno.EPIPE)) def set_nonblocking(fd): """ Sets the descriptor to be nonblocking. Works on many file-like objects as well as sockets. Only sockets can be nonblocking on Windows, however. """ try: setblocking = fd.setblocking except AttributeError: # fd has no setblocking() method. It could be that this version of # Python predates socket.setblocking(). In that case, we can still set # the flag "by hand" on the underlying OS fileno using the fcntl # module. try: import fcntl except ImportError: # Whoops, Windows has no fcntl module. This might not be a socket # at all, but rather a file-like object with no setblocking() # method. In particular, on Windows, pipes don't support # non-blocking I/O and therefore don't have that method. Which # means fcntl wouldn't help even if we could load it. raise NotImplementedError("set_nonblocking() on a file object " "with no setblocking() method " "(Windows pipes don't support non-blocking I/O)") # We managed to import fcntl. fileno = fd.fileno() orig_flags = fcntl.fcntl(fileno, fcntl.F_GETFL) new_flags = orig_flags | os.O_NONBLOCK if new_flags != orig_flags: fcntl.fcntl(fileno, fcntl.F_SETFL, new_flags) else: # socket supports setblocking() setblocking(0) try: from socket import _GLOBAL_DEFAULT_TIMEOUT except ImportError: _GLOBAL_DEFAULT_TIMEOUT = object() class GreenSocket(object): """ Green version of socket.socket class, that is intended to be 100% API-compatible. 
It also recognizes the keyword parameter, 'set_nonblocking=True'. Pass False to indicate that socket is already in non-blocking mode to save syscalls. """ def __init__(self, family_or_realsock=socket.AF_INET, *args, **kwargs): should_set_nonblocking = kwargs.pop('set_nonblocking', True) if isinstance(family_or_realsock, (int, long)): fd = _original_socket(family_or_realsock, *args, **kwargs) else: fd = family_or_realsock assert not args, args assert not kwargs, kwargs # import timeout from other socket, if it was there try: self._timeout = fd.gettimeout() or socket.getdefaulttimeout() except AttributeError: self._timeout = socket.getdefaulttimeout() if should_set_nonblocking: set_nonblocking(fd) self.fd = fd # when client calls setblocking(0) or settimeout(0) the socket must # act non-blocking self.act_non_blocking = False # Copy some attributes from underlying real socket. # This is the easiest way that i found to fix # https://bitbucket.org/eventlet/eventlet/issue/136 # Only `getsockopt` is required to fix that issue, others # are just premature optimization to save __getattr__ call. self.bind = fd.bind self.close = fd.close self.fileno = fd.fileno self.getsockname = fd.getsockname self.getsockopt = fd.getsockopt self.listen = fd.listen self.setsockopt = fd.setsockopt self.shutdown = fd.shutdown @property def _sock(self): return self # Forward unknown attributes to fd, cache the value for future use. # I do not see any simple attribute which could be changed # so caching everything in self is fine. # If we find such attributes - only attributes having __get__ might be cached. # For now - I do not want to complicate it. 
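# The comment above describes the attribute-caching trick used by
# GreenSocket.__getattr__: Python only invokes __getattr__ when normal
# lookup fails, so storing the fetched attribute on the instance makes
# every later access bypass __getattr__ entirely.  The pattern can be
# sketched in isolation (a hypothetical standalone example, not part of
# eventlet itself):

```python
class CachingProxy(object):
    """Forward unknown attribute lookups to a wrapped object, caching them.

    After the first miss, the attribute lives in the instance __dict__,
    so subsequent accesses never reach __getattr__ again.
    """
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __getattr__(self, name):
        attr = getattr(self.wrapped, name)
        setattr(self, name, attr)  # cache for future lookups
        return attr

import socket
s = socket.socket()
proxy = CachingProxy(s)
proxy.fileno                        # first access goes through __getattr__
assert 'fileno' in proxy.__dict__   # now cached on the instance itself
s.close()
```

# This is safe only for attributes that never change on the wrapped
# object after creation, which is why the comment above restricts the
# idea to such "simple" attributes.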
def __getattr__(self, name): attr = getattr(self.fd, name) setattr(self, name, attr) return attr def accept(self): if self.act_non_blocking: return self.fd.accept() fd = self.fd while True: res = socket_accept(fd) if res is not None: client, addr = res set_nonblocking(client) return type(self)(client), addr trampoline(fd, read=True, timeout=self.gettimeout(), timeout_exc=socket.timeout("timed out")) def connect(self, address): if self.act_non_blocking: return self.fd.connect(address) fd = self.fd if self.gettimeout() is None: while not socket_connect(fd, address): trampoline(fd, write=True) socket_checkerr(fd) else: end = time.time() + self.gettimeout() while True: if socket_connect(fd, address): return if time.time() >= end: raise socket.timeout("timed out") trampoline(fd, write=True, timeout=end - time.time(), timeout_exc=socket.timeout("timed out")) socket_checkerr(fd) def connect_ex(self, address): if self.act_non_blocking: return self.fd.connect_ex(address) fd = self.fd if self.gettimeout() is None: while not socket_connect(fd, address): try: trampoline(fd, write=True) socket_checkerr(fd) except socket.error, ex: return get_errno(ex) else: end = time.time() + self.gettimeout() while True: try: if socket_connect(fd, address): return 0 if time.time() >= end: raise socket.timeout(errno.EAGAIN) trampoline(fd, write=True, timeout=end - time.time(), timeout_exc=socket.timeout(errno.EAGAIN)) socket_checkerr(fd) except socket.error, ex: return get_errno(ex) def dup(self, *args, **kw): sock = self.fd.dup(*args, **kw) newsock = type(self)(sock, set_nonblocking=False) newsock.settimeout(self.gettimeout()) return newsock def makefile(self, *args, **kw): return _fileobject(self.dup(), *args, **kw) def makeGreenFile(self, *args, **kw): warnings.warn("makeGreenFile has been deprecated, please use " "makefile instead", DeprecationWarning, stacklevel=2) return self.makefile(*args, **kw) def recv(self, buflen, flags=0): fd = self.fd if self.act_non_blocking: return 
fd.recv(buflen, flags) while True: try: return fd.recv(buflen, flags) except socket.error, e: if get_errno(e) in SOCKET_BLOCKING: pass elif get_errno(e) in SOCKET_CLOSED: return '' else: raise trampoline(fd, read=True, timeout=self.gettimeout(), timeout_exc=socket.timeout("timed out")) def recvfrom(self, *args): if not self.act_non_blocking: trampoline(self.fd, read=True, timeout=self.gettimeout(), timeout_exc=socket.timeout("timed out")) return self.fd.recvfrom(*args) def recvfrom_into(self, *args): if not self.act_non_blocking: trampoline(self.fd, read=True, timeout=self.gettimeout(), timeout_exc=socket.timeout("timed out")) return self.fd.recvfrom_into(*args) def recv_into(self, *args): if not self.act_non_blocking: trampoline(self.fd, read=True, timeout=self.gettimeout(), timeout_exc=socket.timeout("timed out")) return self.fd.recv_into(*args) def send(self, data, flags=0): fd = self.fd if self.act_non_blocking: return fd.send(data, flags) # blocking socket behavior - sends all, blocks if the buffer is full total_sent = 0 len_data = len(data) while 1: try: total_sent += fd.send(data[total_sent:], flags) except socket.error, e: if get_errno(e) not in SOCKET_BLOCKING: raise if total_sent == len_data: break trampoline(self.fd, write=True, timeout=self.gettimeout(), timeout_exc=socket.timeout("timed out")) return total_sent def sendall(self, data, flags=0): tail = self.send(data, flags) len_data = len(data) while tail < len_data: tail += self.send(data[tail:], flags) def sendto(self, *args): trampoline(self.fd, write=True) return self.fd.sendto(*args) def setblocking(self, flag): if flag: self.act_non_blocking = False self._timeout = None else: self.act_non_blocking = True self._timeout = 0.0 def settimeout(self, howlong): if howlong is None or howlong == _GLOBAL_DEFAULT_TIMEOUT: self.setblocking(True) return try: f = howlong.__float__ except AttributeError: raise TypeError('a float is required') howlong = f() if howlong < 0.0: raise ValueError('Timeout value out 
of range')
        if howlong == 0.0:
            self.act_non_blocking = True
            self._timeout = 0.0
        else:
            self.act_non_blocking = False
            self._timeout = howlong

    def gettimeout(self):
        return self._timeout


class _SocketDuckForFd(object):
    """Class implementing all socket methods used by _fileobject in a
    cooperative manner using low level os I/O calls."""

    def __init__(self, fileno):
        self._fileno = fileno

    @property
    def _sock(self):
        return self

    def fileno(self):
        return self._fileno

    def recv(self, buflen):
        while True:
            try:
                data = os.read(self._fileno, buflen)
                return data
            except OSError, e:
                if get_errno(e) != errno.EAGAIN:
                    raise IOError(*e.args)
            trampoline(self, read=True)

    def sendall(self, data):
        len_data = len(data)
        os_write = os.write
        fileno = self._fileno
        try:
            total_sent = os_write(fileno, data)
        except OSError, e:
            if get_errno(e) != errno.EAGAIN:
                raise IOError(*e.args)
            total_sent = 0
        while total_sent < len_data:
            trampoline(self, write=True)
            try:
                total_sent += os_write(fileno, data[total_sent:])
            except OSError, e:
                if get_errno(e) != errno.EAGAIN:
                    raise IOError(*e.args)

    def __del__(self):
        try:
            os.close(self._fileno)
        except:
            # os.close may fail if __init__ didn't complete
            # (i.e. the file descriptor passed to popen was invalid)
            pass

    def __repr__(self):
        return "%s:%d" % (self.__class__.__name__, self._fileno)


def _operationOnClosedFile(*args, **kwargs):
    raise ValueError("I/O operation on closed file")


class GreenPipe(_fileobject):
    """GreenPipe is a cooperative replacement for the file class.
    It will cooperate on pipes.  It will block on regular files.

    Differences from the file class:
    - mode is a r/w property.  Should be r/o
    - encoding property not implemented
    - write/writelines will not raise a TypeError exception when
      non-string data is written; it will write str(data) instead
    - universal newlines are not supported and the newlines property
      is not implemented
    - the file argument can be a descriptor, file name or file object.
""" def __init__(self, f, mode='r', bufsize=-1): if not isinstance(f, (basestring, int, file)): raise TypeError('f(ile) should be int, str, unicode or file, not %r' % f) if isinstance(f, basestring): f = open(f, mode, 0) if isinstance(f, int): fileno = f self._name = "" % fileno else: fileno = os.dup(f.fileno()) self._name = f.name if f.mode != mode: raise ValueError('file.mode %r does not match mode parameter %r' % (f.mode, mode)) self._name = f.name f.close() super(GreenPipe, self).__init__(_SocketDuckForFd(fileno), mode, bufsize) set_nonblocking(self) self.softspace = 0 @property def name(self): return self._name def __repr__(self): return "<%s %s %r, mode %r at 0x%x>" % ( self.closed and 'closed' or 'open', self.__class__.__name__, self.name, self.mode, (id(self) < 0) and (sys.maxint + id(self)) or id(self)) def close(self): super(GreenPipe, self).close() for method in ['fileno', 'flush', 'isatty', 'next', 'read', 'readinto', 'readline', 'readlines', 'seek', 'tell', 'truncate', 'write', 'xreadlines', '__iter__', 'writelines']: setattr(self, method, _operationOnClosedFile) if getattr(file, '__enter__', None): def __enter__(self): return self def __exit__(self, *args): self.close() def readinto(self, buf): data = self.read(len(buf)) # FIXME could it be done without allocating intermediate? 
n = len(data) try: buf[:n] = data except TypeError, err: if not isinstance(buf, array.array): raise err buf[:n] = array.array('c', data) return n def _get_readahead_len(self): try: return len(self._rbuf.getvalue()) # StringIO in 2.5 except AttributeError: return len(self._rbuf) # str in 2.4 def _clear_readahead_buf(self): len = self._get_readahead_len() if len > 0: self.read(len) def tell(self): self.flush() try: return os.lseek(self.fileno(), 0, 1) - self._get_readahead_len() except OSError, e: raise IOError(*e.args) def seek(self, offset, whence=0): self.flush() if whence == 1 and offset == 0: # tell synonym return self.tell() if whence == 1: # adjust offset by what is read ahead offset -= self._get_readahead_len() try: rv = os.lseek(self.fileno(), offset, whence) except OSError, e: raise IOError(*e.args) else: self._clear_readahead_buf() return rv if getattr(file, "truncate", None): # not all OSes implement truncate def truncate(self, size=-1): self.flush() if size == -1: size = self.tell() try: rv = os.ftruncate(self.fileno(), size) except OSError, e: raise IOError(*e.args) else: self.seek(size) # move position&clear buffer return rv def isatty(self): try: return os.isatty(self.fileno()) except OSError, e: raise IOError(*e.args) # import SSL module here so we can refer to greenio.SSL.exceptionclass try: from OpenSSL import SSL except ImportError: # pyOpenSSL not installed, define exceptions anyway for convenience class SSL(object): class WantWriteError(object): pass class WantReadError(object): pass class ZeroReturnError(object): pass class SysCallError(object): pass def shutdown_safe(sock): """ Shuts down the socket. This is a convenience method for code that wants to gracefully handle regular sockets, SSL.Connection sockets from PyOpenSSL and ssl.SSLSocket objects from Python 2.6 interchangeably. Both types of ssl socket require a shutdown() before close, but they have different arity on their shutdown method. 
Regular sockets don't need a shutdown before close, but it doesn't hurt. """ try: try: # socket, ssl.SSLSocket return sock.shutdown(socket.SHUT_RDWR) except TypeError: # SSL.Connection return sock.shutdown() except socket.error, e: # we don't care if the socket is already closed; # this will often be the case in an http server context if get_errno(e) != errno.ENOTCONN: raise eventlet-0.13.0/eventlet/green/0000755000175000017500000000000012164600754017214 5ustar temototemoto00000000000000eventlet-0.13.0/eventlet/green/select.py0000644000175000017500000000530312164577340021052 0ustar temototemoto00000000000000__select = __import__('select') error = __select.error from eventlet.greenthread import getcurrent from eventlet.hubs import get_hub __patched__ = ['select'] def get_fileno(obj): # The purpose of this function is to exactly replicate # the behavior of the select module when confronted with # abnormal filenos; the details are extensively tested in # the stdlib test/test_select.py. try: f = obj.fileno except AttributeError: if not isinstance(obj, (int, long)): raise TypeError("Expected int or long, got " + type(obj)) return obj else: rv = f() if not isinstance(rv, (int, long)): raise TypeError("Expected int or long, got " + type(rv)) return rv def select(read_list, write_list, error_list, timeout=None): # error checking like this is required by the stdlib unit tests if timeout is not None: try: timeout = float(timeout) except ValueError: raise TypeError("Expected number for timeout") hub = get_hub() timers = [] current = getcurrent() assert hub.greenlet is not current, 'do not call blocking functions from the mainloop' ds = {} for r in read_list: ds[get_fileno(r)] = {'read' : r} for w in write_list: ds.setdefault(get_fileno(w), {})['write'] = w for e in error_list: ds.setdefault(get_fileno(e), {})['error'] = e listeners = [] def on_read(d): original = ds[get_fileno(d)]['read'] current.switch(([original], [], [])) def on_write(d): original = 
ds[get_fileno(d)]['write'] current.switch(([], [original], [])) def on_error(d, _err=None): original = ds[get_fileno(d)]['error'] current.switch(([], [], [original])) def on_timeout2(): current.switch(([], [], [])) def on_timeout(): # ensure that BaseHub.run() has a chance to call self.wait() # at least once before timed out. otherwise the following code # can time out erroneously. # # s1, s2 = socket.socketpair() # print select.select([], [s1], [], 0) timers.append(hub.schedule_call_global(0, on_timeout2)) if timeout is not None: timers.append(hub.schedule_call_global(timeout, on_timeout)) try: for k, v in ds.iteritems(): if v.get('read'): listeners.append(hub.add(hub.READ, k, on_read)) if v.get('write'): listeners.append(hub.add(hub.WRITE, k, on_write)) try: return hub.switch() finally: for l in listeners: hub.remove(l) finally: for t in timers: t.cancel() eventlet-0.13.0/eventlet/green/Queue.py0000644000175000017500000000152112164577340020655 0ustar temototemoto00000000000000from eventlet import queue __all__ = ['Empty', 'Full', 'LifoQueue', 'PriorityQueue', 'Queue'] __patched__ = ['LifoQueue', 'PriorityQueue', 'Queue'] # these classes exist to paper over the major operational difference between # eventlet.queue.Queue and the stdlib equivalents class Queue(queue.Queue): def __init__(self, maxsize=0): if maxsize == 0: maxsize = None super(Queue, self).__init__(maxsize) class PriorityQueue(queue.PriorityQueue): def __init__(self, maxsize=0): if maxsize == 0: maxsize = None super(PriorityQueue, self).__init__(maxsize) class LifoQueue(queue.LifoQueue): def __init__(self, maxsize=0): if maxsize == 0: maxsize = None super(LifoQueue, self).__init__(maxsize) Empty = queue.Empty Full = queue.Full eventlet-0.13.0/eventlet/green/ssl.py0000644000175000017500000003034712164577340020402 0ustar temototemoto00000000000000__ssl = __import__('ssl') from eventlet.patcher import slurp_properties slurp_properties(__ssl, globals(), srckeys=dir(__ssl)) import sys import errno time = 
__import__('time') from eventlet.support import get_errno from eventlet.hubs import trampoline from eventlet.greenio import set_nonblocking, GreenSocket, SOCKET_CLOSED, CONNECT_ERR, CONNECT_SUCCESS orig_socket = __import__('socket') socket = orig_socket.socket if sys.version_info >= (2,7): has_ciphers = True timeout_exc = SSLError else: has_ciphers = False timeout_exc = orig_socket.timeout __patched__ = ['SSLSocket', 'wrap_socket', 'sslwrap_simple'] class GreenSSLSocket(__ssl.SSLSocket): """ This is a green version of the SSLSocket class from the ssl module added in 2.6. For documentation on it, please see the Python standard documentation. Python nonblocking ssl objects don't give errors when the other end of the socket is closed (they do notice when the other end is shutdown, though). Any write/read operations will simply hang if the socket is closed from the other end. There is no obvious fix for this problem; it appears to be a limitation of Python's ssl object implementation. A workaround is to set a reasonable timeout on the socket using settimeout(), and to close/reopen the connection when a timeout occurs at an unexpected juncture in the code. 
""" # we are inheriting from SSLSocket because its constructor calls # do_handshake whose behavior we wish to override def __init__(self, sock, *args, **kw): if not isinstance(sock, GreenSocket): sock = GreenSocket(sock) self.act_non_blocking = sock.act_non_blocking self._timeout = sock.gettimeout() super(GreenSSLSocket, self).__init__(sock.fd, *args, **kw) # the superclass initializer trashes the methods so we remove # the local-object versions of them and let the actual class # methods shine through try: for fn in orig_socket._delegate_methods: delattr(self, fn) except AttributeError: pass def settimeout(self, timeout): self._timeout = timeout def gettimeout(self): return self._timeout def setblocking(self, flag): if flag: self.act_non_blocking = False self._timeout = None else: self.act_non_blocking = True self._timeout = 0.0 def _call_trampolining(self, func, *a, **kw): if self.act_non_blocking: return func(*a, **kw) else: while True: try: return func(*a, **kw) except SSLError, exc: if get_errno(exc) == SSL_ERROR_WANT_READ: trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out')) elif get_errno(exc) == SSL_ERROR_WANT_WRITE: trampoline(self, write=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out')) else: raise def write(self, data): """Write DATA to the underlying SSL channel. Returns number of bytes of DATA actually transmitted.""" return self._call_trampolining( super(GreenSSLSocket, self).write, data) def read(self, len=1024): """Read up to LEN bytes and return them. 
        Return zero-length string on EOF."""
        return self._call_trampolining(
            super(GreenSSLSocket, self).read, len)

    def send(self, data, flags=0):
        if self._sslobj:
            return self._call_trampolining(
                super(GreenSSLSocket, self).send, data, flags)
        else:
            trampoline(self, write=True, timeout_exc=timeout_exc('timed out'))
            return socket.send(self, data, flags)

    def sendto(self, data, addr, flags=0):
        # *NOTE: gross, copied code from ssl.py because it's not factored well enough to be used as-is
        if self._sslobj:
            raise ValueError("sendto not allowed on instances of %s" %
                             self.__class__)
        else:
            trampoline(self, write=True, timeout_exc=timeout_exc('timed out'))
            return socket.sendto(self, data, addr, flags)

    def sendall(self, data, flags=0):
        # *NOTE: gross, copied code from ssl.py because it's not factored well enough to be used as-is
        if self._sslobj:
            if flags != 0:
                raise ValueError(
                    "non-zero flags not allowed in calls to sendall() on %s" %
                    self.__class__)
            amount = len(data)
            count = 0
            while (count < amount):
                v = self.send(data[count:])
                count += v
                if v == 0:
                    trampoline(self, write=True, timeout_exc=timeout_exc('timed out'))
            return amount
        else:
            while True:
                try:
                    # was socket.sendall(self, buflen, flags); 'buflen' is
                    # undefined in this method -- the payload is 'data'
                    return socket.sendall(self, data, flags)
                except orig_socket.error, e:
                    if self.act_non_blocking:
                        raise
                    if get_errno(e) == errno.EWOULDBLOCK:
                        trampoline(self, write=True, timeout=self.gettimeout(),
                                   timeout_exc=timeout_exc('timed out'))
                    if get_errno(e) in SOCKET_CLOSED:
                        return ''
                    raise

    def recv(self, buflen=1024, flags=0):
        # *NOTE: gross, copied code from ssl.py because it's not factored well enough to be used as-is
        if self._sslobj:
            if flags != 0:
                raise ValueError(
                    "non-zero flags not allowed in calls to recv() on %s" %
                    self.__class__)
            read = self.read(buflen)
            return read
        else:
            while True:
                try:
                    return socket.recv(self, buflen, flags)
                except orig_socket.error, e:
                    if self.act_non_blocking:
                        raise
                    if get_errno(e) == errno.EWOULDBLOCK:
                        trampoline(self, read=True, timeout=self.gettimeout(),
                                   timeout_exc=timeout_exc('timed out'))
                    if
get_errno(e) in SOCKET_CLOSED: return '' raise def recv_into (self, buffer, nbytes=None, flags=0): if not self.act_non_blocking: trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out')) return super(GreenSSLSocket, self).recv_into(buffer, nbytes, flags) def recvfrom (self, addr, buflen=1024, flags=0): if not self.act_non_blocking: trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out')) return super(GreenSSLSocket, self).recvfrom(addr, buflen, flags) def recvfrom_into (self, buffer, nbytes=None, flags=0): if not self.act_non_blocking: trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out')) return super(GreenSSLSocket, self).recvfrom_into(buffer, nbytes, flags) def unwrap(self): return GreenSocket(self._call_trampolining( super(GreenSSLSocket, self).unwrap)) def do_handshake(self): """Perform a TLS/SSL handshake.""" return self._call_trampolining( super(GreenSSLSocket, self).do_handshake) def _socket_connect(self, addr): real_connect = socket.connect if self.act_non_blocking: return real_connect(self, addr) else: # *NOTE: gross, copied code from greenio because it's not factored # well enough to reuse if self.gettimeout() is None: while True: try: return real_connect(self, addr) except orig_socket.error, exc: if get_errno(exc) in CONNECT_ERR: trampoline(self, write=True) elif get_errno(exc) in CONNECT_SUCCESS: return else: raise else: end = time.time() + self.gettimeout() while True: try: real_connect(self, addr) except orig_socket.error, exc: if get_errno(exc) in CONNECT_ERR: trampoline(self, write=True, timeout=end-time.time(), timeout_exc=timeout_exc('timed out')) elif get_errno(exc) in CONNECT_SUCCESS: return else: raise if time.time() >= end: raise timeout_exc('timed out') def connect(self, addr): """Connects to remote ADDR, and then wraps the connection in an SSL channel.""" # *NOTE: grrrrr copied this code from ssl.py because of the reference # to 
socket.connect which we don't want to call directly if self._sslobj: raise ValueError("attempt to connect already-connected SSLSocket!") self._socket_connect(addr) if has_ciphers: self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) else: self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs) if self.do_handshake_on_connect: self.do_handshake() def accept(self): """Accepts a new connection from a remote client, and returns a tuple containing that new connection wrapped with a server-side SSL channel, and the address of the remote client.""" # RDW grr duplication of code from greenio if self.act_non_blocking: newsock, addr = socket.accept(self) else: while True: try: newsock, addr = socket.accept(self) set_nonblocking(newsock) break except orig_socket.error, e: if get_errno(e) != errno.EWOULDBLOCK: raise trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out')) new_ssl = type(self)(newsock, keyfile=self.keyfile, certfile=self.certfile, server_side=True, cert_reqs=self.cert_reqs, ssl_version=self.ssl_version, ca_certs=self.ca_certs, do_handshake_on_connect=self.do_handshake_on_connect, suppress_ragged_eofs=self.suppress_ragged_eofs) return (new_ssl, addr) def dup(self): raise NotImplementedError("Can't dup an ssl object") SSLSocket = GreenSSLSocket def wrap_socket(sock, *a, **kw): return GreenSSLSocket(sock, *a, **kw) if hasattr(__ssl, 'sslwrap_simple'): def sslwrap_simple(sock, keyfile=None, certfile=None): """A replacement for the old socket.ssl function. Designed for compability with Python 2.5 and earlier. 
Will disappear in Python 3.0.""" ssl_sock = GreenSSLSocket(sock, keyfile=keyfile, certfile=certfile, server_side=False, cert_reqs=CERT_NONE, ssl_version=PROTOCOL_SSLv23, ca_certs=None) return ssl_sock eventlet-0.13.0/eventlet/green/ftplib.py0000644000175000017500000000046312164577340021055 0ustar temototemoto00000000000000from eventlet import patcher # *NOTE: there might be some funny business with the "SOCKS" module # if it even still exists from eventlet.green import socket patcher.inject('ftplib', globals(), ('socket', socket)) del patcher # Run test program when run as a script if __name__ == '__main__': test() eventlet-0.13.0/eventlet/green/time.py0000644000175000017500000000035712164577340020535 0ustar temototemoto00000000000000__time = __import__('time') from eventlet.patcher import slurp_properties __patched__ = ['sleep'] slurp_properties(__time, globals(), ignore=__patched__, srckeys=dir(__time)) from eventlet.greenthread import sleep sleep # silence pyflakes eventlet-0.13.0/eventlet/green/urllib.py0000644000175000017500000000223712164577340021067 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import socket from eventlet.green import time from eventlet.green import httplib from eventlet.green import ftplib to_patch = [('socket', socket), ('httplib', httplib), ('time', time), ('ftplib', ftplib)] try: from eventlet.green import ssl to_patch.append(('ssl', ssl)) except ImportError: pass patcher.inject('urllib', globals(), *to_patch) # patch a bunch of things that have imports inside the # function body; this is lame and hacky but I don't feel # too bad because urllib is a hacky pile of junk that no # one should be using anyhow URLopener.open_http = patcher.patch_function(URLopener.open_http, ('httplib', httplib)) if hasattr(URLopener, 'open_https'): URLopener.open_https = patcher.patch_function(URLopener.open_https, ('httplib', httplib)) URLopener.open_ftp = patcher.patch_function(URLopener.open_ftp, ('ftplib', ftplib)) 
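These `patcher.patch_function` wrappers work by temporarily swapping the green modules into `sys.modules` while the wrapped function executes, so that imports performed inside the function body resolve to the green versions. A rough sketch of that idea (hypothetical helper name; the real eventlet patcher also handles caching, nesting, and module injection):

```python
import sys

def patch_function_sketch(func, *additional_modules):
    """Return a wrapper that installs (name, module) pairs in sys.modules
    for the duration of the call, then restores the previous state."""
    def patched(*args, **kw):
        saved = {}
        for name, mod in additional_modules:
            saved[name] = sys.modules.get(name)   # remember what was there
            sys.modules[name] = mod               # install the green module
        try:
            return func(*args, **kw)
        finally:
            for name, prev in saved.items():
                if prev is None:
                    sys.modules.pop(name, None)   # nothing was there before
                else:
                    sys.modules[name] = prev      # put the original back
    return patched
```

Any `import` executed while the patched function runs hits the substituted entry first, which is exactly why this trick catches stdlib functions that import inside their bodies.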
ftpwrapper.init = patcher.patch_function(ftpwrapper.init, ('ftplib', ftplib)) ftpwrapper.retrfile = patcher.patch_function(ftpwrapper.retrfile, ('ftplib', ftplib)) del patcher # Run test program when run as a script if __name__ == '__main__': main() eventlet-0.13.0/eventlet/green/threading.py0000644000175000017500000000576412164577340021553 0ustar temototemoto00000000000000"""Implements the standard threading module, using greenthreads.""" from eventlet import patcher from eventlet.green import thread from eventlet.green import time from eventlet.support import greenlets as greenlet __patched__ = ['_start_new_thread', '_allocate_lock', '_get_ident', '_sleep', 'local', 'stack_size', 'Lock', 'currentThread', 'current_thread', '_after_fork', '_shutdown'] __orig_threading = patcher.original('threading') __threadlocal = __orig_threading.local() patcher.inject('threading', globals(), ('thread', thread), ('time', time)) del patcher _count = 1 class _GreenThread(object): """Wrapper for GreenThread objects to provide Thread-like attributes and methods""" def __init__(self, g): global _count self._g = g self._name = 'GreenThread-%d' % _count _count += 1 def __repr__(self): return '<_GreenThread(%s, %r)>' % (self._name, self._g) def join(self, timeout=None): return self._g.wait() def getName(self): return self._name get_name = getName def setName(self, name): self._name = str(name) set_name = setName name = property(getName, setName) ident = property(lambda self: id(self._g)) def isAlive(self): return True is_alive = isAlive daemon = property(lambda self: True) def isDaemon(self): return self.daemon is_daemon = isDaemon __threading = None def _fixup_thread(t): # Some third-party packages (lockfile) will try to patch the # threading.Thread class with a get_name attribute if it doesn't # exist. 
Since we might return Thread objects from the original # threading package that won't get patched, let's make sure each # individual object gets patched too if our patched threading.Thread # class has been patched. This is why monkey patching can be bad... global __threading if not __threading: __threading = __import__('threading') if (hasattr(__threading.Thread, 'get_name') and not hasattr(t, 'get_name')): t.get_name = t.getName return t def current_thread(): g = greenlet.getcurrent() if not g: # Not currently in a greenthread, fall back to standard function return _fixup_thread(__orig_threading.current_thread()) try: active = __threadlocal.active except AttributeError: active = __threadlocal.active = {} try: t = active[id(g)] except KeyError: # Add green thread to active if we can clean it up on exit def cleanup(g): del active[id(g)] try: g.link(cleanup) except AttributeError: # Not a GreenThread type, so there's no way to hook into # the green thread exiting. Fall back to the standard # function then.
t = _fixup_thread(__orig_threading.currentThread()) else: t = active[id(g)] = _GreenThread(g) return t currentThread = current_thread eventlet-0.13.0/eventlet/green/asynchat.py0000644000175000017500000000031612164577340021404 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import asyncore from eventlet.green import socket patcher.inject('asynchat', globals(), ('asyncore', asyncore), ('socket', socket)) del patchereventlet-0.13.0/eventlet/green/__init__.py0000644000175000017500000000012412164577340021326 0ustar temototemoto00000000000000# this package contains modules from the standard library converted to use eventlet eventlet-0.13.0/eventlet/green/httplib.py0000644000175000017500000000046012164577340021240 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import socket to_patch = [('socket', socket)] try: from eventlet.green import ssl to_patch.append(('ssl', ssl)) except ImportError: pass patcher.inject('httplib', globals(), *to_patch) if __name__ == '__main__': test() eventlet-0.13.0/eventlet/green/profile.py0000644000175000017500000002230112164577340021230 0ustar temototemoto00000000000000# Copyright (c) 2010, CCP Games # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # * Neither the name of CCP Games nor the # names of its contributors may be used to endorse or promote products # derived from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY CCP GAMES ``AS IS'' AND ANY # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE # DISCLAIMED. IN NO EVENT SHALL CCP GAMES BE LIABLE FOR ANY # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """This module is API-equivalent to the standard library :mod:`profile` module but it is greenthread-aware as well as thread-aware. Use this module to profile Eventlet-based applications in preference to either :mod:`profile` or :mod:`cProfile`. FIXME: No testcases for this module. 
""" profile_orig = __import__('profile') __all__ = profile_orig.__all__ from eventlet.patcher import slurp_properties slurp_properties(profile_orig, globals(), srckeys=dir(profile_orig)) import new import sys import traceback import functools from eventlet import greenthread from eventlet import patcher thread = patcher.original('thread') # non-monkeypatched module needed #This class provides the start() and stop() functions class Profile(profile_orig.Profile): base = profile_orig.Profile def __init__(self, timer = None, bias=None): self.current_tasklet = greenthread.getcurrent() self.thread_id = thread.get_ident() self.base.__init__(self, timer, bias) self.sleeping = {} def __call__(self, *args): "make callable, allowing an instance to be the profiler" r = self.dispatcher(*args) def _setup(self): self.cur, self.timings, self.current_tasklet = None, {}, greenthread.getcurrent() self.thread_id = thread.get_ident() self.simulate_call("profiler") def start(self, name = "start"): if getattr(self, "running", False): return self._setup() self.simulate_call("start") self.running = True sys.setprofile(self.dispatcher) def stop(self): sys.setprofile(None) self.running = False self.TallyTimings() #special cases for the original run commands, makin sure to #clear the timer context. def runctx(self, cmd, globals, locals): self._setup() try: return profile_orig.Profile.runctx(self, cmd, globals, locals) finally: self.TallyTimings() def runcall(self, func, *args, **kw): self._setup() try: return profile_orig.Profile.runcall(self, func, *args, **kw) finally: self.TallyTimings() def trace_dispatch_return_extend_back(self, frame, t): """A hack function to override error checking in parent class. It allows invalid returns (where frames weren't preveiously entered into the profiler) which can happen for all the tasklets that suddenly start to get monitored. 
This means that the time will eventually be attributed to a call high in the chain, when there is a tasklet switch """ if isinstance(self.cur[-2], Profile.fake_frame): return False self.trace_dispatch_call(frame, 0) return self.trace_dispatch_return(frame, t); def trace_dispatch_c_return_extend_back(self, frame, t): #same for c return if isinstance(self.cur[-2], Profile.fake_frame): return False #ignore bogus returns self.trace_dispatch_c_call(frame, 0) return self.trace_dispatch_return(frame,t) #Add "return safety" to the dispatchers dispatch = dict(profile_orig.Profile.dispatch) dispatch.update({ "return": trace_dispatch_return_extend_back, "c_return": trace_dispatch_c_return_extend_back, }) def SwitchTasklet(self, t0, t1, t): #tally the time spent in the old tasklet pt, it, et, fn, frame, rcur = self.cur cur = (pt, it+t, et, fn, frame, rcur) #we are switching to a new tasklet, store the old self.sleeping[t0] = cur, self.timings self.current_tasklet = t1 #find the new one try: self.cur, self.timings = self.sleeping.pop(t1) except KeyError: self.cur, self.timings = None, {} self.simulate_call("profiler") self.simulate_call("new_tasklet") def ContextWrap(f): @functools.wraps(f) def ContextWrapper(self, arg, t): current = greenthread.getcurrent() if current != self.current_tasklet: self.SwitchTasklet(self.current_tasklet, current, t) t = 0.0 #the time was billed to the previous tasklet return f(self, arg, t) return ContextWrapper #Add automatic tasklet detection to the callbacks. dispatch = dict([(key, ContextWrap(val)) for key,val in dispatch.iteritems()]) def TallyTimings(self): oldtimings = self.sleeping self.sleeping = {} #first, unwind the main "cur" self.cur = self.Unwind(self.cur, self.timings) #we must keep the timings dicts separate for each tasklet, since it contains #the 'ns' item, recursion count of each function in that tasklet. This is #used in the Unwind dude. 
for tasklet, (cur,timings) in oldtimings.iteritems(): self.Unwind(cur, timings) for k,v in timings.iteritems(): if k not in self.timings: self.timings[k] = v else: #accumulate all to the self.timings cc, ns, tt, ct, callers = self.timings[k] #ns should be 0 after unwinding cc+=v[0] tt+=v[2] ct+=v[3] for k1,v1 in v[4].iteritems(): callers[k1] = callers.get(k1, 0)+v1 self.timings[k] = cc, ns, tt, ct, callers def Unwind(self, cur, timings): "A function to unwind a 'cur' frame and tally the results" "see profile.trace_dispatch_return() for details" #also see simulate_cmd_complete() while(cur[-1]): rpt, rit, ret, rfn, frame, rcur = cur frame_total = rit+ret if rfn in timings: cc, ns, tt, ct, callers = timings[rfn] else: cc, ns, tt, ct, callers = 0, 0, 0, 0, {} if not ns: ct = ct + frame_total cc = cc + 1 if rcur: ppt, pit, pet, pfn, pframe, pcur = rcur else: pfn = None if pfn in callers: callers[pfn] = callers[pfn] + 1 # hack: gather more elif pfn: callers[pfn] = 1 timings[rfn] = cc, ns - 1, tt + rit, ct, callers ppt, pit, pet, pfn, pframe, pcur = rcur rcur = ppt, pit + rpt, pet + frame_total, pfn, pframe, pcur cur = rcur return cur # run statements shamelessly stolen from profile.py def run(statement, filename=None, sort=-1): """Run statement under profiler optionally saving results in filename This function takes a single argument that can be passed to the "exec" statement, and an optional file name. In all cases this routine attempts to "exec" its first argument and gather profiling statistics from the execution. If no file name is present, then this function automatically prints a simple profiling report, sorted by the standard name string (file/line/function-name) that is presented in each line. 
""" prof = Profile() try: prof = prof.run(statement) except SystemExit: pass if filename is not None: prof.dump_stats(filename) else: return prof.print_stats(sort) def runctx(statement, globals, locals, filename=None): """Run statement under profiler, supplying your own globals and locals, optionally saving results in filename. statement and filename have the same semantics as profile.run """ prof = Profile() try: prof = prof.runctx(statement, globals, locals) except SystemExit: pass if filename is not None: prof.dump_stats(filename) else: return prof.print_stats() eventlet-0.13.0/eventlet/green/zmq.py0000644000175000017500000003175212164577340020411 0ustar temototemoto00000000000000"""The :mod:`zmq` module wraps the :class:`Socket` and :class:`Context` found in :mod:`pyzmq ` to be non blocking """ from __future__ import with_statement __zmq__ = __import__('zmq') from eventlet import hubs from eventlet.patcher import slurp_properties from eventlet.support import greenlets as greenlet __patched__ = ['Context', 'Socket'] slurp_properties(__zmq__, globals(), ignore=__patched__) from collections import deque try: # alias XREQ/XREP to DEALER/ROUTER if available if not hasattr(__zmq__, 'XREQ'): XREQ = DEALER if not hasattr(__zmq__, 'XREP'): XREP = ROUTER except NameError: pass class LockReleaseError(Exception): pass class _QueueLock(object): """A Lock that can be acquired by at most one thread. Any other thread calling acquire will be blocked in a queue. When release is called, the threads are awoken in the order they blocked, one at a time. 
This lock can be required recursively by the same thread.""" def __init__(self): self._waiters = deque() self._count = 0 self._holder = None self._hub = hubs.get_hub() def __nonzero__(self): return self._count def __enter__(self): self.acquire() def __exit__(self, type, value, traceback): self.release() def acquire(self): current = greenlet.getcurrent() if (self._waiters or self._count > 0) and self._holder is not current: # block until lock is free self._waiters.append(current) self._hub.switch() w = self._waiters.popleft() assert w is current, 'Waiting threads woken out of order' assert self._count == 0, 'After waking a thread, the lock must be unacquired' self._holder = current self._count += 1 def release(self): if self._count <= 0: raise LockReleaseError("Cannot release unacquired lock") self._count -= 1 if self._count == 0: self._holder = None if self._waiters: # wake next self._hub.schedule_call_global(0, self._waiters[0].switch) class _BlockedThread(object): """Is either empty, or represents a single blocked thread that blocked itself by calling the block() method. The thread can be awoken by calling wake(). Wake() can be called multiple times and all but the first call will have no effect.""" def __init__(self): self._blocked_thread = None self._wakeupper = None self._hub = hubs.get_hub() def __nonzero__(self): return self._blocked_thread is not None def block(self): if self._blocked_thread is not None: raise Exception("Cannot block more than one thread on one BlockedThread") self._blocked_thread = greenlet.getcurrent() try: self._hub.switch() finally: self._blocked_thread = None # cleanup the wakeup task if self._wakeupper is not None: # Important to cancel the wakeup task so it doesn't # spuriously wake this greenthread later on. self._wakeupper.cancel() self._wakeupper = None def wake(self): """Schedules the blocked thread to be awoken and return True. 
If wake has already been called or if there is no blocked thread, then this call has no effect and returns False.""" if self._blocked_thread is not None and self._wakeupper is None: self._wakeupper = self._hub.schedule_call_global(0, self._blocked_thread.switch) return True return False class Context(__zmq__.Context): """Subclass of :class:`zmq.core.context.Context` """ def socket(self, socket_type): """Overridden method to ensure that the green version of socket is used Behaves the same as :meth:`zmq.core.context.Context.socket`, but ensures that a :class:`Socket` with all of its send and recv methods set to be non-blocking is returned """ if self.closed: raise ZMQError(ENOTSUP) return Socket(self, socket_type) def _wraps(source_fn): """A decorator that copies the __name__ and __doc__ from the given function """ def wrapper(dest_fn): dest_fn.__name__ = source_fn.__name__ dest_fn.__doc__ = source_fn.__doc__ return dest_fn return wrapper # Implementation notes: Each socket in 0mq contains a pipe that the # background IO threads use to communicate with the socket. These # events are important because they tell the socket when it is able to # send and when it has messages waiting to be received. The read end # of the events pipe is the same FD that getsockopt(zmq.FD) returns. # # Events are read from the socket's event pipe only on the thread that # the 0mq context is associated with, which is the native thread the # greenthreads are running on, and the only operations that cause the # events to be read and processed are send(), recv() and # getsockopt(zmq.EVENTS). This means that after doing any of these # three operations, the ability of the socket to send or receive a # message without blocking may have changed, but after the events are # read the FD is no longer readable so the hub may not signal our # listener. 
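The `_BlockedThread` helper these notes revolve around has a small contract: at most one blocked waiter, and an idempotent `wake()` that reports whether it actually scheduled a wakeup. A sketch of that contract using OS threads and `threading.Event` purely for illustration — the real class parks a greenthread on the hub instead of blocking a native thread:

```python
import threading

class BlockedSlot:
    """Illustrative stand-in for _BlockedThread: one waiter, idempotent wake."""
    def __init__(self):
        self._event = None          # None <=> no thread is currently blocked
        self._lock = threading.Lock()

    def block(self):
        with self._lock:
            if self._event is not None:
                raise RuntimeError("Cannot block more than one thread")
            ev = self._event = threading.Event()
        ev.wait()                   # the real code switches to the hub here
        with self._lock:
            self._event = None      # cleanup, mirroring the finally: block

    def wake(self):
        """Wake the blocked thread, if any; extra calls are no-ops."""
        with self._lock:
            if self._event is not None and not self._event.is_set():
                self._event.set()
                return True
            return False
```

The `wake()` return value is what lets the event callback decide whether it still needs to call `getsockopt(EVENTS)` to drain the socket's event pipe.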
# # If we understand that after calling send() a message might be ready # to be received and that after calling recv() a message might be able # to be sent, what should we do next? There are two approaches: # # 1. Always wake the other thread if there is one waiting. This # wakeup may be spurious because the socket might not actually be # ready for a send() or recv(). However, if a thread is in a # tight-loop successfully calling send() or recv() then the wakeups # are naturally batched and there's very little cost added to each # send/recv call. # # or # # 2. Call getsockopt(zmq.EVENTS) and explicitly check if the other # thread should be woken up. This avoids spurious wake-ups but may # add overhead because getsockopt will cause all events to be # processed, whereas send and recv throttle processing # events. Admittedly, all of the events will need to be processed # eventually, but it is likely faster to batch the processing. # # Which approach is better? I have no idea. # # TODO: # - Support MessageTrackers and make MessageTracker.wait green _Socket = __zmq__.Socket _Socket_recv = _Socket.recv _Socket_send = _Socket.send _Socket_send_multipart = _Socket.send_multipart _Socket_recv_multipart = _Socket.recv_multipart _Socket_getsockopt = _Socket.getsockopt class Socket(_Socket): """Green version of :class:`zmq.core.socket.Socket The following three methods are always overridden: * send * recv * getsockopt To ensure that the ``zmq.NOBLOCK`` flag is set and that sending or recieving is deferred to the hub (using :func:`eventlet.hubs.trampoline`) if a ``zmq.EAGAIN`` (retry) error is raised For some socket types, the following methods are also overridden: * send_multipart * recv_multipart """ def __init__(self, context, socket_type): super(Socket, self).__init__(context, socket_type) self.__dict__['_eventlet_send_event'] = _BlockedThread() self.__dict__['_eventlet_recv_event'] = _BlockedThread() self.__dict__['_eventlet_send_lock'] = _QueueLock() 
self.__dict__['_eventlet_recv_lock'] = _QueueLock() def event(fd): # Some events arrived at the zmq socket. This may mean # there's a message that can be read or there's space for # a message to be written. send_wake = self._eventlet_send_event.wake() recv_wake = self._eventlet_recv_event.wake() if not send_wake and not recv_wake: # if no waiting send or recv thread was woken up, then # force the zmq socket's events to be processed to # avoid repeated wakeups _Socket_getsockopt(self, EVENTS) hub = hubs.get_hub() self.__dict__['_eventlet_listener'] = hub.add(hub.READ, self.getsockopt(FD), event) @_wraps(_Socket.close) def close(self, linger=None): super(Socket, self).close(linger) if self._eventlet_listener is not None: hubs.get_hub().remove(self._eventlet_listener) self.__dict__['_eventlet_listener'] = None # wake any blocked threads self._eventlet_send_event.wake() self._eventlet_recv_event.wake() @_wraps(_Socket.getsockopt) def getsockopt(self, option): result = _Socket_getsockopt(self, option) if option == EVENTS: # Getting the events causes the zmq socket to process # events which may mean a msg can be sent or received. If # there is a greenthread blocked and waiting for events, # it will miss the edge-triggered read event, so wake it # up. if (result & POLLOUT): self._eventlet_send_event.wake() if (result & POLLIN): self._eventlet_recv_event.wake() return result @_wraps(_Socket.send) def send(self, msg, flags=0, copy=True, track=False): """A send method that's safe to use when multiple greenthreads are calling send, send_multipart, recv and recv_multipart on the same socket. """ if flags & NOBLOCK: result = _Socket_send(self, msg, flags, copy, track) # Instead of calling both wake methods, could call # self.getsockopt(EVENTS) which would trigger wakeups if # needed. self._eventlet_send_event.wake() self._eventlet_recv_event.wake() return result # TODO: pyzmq will copy the message buffer and create Message # objects under some circumstances. 
We could do that work here # once to avoid doing it every time the send is retried. flags |= NOBLOCK with self._eventlet_send_lock: while True: try: return _Socket_send(self, msg, flags, copy, track) except ZMQError, e: if e.errno == EAGAIN: self._eventlet_send_event.block() else: raise finally: # The call to send processes 0mq events and may # make the socket ready to recv. Wake the next # receiver. (Could check EVENTS for POLLIN here) self._eventlet_recv_event.wake() @_wraps(_Socket.send_multipart) def send_multipart(self, msg_parts, flags=0, copy=True, track=False): """A send_multipart method that's safe to use when multiple greenthreads are calling send, send_multipart, recv and recv_multipart on the same socket. """ if flags & NOBLOCK: return _Socket_send_multipart(self, msg_parts, flags, copy, track) # acquire lock here so the subsequent calls to send for the # message parts after the first don't block with self._eventlet_send_lock: return _Socket_send_multipart(self, msg_parts, flags, copy, track) @_wraps(_Socket.recv) def recv(self, flags=0, copy=True, track=False): """A recv method that's safe to use when multiple greenthreads are calling send, send_multipart, recv and recv_multipart on the same socket. """ if flags & NOBLOCK: msg = _Socket_recv(self, flags, copy, track) # Instead of calling both wake methods, could call # self.getsockopt(EVENTS) which would trigger wakeups if # needed. self._eventlet_send_event.wake() self._eventlet_recv_event.wake() return msg flags |= NOBLOCK with self._eventlet_recv_lock: while True: try: return _Socket_recv(self, flags, copy, track) except ZMQError, e: if e.errno == EAGAIN: self._eventlet_recv_event.block() else: raise finally: # The call to recv processes 0mq events and may # make the socket ready to send. Wake the next # receiver. 
(Could check EVENTS for POLLOUT here) self._eventlet_send_event.wake() @_wraps(_Socket.recv_multipart) def recv_multipart(self, flags=0, copy=True, track=False): """A recv_multipart method that's safe to use when multiple greenthreads are calling send, send_multipart, recv and recv_multipart on the same socket. """ if flags & NOBLOCK: return _Socket_recv_multipart(self, flags, copy, track) # acquire lock here so the subsequent calls to recv for the # message parts after the first don't block with self._eventlet_recv_lock: return _Socket_recv_multipart(self, flags, copy, track) eventlet-0.13.0/eventlet/green/_socket_nodns.py0000644000175000017500000000743112164577340022427 0ustar temototemoto00000000000000__socket = __import__('socket') __all__ = __socket.__all__ __patched__ = ['fromfd', 'socketpair', 'ssl', 'socket'] from eventlet.patcher import slurp_properties slurp_properties(__socket, globals(), ignore=__patched__, srckeys=dir(__socket)) os = __import__('os') import sys import warnings from eventlet.hubs import get_hub from eventlet.greenio import GreenSocket as socket from eventlet.greenio import SSL as _SSL # for exceptions from eventlet.greenio import _GLOBAL_DEFAULT_TIMEOUT from eventlet.greenio import _fileobject try: __original_fromfd__ = __socket.fromfd def fromfd(*args): return socket(__original_fromfd__(*args)) except AttributeError: pass try: __original_socketpair__ = __socket.socketpair def socketpair(*args): one, two = __original_socketpair__(*args) return socket(one), socket(two) except AttributeError: pass def _convert_to_sslerror(ex): """ Transliterates SSL.SysCallErrors to socket.sslerrors""" return sslerror((ex.args[0], ex.args[1])) class GreenSSLObject(object): """ Wrapper object around the SSLObjects returned by socket.ssl, which have a slightly different interface from SSL.Connection objects. 
""" def __init__(self, green_ssl_obj): """ Should only be called by a 'green' socket.ssl """ self.connection = green_ssl_obj try: # if it's already connected, do the handshake self.connection.getpeername() except: pass else: try: self.connection.do_handshake() except _SSL.SysCallError, e: raise _convert_to_sslerror(e) def read(self, n=1024): """If n is provided, read n bytes from the SSL connection, otherwise read until EOF. The return value is a string of the bytes read.""" try: return self.connection.read(n) except _SSL.ZeroReturnError: return '' except _SSL.SysCallError, e: raise _convert_to_sslerror(e) def write(self, s): """Writes the string s to the on the object's SSL connection. The return value is the number of bytes written. """ try: return self.connection.write(s) except _SSL.SysCallError, e: raise _convert_to_sslerror(e) def server(self): """ Returns a string describing the server's certificate. Useful for debugging purposes; do not parse the content of this string because its format can't be parsed unambiguously. """ return str(self.connection.get_peer_certificate().get_subject()) def issuer(self): """Returns a string describing the issuer of the server's certificate. Useful for debugging purposes; do not parse the content of this string because its format can't be parsed unambiguously.""" return str(self.connection.get_peer_certificate().get_issuer()) try: try: # >= Python 2.6 from eventlet.green import ssl as ssl_module sslerror = __socket.sslerror __socket.ssl def ssl(sock, certificate=None, private_key=None): warnings.warn("socket.ssl() is deprecated. 
Use ssl.wrap_socket() instead.", DeprecationWarning, stacklevel=2) return ssl_module.sslwrap_simple(sock, private_key, certificate) except ImportError: # <= Python 2.5 compatibility sslerror = __socket.sslerror __socket.ssl def ssl(sock, certificate=None, private_key=None): from eventlet import util wrapped = util.wrap_ssl(sock, certificate, private_key) return GreenSSLObject(wrapped) except AttributeError: # if the real socket module doesn't have the ssl method or sslerror # exception, we can't emulate them pass eventlet-0.13.0/eventlet/green/socket.py0000644000175000017500000000365212164577340021070 0ustar temototemoto00000000000000import os import sys from eventlet.hubs import get_hub __import__('eventlet.green._socket_nodns') __socket = sys.modules['eventlet.green._socket_nodns'] __all__ = __socket.__all__ __patched__ = __socket.__patched__ + ['gethostbyname', 'getaddrinfo', 'create_connection',] from eventlet.patcher import slurp_properties slurp_properties(__socket, globals(), srckeys=dir(__socket)) greendns = None if os.environ.get("EVENTLET_NO_GREENDNS",'').lower() != "yes": try: from eventlet.support import greendns except ImportError, ex: pass if greendns: gethostbyname = greendns.gethostbyname getaddrinfo = greendns.getaddrinfo gethostbyname_ex = greendns.gethostbyname_ex getnameinfo = greendns.getnameinfo __patched__ = __patched__ + ['gethostbyname_ex', 'getnameinfo'] def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT, source_address=None): """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. If no *timeout* is supplied, the global default timeout setting returned by :func:`getdefaulttimeout` is used. 
""" msg = "getaddrinfo returns an empty list" host, port = address for res in getaddrinfo(host, port, 0, SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket(af, socktype, proto) if timeout is not _GLOBAL_DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) sock.connect(sa) return sock except error, msg: if sock is not None: sock.close() raise error, msg eventlet-0.13.0/eventlet/green/subprocess.py0000644000175000017500000001066512164577340021772 0ustar temototemoto00000000000000import errno import new import time import eventlet from eventlet import greenio from eventlet import patcher from eventlet.green import os from eventlet.green import select patcher.inject('subprocess', globals(), ('select', select)) subprocess_orig = __import__("subprocess") if getattr(subprocess_orig, 'TimeoutExpired', None) is None: # Backported from Python 3.3. # https://bitbucket.org/eventlet/eventlet/issue/89 class TimeoutExpired(Exception): """This exception is raised when the timeout expires while waiting for a child process. """ def __init__(self, cmd, output=None): self.cmd = cmd self.output = output def __str__(self): return ("Command '%s' timed out after %s seconds" % (self.cmd, self.timeout)) # This is the meat of this module, the green version of Popen. class Popen(subprocess_orig.Popen): """eventlet-friendly version of subprocess.Popen""" # We do not believe that Windows pipes support non-blocking I/O. At least, # the Python file objects stored on our base-class object have no # setblocking() method, and the Python fcntl module doesn't exist on # Windows. (see eventlet.greenio.set_nonblocking()) As the sole purpose of # this __init__() override is to wrap the pipes for eventlet-friendly # non-blocking I/O, don't even bother overriding it on Windows. 
if not subprocess_orig.mswindows: def __init__(self, args, bufsize=0, *argss, **kwds): self.args = args # Forward the call to base-class constructor subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds) # Now wrap the pipes, if any. This logic is loosely borrowed from # eventlet.processes.Process.run() method. for attr in "stdin", "stdout", "stderr": pipe = getattr(self, attr) if pipe is not None and not type(pipe) == greenio.GreenPipe: wrapped_pipe = greenio.GreenPipe(pipe, pipe.mode, bufsize) setattr(self, attr, wrapped_pipe) __init__.__doc__ = subprocess_orig.Popen.__init__.__doc__ def wait(self, timeout=None, check_interval=0.01): # Instead of a blocking OS call, this version of wait() uses logic # borrowed from the eventlet 0.2 processes.Process.wait() method. if timeout is not None: endtime = time.time() + timeout try: while True: status = self.poll() if status is not None: return status if timeout is not None and time.time() > endtime: raise TimeoutExpired(self.args) eventlet.sleep(check_interval) except OSError, e: if e.errno == errno.ECHILD: # no child process, this happens if the child process # already died and has been cleaned up return -1 else: raise wait.__doc__ = subprocess_orig.Popen.wait.__doc__ if not subprocess_orig.mswindows: # don't want to rewrite the original _communicate() method, we # just want a version that uses eventlet.green.select.select() # instead of select.select(). 
try: _communicate = new.function(subprocess_orig.Popen._communicate.im_func.func_code, globals()) try: _communicate_with_select = new.function( subprocess_orig.Popen._communicate_with_select.im_func.func_code, globals()) _communicate_with_poll = new.function( subprocess_orig.Popen._communicate_with_poll.im_func.func_code, globals()) except AttributeError: pass except AttributeError: # 2.4 only has communicate _communicate = new.function(subprocess_orig.Popen.communicate.im_func.func_code, globals()) def communicate(self, input=None): return self._communicate(input) # Borrow subprocess.call() and check_call(), but patch them so they reference # OUR Popen class rather than subprocess.Popen. call = new.function(subprocess_orig.call.func_code, globals()) try: check_call = new.function(subprocess_orig.check_call.func_code, globals()) except AttributeError: pass # check_call added in 2.5 eventlet-0.13.0/eventlet/green/BaseHTTPServer.py0000644000175000017500000000041012164577340022326 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import socket from eventlet.green import SocketServer patcher.inject('BaseHTTPServer', globals(), ('socket', socket), ('SocketServer', SocketServer)) del patcher if __name__ == '__main__': test() eventlet-0.13.0/eventlet/green/CGIHTTPServer.py0000644000175000017500000000104312164577340022061 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import BaseHTTPServer from eventlet.green import SimpleHTTPServer from eventlet.green import urllib from eventlet.green import select test = None # bind prior to patcher.inject to silence pyflakes warning below patcher.inject('CGIHTTPServer', globals(), ('BaseHTTPServer', BaseHTTPServer), ('SimpleHTTPServer', SimpleHTTPServer), ('urllib', urllib), ('select', select)) del patcher if __name__ == '__main__': test() # pyflakes false alarm here unless test = None above 
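The green BaseHTTPServer, CGIHTTPServer, and subprocess modules above are all built on patcher.inject(), which re-executes a stdlib module so that its own import statements pick up green replacements. The following is a deliberately simplified, stdlib-only sketch of that injection idea — the names inject, SRC, and fake_socket are invented for the demo, and real eventlet's patcher does considerably more bookkeeping:

```python
# Sketch of import-time dependency injection (the idea behind
# patcher.inject). We seed sys.modules with replacement modules,
# execute a fresh copy of the target, then restore sys.modules.
import sys
import types

def inject(module_source, module_name, replacements):
    saved = {name: sys.modules.get(name) for name in replacements}
    sys.modules.update(replacements)
    try:
        mod = types.ModuleType(module_name)
        # "import socket" inside the source finds the fake in sys.modules
        exec(module_source, mod.__dict__)
        return mod
    finally:
        for name, old in saved.items():
            if old is None:
                sys.modules.pop(name, None)
            else:
                sys.modules[name] = old

# A toy "library" that imports socket at module level, like SocketServer.
SRC = "import socket\ndef family():\n    return socket.AF_INET\n"

fake_socket = types.ModuleType("socket")
fake_socket.AF_INET = "green-AF_INET"   # stand-in attribute for the demo

mod = inject(SRC, "toylib", {"socket": fake_socket})
```

The injected copy keeps its reference to the fake module even after sys.modules is restored, which is exactly why code injected this way keeps using the green socket while the rest of the process sees the real one.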
eventlet-0.13.0/eventlet/green/os.py0000644000175000017500000000451012164577340020213 0ustar temototemoto00000000000000os_orig = __import__("os") import errno socket = __import__("socket") from eventlet import greenio from eventlet.support import get_errno from eventlet import greenthread from eventlet import hubs from eventlet.patcher import slurp_properties __all__ = os_orig.__all__ __patched__ = ['fdopen', 'read', 'write', 'wait', 'waitpid'] slurp_properties(os_orig, globals(), ignore=__patched__, srckeys=dir(os_orig)) def fdopen(fd, *args, **kw): """fdopen(fd [, mode='r' [, bufsize]]) -> file_object Return an open file object connected to a file descriptor.""" if not isinstance(fd, int): raise TypeError('fd should be int, not %r' % fd) try: return greenio.GreenPipe(fd, *args, **kw) except IOError, e: raise OSError(*e.args) __original_read__ = os_orig.read def read(fd, n): """read(fd, buffersize) -> string Read a file descriptor.""" while True: try: return __original_read__(fd, n) except (OSError, IOError), e: if get_errno(e) != errno.EAGAIN: raise except socket.error, e: if get_errno(e) == errno.EPIPE: return '' raise hubs.trampoline(fd, read=True) __original_write__ = os_orig.write def write(fd, st): """write(fd, string) -> byteswritten Write a string to a file descriptor. """ while True: try: return __original_write__(fd, st) except (OSError, IOError), e: if get_errno(e) != errno.EAGAIN: raise except socket.error, e: if get_errno(e) != errno.EPIPE: raise hubs.trampoline(fd, write=True) def wait(): """wait() -> (pid, status) Wait for completion of a child process.""" return waitpid(0,0) __original_waitpid__ = os_orig.waitpid def waitpid(pid, options): """waitpid(...) 
waitpid(pid, options) -> (pid, status) Wait for completion of a given child process.""" if options & os_orig.WNOHANG != 0: return __original_waitpid__(pid, options) else: new_options = options | os_orig.WNOHANG while True: rpid, status = __original_waitpid__(pid, new_options) if rpid and status >= 0: return rpid, status greenthread.sleep(0.01) # TODO: open eventlet-0.13.0/eventlet/green/urllib2.py0000644000175000017500000000066212164577340021151 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import ftplib from eventlet.green import httplib from eventlet.green import socket from eventlet.green import time from eventlet.green import urllib patcher.inject('urllib2', globals(), ('httplib', httplib), ('socket', socket), ('time', time), ('urllib', urllib)) FTPHandler.ftp_open = patcher.patch_function(FTPHandler.ftp_open, ('ftplib', ftplib)) del patcher eventlet-0.13.0/eventlet/green/SimpleHTTPServer.py0000644000175000017500000000042012164577340022706 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import BaseHTTPServer from eventlet.green import urllib patcher.inject('SimpleHTTPServer', globals(), ('BaseHTTPServer', BaseHTTPServer), ('urllib', urllib)) del patcher if __name__ == '__main__': test()eventlet-0.13.0/eventlet/green/SocketServer.py0000644000175000017500000000047512164577340022217 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import socket from eventlet.green import select from eventlet.green import threading patcher.inject('SocketServer', globals(), ('socket', socket), ('select', select), ('threading', threading)) # QQQ ForkingMixIn should be fixed to use green waitpid? 
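The green waitpid() above never blocks in the kernel: it forces WNOHANG on and polls, sleeping between polls so other greenthreads can run. A runnable stand-alone sketch of that strategy (using time.sleep where eventlet would use greenthread.sleep, and slightly simplified relative to the original's status check):

```python
# Poll with WNOHANG instead of blocking; the sleep is where a
# cooperative scheduler would switch to other tasks.
import os
import time

def polling_waitpid(pid, poll_interval=0.01):
    while True:
        rpid, status = os.waitpid(pid, os.WNOHANG)
        if rpid:
            return rpid, status
        time.sleep(poll_interval)   # eventlet: greenthread.sleep()

pid = os.fork()
if pid == 0:
    os._exit(7)   # child: exit immediately with a known status
rpid, status = polling_waitpid(pid)
assert rpid == pid
assert os.WEXITSTATUS(status) == 7
```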
eventlet-0.13.0/eventlet/green/thread.py0000644000175000017500000000340012164577340021036 0ustar temototemoto00000000000000"""Implements the standard thread module, using greenthreads.""" __thread = __import__('thread') from eventlet.support import greenlets as greenlet from eventlet import greenthread from eventlet.semaphore import Semaphore as LockType __patched__ = ['get_ident', 'start_new_thread', 'start_new', 'allocate_lock', 'allocate', 'exit', 'interrupt_main', 'stack_size', '_local', 'LockType', '_count'] error = __thread.error __threadcount = 0 def _count(): return __threadcount def get_ident(gr=None): if gr is None: return id(greenlet.getcurrent()) else: return id(gr) def __thread_body(func, args, kwargs): global __threadcount __threadcount += 1 try: func(*args, **kwargs) finally: __threadcount -= 1 def start_new_thread(function, args=(), kwargs={}): g = greenthread.spawn_n(__thread_body, function, args, kwargs) return get_ident(g) start_new = start_new_thread def allocate_lock(*a): return LockType(1) allocate = allocate_lock def exit(): raise greenlet.GreenletExit exit_thread = __thread.exit_thread def interrupt_main(): curr = greenlet.getcurrent() if curr.parent and not curr.parent.dead: curr.parent.throw(KeyboardInterrupt()) else: raise KeyboardInterrupt() if hasattr(__thread, 'stack_size'): __original_stack_size__ = __thread.stack_size def stack_size(size=None): if size is None: return __original_stack_size__() if size > __original_stack_size__(): return __original_stack_size__(size) else: pass # not going to decrease stack_size, because otherwise other greenlets in this thread will suffer from eventlet.corolocal import local as _local eventlet-0.13.0/eventlet/green/asyncore.py0000644000175000017500000000037412164577340021421 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import select from eventlet.green import socket from eventlet.green import time patcher.inject("asyncore", globals(), ('select', select), 
('socket', socket), ('time', time)) del patcher eventlet-0.13.0/eventlet/green/OpenSSL/0000755000175000017500000000000012164600754020477 5ustar temototemoto00000000000000eventlet-0.13.0/eventlet/green/OpenSSL/rand.py0000644000175000017500000000003212164577340021774 0ustar temototemoto00000000000000from OpenSSL.rand import *eventlet-0.13.0/eventlet/green/OpenSSL/tsafe.py0000644000175000017500000000003312164577340022153 0ustar temototemoto00000000000000from OpenSSL.tsafe import *eventlet-0.13.0/eventlet/green/OpenSSL/__init__.py0000644000175000017500000000007712164577340022620 0ustar temototemoto00000000000000import rand, crypto, SSL, tsafe from version import __version__eventlet-0.13.0/eventlet/green/OpenSSL/crypto.py0000644000175000017500000000003412164577340022372 0ustar temototemoto00000000000000from OpenSSL.crypto import *eventlet-0.13.0/eventlet/green/OpenSSL/SSL.py0000644000175000017500000001107412164577340021521 0ustar temototemoto00000000000000from OpenSSL import SSL as orig_SSL from OpenSSL.SSL import * from eventlet.support import get_errno from eventlet import greenio from eventlet.hubs import trampoline import socket class GreenConnection(greenio.GreenSocket): """ Nonblocking wrapper for SSL.Connection objects. """ def __init__(self, ctx, sock=None): if sock is not None: fd = orig_SSL.Connection(ctx, sock) else: # if we're given a Connection object directly, use it; # this is used in the inherited accept() method fd = ctx super(ConnectionType, self).__init__(fd) def do_handshake(self): """ Perform an SSL handshake (usually called after renegotiate or one of set_accept_state or set_connect_state). This can raise the same exceptions as send and recv.
""" if self.act_non_blocking: return self.fd.do_handshake() while True: try: return self.fd.do_handshake() except WantReadError: trampoline(self.fd.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=socket.timeout) except WantWriteError: trampoline(self.fd.fileno(), write=True, timeout=self.gettimeout(), timeout_exc=socket.timeout) def dup(self): raise NotImplementedError("Dup not supported on SSL sockets") def makefile(self, mode='r', bufsize=-1): raise NotImplementedError("Makefile not supported on SSL sockets") def read(self, size): """Works like a blocking call to SSL_read(), whose behavior is described here: http://www.openssl.org/docs/ssl/SSL_read.html""" if self.act_non_blocking: return self.fd.read(size) while True: try: return self.fd.read(size) except WantReadError: trampoline(self.fd.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=socket.timeout) except WantWriteError: trampoline(self.fd.fileno(), write=True, timeout=self.gettimeout(), timeout_exc=socket.timeout) except SysCallError, e: if get_errno(e) == -1 or get_errno(e) > 0: return '' recv = read def write(self, data): """Works like a blocking call to SSL_write(), whose behavior is described here: http://www.openssl.org/docs/ssl/SSL_write.html""" if not data: return 0 # calling SSL_write() with 0 bytes to be sent is undefined if self.act_non_blocking: return self.fd.write(data) while True: try: return self.fd.write(data) except WantReadError: trampoline(self.fd.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=socket.timeout) except WantWriteError: trampoline(self.fd.fileno(), write=True, timeout=self.gettimeout(), timeout_exc=socket.timeout) send = write def sendall(self, data): """Send "all" data on the connection. This calls send() repeatedly until all data is sent. If an error occurs, it's impossible to tell how much data has been sent. 
No return value.""" tail = self.send(data) while tail < len(data): tail += self.send(data[tail:]) def shutdown(self): if self.act_non_blocking: return self.fd.shutdown() while True: try: return self.fd.shutdown() except WantReadError: trampoline(self.fd.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=socket.timeout) except WantWriteError: trampoline(self.fd.fileno(), write=True, timeout=self.gettimeout(), timeout_exc=socket.timeout) Connection = ConnectionType = GreenConnection del greenio eventlet-0.13.0/eventlet/green/OpenSSL/version.py0000644000175000017500000000006012164577340022536 0ustar temototemoto00000000000000from OpenSSL.version import __version__, __doc__eventlet-0.13.0/eventlet/green/MySQLdb.py0000644000175000017500000000230112164577340021041 0ustar temototemoto00000000000000__MySQLdb = __import__('MySQLdb') __all__ = __MySQLdb.__all__ __patched__ = ["connect", "Connect", 'Connection', 'connections'] from eventlet.patcher import slurp_properties slurp_properties(__MySQLdb, globals(), ignore=__patched__, srckeys=dir(__MySQLdb)) from eventlet import tpool __orig_connections = __import__('MySQLdb.connections').connections def Connection(*args, **kw): conn = tpool.execute(__orig_connections.Connection, *args, **kw) return tpool.Proxy(conn, autowrap_names=('cursor',)) connect = Connect = Connection # replicate the MySQLdb.connections module but with a tpooled Connection factory class MySQLdbConnectionsModule(object): pass connections = MySQLdbConnectionsModule() for var in dir(__orig_connections): if not var.startswith('__'): setattr(connections, var, getattr(__orig_connections, var)) connections.Connection = Connection cursors = __import__('MySQLdb.cursors').cursors converters = __import__('MySQLdb.converters').converters # TODO support instantiating cursors.FooCursor objects directly # TODO though this is a low priority, it would be nice if we supported # subclassing eventlet.green.MySQLdb.connections.Connection 
eventlet-0.13.0/eventlet/tpool.py0000644000175000017500000002323412164577340017633 0ustar temototemoto00000000000000# Copyright (c) 2007-2009, Linden Research, Inc. # Copyright (c) 2007, IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import imp import os import sys from eventlet import event from eventlet import greenio from eventlet import greenthread from eventlet import patcher from eventlet import timeout threading = patcher.original('threading') Queue_module = patcher.original('Queue') Queue = Queue_module.Queue Empty = Queue_module.Empty __all__ = ['execute', 'Proxy', 'killall'] QUIET=True _rfile = _wfile = None _bytetosend = ' '.encode() def _signal_t2e(): _wfile.write(_bytetosend) _wfile.flush() _reqq = None _rspq = None def tpool_trampoline(): global _rspq while(True): try: _c = _rfile.read(1) assert _c except ValueError: break # will be raised when pipe is closed while not _rspq.empty(): try: (e,rv) = _rspq.get(block=False) e.send(rv) e = rv = None except Empty: pass SYS_EXCS = (KeyboardInterrupt, SystemExit) EXC_CLASSES = (Exception, timeout.Timeout) def tworker(): global _rspq while(True): try: msg = _reqq.get() except AttributeError: return # can't get anything off of a dud queue if msg is None: return (e,meth,args,kwargs) = msg rv = None try: rv = meth(*args,**kwargs) except SYS_EXCS: raise except EXC_CLASSES: rv = sys.exc_info() # test_leakage_from_tracebacks verifies that the use of # exc_info does not lead to memory leaks _rspq.put((e,rv)) msg = meth = 
args = kwargs = e = rv = None _signal_t2e() def execute(meth,*args, **kwargs): """ Execute *meth* in a Python thread, blocking the current coroutine/ greenthread until the method completes. The primary use case for this is to wrap an object or module that is not amenable to monkeypatching or any of the other tricks that Eventlet uses to achieve cooperative yielding. With tpool, you can force such objects to cooperate with green threads by sticking them in native threads, at the cost of some overhead. """ setup() # if already in tpool, don't recurse into the tpool # also, call functions directly if we're inside an import lock, because # if meth does any importing (sadly common), it will hang my_thread = threading.currentThread() if my_thread in _threads or imp.lock_held() or _nthreads == 0: return meth(*args, **kwargs) e = event.Event() _reqq.put((e,meth,args,kwargs)) rv = e.wait() if isinstance(rv,tuple) \ and len(rv) == 3 \ and isinstance(rv[1],EXC_CLASSES): import traceback (c,e,tb) = rv if not QUIET: traceback.print_exception(c,e,tb) traceback.print_stack() raise c,e,tb return rv def proxy_call(autowrap, f, *args, **kwargs): """ Call a function *f* and returns the value. If the type of the return value is in the *autowrap* collection, then it is wrapped in a :class:`Proxy` object before return. Normally *f* will be called in the threadpool with :func:`execute`; if the keyword argument "nonblocking" is set to ``True``, it will simply be executed directly. This is useful if you have an object which has methods that don't need to be called in a separate thread, but which return objects that should be Proxy wrapped. 
""" if kwargs.pop('nonblocking',False): rv = f(*args, **kwargs) else: rv = execute(f,*args,**kwargs) if isinstance(rv, autowrap): return Proxy(rv, autowrap) else: return rv class Proxy(object): """ a simple proxy-wrapper of any object that comes with a methods-only interface, in order to forward every method invocation onto a thread in the native-thread pool. A key restriction is that the object's methods should not switch greenlets or use Eventlet primitives, since they are in a different thread from the main hub, and therefore might behave unexpectedly. This is for running native-threaded code only. It's common to want to have some of the attributes or return values also wrapped in Proxy objects (for example, database connection objects produce cursor objects which also should be wrapped in Proxy objects to remain nonblocking). *autowrap*, if supplied, is a collection of types; if an attribute or return value matches one of those types (via isinstance), it will be wrapped in a Proxy. *autowrap_names* is a collection of strings, which represent the names of attributes that should be wrapped in Proxy objects when accessed. 
""" def __init__(self, obj,autowrap=(), autowrap_names=()): self._obj = obj self._autowrap = autowrap self._autowrap_names = autowrap_names def __getattr__(self,attr_name): f = getattr(self._obj,attr_name) if not hasattr(f, '__call__'): if (isinstance(f, self._autowrap) or attr_name in self._autowrap_names): return Proxy(f, self._autowrap) return f def doit(*args, **kwargs): result = proxy_call(self._autowrap, f, *args, **kwargs) if attr_name in self._autowrap_names and not isinstance(result, Proxy): return Proxy(result) return result return doit # the following are a buncha methods that the python interpeter # doesn't use getattr to retrieve and therefore have to be defined # explicitly def __getitem__(self, key): return proxy_call(self._autowrap, self._obj.__getitem__, key) def __setitem__(self, key, value): return proxy_call(self._autowrap, self._obj.__setitem__, key, value) def __deepcopy__(self, memo=None): return proxy_call(self._autowrap, self._obj.__deepcopy__, memo) def __copy__(self, memo=None): return proxy_call(self._autowrap, self._obj.__copy__, memo) def __call__(self, *a, **kw): if '__call__' in self._autowrap_names: return Proxy(proxy_call(self._autowrap, self._obj, *a, **kw)) else: return proxy_call(self._autowrap, self._obj, *a, **kw) # these don't go through a proxy call, because they're likely to # be called often, and are unlikely to be implemented on the # wrapped object in such a way that they would block def __eq__(self, rhs): return self._obj == rhs def __hash__(self): return self._obj.__hash__() def __repr__(self): return self._obj.__repr__() def __str__(self): return self._obj.__str__() def __len__(self): return len(self._obj) def __nonzero__(self): return bool(self._obj) def __iter__(self): it = iter(self._obj) if it == self._obj: return self else: return Proxy(it) def next(self): return proxy_call(self._autowrap, self._obj.next) _nthreads = int(os.environ.get('EVENTLET_THREADPOOL_SIZE', 20)) _threads = [] _coro = None _setup_already = 
False def setup(): global _rfile, _wfile, _threads, _coro, _setup_already, _rspq, _reqq if _setup_already: return else: _setup_already = True try: _rpipe, _wpipe = os.pipe() _wfile = greenio.GreenPipe(_wpipe, 'wb', 0) _rfile = greenio.GreenPipe(_rpipe, 'rb', 0) except (ImportError, NotImplementedError): # This is Windows compatibility -- use a socket instead of a pipe because # pipes don't really exist on Windows. import socket from eventlet import util sock = util.__original_socket__(socket.AF_INET, socket.SOCK_STREAM) sock.bind(('localhost', 0)) sock.listen(50) csock = util.__original_socket__(socket.AF_INET, socket.SOCK_STREAM) csock.connect(('localhost', sock.getsockname()[1])) nsock, addr = sock.accept() _rfile = greenio.GreenSocket(csock).makefile('rb', 0) _wfile = nsock.makefile('wb',0) _reqq = Queue(maxsize=-1) _rspq = Queue(maxsize=-1) assert _nthreads >= 0, "Can't specify negative number of threads" if _nthreads == 0: import warnings warnings.warn("Zero threads in tpool. All tpool.execute calls will\ execute in main thread. 
Check the value of the environment \ variable EVENTLET_THREADPOOL_SIZE.", RuntimeWarning) for i in xrange(_nthreads): t = threading.Thread(target=tworker, name="tpool_thread_%s" % i) t.setDaemon(True) t.start() _threads.append(t) _coro = greenthread.spawn_n(tpool_trampoline) def killall(): global _setup_already, _rspq, _rfile, _wfile if not _setup_already: return for thr in _threads: _reqq.put(None) for thr in _threads: thr.join() del _threads[:] if _coro is not None: greenthread.kill(_coro) _rfile.close() _wfile.close() _rfile = None _wfile = None _rspq = None _setup_already = False def set_num_threads(nthreads): global _nthreads _nthreads = nthreads eventlet-0.13.0/eventlet/timeout.py0000644000175000017500000001311712164577340020163 0ustar temototemoto00000000000000# Copyright (c) 2009-2010 Denis Bilenko, denis.bilenko at gmail com # Copyright (c) 2010 Eventlet Contributors (see AUTHORS) # and licensed under the MIT license: # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. from eventlet.support import greenlets as greenlet, BaseException from eventlet.hubs import get_hub __all__ = ['Timeout', 'with_timeout'] _NONE = object() # deriving from BaseException so that "except Exception, e" doesn't catch # Timeout exceptions. class Timeout(BaseException): """Raises *exception* in the current greenthread after *timeout* seconds. When *exception* is omitted or ``None``, the :class:`Timeout` instance itself is raised. If *seconds* is None, the timer is not scheduled, and is only useful if you're planning to raise it directly. Timeout objects are context managers, and so can be used in with statements. When used in a with statement, if *exception* is ``False``, the timeout is still raised, but the context manager suppresses it, so the code outside the with-block won't see it. """ def __init__(self, seconds=None, exception=None): self.seconds = seconds self.exception = exception self.timer = None self.start() def start(self): """Schedule the timeout.
This is called on construction, so it should not be called explicitly, unless the timer has been canceled.""" assert not self.pending, \ '%r is already started; to restart it, cancel it first' % self if self.seconds is None: # "fake" timeout (never expires) self.timer = None elif self.exception is None or isinstance(self.exception, bool): # timeout that raises self self.timer = get_hub().schedule_call_global( self.seconds, greenlet.getcurrent().throw, self) else: # regular timeout with user-provided exception self.timer = get_hub().schedule_call_global( self.seconds, greenlet.getcurrent().throw, self.exception) return self @property def pending(self): """True if the timeout is scheduled to be raised.""" if self.timer is not None: return self.timer.pending else: return False def cancel(self): """If the timeout is pending, cancel it. If not using Timeouts in ``with`` statements, always call cancel() in a ``finally`` after the block of code that is getting timed out. If not canceled, the timeout will be raised later on, in some unexpected section of the application.""" if self.timer is not None: self.timer.cancel() self.timer = None def __repr__(self): try: classname = self.__class__.__name__ except AttributeError: # Python < 2.5 classname = 'Timeout' if self.pending: pending = ' pending' else: pending = '' if self.exception is None: exception = '' else: exception = ' exception=%r' % self.exception return '<%s at %s seconds=%s%s%s>' % ( classname, hex(id(self)), self.seconds, exception, pending) def __str__(self): """ >>> raise Timeout Traceback (most recent call last): ... 
Timeout """ if self.seconds is None: return '' if self.seconds == 1: suffix = '' else: suffix = 's' if self.exception is None or self.exception is True: return '%s second%s' % (self.seconds, suffix) elif self.exception is False: return '%s second%s (silent)' % (self.seconds, suffix) else: return '%s second%s (%s)' % (self.seconds, suffix, self.exception) def __enter__(self): if self.timer is None: self.start() return self def __exit__(self, typ, value, tb): self.cancel() if value is self and self.exception is False: return True def with_timeout(seconds, function, *args, **kwds): """Wrap a call to some (yielding) function with a timeout; if the called function fails to return before the timeout, cancel it and return a flag value. """ timeout_value = kwds.pop("timeout_value", _NONE) timeout = Timeout(seconds) try: try: return function(*args, **kwds) except Timeout, ex: if ex is timeout and timeout_value is not _NONE: return timeout_value raise finally: timeout.cancel() eventlet-0.13.0/eventlet/support/0000755000175000017500000000000012164600754017630 5ustar temototemoto00000000000000eventlet-0.13.0/eventlet/support/__init__.py0000644000175000017500000000235112164577340021746 0ustar temototemoto00000000000000import sys from eventlet.support import greenlets def get_errno(exc): """ Get the error code out of socket.error objects. socket.error in <2.5 does not have errno attribute socket.error in 3.x does not allow indexing access e.args[0] works for all. There are cases when args[0] is not errno. i.e. http://bugs.python.org/issue6471 Maybe there are cases when errno is set, but it is not the first argument? """ try: if exc.errno is not None: return exc.errno except AttributeError: pass try: return exc.args[0] except IndexError: return None if sys.version_info[0]<3 and not greenlets.preserves_excinfo: from sys import exc_clear as clear_sys_exc_info else: def clear_sys_exc_info(): """No-op In py3k. Exception information is not visible outside of except statements. 
sys.exc_clear became obsolete and removed.""" pass if sys.version_info[0]==2 and sys.version_info[1]<5: class BaseException: # pylint: disable-msg=W0622 # not subclassing from object() intentionally, because in # that case "raise Timeout" fails with TypeError. pass else: from __builtin__ import BaseException eventlet-0.13.0/eventlet/support/psycopg2_patcher.py0000644000175000017500000000432512164577340023466 0ustar temototemoto00000000000000"""A wait callback to allow psycopg2 cooperation with eventlet. Use `make_psycopg_green()` to enable eventlet support in Psycopg. """ # Copyright (C) 2010 Daniele Varrazzo # and licensed under the MIT license: # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
import psycopg2
from psycopg2 import extensions

from eventlet.hubs import trampoline


def make_psycopg_green():
    """Configure Psycopg to be used with eventlet in non-blocking way."""
    if not hasattr(extensions, 'set_wait_callback'):
        raise ImportError(
            "support for coroutines not available in this Psycopg version (%s)"
            % psycopg2.__version__)
    extensions.set_wait_callback(eventlet_wait_callback)


def eventlet_wait_callback(conn, timeout=-1):
    """A wait callback useful to allow eventlet to work with Psycopg."""
    while 1:
        state = conn.poll()
        if state == extensions.POLL_OK:
            break
        elif state == extensions.POLL_READ:
            trampoline(conn.fileno(), read=True)
        elif state == extensions.POLL_WRITE:
            trampoline(conn.fileno(), write=True)
        else:
            raise psycopg2.OperationalError(
                "Bad result from poll: %r" % state)
eventlet-0.13.0/eventlet/support/greendns.py0000644000175000017500000003750212164577340022022 0ustar temototemoto00000000000000#!/usr/bin/env python
''' greendns - non-blocking DNS support for Eventlet '''
# Portions of this code taken from the gogreen project:
# http://github.com/slideinc/gogreen
#
# Copyright (c) 2005-2010 Slide, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of the author nor the names of other
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

from eventlet import patcher
from eventlet.green import _socket_nodns
from eventlet.green import time
from eventlet.green import select

dns = patcher.import_patched('dns',
                             socket=_socket_nodns,
                             time=time,
                             select=select)
for pkg in ('dns.query', 'dns.exception', 'dns.inet', 'dns.message',
            'dns.rdatatype', 'dns.resolver', 'dns.reversename'):
    setattr(dns, pkg.split('.')[1],
            patcher.import_patched(pkg,
                                   socket=_socket_nodns,
                                   time=time,
                                   select=select))

socket = _socket_nodns

DNS_QUERY_TIMEOUT = 10.0

#
# Resolver instance used to perform DNS lookups.
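The `ResolverProxy` class below, when constructed with `dev=True`, pre-loads name-to-address mappings from `/etc/hosts`. Its parsing rules, skip blanks and comments, treat the first field as the address and every later field as a name for it, condensed into a standalone, testable sketch (the function name is illustrative, not eventlet's API):

```python
def parse_hosts(contents):
    """Parse hosts-file text into a {name: ip} mapping, as _load_etc_hosts does."""
    hosts = {}
    for line in contents.split('\n'):
        # Skip empty lines and comment lines.
        if not line or line[0] == '#':
            continue
        # Tabs and runs of spaces both separate fields.
        parts = [p for p in line.replace('\t', ' ').split(' ') if p]
        if not parts:
            continue
        ip = parts[0]
        for name in parts[1:]:
            hosts[name] = ip
    return hosts


sample = "127.0.0.1\tlocalhost\n# a comment\n10.0.0.5  db db.internal\n"
table = parse_hosts(sample)
assert table['localhost'] == '127.0.0.1'
assert table['db'] == '10.0.0.5'
assert table['db.internal'] == '10.0.0.5'
```

Note that a name appearing on two lines keeps the last address seen, which matches the dictionary assignment in the original loop.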
# class FakeAnswer(list): expiration = 0 class FakeRecord(object): pass class ResolverProxy(object): def __init__(self, *args, **kwargs): self._resolver = None self._filename = kwargs.get('filename', '/etc/resolv.conf') self._hosts = {} if kwargs.pop('dev', False): self._load_etc_hosts() def _load_etc_hosts(self): try: fd = open('/etc/hosts', 'r') contents = fd.read() fd.close() except (IOError, OSError): return contents = [line for line in contents.split('\n') if line and not line[0] == '#'] for line in contents: line = line.replace('\t', ' ') parts = line.split(' ') parts = [p for p in parts if p] if not len(parts): continue ip = parts[0] for part in parts[1:]: self._hosts[part] = ip def clear(self): self._resolver = None def query(self, *args, **kwargs): if self._resolver is None: self._resolver = dns.resolver.Resolver(filename = self._filename) self._resolver.cache = dns.resolver.Cache() query = args[0] if query is None: args = list(args) query = args[0] = '0.0.0.0' if self._hosts and self._hosts.get(query): answer = FakeAnswer() record = FakeRecord() setattr(record, 'address', self._hosts[query]) answer.append(record) return answer return self._resolver.query(*args, **kwargs) # # cache # resolver = ResolverProxy(dev=True) def resolve(name): error = None rrset = None if rrset is None or time.time() > rrset.expiration: try: rrset = resolver.query(name) except dns.exception.Timeout, e: error = (socket.EAI_AGAIN, 'Lookup timed out') except dns.exception.DNSException, e: error = (socket.EAI_NODATA, 'No address associated with hostname') else: pass #responses.insert(name, rrset) if error: if rrset is None: raise socket.gaierror(error) else: sys.stderr.write('DNS error: %r %r\n' % (name, error)) return rrset # # methods # def getaliases(host): """Checks for aliases of the given hostname (cname records) returns a list of alias targets will return an empty list if no aliases """ cnames = [] error = None try: answers = dns.resolver.query(host, 'cname') except 
dns.exception.Timeout, e: error = (socket.EAI_AGAIN, 'Lookup timed out') except dns.exception.DNSException, e: error = (socket.EAI_NODATA, 'No address associated with hostname') else: for record in answers: cnames.append(str(answers[0].target)) if error: sys.stderr.write('DNS error: %r %r\n' % (host, error)) return cnames def getaddrinfo(host, port, family=0, socktype=0, proto=0, flags=0): """Replacement for Python's socket.getaddrinfo. Currently only supports IPv4. At present, flags are not implemented. """ socktype = socktype or socket.SOCK_STREAM if is_ipv4_addr(host): return [(socket.AF_INET, socktype, proto, '', (host, port))] rrset = resolve(host) value = [] for rr in rrset: value.append((socket.AF_INET, socktype, proto, '', (rr.address, port))) return value def gethostbyname(hostname): """Replacement for Python's socket.gethostbyname. Currently only supports IPv4. """ if is_ipv4_addr(hostname): return hostname rrset = resolve(hostname) return rrset[0].address def gethostbyname_ex(hostname): """Replacement for Python's socket.gethostbyname_ex. Currently only supports IPv4. """ if is_ipv4_addr(hostname): return (hostname, [], [hostname]) rrset = resolve(hostname) addrs = [] for rr in rrset: addrs.append(rr.address) return (hostname, [], addrs) def getnameinfo(sockaddr, flags): """Replacement for Python's socket.getnameinfo. Currently only supports IPv4. """ try: host, port = sockaddr except (ValueError, TypeError): if not isinstance(sockaddr, tuple): del sockaddr # to pass a stdlib test that is # hyper-careful about reference counts raise TypeError('getnameinfo() argument 1 must be a tuple') else: # must be ipv6 sockaddr, pretending we don't know how to resolve it raise socket.gaierror(-2, 'name or service not known') if (flags & socket.NI_NAMEREQD) and (flags & socket.NI_NUMERICHOST): # Conflicting flags. Punt. 
raise socket.gaierror( (socket.EAI_NONAME, 'Name or service not known')) if is_ipv4_addr(host): try: rrset = resolver.query( dns.reversename.from_address(host), dns.rdatatype.PTR) if len(rrset) > 1: raise socket.error('sockaddr resolved to multiple addresses') host = rrset[0].target.to_text(omit_final_dot=True) except dns.exception.Timeout, e: if flags & socket.NI_NAMEREQD: raise socket.gaierror((socket.EAI_AGAIN, 'Lookup timed out')) except dns.exception.DNSException, e: if flags & socket.NI_NAMEREQD: raise socket.gaierror( (socket.EAI_NONAME, 'Name or service not known')) else: try: rrset = resolver.query(host) if len(rrset) > 1: raise socket.error('sockaddr resolved to multiple addresses') if flags & socket.NI_NUMERICHOST: host = rrset[0].address except dns.exception.Timeout, e: raise socket.gaierror((socket.EAI_AGAIN, 'Lookup timed out')) except dns.exception.DNSException, e: raise socket.gaierror( (socket.EAI_NODATA, 'No address associated with hostname')) if not (flags & socket.NI_NUMERICSERV): proto = (flags & socket.NI_DGRAM) and 'udp' or 'tcp' port = socket.getservbyport(port, proto) return (host, port) def is_ipv4_addr(host): """is_ipv4_addr returns true if host is a valid IPv4 address in dotted quad notation. """ try: d1, d2, d3, d4 = map(int, host.split('.')) except (ValueError, AttributeError): return False if 0 <= d1 <= 255 and 0 <= d2 <= 255 and 0 <= d3 <= 255 and 0 <= d4 <= 255: return True return False def _net_read(sock, count, expiration): """coro friendly replacement for dns.query._net_write Read the specified number of bytes from sock. Keep trying until we either get the desired amount, or we hit EOF. A Timeout exception will be raised if the operation is not completed by the expiration time. """ s = '' while count > 0: try: n = sock.recv(count) except socket.timeout: ## Q: Do we also need to catch coro.CoroutineSocketWake and pass? 
if expiration - time.time() <= 0.0: raise dns.exception.Timeout if n == '': raise EOFError count = count - len(n) s = s + n return s def _net_write(sock, data, expiration): """coro friendly replacement for dns.query._net_write Write the specified data to the socket. A Timeout exception will be raised if the operation is not completed by the expiration time. """ current = 0 l = len(data) while current < l: try: current += sock.send(data[current:]) except socket.timeout: ## Q: Do we also need to catch coro.CoroutineSocketWake and pass? if expiration - time.time() <= 0.0: raise dns.exception.Timeout def udp( q, where, timeout=DNS_QUERY_TIMEOUT, port=53, af=None, source=None, source_port=0, ignore_unexpected=False): """coro friendly replacement for dns.query.udp Return the response obtained after sending a query via UDP. @param q: the query @type q: dns.message.Message @param where: where to send the message @type where: string containing an IPv4 or IPv6 address @param timeout: The number of seconds to wait before the query times out. If None, the default, wait forever. @type timeout: float @param port: The port to which to send the message. The default is 53. @type port: int @param af: the address family to use. The default is None, which causes the address family to use to be inferred from the form of of where. If the inference attempt fails, AF_INET is used. @type af: int @rtype: dns.message.Message object @param source: source address. The default is the IPv4 wildcard address. @type source: string @param source_port: The port from which to send the message. The default is 0. @type source_port: int @param ignore_unexpected: If True, ignore responses from unexpected sources. The default is False. 
@type ignore_unexpected: bool""" wire = q.to_wire() if af is None: try: af = dns.inet.af_for_address(where) except: af = dns.inet.AF_INET if af == dns.inet.AF_INET: destination = (where, port) if source is not None: source = (source, source_port) elif af == dns.inet.AF_INET6: destination = (where, port, 0, 0) if source is not None: source = (source, source_port, 0, 0) s = socket.socket(af, socket.SOCK_DGRAM) s.settimeout(timeout) try: expiration = dns.query._compute_expiration(timeout) if source is not None: s.bind(source) try: s.sendto(wire, destination) except socket.timeout: ## Q: Do we also need to catch coro.CoroutineSocketWake and pass? if expiration - time.time() <= 0.0: raise dns.exception.Timeout while 1: try: (wire, from_address) = s.recvfrom(65535) except socket.timeout: ## Q: Do we also need to catch coro.CoroutineSocketWake and pass? if expiration - time.time() <= 0.0: raise dns.exception.Timeout if from_address == destination: break if not ignore_unexpected: raise dns.query.UnexpectedSource( 'got a response from %s instead of %s' % (from_address, destination)) finally: s.close() r = dns.message.from_wire(wire, keyring=q.keyring, request_mac=q.mac) if not q.is_response(r): raise dns.query.BadResponse() return r def tcp(q, where, timeout=DNS_QUERY_TIMEOUT, port=53, af=None, source=None, source_port=0): """coro friendly replacement for dns.query.tcp Return the response obtained after sending a query via TCP. @param q: the query @type q: dns.message.Message object @param where: where to send the message @type where: string containing an IPv4 or IPv6 address @param timeout: The number of seconds to wait before the query times out. If None, the default, wait forever. @type timeout: float @param port: The port to which to send the message. The default is 53. @type port: int @param af: the address family to use. The default is None, which causes the address family to use to be inferred from the form of of where. 
If the inference attempt fails, AF_INET is used. @type af: int @rtype: dns.message.Message object @param source: source address. The default is the IPv4 wildcard address. @type source: string @param source_port: The port from which to send the message. The default is 0. @type source_port: int""" wire = q.to_wire() if af is None: try: af = dns.inet.af_for_address(where) except: af = dns.inet.AF_INET if af == dns.inet.AF_INET: destination = (where, port) if source is not None: source = (source, source_port) elif af == dns.inet.AF_INET6: destination = (where, port, 0, 0) if source is not None: source = (source, source_port, 0, 0) s = socket.socket(af, socket.SOCK_STREAM) s.settimeout(timeout) try: expiration = dns.query._compute_expiration(timeout) if source is not None: s.bind(source) try: s.connect(destination) except socket.timeout: ## Q: Do we also need to catch coro.CoroutineSocketWake and pass? if expiration - time.time() <= 0.0: raise dns.exception.Timeout l = len(wire) # copying the wire into tcpmsg is inefficient, but lets us # avoid writev() or doing a short write that would get pushed # onto the net tcpmsg = struct.pack("!H", l) + wire _net_write(s, tcpmsg, expiration) ldata = _net_read(s, 2, expiration) (l,) = struct.unpack("!H", ldata) wire = _net_read(s, l, expiration) finally: s.close() r = dns.message.from_wire(wire, keyring=q.keyring, request_mac=q.mac) if not q.is_response(r): raise dns.query.BadResponse() return r def reset(): resolver.clear() # Install our coro-friendly replacements for the tcp and udp query methods. 
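The `tcp()` replacement above frames the outgoing DNS message with a two-byte big-endian length prefix (`struct.pack("!H", l)`) and symmetrically reads a two-byte length before reading the reply with `_net_read`. That DNS-over-TCP framing in isolation, as a runnable sketch with illustrative helper names:

```python
import struct


def frame(payload):
    """Prefix a DNS message with its length as a 16-bit big-endian integer."""
    return struct.pack("!H", len(payload)) + payload


def unframe(data):
    """Recover the message: read the 2-byte length, then exactly that many bytes."""
    (length,) = struct.unpack("!H", data[:2])
    return data[2:2 + length]


msg = b'\x12\x34 a fake DNS query'
assert frame(msg)[:2] == struct.pack("!H", len(msg))
assert unframe(frame(msg)) == msg
```

The length prefix is why `_net_read(s, 2, expiration)` runs first: over a stream socket there are no datagram boundaries, so the reader must be told how many bytes make up one response.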
dns.query.tcp = tcp dns.query.udp = udp eventlet-0.13.0/eventlet/support/stacklesspypys.py0000644000175000017500000000042312164577340023306 0ustar temototemoto00000000000000from stackless import greenlet import sys import types def emulate(): module = types.ModuleType('greenlet') sys.modules['greenlet'] = module module.greenlet = greenlet module.getcurrent = greenlet.getcurrent module.GreenletExit = greenlet.GreenletExit eventlet-0.13.0/eventlet/support/pylib.py0000644000175000017500000000042212164577340021323 0ustar temototemoto00000000000000from py.magic import greenlet import sys import types def emulate(): module = types.ModuleType('greenlet') sys.modules['greenlet'] = module module.greenlet = greenlet module.getcurrent = greenlet.getcurrent module.GreenletExit = greenlet.GreenletExit eventlet-0.13.0/eventlet/support/greenlets.py0000644000175000017500000000207512164577340022202 0ustar temototemoto00000000000000import distutils.version try: import greenlet getcurrent = greenlet.greenlet.getcurrent GreenletExit = greenlet.greenlet.GreenletExit preserves_excinfo = (distutils.version.LooseVersion(greenlet.__version__) >= distutils.version.LooseVersion('0.3.2')) greenlet = greenlet.greenlet except ImportError, e: raise try: from py.magic import greenlet getcurrent = greenlet.getcurrent GreenletExit = greenlet.GreenletExit preserves_excinfo = False except ImportError: try: from stackless import greenlet getcurrent = greenlet.getcurrent GreenletExit = greenlet.GreenletExit preserves_excinfo = False except ImportError: try: from support.stacklesss import greenlet, getcurrent, GreenletExit preserves_excinfo = False (greenlet, getcurrent, GreenletExit) # silence pyflakes except ImportError, e: raise ImportError("Unable to find an implementation of greenlet.") eventlet-0.13.0/eventlet/support/stacklesss.py0000644000175000017500000000351012164577340022364 0ustar temototemoto00000000000000""" Support for using stackless python. 
Broken and riddled with print statements at the moment. Please fix it! """ import sys import types import stackless caller = None coro_args = {} tasklet_to_greenlet = {} def getcurrent(): return tasklet_to_greenlet[stackless.getcurrent()] class FirstSwitch(object): def __init__(self, gr): self.gr = gr def __call__(self, *args, **kw): #print "first call", args, kw gr = self.gr del gr.switch run, gr.run = gr.run, None t = stackless.tasklet(run) gr.t = t tasklet_to_greenlet[t] = gr t.setup(*args, **kw) t.run() class greenlet(object): def __init__(self, run=None, parent=None): self.dead = False if parent is None: parent = getcurrent() self.parent = parent if run is not None: self.run = run self.switch = FirstSwitch(self) def switch(self, *args): #print "switch", args global caller caller = stackless.getcurrent() coro_args[self] = args self.t.insert() stackless.schedule() if caller is not self.t: caller.remove() rval = coro_args[self] return rval def run(self): pass def __bool__(self): return self.run is None and not self.dead class GreenletExit(Exception): pass def emulate(): module = types.ModuleType('greenlet') sys.modules['greenlet'] = module module.greenlet = greenlet module.getcurrent = getcurrent module.GreenletExit = GreenletExit caller = stackless.getcurrent() tasklet_to_greenlet[caller] = None main_coro = greenlet() tasklet_to_greenlet[caller] = main_coro main_coro.t = caller del main_coro.switch ## It's already running coro_args[main_coro] = None eventlet-0.13.0/eventlet/api.py0000644000175000017500000001426612164577340017254 0ustar temototemoto00000000000000import errno import sys import socket import string import linecache import inspect import warnings from eventlet.support import greenlets as greenlet, BaseException from eventlet import hubs from eventlet import greenthread from eventlet import debug from eventlet import Timeout __all__ = [ 'call_after', 'exc_after', 'getcurrent', 'get_default_hub', 'get_hub', 'GreenletExit', 'kill', 'sleep', 'spawn', 
'spew', 'switch', 'ssl_listener', 'tcp_listener', 'trampoline', 'unspew', 'use_hub', 'with_timeout', 'timeout'] warnings.warn("eventlet.api is deprecated! Nearly everything in it has moved " "to the eventlet module.", DeprecationWarning, stacklevel=2) def get_hub(*a, **kw): warnings.warn("eventlet.api.get_hub has moved to eventlet.hubs.get_hub", DeprecationWarning, stacklevel=2) return hubs.get_hub(*a, **kw) def get_default_hub(*a, **kw): warnings.warn("eventlet.api.get_default_hub has moved to" " eventlet.hubs.get_default_hub", DeprecationWarning, stacklevel=2) return hubs.get_default_hub(*a, **kw) def use_hub(*a, **kw): warnings.warn("eventlet.api.use_hub has moved to eventlet.hubs.use_hub", DeprecationWarning, stacklevel=2) return hubs.use_hub(*a, **kw) def switch(coro, result=None, exc=None): if exc is not None: return coro.throw(exc) return coro.switch(result) Greenlet = greenlet.greenlet def tcp_listener(address, backlog=50): """ Listen on the given ``(ip, port)`` *address* with a TCP socket. Returns a socket object on which one should call ``accept()`` to accept a connection on the newly bound socket. """ warnings.warn("""eventlet.api.tcp_listener is deprecated. Please use eventlet.listen instead.""", DeprecationWarning, stacklevel=2) from eventlet import greenio, util socket = greenio.GreenSocket(util.tcp_socket()) util.socket_bind_and_listen(socket, address, backlog=backlog) return socket def ssl_listener(address, certificate, private_key): """Listen on the given (ip, port) *address* with a TCP socket that can do SSL. Primarily useful for unit tests, don't use in production. *certificate* and *private_key* should be the filenames of the appropriate certificate and private key files to use with the SSL socket. Returns a socket object on which one should call ``accept()`` to accept a connection on the newly bound socket. """ warnings.warn("""eventlet.api.ssl_listener is deprecated. 
Please use eventlet.wrap_ssl(eventlet.listen()) instead.""", DeprecationWarning, stacklevel=2) from eventlet import util import socket socket = util.wrap_ssl(socket.socket(), certificate, private_key, True) socket.bind(address) socket.listen(50) return socket def connect_tcp(address, localaddr=None): """ Create a TCP connection to address ``(host, port)`` and return the socket. Optionally, bind to localaddr ``(host, port)`` first. """ warnings.warn("""eventlet.api.connect_tcp is deprecated. Please use eventlet.connect instead.""", DeprecationWarning, stacklevel=2) from eventlet import greenio, util desc = greenio.GreenSocket(util.tcp_socket()) if localaddr is not None: desc.bind(localaddr) desc.connect(address) return desc TimeoutError = greenthread.TimeoutError trampoline = hubs.trampoline spawn = greenthread.spawn spawn_n = greenthread.spawn_n kill = greenthread.kill call_after = greenthread.call_after call_after_local = greenthread.call_after_local call_after_global = greenthread.call_after_global class _SilentException(BaseException): pass class FakeTimer(object): def cancel(self): pass class timeout(object): """Raise an exception in the block after timeout. Example:: with timeout(10): urllib2.open('http://example.com') Assuming code block is yielding (i.e. gives up control to the hub), an exception provided in *exc* argument will be raised (:class:`~eventlet.api.TimeoutError` if *exc* is omitted):: try: with timeout(10, MySpecialError, error_arg_1): urllib2.open('http://example.com') except MySpecialError, e: print "special error received" When *exc* is ``None``, code block is interrupted silently. 
""" def __init__(self, seconds, *throw_args): self.seconds = seconds if seconds is None: return if not throw_args: self.throw_args = (TimeoutError(), ) elif throw_args == (None, ): self.throw_args = (_SilentException(), ) else: self.throw_args = throw_args def __enter__(self): if self.seconds is None: self.timer = FakeTimer() else: self.timer = exc_after(self.seconds, *self.throw_args) return self.timer def __exit__(self, typ, value, tb): self.timer.cancel() if typ is _SilentException and value in self.throw_args: return True with_timeout = greenthread.with_timeout exc_after = greenthread.exc_after sleep = greenthread.sleep getcurrent = greenlet.getcurrent GreenletExit = greenlet.GreenletExit spew = debug.spew unspew = debug.unspew def named(name): """Return an object given its name. The name uses a module-like syntax, eg:: os.path.join or:: mulib.mu.Resource """ toimport = name obj = None import_err_strings = [] while toimport: try: obj = __import__(toimport) break except ImportError, err: # print 'Import error on %s: %s' % (toimport, err) # debugging spam import_err_strings.append(err.__str__()) toimport = '.'.join(toimport.split('.')[:-1]) if obj is None: raise ImportError('%s could not be imported. Import errors: %r' % (name, import_err_strings)) for seg in name.split('.')[1:]: try: obj = getattr(obj, seg) except AttributeError: dirobj = dir(obj) dirobj.sort() raise AttributeError('attribute %r missing from %r (%r) %r. Import errors: %r' % ( seg, obj, dirobj, name, import_err_strings)) return obj eventlet-0.13.0/eventlet/pools.py0000644000175000017500000001466512164577340017642 0ustar temototemoto00000000000000import collections from eventlet import queue __all__ = ['Pool', 'TokenPool'] # have to stick this in an exec so it works in 2.4 try: from contextlib import contextmanager exec(''' @contextmanager def item_impl(self): """ Get an object out of the pool, for use with with statement. 
>>> from eventlet import pools >>> pool = pools.TokenPool(max_size=4) >>> with pool.item() as obj: ... print "got token" ... got token >>> pool.free() 4 """ obj = self.get() try: yield obj finally: self.put(obj) ''') except ImportError: item_impl = None class Pool(object): """ Pool class implements resource limitation and construction. There are two ways of using Pool: passing a `create` argument or subclassing. In either case you must provide a way to create the resource. When using `create` argument, pass a function with no arguments:: http_pool = pools.Pool(create=httplib2.Http) If you need to pass arguments, build a nullary function with either `lambda` expression:: http_pool = pools.Pool(create=lambda: httplib2.Http(timeout=90)) or :func:`functools.partial`:: from functools import partial http_pool = pools.Pool(create=partial(httplib2.Http, timeout=90)) When subclassing, define only the :meth:`create` method to implement the desired resource:: class MyPool(pools.Pool): def create(self): return MyObject() If using 2.5 or greater, the :meth:`item` method acts as a context manager; that's the best way to use it:: with mypool.item() as thing: thing.dostuff() If stuck on 2.4, the :meth:`get` and :meth:`put` methods are the preferred nomenclature. Use a ``finally`` to ensure that nothing is leaked:: thing = self.pool.get() try: thing.dostuff() finally: self.pool.put(thing) The maximum size of the pool can be modified at runtime via the :meth:`resize` method. Specifying a non-zero *min-size* argument pre-populates the pool with *min_size* items. *max-size* sets a hard limit to the size of the pool -- it cannot contain any more items than *max_size*, and if there are already *max_size* items 'checked out' of the pool, the pool will cause any greenthread calling :meth:`get` to cooperatively yield until an item is :meth:`put` in. 
""" def __init__(self, min_size=0, max_size=4, order_as_stack=False, create=None): """*order_as_stack* governs the ordering of the items in the free pool. If ``False`` (the default), the free items collection (of items that were created and were put back in the pool) acts as a round-robin, giving each item approximately equal utilization. If ``True``, the free pool acts as a FILO stack, which preferentially re-uses items that have most recently been used. """ self.min_size = min_size self.max_size = max_size self.order_as_stack = order_as_stack self.current_size = 0 self.channel = queue.LightQueue(0) self.free_items = collections.deque() if create is not None: self.create = create for x in xrange(min_size): self.current_size += 1 self.free_items.append(self.create()) def get(self): """Return an item from the pool, when one is available. This may cause the calling greenthread to block. """ if self.free_items: return self.free_items.popleft() self.current_size += 1 if self.current_size <= self.max_size: try: created = self.create() except: self.current_size -= 1 raise return created self.current_size -= 1 # did not create return self.channel.get() if item_impl is not None: item = item_impl def put(self, item): """Put an item back into the pool, when done. This may cause the putting greenthread to block. """ if self.current_size > self.max_size: self.current_size -= 1 return if self.waiting(): self.channel.put(item) else: if self.order_as_stack: self.free_items.appendleft(item) else: self.free_items.append(item) def resize(self, new_size): """Resize the pool to *new_size*. Adjusting this number does not affect existing items checked out of the pool, nor on any greenthreads who are waiting for an item to free up. Some indeterminate number of :meth:`get`/:meth:`put` cycles will be necessary before the new maximum size truly matches the actual operation of the pool. """ self.max_size = new_size def free(self): """Return the number of free items in the pool. 
This corresponds to the number of :meth:`get` calls needed to empty the pool. """ return len(self.free_items) + self.max_size - self.current_size def waiting(self): """Return the number of routines waiting for a pool item. """ return max(0, self.channel.getting() - self.channel.putting()) def create(self): """Generate a new pool item. In order for the pool to function, either this method must be overriden in a subclass or the pool must be constructed with the `create` argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, :meth:`create` is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with *min_size* of 0, the very first call to :meth:`get` will result in a call to :meth:`create`. If the first caller calls :meth:`put` before some other caller calls :meth:`get`, then the first item will be returned, and :meth:`create` will not be called a second time. """ raise NotImplementedError("Implement in subclass") class Token(object): pass class TokenPool(Pool): """A pool which gives out tokens (opaque unique objects), which indicate that the coroutine which holds the token has a right to consume some limited resource. """ def create(self): return Token() eventlet-0.13.0/eventlet/convenience.py0000644000175000017500000001317112164577340020771 0ustar temototemoto00000000000000import sys from eventlet import greenio from eventlet import greenthread from eventlet import greenpool from eventlet.green import socket from eventlet.support import greenlets as greenlet def connect(addr, family=socket.AF_INET, bind=None): """Convenience function for opening client sockets. :param addr: Address of the server to connect to. For TCP sockets, this is a (host, port) tuple. :param family: Socket family, optional. See :mod:`socket` documentation for available families. :param bind: Local address to bind to, optional. 
:return: The connected green socket object. """ sock = socket.socket(family, socket.SOCK_STREAM) if bind is not None: sock.bind(bind) sock.connect(addr) return sock def listen(addr, family=socket.AF_INET, backlog=50): """Convenience function for opening server sockets. This socket can be used in :func:`~eventlet.serve` or a custom ``accept()`` loop. Sets SO_REUSEADDR on the socket to save on annoyance. :param addr: Address to listen on. For TCP sockets, this is a (host, port) tuple. :param family: Socket family, optional. See :mod:`socket` documentation for available families. :param backlog: The maximum number of queued connections. Should be at least 1; the maximum value is system-dependent. :return: The listening green socket object. """ sock = socket.socket(family, socket.SOCK_STREAM) if sys.platform[:3] != "win": sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(addr) sock.listen(backlog) return sock class StopServe(Exception): """Exception class used for quitting :func:`~eventlet.serve` gracefully.""" pass def _stop_checker(t, server_gt, conn): try: try: t.wait() finally: conn.close() except greenlet.GreenletExit: pass except Exception: greenthread.kill(server_gt, *sys.exc_info()) def serve(sock, handle, concurrency=1000): """Runs a server on the supplied socket. Calls the function *handle* in a separate greenthread for every incoming client connection. *handle* takes two arguments: the client socket object, and the client address:: def myhandle(client_sock, client_addr): print "client connected", client_addr eventlet.serve(eventlet.listen(('127.0.0.1', 9999)), myhandle) Returning from *handle* closes the client socket. :func:`serve` blocks the calling greenthread; it won't return until the server completes. If you desire an immediate return, spawn a new greenthread for :func:`serve`. 
Any uncaught exceptions raised in *handle* are raised as exceptions from :func:`serve`, terminating the server, so be sure to be aware of the exceptions your application can raise. The return value of *handle* is ignored. Raise a :class:`~eventlet.StopServe` exception to gracefully terminate the server -- that's the only way to get the server() function to return rather than raise. The value in *concurrency* controls the maximum number of greenthreads that will be open at any time handling requests. When the server hits the concurrency limit, it stops accepting new connections until the existing ones complete. """ pool = greenpool.GreenPool(concurrency) server_gt = greenthread.getcurrent() while True: try: conn, addr = sock.accept() gt = pool.spawn(handle, conn, addr) gt.link(_stop_checker, server_gt, conn) conn, addr, gt = None, None, None except StopServe: return def wrap_ssl(sock, *a, **kw): """Convenience function for converting a regular socket into an SSL socket. Has the same interface as :func:`ssl.wrap_socket`, but works on 2.5 or earlier, using PyOpenSSL (though note that it ignores the *cert_reqs*, *ssl_version*, *ca_certs*, *do_handshake_on_connect*, and *suppress_ragged_eofs* arguments when using PyOpenSSL). The preferred idiom is to call wrap_ssl directly on the creation method, e.g., ``wrap_ssl(connect(addr))`` or ``wrap_ssl(listen(addr), server_side=True)``. This way there is no "naked" socket sitting around to accidentally corrupt the SSL session. :return Green SSL object. 
""" return wrap_ssl_impl(sock, *a, **kw) try: from eventlet.green import ssl wrap_ssl_impl = ssl.wrap_socket except ImportError: # < 2.6, trying PyOpenSSL try: from eventlet.green.OpenSSL import SSL def wrap_ssl_impl(sock, keyfile=None, certfile=None, server_side=False, cert_reqs=None, ssl_version=None, ca_certs=None, do_handshake_on_connect=True, suppress_ragged_eofs=True, ciphers=None): # theoretically the ssl_version could be respected in this # next line context = SSL.Context(SSL.SSLv23_METHOD) if certfile is not None: context.use_certificate_file(certfile) if keyfile is not None: context.use_privatekey_file(keyfile) context.set_verify(SSL.VERIFY_NONE, lambda *x: True) connection = SSL.Connection(context, sock) if server_side: connection.set_accept_state() else: connection.set_connect_state() return connection except ImportError: def wrap_ssl_impl(*a, **kw): raise ImportError("To use SSL with Eventlet, " "you must install PyOpenSSL or use Python 2.6 or later.") eventlet-0.13.0/eventlet/pool.py0000644000175000017500000003127112164577340017447 0ustar temototemoto00000000000000from eventlet import coros, proc, api from eventlet.semaphore import Semaphore import warnings warnings.warn("The pool module is deprecated. Please use the " "eventlet.GreenPool and eventlet.GreenPile classes instead.", DeprecationWarning, stacklevel=2) class Pool(object): def __init__(self, min_size=0, max_size=4, track_events=False): if min_size > max_size: raise ValueError('min_size cannot be bigger than max_size') self.max_size = max_size self.sem = Semaphore(max_size) self.procs = proc.RunningProcSet() if track_events: self.results = coros.queue() else: self.results = None def resize(self, new_max_size): """ Change the :attr:`max_size` of the pool. If the pool gets resized when there are more than *new_max_size* coroutines checked out, when they are returned to the pool they will be discarded. The return value of :meth:`free` will be negative in this situation. 
""" max_size_delta = new_max_size - self.max_size self.sem.counter += max_size_delta self.max_size = new_max_size @property def current_size(self): """ The number of coroutines that are currently executing jobs. """ return len(self.procs) def free(self): """ Returns the number of coroutines that are available for doing work.""" return self.sem.counter def execute(self, func, *args, **kwargs): """Execute func in one of the coroutines maintained by the pool, when one is free. Immediately returns a :class:`~eventlet.proc.Proc` object which can be queried for the func's result. >>> pool = Pool() >>> task = pool.execute(lambda a: ('foo', a), 1) >>> task.wait() ('foo', 1) """ # if reentering an empty pool, don't try to wait on a coroutine freeing # itself -- instead, just execute in the current coroutine if self.sem.locked() and api.getcurrent() in self.procs: p = proc.spawn(func, *args, **kwargs) try: p.wait() except: pass else: self.sem.acquire() p = self.procs.spawn(func, *args, **kwargs) # assuming the above line cannot raise p.link(lambda p: self.sem.release()) if self.results is not None: p.link(self.results) return p execute_async = execute def _execute(self, evt, func, args, kw): p = self.execute(func, *args, **kw) p.link(evt) return p def waitall(self): """ Calling this function blocks until every coroutine completes its work (i.e. there are 0 running coroutines).""" return self.procs.waitall() wait_all = waitall def wait(self): """Wait for the next execute in the pool to complete, and return the result.""" return self.results.wait() def waiting(self): """Return the number of coroutines waiting to execute. 
""" if self.sem.balance < 0: return -self.sem.balance else: return 0 def killall(self): """ Kill every running coroutine as immediately as possible.""" return self.procs.killall() def launch_all(self, function, iterable): """For each tuple (sequence) in *iterable*, launch ``function(*tuple)`` in its own coroutine -- like ``itertools.starmap()``, but in parallel. Discard values returned by ``function()``. You should call ``wait_all()`` to wait for all coroutines, newly-launched plus any previously-submitted :meth:`execute` or :meth:`execute_async` calls, to complete. >>> pool = Pool() >>> def saw(x): ... print "I saw %s!" % x ... >>> pool.launch_all(saw, "ABC") >>> pool.wait_all() I saw A! I saw B! I saw C! """ for tup in iterable: self.execute(function, *tup) def process_all(self, function, iterable): """For each tuple (sequence) in *iterable*, launch ``function(*tuple)`` in its own coroutine -- like ``itertools.starmap()``, but in parallel. Discard values returned by ``function()``. Don't return until all coroutines, newly-launched plus any previously-submitted :meth:`execute()` or :meth:`execute_async` calls, have completed. >>> from eventlet import coros >>> pool = coros.CoroutinePool() >>> def saw(x): print "I saw %s!" % x ... >>> pool.process_all(saw, "DEF") I saw D! I saw E! I saw F! """ self.launch_all(function, iterable) self.wait_all() def generate_results(self, function, iterable, qsize=None): """For each tuple (sequence) in *iterable*, launch ``function(*tuple)`` in its own coroutine -- like ``itertools.starmap()``, but in parallel. Yield each of the values returned by ``function()``, in the order they're completed rather than the order the coroutines were launched. Iteration stops when we've yielded results for each arguments tuple in *iterable*. Unlike :meth:`wait_all` and :meth:`process_all`, this function does not wait for any previously-submitted :meth:`execute` or :meth:`execute_async` calls. Results are temporarily buffered in a queue. 
        If you pass *qsize=*, this value is used to limit the max size of the
        queue: an attempt to buffer too many results will suspend the
        completed :class:`CoroutinePool` coroutine until the requesting
        coroutine (the caller of :meth:`generate_results`) has retrieved one
        or more results by calling this generator-iterator's ``next()``.

        If any coroutine raises an uncaught exception, that exception will
        propagate to the requesting coroutine via the corresponding ``next()``
        call.

        What I particularly want these tests to illustrate is that using this
        generator function::

            for result in generate_results(function, iterable):
                # ... do something with result ...
                pass

        executes coroutines at least as aggressively as the classic eventlet
        idiom::

            events = [pool.execute(function, *args) for args in iterable]
            for event in events:
                result = event.wait()
                # ... do something with result ...

        even without a distinct event object for every arg tuple in
        *iterable*, and despite the funny flow control from interleaving
        launches of new coroutines with yields of completed coroutines'
        results.

        (The use case that makes this function preferable to the classic idiom
        above is when the *iterable*, which may itself be a generator,
        produces millions of items.)

        >>> from eventlet import coros
        >>> import string
        >>> pool = coros.CoroutinePool(max_size=5)
        >>> pausers = [coros.Event() for x in xrange(2)]
        >>> def longtask(evt, desc):
        ...     print "%s woke up with %s" % (desc, evt.wait())
        ...
        >>> pool.launch_all(longtask, zip(pausers, "AB"))
        >>> def quicktask(desc):
        ...     print "returning %s" % desc
        ...     return desc
        ...
        (Instead of using a ``for`` loop, step through
        :meth:`generate_results` items individually to illustrate timing)

        >>> step = iter(pool.generate_results(quicktask, string.ascii_lowercase))
        >>> print step.next()
        returning a
        returning b
        returning c
        a
        >>> print step.next()
        b
        >>> print step.next()
        c
        >>> print step.next()
        returning d
        returning e
        returning f
        d
        >>> pausers[0].send("A")
        >>> print step.next()
        e
        >>> print step.next()
        f
        >>> print step.next()
        A woke up with A
        returning g
        returning h
        returning i
        g
        >>> print "".join([step.next() for x in xrange(3)])
        returning j
        returning k
        returning l
        returning m
        hij
        >>> pausers[1].send("B")
        >>> print "".join([step.next() for x in xrange(4)])
        B woke up with B
        returning n
        returning o
        returning p
        returning q
        klmn
        """
        # Get an iterator because of our funny nested loop below. Wrap the
        # iterable in enumerate() so we count items that come through.
        tuples = iter(enumerate(iterable))
        # If the iterable is empty, this whole function is a no-op, and we can
        # save ourselves some grief by just quitting out. In particular, once
        # we enter the outer loop below, we're going to wait on the queue --
        # but if we launched no coroutines with that queue as the destination,
        # we could end up waiting a very long time.
        try:
            index, args = tuples.next()
        except StopIteration:
            return
        # From this point forward, 'args' is the current arguments tuple and
        # 'index+1' counts how many such tuples we've seen.
        # This implementation relies on the fact that _execute() accepts an
        # event-like object, and -- unless it's None -- the completed
        # coroutine calls send(result). We slyly pass a queue rather than an
        # event -- the same queue instance for all coroutines. This is why our
        # queue interface intentionally resembles the event interface.
        q = coros.queue(max_size=qsize)
        # How many results have we yielded so far?
        finished = 0
        # This first loop is only until we've launched all the coroutines.
        # Its complexity is because if iterable contains more args tuples than
        # the size of our pool, attempting to _execute() the (poolsize+1)th
        # coroutine would suspend until something completes and send()s its
        # result to our queue. But to keep down queue overhead and to maximize
        # responsiveness to our caller, we'd rather suspend on reading the
        # queue. So we stuff the pool as full as we can, then wait for
        # something to finish, then stuff more coroutines into the pool.
        try:
            while True:
                # Before each yield, start as many new coroutines as we can
                # fit. (The self.free() test isn't 100% accurate: if we happen
                # to be executing in one of the pool's coroutines, we could
                # _execute() without waiting even if self.free() reports 0.
                # See _execute().)
                # The point is that we don't want to wait in the _execute()
                # call, we want to wait in the q.wait() call.
                # IMPORTANT: at start, and whenever we've caught up with all
                # coroutines we've launched so far, we MUST iterate this inner
                # loop at least once, regardless of self.free() -- otherwise
                # the q.wait() call below will deadlock!
                # Recall that index is the index of the NEXT args tuple that
                # we haven't yet launched. Therefore it counts how many args
                # tuples we've launched so far.
                while self.free() > 0 or finished == index:
                    # Just like the implementation of execute_async(), save
                    # that we're passing our queue instead of None as the
                    # "event" to which to send() the result.
                    self._execute(q, function, args, {})
                    # We've consumed that args tuple, advance to next.
                    index, args = tuples.next()
                # Okay, we've filled up the pool again, yield a result --
                # which will probably wait for a coroutine to complete.
                # Although we do have q.ready(), so we could iterate without
                # waiting, we avoid that because every yield could involve
                # considerable real time. We don't know how long it takes to
                # return from yield, so every time we do, take the
                # opportunity to stuff more requests into the pool before
                # yielding again.
                yield q.wait()
                # Be sure to count results so we know when to stop!
                finished += 1
        except StopIteration:
            pass
        # Here we've exhausted the input iterable. index+1 is the total number
        # of coroutines we've launched. We probably haven't yielded that many
        # results yet. Wait for the rest of the results, yielding them as they
        # arrive.
        while finished < index + 1:
            yield q.wait()
            finished += 1

eventlet-0.13.0/eventlet/proc.py

import warnings
warnings.warn("The proc module is deprecated! Please use the greenthread "
    "module, or any of the many other Eventlet cross-coroutine "
    "primitives, instead.",
    DeprecationWarning, stacklevel=2)

"""
This module provides means to spawn, kill and link coroutines. Linking means
subscribing to the coroutine's result, either in form of return value or
unhandled exception.

To create a linkable coroutine use spawn function provided by this module:

>>> def demofunc(x, y):
...     return x / y
>>> p = spawn(demofunc, 6, 2)

The return value of :func:`spawn` is an instance of :class:`Proc` class that
you can "link":

* ``p.link(obj)`` - notify *obj* when the coroutine is finished

What "notify" means here depends on the type of *obj*: a callable is simply
called, an :class:`~eventlet.coros.Event` or a :class:`~eventlet.coros.queue`
is notified using ``send``/``send_exception`` methods and if *obj* is another
greenlet it's killed with :class:`LinkedExited` exception.

Here's an example:

>>> event = coros.Event()
>>> _ = p.link(event)
>>> event.wait()
3

Now, even though *p* is finished it's still possible to link it. In this case
the notification is performed immediately:

>>> try:
...     p.link()
... except LinkedCompleted:
...     print 'LinkedCompleted'
LinkedCompleted

(Without an argument, the link is created to the current greenlet)

There are also :meth:`~eventlet.proc.Source.link_value` and
:func:`link_exception` methods that only deliver a return value and an
unhandled exception respectively (plain :meth:`~eventlet.proc.Source.link`
delivers both).

Suppose we want to spawn a greenlet to do an important part of the task; if
it fails then there's no way to complete the task so the parent must fail as
well; :meth:`~eventlet.proc.Source.link_exception` is useful here:

>>> p = spawn(demofunc, 1, 0)
>>> _ = p.link_exception()
>>> try:
...     api.sleep(1)
... except LinkedFailed:
...     print 'LinkedFailed'
LinkedFailed

One application of linking is :func:`waitall` function: link to a bunch of
coroutines and wait for all them to complete. Such a function is provided by
this module.
"""

import sys

from eventlet import api, coros, hubs

__all__ = ['LinkedExited', 'LinkedFailed', 'LinkedCompleted', 'LinkedKilled',
           'ProcExit', 'Link', 'waitall', 'killall', 'Source', 'Proc',
           'spawn', 'spawn_link', 'spawn_link_value', 'spawn_link_exception']


class LinkedExited(Exception):
    """Raised when a linked proc exits"""
    msg = "%r exited"

    def __init__(self, name=None, msg=None):
        self.name = name
        if msg is None:
            msg = self.msg % self.name
        Exception.__init__(self, msg)


class LinkedCompleted(LinkedExited):
    """Raised when a linked proc finishes the execution cleanly"""
    msg = "%r completed successfully"


class LinkedFailed(LinkedExited):
    """Raised when a linked proc dies because of unhandled exception"""
    msg = "%r failed with %s"

    def __init__(self, name, typ, value=None, tb=None):
        msg = self.msg % (name, typ.__name__)
        LinkedExited.__init__(self, name, msg)


class LinkedKilled(LinkedFailed):
    """Raised when a linked proc dies because of unhandled GreenletExit (i.e.
    it was killed)
    """
    msg = """%r was killed with %s"""


def getLinkedFailed(name, typ, value=None, tb=None):
    if issubclass(typ, api.GreenletExit):
        return LinkedKilled(name, typ, value, tb)
    return LinkedFailed(name, typ, value, tb)


class ProcExit(api.GreenletExit):
    """Raised when this proc is killed."""


class Link(object):
    """
    A link to a greenlet, triggered when the greenlet exits.
    """

    def __init__(self, listener):
        self.listener = listener

    def cancel(self):
        self.listener = None

    def __enter__(self):
        pass

    def __exit__(self, *args):
        self.cancel()


class LinkToEvent(Link):

    def __call__(self, source):
        if self.listener is None:
            return
        if source.has_value():
            self.listener.send(source.value)
        else:
            self.listener.send_exception(*source.exc_info())


class LinkToGreenlet(Link):

    def __call__(self, source):
        if source.has_value():
            self.listener.throw(LinkedCompleted(source.name))
        else:
            self.listener.throw(getLinkedFailed(source.name, *source.exc_info()))


class LinkToCallable(Link):

    def __call__(self, source):
        self.listener(source)


def waitall(lst, trap_errors=False, queue=None):
    if queue is None:
        queue = coros.queue()
    index = -1
    for (index, linkable) in enumerate(lst):
        linkable.link(decorate_send(queue, index))
    len = index + 1
    results = [None] * len
    count = 0
    while count < len:
        try:
            index, value = queue.wait()
        except Exception:
            if not trap_errors:
                raise
        else:
            results[index] = value
        count += 1
    return results


class decorate_send(object):

    def __init__(self, event, tag):
        self._event = event
        self._tag = tag

    def __repr__(self):
        params = (type(self).__name__, self._tag, self._event)
        return '<%s tag=%r event=%r>' % params

    def __getattr__(self, name):
        assert name != '_event'
        return getattr(self._event, name)

    def send(self, value):
        self._event.send((self._tag, value))


def killall(procs, *throw_args, **kwargs):
    if not throw_args:
        throw_args = (ProcExit, )
    wait = kwargs.pop('wait', False)
    if kwargs:
        raise TypeError('Invalid keyword argument for proc.killall(): %s'
                        % ', '.join(kwargs.keys()))
    for g in procs:
        if not g.dead:
            hubs.get_hub().schedule_call_global(0, g.throw, *throw_args)
    if wait and api.getcurrent() is not hubs.get_hub().greenlet:
        api.sleep(0)


class NotUsed(object):

    def __str__(self):
        return ''

    __repr__ = __str__

_NOT_USED = NotUsed()


def spawn_greenlet(function, *args):
    """Create a new greenlet that will run ``function(*args)``.
    The current greenlet won't be unscheduled. Keyword arguments aren't
    supported (limitation of greenlet), use :func:`spawn` to work around that.
    """
    g = api.Greenlet(function)
    g.parent = hubs.get_hub().greenlet
    hubs.get_hub().schedule_call_global(0, g.switch, *args)
    return g


class Source(object):
    """Maintain a set of links to the listeners. Delegate the sent value or
    the exception to all of them.

    To set up a link, use :meth:`link_value`, :meth:`link_exception` or
    :meth:`link` method. The latter establishes both "value" and "exception"
    link. It is possible to link to events, queues, greenlets and callables.

    >>> source = Source()
    >>> event = coros.Event()
    >>> _ = source.link(event)

    Once source's :meth:`send` or :meth:`send_exception` method is called,
    all the listeners with the right type of link will be notified ("right
    type" means that exceptions won't be delivered to "value" links and
    values won't be delivered to "exception" links). Once link has been fired
    it is removed.

    Notifying listeners is performed in the **mainloop** greenlet. Under the
    hood notifying a link means executing a callback, see :class:`Link` class
    for details. Notification *must not* attempt to switch to the hub, i.e.
    call any blocking functions.

    >>> source.send('hello')
    >>> event.wait()
    'hello'

    Any error happened while sending will be logged as a regular unhandled
    exception. This won't prevent other links from being fired.

    There are 3 kinds of listeners supported:

    1. If *listener* is a greenlet (regardless if it's a raw greenlet or an
       extension like :class:`Proc`), a subclass of :class:`LinkedExited`
       exception is raised in it.

    2. If *listener* is something with send/send_exception methods (event,
       queue, :class:`Source` but not :class:`Proc`) the relevant method is
       called.

    3. If *listener* is a callable, it is called with 1 argument (the result)
       for "value" links and with 3 arguments ``(typ, value, tb)`` for
       "exception" links.
    """

    def __init__(self, name=None):
        self.name = name
        self._value_links = {}
        self._exception_links = {}
        self.value = _NOT_USED
        self._exc = None

    def _repr_helper(self):
        result = []
        result.append(repr(self.name))
        if self.value is not _NOT_USED:
            if self._exc is None:
                res = repr(self.value)
                if len(res) > 50:
                    res = res[:50] + '...'
                result.append('result=%s' % res)
            else:
                result.append('raised=%s' % (self._exc, ))
        result.append('{%s:%s}' % (len(self._value_links),
                                   len(self._exception_links)))
        return result

    def __repr__(self):
        klass = type(self).__name__
        return '<%s at %s %s>' % (klass, hex(id(self)),
                                  ' '.join(self._repr_helper()))

    def ready(self):
        return self.value is not _NOT_USED

    def has_value(self):
        return self.value is not _NOT_USED and self._exc is None

    def has_exception(self):
        return self.value is not _NOT_USED and self._exc is not None

    def exc_info(self):
        if not self._exc:
            return (None, None, None)
        elif len(self._exc) == 3:
            return self._exc
        elif len(self._exc) == 1:
            if isinstance(self._exc[0], type):
                return self._exc[0], None, None
            else:
                return self._exc[0].__class__, self._exc[0], None
        elif len(self._exc) == 2:
            return self._exc[0], self._exc[1], None
        else:
            return self._exc

    def link_value(self, listener=None, link=None):
        if self.ready() and self._exc is not None:
            return
        if listener is None:
            listener = api.getcurrent()
        if link is None:
            link = self.getLink(listener)
        if self.ready() and listener is api.getcurrent():
            link(self)
        else:
            self._value_links[listener] = link
            if self.value is not _NOT_USED:
                self._start_send()
        return link

    def link_exception(self, listener=None, link=None):
        if self.value is not _NOT_USED and self._exc is None:
            return
        if listener is None:
            listener = api.getcurrent()
        if link is None:
            link = self.getLink(listener)
        if self.ready() and listener is api.getcurrent():
            link(self)
        else:
            self._exception_links[listener] = link
            if self.value is not _NOT_USED:
                self._start_send_exception()
        return link

    def link(self, listener=None, link=None):
        if listener is None:
            listener = api.getcurrent()
        if link is None:
            link = self.getLink(listener)
        if self.ready() and listener is api.getcurrent():
            if self._exc is None:
                link(self)
            else:
                link(self)
        else:
            self._value_links[listener] = link
            self._exception_links[listener] = link
            if self.value is not _NOT_USED:
                if self._exc is None:
                    self._start_send()
                else:
                    self._start_send_exception()
        return link

    def unlink(self, listener=None):
        if listener is None:
            listener = api.getcurrent()
        self._value_links.pop(listener, None)
        self._exception_links.pop(listener, None)

    @staticmethod
    def getLink(listener):
        if hasattr(listener, 'throw'):
            return LinkToGreenlet(listener)
        if hasattr(listener, 'send'):
            return LinkToEvent(listener)
        elif hasattr(listener, '__call__'):
            return LinkToCallable(listener)
        else:
            raise TypeError("Don't know how to link to %r" % (listener, ))

    def send(self, value):
        assert not self.ready(), "%s has been fired already" % self
        self.value = value
        self._exc = None
        self._start_send()

    def _start_send(self):
        hubs.get_hub().schedule_call_global(0, self._do_send,
            self._value_links.items(), self._value_links)

    def send_exception(self, *throw_args):
        assert not self.ready(), "%s has been fired already" % self
        self.value = None
        self._exc = throw_args
        self._start_send_exception()

    def _start_send_exception(self):
        hubs.get_hub().schedule_call_global(0, self._do_send,
            self._exception_links.items(), self._exception_links)

    def _do_send(self, links, consult):
        while links:
            listener, link = links.pop()
            try:
                if listener in consult:
                    try:
                        link(self)
                    finally:
                        consult.pop(listener, None)
            except:
                hubs.get_hub().schedule_call_global(0, self._do_send,
                                                    links, consult)
                raise

    def wait(self, timeout=None, *throw_args):
"""Wait until :meth:`send` or :meth:`send_exception` is called or *timeout* has expired. Return the argument of :meth:`send` or raise the argument of :meth:`send_exception`. If *timeout* has expired, ``None`` is returned. The arguments, when provided, specify how many seconds to wait and what to do when *timeout* has expired. They are treated the same way as :func:`~eventlet.api.timeout` treats them. """ if self.value is not _NOT_USED: if self._exc is None: return self.value else: api.getcurrent().throw(*self._exc) if timeout is not None: timer = api.timeout(timeout, *throw_args) timer.__enter__() if timeout==0: if timer.__exit__(None, None, None): return else: try: api.getcurrent().throw(*timer.throw_args) except: if not timer.__exit__(*sys.exc_info()): raise return EXC = True try: try: waiter = Waiter() self.link(waiter) try: return waiter.wait() finally: self.unlink(waiter) except: EXC = False if timeout is None or not timer.__exit__(*sys.exc_info()): raise finally: if timeout is not None and EXC: timer.__exit__(None, None, None) class Waiter(object): def __init__(self): self.greenlet = None def send(self, value): """Wake up the greenlet that is calling wait() currently (if there is one). Can only be called from get_hub().greenlet. """ assert api.getcurrent() is hubs.get_hub().greenlet if self.greenlet is not None: self.greenlet.switch(value) def send_exception(self, *throw_args): """Make greenlet calling wait() wake up (if there is a wait()). Can only be called from get_hub().greenlet. """ assert api.getcurrent() is hubs.get_hub().greenlet if self.greenlet is not None: self.greenlet.throw(*throw_args) def wait(self): """Wait until send or send_exception is called. Return value passed into send() or raise exception passed into send_exception(). 
""" assert self.greenlet is None current = api.getcurrent() assert current is not hubs.get_hub().greenlet self.greenlet = current try: return hubs.get_hub().switch() finally: self.greenlet = None class Proc(Source): """A linkable coroutine based on Source. Upon completion, delivers coroutine's result to the listeners. """ def __init__(self, name=None): self.greenlet = None Source.__init__(self, name) def _repr_helper(self): if self.greenlet is not None and self.greenlet.dead: dead = '(dead)' else: dead = '' return ['%r%s' % (self.greenlet, dead)] + Source._repr_helper(self) def __repr__(self): klass = type(self).__name__ return '<%s %s>' % (klass, ' '.join(self._repr_helper())) def __nonzero__(self): if self.ready(): # with current _run this does not makes any difference # still, let keep it there return False # otherwise bool(proc) is the same as bool(greenlet) if self.greenlet is not None: return bool(self.greenlet) @property def dead(self): return self.ready() or self.greenlet.dead @classmethod def spawn(cls, function, *args, **kwargs): """Return a new :class:`Proc` instance that is scheduled to execute ``function(*args, **kwargs)`` upon the next hub iteration. """ proc = cls() proc.run(function, *args, **kwargs) return proc def run(self, function, *args, **kwargs): """Create a new greenlet to execute ``function(*args, **kwargs)``. The created greenlet is scheduled to run upon the next hub iteration. """ assert self.greenlet is None, "'run' can only be called once per instance" if self.name is None: self.name = str(function) self.greenlet = spawn_greenlet(self._run, function, args, kwargs) def _run(self, function, args, kwargs): """Internal top level function. Execute *function* and send its result to the listeners. """ try: result = function(*args, **kwargs) except: self.send_exception(*sys.exc_info()) raise # let mainloop log the exception else: self.send(result) def throw(self, *throw_args): """Used internally to raise the exception. 
        Behaves exactly like greenlet's 'throw' with the exception that
        :class:`ProcExit` is raised by default. Do not use this function as
        it leaves the current greenlet unscheduled forever. Use :meth:`kill`
        method instead.
        """
        if not self.dead:
            if not throw_args:
                throw_args = (ProcExit, )
            self.greenlet.throw(*throw_args)

    def kill(self, *throw_args):
        """
        Raise an exception in the greenlet. Unschedule the current greenlet
        so that this :class:`Proc` can handle the exception (or die).

        The exception can be specified with *throw_args*. By default,
        :class:`ProcExit` is raised.
        """
        if not self.dead:
            if not throw_args:
                throw_args = (ProcExit, )
            hubs.get_hub().schedule_call_global(0, self.greenlet.throw,
                                                *throw_args)
            if api.getcurrent() is not hubs.get_hub().greenlet:
                api.sleep(0)

# QQQ maybe Proc should not inherit from Source (because its send() and
# QQQ send_exception() methods are for internal use only)

spawn = Proc.spawn


def spawn_link(function, *args, **kwargs):
    p = spawn(function, *args, **kwargs)
    p.link()
    return p


def spawn_link_value(function, *args, **kwargs):
    p = spawn(function, *args, **kwargs)
    p.link_value()
    return p


def spawn_link_exception(function, *args, **kwargs):
    p = spawn(function, *args, **kwargs)
    p.link_exception()
    return p


class wrap_errors(object):
    """Helper to make function return an exception, rather than raise it.

    Because every exception that is unhandled by greenlet will be logged by
    the hub, it is desirable to prevent non-error exceptions from leaving a
    greenlet. This can be done with a simple try/except construct:

    def func1(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except (A, B, C), ex:
            return ex

    wrap_errors provides a shortcut to write that in one line:

    func1 = wrap_errors((A, B, C), func)

    It also preserves __str__ and __repr__ of the original function.
    """

    def __init__(self, errors, func):
        """Make a new function from `func', such that it catches `errors' (an
        Exception subclass, or a tuple of Exception subclasses) and return it
        as a value.
""" self.errors = errors self.func = func def __call__(self, *args, **kwargs): try: return self.func(*args, **kwargs) except self.errors, ex: return ex def __str__(self): return str(self.func) def __repr__(self): return repr(self.func) def __getattr__(self, item): return getattr(self.func, item) class RunningProcSet(object): """ Maintain a set of :class:`Proc` s that are still running, that is, automatically remove a proc when it's finished. Provide a way to wait/kill all of them """ def __init__(self, *args): self.procs = set(*args) if args: for p in self.args[0]: p.link(lambda p: self.procs.discard(p)) def __len__(self): return len(self.procs) def __contains__(self, item): if isinstance(item, api.Greenlet): # special case for "api.getcurrent() in running_proc_set" to work for x in self.procs: if x.greenlet == item: return True else: return item in self.procs def __iter__(self): return iter(self.procs) def add(self, p): self.procs.add(p) p.link(lambda p: self.procs.discard(p)) def spawn(self, func, *args, **kwargs): p = spawn(func, *args, **kwargs) self.add(p) return p def waitall(self, trap_errors=False): while self.procs: waitall(self.procs, trap_errors=trap_errors) def killall(self, *throw_args, **kwargs): return killall(self.procs, *throw_args, **kwargs) class Pool(object): linkable_class = Proc def __init__(self, limit): self.semaphore = coros.Semaphore(limit) def allocate(self): self.semaphore.acquire() g = self.linkable_class() g.link(lambda *_args: self.semaphore.release()) return g eventlet-0.13.0/eventlet/processes.py0000644000175000017500000001155712164577340020511 0ustar temototemoto00000000000000import warnings warnings.warn("eventlet.processes is deprecated in favor of " "eventlet.green.subprocess, which is API-compatible with the standard " " library subprocess module.", DeprecationWarning, stacklevel=2) import errno import os import popen2 import signal from eventlet import api from eventlet import pools from eventlet import greenio class 
DeadProcess(RuntimeError): pass def cooperative_wait(pobj, check_interval=0.01): """ Waits for a child process to exit, returning the status code. Unlike ``os.wait``, :func:`cooperative_wait` does not block the entire process, only the calling coroutine. If the child process does not die, :func:`cooperative_wait` could wait forever. The argument *check_interval* is the amount of time, in seconds, that :func:`cooperative_wait` will sleep between calls to ``os.waitpid``. """ try: while True: status = pobj.poll() if status >= 0: return status api.sleep(check_interval) except OSError, e: if e.errno == errno.ECHILD: # no child process, this happens if the child process # already died and has been cleaned up, or if you just # called with a random pid value return -1 else: raise class Process(object): """Construct Process objects, then call read, and write on them.""" process_number = 0 def __init__(self, command, args, dead_callback=lambda:None): self.process_number = self.process_number + 1 Process.process_number = self.process_number self.command = command self.args = args self._dead_callback = dead_callback self.run() def run(self): self.dead = False self.started = False self.popen4 = None ## We use popen4 so that read() will read from either stdout or stderr self.popen4 = popen2.Popen4([self.command] + self.args) child_stdout_stderr = self.popen4.fromchild child_stdin = self.popen4.tochild self.child_stdout_stderr = greenio.GreenPipe(child_stdout_stderr, child_stdout_stderr.mode, 0) self.child_stdin = greenio.GreenPipe(child_stdin, child_stdin.mode, 0) self.sendall = self.child_stdin.write self.send = self.child_stdin.write self.recv = self.child_stdout_stderr.read self.readline = self.child_stdout_stderr.readline self._read_first_result = False def wait(self): return cooperative_wait(self.popen4) def dead_callback(self): self.wait() self.dead = True if self._dead_callback: self._dead_callback() def makefile(self, mode, *arg): if mode.startswith('r'): return 
self.child_stdout_stderr if mode.startswith('w'): return self.child_stdin raise RuntimeError("Unknown mode", mode) def read(self, amount=None): """Reads from the stdout and stderr of the child process. The first call to read() will return a string; subsequent calls may raise a DeadProcess when EOF occurs on the pipe. """ result = self.child_stdout_stderr.read(amount) if result == '' and self._read_first_result: # This process is dead. self.dead_callback() raise DeadProcess else: self._read_first_result = True return result def write(self, stuff): written = 0 try: written = self.child_stdin.write(stuff) self.child_stdin.flush() except ValueError, e: ## File was closed assert str(e) == 'I/O operation on closed file' if written == 0: self.dead_callback() raise DeadProcess def flush(self): self.child_stdin.flush() def close(self): self.child_stdout_stderr.close() self.child_stdin.close() self.dead_callback() def close_stdin(self): self.child_stdin.close() def kill(self, sig=None): if sig == None: sig = signal.SIGTERM pid = self.getpid() os.kill(pid, sig) def getpid(self): return self.popen4.pid class ProcessPool(pools.Pool): def __init__(self, command, args=None, min_size=0, max_size=4): """*command* the command to run """ self.command = command if args is None: args = [] self.args = args pools.Pool.__init__(self, min_size, max_size) def create(self): """Generate a process """ def dead_callback(): self.current_size -= 1 return Process(self.command, self.args, dead_callback) def put(self, item): if not item.dead: if item.popen4.poll() != -1: item.dead_callback() else: pools.Pool.put(self, item) eventlet-0.13.0/eventlet/corolocal.py0000644000175000017500000000334212164577340020451 0ustar temototemoto00000000000000import weakref from eventlet import greenthread __all__ = ['get_ident', 'local'] def get_ident(): """ Returns ``id()`` of current greenlet. 
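The get_ident/per-context-dict idea that corolocal is built on can be sketched with OS threads instead of greenlets. This is an illustrative stand-in, not eventlet's API: `MiniLocal` and `_storage` are hypothetical names, and `threading.get_ident()` plays the role of `id(greenthread.getcurrent())`.

```python
import threading

# Hypothetical sketch: a tiny "local" keyed off an identity function,
# mirroring how corolocal keys per-greenlet state off get_ident().
_storage = {}

def get_ident():
    # stand-in for id(greenthread.getcurrent())
    return threading.get_ident()

class MiniLocal(object):
    def _dict(self):
        # one private dict per (instance, thread of execution)
        return _storage.setdefault((id(self), get_ident()), {})

    def __getattr__(self, name):
        # only reached when normal lookup fails, so class attrs still work
        try:
            return self._dict()[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self._dict()[name] = value

loc = MiniLocal()
loc.x = 1

seen = []
t = threading.Thread(target=lambda: seen.append(hasattr(loc, 'x')))
t.start()
t.join()
# the main thread sees x; the other thread does not
```

corolocal's real implementation additionally patches `__dict__` on every access so that `__init__` runs once per greenlet; the sketch above only shows the identity-keyed storage.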
Useful for debugging.""" return id(greenthread.getcurrent()) # the entire purpose of this class is to store off the constructor # arguments in a local variable without calling __init__ directly class _localbase(object): __slots__ = '_local__args', '_local__greens' def __new__(cls, *args, **kw): self = object.__new__(cls) object.__setattr__(self, '_local__args', (args, kw)) object.__setattr__(self, '_local__greens', weakref.WeakKeyDictionary()) if (args or kw) and (cls.__init__ is object.__init__): raise TypeError("Initialization arguments are not supported") return self def _patch(thrl): greens = object.__getattribute__(thrl, '_local__greens') # until we can store the localdict on greenlets themselves, # we store it in _local__greens on the local object cur = greenthread.getcurrent() if cur not in greens: # must be the first time we've seen this greenlet, call __init__ greens[cur] = {} cls = type(thrl) if cls.__init__ is not object.__init__: args, kw = object.__getattribute__(thrl, '_local__args') thrl.__init__(*args, **kw) object.__setattr__(thrl, '__dict__', greens[cur]) class local(_localbase): def __getattribute__(self, attr): _patch(self) return object.__getattribute__(self, attr) def __setattr__(self, attr, value): _patch(self) return object.__setattr__(self, attr, value) def __delattr__(self, attr): _patch(self) return object.__delattr__(self, attr) eventlet-0.13.0/eventlet/saranwrap.py0000644000175000017500000005670412164577340020504 0ustar temototemoto00000000000000import cPickle as Pickle import os import struct import sys from eventlet.processes import Process, DeadProcess from eventlet import pools import warnings warnings.warn("eventlet.saranwrap is deprecated due to underuse. 
If you love "
    "it, let us know by emailing eventletdev@lists.secondlife.com",
    DeprecationWarning, stacklevel=2)

# debugging hooks
_g_debug_mode = False
if _g_debug_mode:
    import traceback
    import tempfile


def pythonpath_sync():
    """
    Apply the current ``sys.path`` to the environment variable ``PYTHONPATH``,
    so that child processes have the same paths as the caller does.
    """
    pypath = os.pathsep.join(sys.path)
    os.environ['PYTHONPATH'] = pypath


def wrap(obj, dead_callback=None):
    """
    Wrap an object in another process through a saranwrap proxy.

    :param obj: The object to wrap.
    :param dead_callback: A callable to invoke if the process exits.
    """
    if type(obj).__name__ == 'module':
        return wrap_module(obj.__name__, dead_callback)
    pythonpath_sync()
    if _g_debug_mode:
        p = Process(sys.executable,
                    ["-W", "ignore", __file__, '--child',
                     '--logfile',
                     os.path.join(tempfile.gettempdir(), 'saranwrap.log')],
                    dead_callback)
    else:
        p = Process(sys.executable,
                    ["-W", "ignore", __file__, '--child'],
                    dead_callback)
    prox = Proxy(ChildProcess(p, p))
    prox.obj = obj
    return prox.obj


def wrap_module(fqname, dead_callback=None):
    """
    Wrap a module in another process through a saranwrap proxy.

    :param fqname: The fully qualified name of the module.
    :param dead_callback: A callable to invoke if the process exits.
    """
    pythonpath_sync()
    global _g_debug_mode
    if _g_debug_mode:
        p = Process(sys.executable,
                    ["-W", "ignore", __file__, '--module', fqname,
                     '--logfile',
                     os.path.join(tempfile.gettempdir(), 'saranwrap.log')],
                    dead_callback)
    else:
        p = Process(sys.executable,
                    ["-W", "ignore", __file__, '--module', fqname],
                    dead_callback)
    prox = Proxy(ChildProcess(p, p))
    return prox


def status(proxy):
    """
    Get the status from the server through a proxy.

    :param proxy: a :class:`eventlet.saranwrap.Proxy` object connected to a
        server.
    """
    return proxy.__local_dict['_cp'].make_request(Request('status', {}))


class BadResponse(Exception):
    """This exception is raised by a saranwrap client when it could parse but
    cannot understand the response from the server."""
    pass


class BadRequest(Exception):
    """This exception is raised by a saranwrap server when it could parse but
    cannot understand the request from the client."""
    pass


class UnrecoverableError(Exception):
    pass


class Request(object):
    "A wrapper class for proxy requests to the server."
    def __init__(self, action, param):
        self._action = action
        self._param = param

    def __str__(self):
        return "Request `" + self._action + "` " + str(self._param)

    def __getitem__(self, name):
        return self._param[name]

    def get(self, name, default=None):
        try:
            return self[name]
        except KeyError:
            return default

    def action(self):
        return self._action


def _read_lp_hunk(stream):
    len_bytes = stream.read(4)
    if len_bytes == '':
        raise EOFError("No more data to read from %s" % stream)
    length = struct.unpack('I', len_bytes)[0]
    body = stream.read(length)
    return body


def _read_response(id, attribute, input, cp):
    """local helper method to read responses from the rpc server."""
    try:
        str = _read_lp_hunk(input)
        _prnt(repr(str))
        response = Pickle.loads(str)
    except (AttributeError, DeadProcess, Pickle.UnpicklingError), e:
        raise UnrecoverableError(e)
    _prnt("response: %s" % response)
    if response[0] == 'value':
        return response[1]
    elif response[0] == 'callable':
        return CallableProxy(id, attribute, cp)
    elif response[0] == 'object':
        return ObjectProxy(cp, response[1])
    elif response[0] == 'exception':
        exp = response[1]
        raise exp
    else:
        raise BadResponse(response[0])


def _write_lp_hunk(stream, hunk):
    write_length = struct.pack('I', len(hunk))
    stream.write(write_length + hunk)
    if hasattr(stream, 'flush'):
        stream.flush()


def _write_request(param, output):
    _prnt("request: %s" % param)
    str = Pickle.dumps(param)
    _write_lp_hunk(output, str)


def _is_local(attribute):
    "Return ``True`` if the attribute should be
handled locally" # return attribute in ('_in', '_out', '_id', '__getattribute__', # '__setattr__', '__dict__') # good enough for now. :) if '__local_dict' in attribute: return True return False def _prnt(message): global _g_debug_mode if _g_debug_mode: print message _g_logfile = None def _log(message): global _g_logfile if _g_logfile: _g_logfile.write(str(os.getpid()) + ' ' + message + '\n') _g_logfile.flush() def _unmunge_attr_name(name): """ Sometimes attribute names come in with classname prepended, not sure why. This function removes said classname, because we're huge hackers and we didn't find out what the true right thing to do is. *TODO: find out. """ if(name.startswith('_Proxy')): name = name[len('_Proxy'):] if(name.startswith('_ObjectProxy')): name = name[len('_ObjectProxy'):] return name class ChildProcess(object): """ This class wraps a remote python process, presumably available in an instance of a :class:`Server`. """ def __init__(self, instr, outstr, dead_list = None): """ :param instr: a file-like object which supports ``read()``. :param outstr: a file-like object which supports ``write()`` and ``flush()``. :param dead_list: a list of ids of remote objects that are dead """ # default dead_list inside the function because all objects in method # argument lists are init-ed only once globally _prnt("ChildProcess::__init__") if dead_list is None: dead_list = set() self._dead_list = dead_list self._in = instr self._out = outstr self._lock = pools.TokenPool(max_size=1) def make_request(self, request, attribute=None): _id = request.get('id') t = self._lock.get() try: _write_request(request, self._out) retval = _read_response(_id, attribute, self._in, self) finally: self._lock.put(t) return retval def __del__(self): self._in.close() class Proxy(object): """ This is the class you will typically use as a client to a child process. Simply instantiate one around a file-like interface and start calling methods on the thing that is exported. 
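The wire format that `make_request` pushes through `_write_request`/`_read_response` is a length-prefixed pickle hunk, as in `_read_lp_hunk` and `_write_lp_hunk` above. A self-contained round-trip in modern Python (saranwrap itself uses `cPickle` over child-process pipes; the `io.BytesIO` stand-in here is only illustrative):

```python
import io
import pickle
import struct

def write_lp_hunk(stream, hunk):
    # 4-byte 'I'-packed length prefix, then the payload bytes
    stream.write(struct.pack('I', len(hunk)) + hunk)

def read_lp_hunk(stream):
    len_bytes = stream.read(4)
    if not len_bytes:
        raise EOFError("no more data")
    (length,) = struct.unpack('I', len_bytes)
    return stream.read(length)

buf = io.BytesIO()
write_lp_hunk(buf, pickle.dumps({'action': 'status'}))
buf.seek(0)
request = pickle.loads(read_lp_hunk(buf))
# request == {'action': 'status'}
```

Because every request and response is a single framed hunk, the client can hold one lock (the `TokenPool` in `ChildProcess`) across a write/read pair and never interleave messages.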
The ``dir()`` builtin is not supported, so you have to know what has been exported. """ def __init__(self, cp): """ :param cp: :class:`ChildProcess` instance that wraps the i/o to the child process. """ #_prnt("Proxy::__init__") self.__local_dict = dict( _cp = cp, _id = None) def __getattribute__(self, attribute): #_prnt("Proxy::__getattr__: %s" % attribute) if _is_local(attribute): # call base class getattribute so we actually get the local variable attribute = _unmunge_attr_name(attribute) return super(Proxy, self).__getattribute__(attribute) elif attribute in ('__deepcopy__', '__copy__'): # redirect copy function calls to our own versions instead of # to the proxied object return super(Proxy, self).__getattribute__('__deepcopy__') else: my_cp = self.__local_dict['_cp'] my_id = self.__local_dict['_id'] _dead_list = my_cp._dead_list for dead_object in _dead_list.copy(): request = Request('del', {'id':dead_object}) my_cp.make_request(request) try: _dead_list.remove(dead_object) except KeyError: pass # Pass all public attributes across to find out if it is # callable or a simple attribute. request = Request('getattr', {'id':my_id, 'attribute':attribute}) return my_cp.make_request(request, attribute=attribute) def __setattr__(self, attribute, value): #_prnt("Proxy::__setattr__: %s" % attribute) if _is_local(attribute): # It must be local to this actual object, so we have to apply # it to the dict in a roundabout way attribute = _unmunge_attr_name(attribute) super(Proxy, self).__getattribute__('__dict__')[attribute]=value else: my_cp = self.__local_dict['_cp'] my_id = self.__local_dict['_id'] # Pass the set attribute across request = Request('setattr', {'id':my_id, 'attribute':attribute, 'value':value}) return my_cp.make_request(request, attribute=attribute) class ObjectProxy(Proxy): """ This class wraps a remote object in the :class:`Server` This class will be created during normal operation, and users should not need to deal with this class directly. 
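The heart of `Proxy` is `__getattribute__` interception: bookkeeping names stay local, everything else turns into a remote request. A minimal local-only sketch of that shape (`ForwardingProxy` and `Payload` are hypothetical names; the real Proxy forwards to a child process via `make_request`, not to an in-process object):

```python
class ForwardingProxy(object):
    # Illustrative stand-in for saranwrap.Proxy: every attribute access is
    # intercepted; "local" bookkeeping names stay on the proxy, everything
    # else is forwarded to the wrapped target.
    def __init__(self, target):
        object.__setattr__(self, '_local_dict', {'target': target})

    def __getattribute__(self, attribute):
        # mirrors _is_local's substring test on the attribute name
        if '_local_dict' in attribute:
            return object.__getattribute__(self, attribute)
        target = object.__getattribute__(self, '_local_dict')['target']
        return getattr(target, attribute)

    def __setattr__(self, attribute, value):
        target = object.__getattribute__(self, '_local_dict')['target']
        setattr(target, attribute, value)

class Payload(object):
    def __init__(self):
        self.n = 1
    def bump(self):
        self.n += 1
        return self.n

p = ForwardingProxy(Payload())
p.n = 41
p.bump()  # forwarded to Payload.bump; p.n is now 42
```

The design consequence is the same one the docstring above notes: since the proxy never enumerates the target, `dir()` on it is useless and callers must know what the wrapped object exports.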
""" def __init__(self, cp, _id): """ :param cp: A :class:`ChildProcess` object that wraps the i/o of a child process. :param _id: an identifier for the remote object. humans do not provide this. """ Proxy.__init__(self, cp) self.__local_dict['_id'] = _id #_prnt("ObjectProxy::__init__ %s" % _id) def __del__(self): my_id = self.__local_dict['_id'] #_prnt("ObjectProxy::__del__ %s" % my_id) self.__local_dict['_cp']._dead_list.add(my_id) def __getitem__(self, key): my_cp = self.__local_dict['_cp'] my_id = self.__local_dict['_id'] request = Request('getitem', {'id':my_id, 'key':key}) return my_cp.make_request(request, attribute=key) def __setitem__(self, key, value): my_cp = self.__local_dict['_cp'] my_id = self.__local_dict['_id'] request = Request('setitem', {'id':my_id, 'key':key, 'value':value}) return my_cp.make_request(request, attribute=key) def __eq__(self, rhs): my_cp = self.__local_dict['_cp'] my_id = self.__local_dict['_id'] request = Request('eq', {'id':my_id, 'rhs':rhs.__local_dict['_id']}) return my_cp.make_request(request) def __repr__(self): # apparently repr(obj) skips the whole getattribute thing and just calls __repr__ # directly. Therefore we just pass it through the normal call pipeline, and # tack on a little header so that you can tell it's an object proxy. val = self.__repr__() return "saran:%s" % val def __str__(self): # see description for __repr__, because str(obj) works the same. We don't # tack anything on to the return value here because str values are used as data. return self.__str__() def __nonzero__(self): # bool(obj) is another method that skips __getattribute__. # There's no good way to just pass # the method on, so we use a special message. my_cp = self.__local_dict['_cp'] my_id = self.__local_dict['_id'] request = Request('nonzero', {'id':my_id}) return my_cp.make_request(request) def __len__(self): # see description for __repr__, len(obj) is the same. 
return self.__len__()

    def __contains__(self, item):
        # another special name that is normally called without recourse
        # to __getattribute__
        return self.__contains__(item)

    def __deepcopy__(self, memo=None):
        """Copies the entire external object and returns its value.
        Will only work if the remote object is pickleable."""
        my_cp = self.__local_dict['_cp']
        my_id = self.__local_dict['_id']
        request = Request('copy', {'id': my_id})
        return my_cp.make_request(request)

    # since the remote object is being serialized whole anyway,
    # there's no semantic difference between copy and deepcopy
    __copy__ = __deepcopy__

    def proxied_type(self):
        """ Returns the type of the object in the child process.
        Calling type(obj) on a saranwrapped object will always return the
        proxy type itself, so this is a way to get at the 'real' type
        value."""
        if type(self) is not ObjectProxy:
            return type(self)
        my_cp = self.__local_dict['_cp']
        my_id = self.__local_dict['_id']
        request = Request('type', {'id': my_id})
        return my_cp.make_request(request)

    def getpid(self):
        """ Returns the pid of the child process.  The argument should be
        a saranwrapped object."""
        my_cp = self.__local_dict['_cp']
        return my_cp._in.getpid()


class CallableProxy(object):
    """ This class wraps a remote function in the :class:`Server`

    This class will be created by a :class:`Proxy` during normal operation,
    and users should not need to deal with this class directly."""

    def __init__(self, object_id, name, cp):
        #_prnt("CallableProxy::__init__: %s, %s" % (object_id, name))
        self._object_id = object_id
        self._name = name
        self._cp = cp

    def __call__(self, *args, **kwargs):
        #_prnt("CallableProxy::__call__: %s, %s" % (args, kwargs))
        # Pass the call across.  We never build a callable without
        # having already checked if the method starts with '_' so we
        # can safely pass this one to the remote object.
#_prnt("calling %s %s" % (self._object_id, self._name) request = Request('call', {'id':self._object_id, 'name':self._name, 'args':args, 'kwargs':kwargs}) return self._cp.make_request(request, attribute=self._name) class Server(object): def __init__(self, input, output, export): """ :param input: a file-like object which supports ``read()``. :param output: a file-like object which supports ``write()`` and ``flush()``. :param export: an object, function, or map which is exported to clients when the id is ``None``. """ #_log("Server::__init__") self._in = input self._out = output self._export = export self._next_id = 1 self._objects = {} def handle_status(self, obj, req): return { 'object_count':len(self._objects), 'next_id':self._next_id, 'pid':os.getpid()} def handle_getattr(self, obj, req): try: return getattr(obj, req['attribute']) except AttributeError, e: if hasattr(obj, "__getitem__"): return obj[req['attribute']] else: raise e #_log('getattr: %s' % str(response)) def handle_setattr(self, obj, req): try: return setattr(obj, req['attribute'], req['value']) except AttributeError, e: if hasattr(obj, "__setitem__"): return obj.__setitem__(req['attribute'], req['value']) else: raise e def handle_getitem(self, obj, req): return obj[req['key']] def handle_setitem(self, obj, req): obj[req['key']] = req['value'] return None # *TODO figure out what the actual return value # of __setitem__ should be def handle_eq(self, obj, req): #_log("__eq__ %s %s" % (obj, req)) rhs = None try: rhs = self._objects[req['rhs']] except KeyError: return False return (obj == rhs) def handle_call(self, obj, req): #_log("calling %s " % (req['name'])) try: fn = getattr(obj, req['name']) except AttributeError, e: if hasattr(obj, "__setitem__"): fn = obj[req['name']] else: raise e return fn(*req['args'],**req['kwargs']) def handle_del(self, obj, req): id = req['id'] _log("del %s from %s" % (id, self._objects)) # *TODO what does __del__ actually return? 
try:
            del self._objects[id]
        except KeyError:
            pass
        return None

    def handle_type(self, obj, req):
        return type(obj)

    def handle_nonzero(self, obj, req):
        return bool(obj)

    def handle_copy(self, obj, req):
        return obj

    def loop(self):
        """Loop forever and respond to all requests."""
        _log("Server::loop")
        while True:
            try:
                try:
                    str_ = _read_lp_hunk(self._in)
                except EOFError:
                    if _g_debug_mode:
                        _log("Exiting normally")
                    sys.exit(0)

                request = Pickle.loads(str_)
                _log("request: %s (%s)" % (request, self._objects))
                req = request
                id = None
                obj = None
                try:
                    id = req['id']
                    if id:
                        id = int(id)
                        obj = self._objects[id]
                    #_log("id, object: %d %s" % (id, obj))
                except Exception, e:
                    #_log("Exception %s" % str(e))
                    pass
                if obj is None or id is None:
                    id = None
                    obj = self._export()
                    #_log("found object %s" % str(obj))

                # Handle the request via a method with a special name on
                # the server
                handler_name = 'handle_%s' % request.action()
                try:
                    handler = getattr(self, handler_name)
                except AttributeError:
                    raise BadRequest, request.action()
                response = handler(obj, request)

                # figure out what to do with the response, and respond
                # appropriately.
                if request.action() in ['status', 'type', 'copy']:
                    # have to handle these specially since we want to
                    # pickle up the actual value and not return a proxy
                    self.respond(['value', response])
                elif callable(response):
                    #_log("callable %s" % response)
                    self.respond(['callable'])
                elif self.is_value(response):
                    self.respond(['value', response])
                else:
                    self._objects[self._next_id] = response
                    #_log("objects: %s" % str(self._objects))
                    self.respond(['object', self._next_id])
                    self._next_id += 1
            except (KeyboardInterrupt, SystemExit), e:
                raise e
            except Exception, e:
                self.write_exception(e)

    def is_value(self, value):
        """ Test if *value* should be serialized as a simple dataset.

        :param value: The value to test.
        :return: ``True`` if *value* is a simple serializable set of data.
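The `handler_name = 'handle_%s'` convention in `Server.loop` is plain getattr dispatch: each action string maps to a method, and an unknown action raises `BadRequest`. A toy version of the pattern (`MiniServer` and its handlers are hypothetical, not eventlet's):

```python
class MiniServer(object):
    # Minimal sketch of the 'handle_<action>' dispatch used by
    # saranwrap.Server.loop.
    def __init__(self):
        self.objects = {1: ['a', 'b']}

    def handle_status(self, obj, req):
        return {'object_count': len(self.objects)}

    def handle_getitem(self, obj, req):
        return obj[req['key']]

    def dispatch(self, obj, action, req):
        try:
            handler = getattr(self, 'handle_%s' % action)
        except AttributeError:
            # the real server raises BadRequest here
            raise ValueError('bad request: %s' % action)
        return handler(obj, req)

s = MiniServer()
s.dispatch(s.objects[1], 'getitem', {'key': 0})  # 'a'
s.dispatch(None, 'status', {})                   # {'object_count': 1}
```

Adding a new RPC action then only requires defining another `handle_*` method; no central switch statement has to change.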
""" return type(value) in (str,unicode,int,float,long,bool,type(None)) def respond(self, body): _log("responding with: %s" % body) #_log("objects: %s" % self._objects) s = Pickle.dumps(body) _log(repr(s)) _write_lp_hunk(self._out, s) def write_exception(self, e): """Helper method to respond with an exception.""" #_log("exception: %s" % sys.exc_info()[0]) # TODO: serialize traceback using generalization of code from mulib.htmlexception global _g_debug_mode if _g_debug_mode: _log("traceback: %s" % traceback.format_tb(sys.exc_info()[2])) self.respond(['exception', e]) # test function used for testing return of unpicklable exceptions def raise_an_unpicklable_error(): class Unpicklable(Exception): pass raise Unpicklable() # test function used for testing return of picklable exceptions def raise_standard_error(): raise FloatingPointError() # test function to make sure print doesn't break the wrapper def print_string(str): print str # test function to make sure printing on stdout doesn't break the # wrapper def err_string(str): print >>sys.stderr, str def named(name): """Return an object given its name. The name uses a module-like syntax, eg:: os.path.join or:: mulib.mu.Resource """ toimport = name obj = None import_err_strings = [] while toimport: try: obj = __import__(toimport) break except ImportError, err: # print 'Import error on %s: %s' % (toimport, err) # debugging spam import_err_strings.append(err.__str__()) toimport = '.'.join(toimport.split('.')[:-1]) if obj is None: raise ImportError( '%s could not be imported. Import errors: %r' % (name, import_err_strings)) for seg in name.split('.')[1:]: try: obj = getattr(obj, seg) except AttributeError: dirobj = dir(obj) dirobj.sort() raise AttributeError( 'attribute %r missing from %r (%r) %r. 
Import errors: %r' % ( seg, obj, dirobj, name, import_err_strings)) return obj def main(): import optparse parser = optparse.OptionParser( usage="usage: %prog [options]", description="Simple saranwrap.Server wrapper") parser.add_option( '-c', '--child', default=False, action='store_true', help='Wrap an object serialized via setattr.') parser.add_option( '-m', '--module', type='string', dest='module', default=None, help='a module to load and export.') parser.add_option( '-l', '--logfile', type='string', dest='logfile', default=None, help='file to log to.') options, args = parser.parse_args() global _g_logfile if options.logfile: _g_logfile = open(options.logfile, 'a') from eventlet import tpool base_obj = [None] if options.module: def get_module(): if base_obj[0] is None: base_obj[0] = named(options.module) return base_obj[0] server = Server(tpool.Proxy(sys.stdin), tpool.Proxy(sys.stdout), get_module) elif options.child: def get_base(): if base_obj[0] is None: base_obj[0] = {} return base_obj[0] server = Server(tpool.Proxy(sys.stdin), tpool.Proxy(sys.stdout), get_base) # *HACK: some modules may emit on stderr, which breaks everything. 
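The stderr-swallowing hack flagged above can be sketched, and safely undone, with a no-op stream object. This stand-in (`NullOut` is a hypothetical name) mirrors the idea of the server's `NullSTDOut`: keep a chatty child's stray output from corrupting the pickle channel.

```python
import sys

class NullOut(object):
    # Swallow writes so stray output on stderr goes nowhere.
    def write(self, *args):
        pass
    def flush(self):
        pass

saved, sys.stderr = sys.stderr, NullOut()
try:
    print("lost", file=sys.stderr)  # silently discarded
finally:
    sys.stderr = saved  # always restore the real stream
```

The saranwrap server replaces both `sys.stdout` and `sys.stderr` for the life of the process, since its stdout *is* the RPC channel; the try/finally restore shown here is the safer pattern for shorter-lived redirection.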
class NullSTDOut(object): def noop(*args): pass def log_write(self, message): self.message = getattr(self, 'message', '') + message if '\n' in message: _log(self.message.rstrip()) self.message = '' write = noop read = noop flush = noop sys.stderr = NullSTDOut() sys.stdout = NullSTDOut() if _g_debug_mode: sys.stdout.write = sys.stdout.log_write sys.stderr.write = sys.stderr.log_write # Loop until EOF server.loop() if _g_logfile: _g_logfile.close() if __name__ == "__main__": main() eventlet-0.13.0/eventlet/db_pool.py0000644000175000017500000003731612164577340020122 0ustar temototemoto00000000000000from collections import deque import sys import time from eventlet.pools import Pool from eventlet import timeout from eventlet import hubs from eventlet.hubs.timer import Timer from eventlet.greenthread import GreenThread class ConnectTimeout(Exception): pass class BaseConnectionPool(Pool): def __init__(self, db_module, min_size = 0, max_size = 4, max_idle = 10, max_age = 30, connect_timeout = 5, *args, **kwargs): """ Constructs a pool with at least *min_size* connections and at most *max_size* connections. Uses *db_module* to construct new connections. The *max_idle* parameter determines how long pooled connections can remain idle, in seconds. After *max_idle* seconds have elapsed without the connection being used, the pool closes the connection. *max_age* is how long any particular connection is allowed to live. Connections that have been open for longer than *max_age* seconds are closed, regardless of idle time. If *max_age* is 0, all connections are closed on return to the pool, reducing it to a concurrency limiter. *connect_timeout* is the duration in seconds that the pool will wait before timing out on connect() to the database. If triggered, the timeout will raise a ConnectTimeout from get(). The remainder of the arguments are used as parameters to the *db_module*'s connection constructor. 
""" assert(db_module) self._db_module = db_module self._args = args self._kwargs = kwargs self.max_idle = max_idle self.max_age = max_age self.connect_timeout = connect_timeout self._expiration_timer = None super(BaseConnectionPool, self).__init__(min_size=min_size, max_size=max_size, order_as_stack=True) def _schedule_expiration(self): """ Sets up a timer that will call _expire_old_connections when the oldest connection currently in the free pool is ready to expire. This is the earliest possible time that a connection could expire, thus, the timer will be running as infrequently as possible without missing a possible expiration. If this function is called when a timer is already scheduled, it does nothing. If max_age or max_idle is 0, _schedule_expiration likewise does nothing. """ if self.max_age is 0 or self.max_idle is 0: # expiration is unnecessary because all connections will be expired # on put return if ( self._expiration_timer is not None and not getattr(self._expiration_timer, 'called', False)): # the next timer is already scheduled return try: now = time.time() self._expire_old_connections(now) # the last item in the list, because of the stack ordering, # is going to be the most-idle idle_delay = (self.free_items[-1][0] - now) + self.max_idle oldest = min([t[1] for t in self.free_items]) age_delay = (oldest - now) + self.max_age next_delay = min(idle_delay, age_delay) except (IndexError, ValueError): # no free items, unschedule ourselves self._expiration_timer = None return if next_delay > 0: # set up a continuous self-calling loop self._expiration_timer = Timer(next_delay, GreenThread(hubs.get_hub().greenlet).switch, self._schedule_expiration, [], {}) self._expiration_timer.schedule() def _expire_old_connections(self, now): """ Iterates through the open connections contained in the pool, closing ones that have remained idle for longer than max_idle seconds, or have been in existence for longer than max_age seconds. 
*now* is the current time, as returned by time.time().
        """
        original_count = len(self.free_items)
        expired = [
            conn
            for last_used, created_at, conn in self.free_items
            if self._is_expired(now, last_used, created_at)]
        new_free = [
            (last_used, created_at, conn)
            for last_used, created_at, conn in self.free_items
            if not self._is_expired(now, last_used, created_at)]
        self.free_items.clear()
        self.free_items.extend(new_free)

        # adjust the current size counter to account for expired
        # connections
        self.current_size -= original_count - len(self.free_items)

        for conn in expired:
            self._safe_close(conn, quiet=True)

    def _is_expired(self, now, last_used, created_at):
        """ Returns true if the connection has expired."""
        if (self.max_idle <= 0
                or self.max_age <= 0
                or now - last_used > self.max_idle
                or now - created_at > self.max_age):
            return True
        return False

    def _unwrap_connection(self, conn):
        """ If the connection was wrapped by a subclass of
        BaseConnectionWrapper and is still functional (as determined by
        the __nonzero__ method), returns the unwrapped connection.
        If anything goes wrong with this process, returns None.
""" base = None try: if conn: base = conn._base conn._destroy() else: base = None except AttributeError: pass return base def _safe_close(self, conn, quiet = False): """ Closes the (already unwrapped) connection, squelching any exceptions.""" try: conn.close() except (KeyboardInterrupt, SystemExit): raise except AttributeError: pass # conn is None, or junk except: if not quiet: print "Connection.close raised: %s" % (sys.exc_info()[1]) def get(self): conn = super(BaseConnectionPool, self).get() # None is a flag value that means that put got called with # something it couldn't use if conn is None: try: conn = self.create() except Exception: # unconditionally increase the free pool because # even if there are waiters, doing a full put # would incur a greenlib switch and thus lose the # exception stack self.current_size -= 1 raise # if the call to get() draws from the free pool, it will come # back as a tuple if isinstance(conn, tuple): _last_used, created_at, conn = conn else: created_at = time.time() # wrap the connection so the consumer can call close() safely wrapped = PooledConnectionWrapper(conn, self) # annotating the wrapper so that when it gets put in the pool # again, we'll know how old it is wrapped._db_pool_created_at = created_at return wrapped def put(self, conn): created_at = getattr(conn, '_db_pool_created_at', 0) now = time.time() conn = self._unwrap_connection(conn) if self._is_expired(now, now, created_at): self._safe_close(conn, quiet=False) conn = None else: # rollback any uncommitted changes, so that the next client # has a clean slate. 
This also pokes the connection to see if # it's dead or None try: if conn: conn.rollback() except KeyboardInterrupt: raise except: # we don't care what the exception was, we just know the # connection is dead print "WARNING: connection.rollback raised: %s" % (sys.exc_info()[1]) conn = None if conn is not None: super(BaseConnectionPool, self).put( (now, created_at, conn) ) else: # wake up any waiters with a flag value that indicates # they need to manufacture a connection if self.waiting() > 0: super(BaseConnectionPool, self).put(None) else: # no waiters -- just change the size self.current_size -= 1 self._schedule_expiration() def clear(self): """ Close all connections that this pool still holds a reference to, and removes all references to them. """ if self._expiration_timer: self._expiration_timer.cancel() free_items, self.free_items = self.free_items, deque() for item in free_items: # Free items created using min_size>0 are not tuples. conn = item[2] if isinstance(item, tuple) else item self._safe_close(conn, quiet=True) def __del__(self): self.clear() class TpooledConnectionPool(BaseConnectionPool): """A pool which gives out :class:`~eventlet.tpool.Proxy`-based database connections. """ def create(self): now = time.time() return now, now, self.connect(self._db_module, self.connect_timeout, *self._args, **self._kwargs) @classmethod def connect(cls, db_module, connect_timeout, *args, **kw): t = timeout.Timeout(connect_timeout, ConnectTimeout()) try: from eventlet import tpool conn = tpool.execute(db_module.connect, *args, **kw) return tpool.Proxy(conn, autowrap_names=('cursor',)) finally: t.cancel() class RawConnectionPool(BaseConnectionPool): """A pool which gives out plain database connections. 
    """
    def create(self):
        now = time.time()
        return now, now, self.connect(self._db_module,
                                      self.connect_timeout,
                                      *self._args,
                                      **self._kwargs)

    @classmethod
    def connect(cls, db_module, connect_timeout, *args, **kw):
        t = timeout.Timeout(connect_timeout, ConnectTimeout())
        try:
            return db_module.connect(*args, **kw)
        finally:
            t.cancel()


# default connection pool is the tpool one
ConnectionPool = TpooledConnectionPool


class GenericConnectionWrapper(object):
    def __init__(self, baseconn):
        self._base = baseconn

    def __enter__(self):
        return self._base.__enter__()
    def __exit__(self, exc, value, tb):
        return self._base.__exit__(exc, value, tb)
    def __repr__(self):
        return self._base.__repr__()
    def affected_rows(self):
        return self._base.affected_rows()
    def autocommit(self, *args, **kwargs):
        return self._base.autocommit(*args, **kwargs)
    def begin(self):
        return self._base.begin()
    def change_user(self, *args, **kwargs):
        return self._base.change_user(*args, **kwargs)
    def character_set_name(self, *args, **kwargs):
        return self._base.character_set_name(*args, **kwargs)
    def close(self, *args, **kwargs):
        return self._base.close(*args, **kwargs)
    def commit(self, *args, **kwargs):
        return self._base.commit(*args, **kwargs)
    def cursor(self, *args, **kwargs):
        return self._base.cursor(*args, **kwargs)
    def dump_debug_info(self, *args, **kwargs):
        return self._base.dump_debug_info(*args, **kwargs)
    def errno(self, *args, **kwargs):
        return self._base.errno(*args, **kwargs)
    def error(self, *args, **kwargs):
        return self._base.error(*args, **kwargs)
    def errorhandler(self, *args, **kwargs):
        return self._base.errorhandler(*args, **kwargs)
    def insert_id(self, *args, **kwargs):
        return self._base.insert_id(*args, **kwargs)
    def literal(self, *args, **kwargs):
        return self._base.literal(*args, **kwargs)
    def set_character_set(self, *args, **kwargs):
        return self._base.set_character_set(*args, **kwargs)
    def set_sql_mode(self, *args, **kwargs):
        return self._base.set_sql_mode(*args, **kwargs)
    def show_warnings(self):
        return self._base.show_warnings()
    def warning_count(self):
        return self._base.warning_count()
    def ping(self, *args, **kwargs):
        return self._base.ping(*args, **kwargs)
    def query(self, *args, **kwargs):
        return self._base.query(*args, **kwargs)
    def rollback(self, *args, **kwargs):
        return self._base.rollback(*args, **kwargs)
    def select_db(self, *args, **kwargs):
        return self._base.select_db(*args, **kwargs)
    def set_server_option(self, *args, **kwargs):
        return self._base.set_server_option(*args, **kwargs)
    def server_capabilities(self, *args, **kwargs):
        return self._base.server_capabilities(*args, **kwargs)
    def shutdown(self, *args, **kwargs):
        return self._base.shutdown(*args, **kwargs)
    def sqlstate(self, *args, **kwargs):
        return self._base.sqlstate(*args, **kwargs)
    def stat(self, *args, **kwargs):
        return self._base.stat(*args, **kwargs)
    def store_result(self, *args, **kwargs):
        return self._base.store_result(*args, **kwargs)
    def string_literal(self, *args, **kwargs):
        return self._base.string_literal(*args, **kwargs)
    def thread_id(self, *args, **kwargs):
        return self._base.thread_id(*args, **kwargs)
    def use_result(self, *args, **kwargs):
        return self._base.use_result(*args, **kwargs)


class PooledConnectionWrapper(GenericConnectionWrapper):
    """A connection wrapper where:
    - the close method returns the connection to the pool instead of closing
      it directly
    - ``bool(conn)`` returns a reasonable value
    - returns itself to the pool if it gets garbage collected
    """
    def __init__(self, baseconn, pool):
        super(PooledConnectionWrapper, self).__init__(baseconn)
        self._pool = pool

    def __nonzero__(self):
        return (hasattr(self, '_base') and bool(self._base))

    def _destroy(self):
        self._pool = None
        try:
            del self._base
        except AttributeError:
            pass

    def close(self):
        """Return the connection to the pool, and remove the reference to it
        so that you can't use it again through this wrapper object.
        """
        if self and self._pool:
            self._pool.put(self)
        self._destroy()

    def __del__(self):
        return
        # this causes some issues if __del__ is called in the
        # main coroutine, so for now this is disabled
        #self.close()


class DatabaseConnector(object):
    """\
    This is an object which will maintain a collection of database
    connection pools on a per-host basis."""
    def __init__(self, module, credentials, conn_pool=None, *args, **kwargs):
        """\
        constructor
        *module*
            Database module to use.
        *credentials*
            Mapping of hostname to connect arguments (e.g. username and
            password)"""
        assert(module)
        self._conn_pool_class = conn_pool
        if self._conn_pool_class is None:
            self._conn_pool_class = ConnectionPool
        self._module = module
        self._args = args
        self._kwargs = kwargs
        # this is a map of hostname to username/password
        self._credentials = credentials
        self._databases = {}

    def credentials_for(self, host):
        if host in self._credentials:
            return self._credentials[host]
        else:
            return self._credentials.get('default', None)

    def get(self, host, dbname):
        """Returns a ConnectionPool to the target host and schema."""
        key = (host, dbname)
        if key not in self._databases:
            new_kwargs = self._kwargs.copy()
            new_kwargs['db'] = dbname
            new_kwargs['host'] = host
            new_kwargs.update(self.credentials_for(host))
            dbpool = self._conn_pool_class(self._module,
                                           *self._args, **new_kwargs)
            self._databases[key] = dbpool
        return self._databases[key]


eventlet-0.13.0/eventlet/greenpool.py

import itertools
import traceback

from eventlet import event
from eventlet import greenthread
from eventlet import queue
from eventlet import semaphore
from eventlet.support import greenlets as greenlet

__all__ = ['GreenPool', 'GreenPile']

DEBUG = True


class GreenPool(object):
    """The GreenPool class is a pool of green threads.
    """
    def __init__(self, size=1000):
        self.size = size
        self.coroutines_running = set()
        self.sem = semaphore.Semaphore(size)
        self.no_coros_running = event.Event()

    def resize(self, new_size):
        """Change the max number of greenthreads doing work at any given time.

        If resize is called when there are more than *new_size* greenthreads
        already working on tasks, they will be allowed to complete but no new
        tasks will be allowed to get launched until enough greenthreads finish
        their tasks to drop the overall quantity below *new_size*.  Until
        then, the return value of free() will be negative.
        """
        size_delta = new_size - self.size
        self.sem.counter += size_delta
        self.size = new_size

    def running(self):
        """Returns the number of greenthreads that are currently executing
        functions in the GreenPool."""
        return len(self.coroutines_running)

    def free(self):
        """Returns the number of greenthreads available for use.

        If zero or less, the next call to :meth:`spawn` or :meth:`spawn_n`
        will block the calling greenthread until a slot becomes available."""
        return self.sem.counter

    def spawn(self, function, *args, **kwargs):
        """Run the *function* with its arguments in its own green thread.
        Returns the :class:`GreenThread <eventlet.greenthread.GreenThread>`
        object that is running the function, which can be used to retrieve
        the results.

        If the pool is currently at capacity, ``spawn`` will block until one
        of the running greenthreads completes its task and frees up a slot.

        This function is reentrant; *function* can call ``spawn`` on the same
        pool without risk of deadlocking the whole thing.
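A rough stdlib sketch of that reentrancy rule, using OS threads in place of greenthreads (`MiniPool` is an invented toy, not eventlet's implementation): when a pooled worker calls spawn on a saturated pool, the task runs inline rather than deadlocking on the semaphore.

```python
import threading

results = []

class MiniPool(object):
    # invented illustration of GreenPool's reentrancy trick
    def __init__(self, size):
        self.sem = threading.Semaphore(size)
        self.workers = set()

    def spawn(self, func, *args):
        me = threading.current_thread()
        if me in self.workers:
            if not self.sem.acquire(blocking=False):
                # pool is full and the caller is itself a pooled worker:
                # execute inline instead of waiting for a free slot
                func(*args)
                return None
        else:
            self.sem.acquire()
        t = threading.Thread(target=self._work, args=(func,) + args)
        self.workers.add(t)
        t.start()
        return t

    def _work(self, func, *args):
        try:
            func(*args)
        finally:
            self.workers.discard(threading.current_thread())
            self.sem.release()

def inner():
    results.append('inner')

def outer():
    results.append('outer')
    pool.spawn(inner)   # pool of 1 is full -> runs inline, no deadlock

pool = MiniPool(1)
t = pool.spawn(outer)
t.join()
```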
        """
        # if reentering an empty pool, don't try to wait on a coroutine freeing
        # itself -- instead, just execute in the current coroutine
        current = greenthread.getcurrent()
        if self.sem.locked() and current in self.coroutines_running:
            # a bit hacky to use the GT without switching to it
            gt = greenthread.GreenThread(current)
            gt.main(function, args, kwargs)
            return gt
        else:
            self.sem.acquire()
            gt = greenthread.spawn(function, *args, **kwargs)
            if not self.coroutines_running:
                self.no_coros_running = event.Event()
            self.coroutines_running.add(gt)
            gt.link(self._spawn_done)
        return gt

    def _spawn_n_impl(self, func, args, kwargs, coro):
        try:
            try:
                func(*args, **kwargs)
            except (KeyboardInterrupt, SystemExit, greenlet.GreenletExit):
                raise
            except:
                if DEBUG:
                    traceback.print_exc()
        finally:
            if coro is None:
                return
            else:
                coro = greenthread.getcurrent()
                self._spawn_done(coro)

    def spawn_n(self, function, *args, **kwargs):
        """Create a greenthread to run the *function*, the same as
        :meth:`spawn`.  The difference is that :meth:`spawn_n` returns
        None; the results of *function* are not retrievable.
        """
        # if reentering an empty pool, don't try to wait on a coroutine freeing
        # itself -- instead, just execute in the current coroutine
        current = greenthread.getcurrent()
        if self.sem.locked() and current in self.coroutines_running:
            self._spawn_n_impl(function, args, kwargs, None)
        else:
            self.sem.acquire()
            g = greenthread.spawn_n(self._spawn_n_impl,
                                    function, args, kwargs, True)
            if not self.coroutines_running:
                self.no_coros_running = event.Event()
            self.coroutines_running.add(g)

    def waitall(self):
        """Waits until all greenthreads in the pool are finished working."""
        assert greenthread.getcurrent() not in self.coroutines_running, \
            "Calling waitall() from within one of the " \
            "GreenPool's greenthreads will never terminate."
        if self.running():
            self.no_coros_running.wait()

    def _spawn_done(self, coro):
        self.sem.release()
        if coro is not None:
            self.coroutines_running.remove(coro)
        # if done processing (no more work is waiting for processing),
        # we can finish off any waitall() calls that might be pending
        if self.sem.balance == self.size:
            self.no_coros_running.send(None)

    def waiting(self):
        """Return the number of greenthreads waiting to spawn."""
        if self.sem.balance < 0:
            return -self.sem.balance
        else:
            return 0

    def _do_map(self, func, it, gi):
        for args in it:
            gi.spawn(func, *args)
        gi.spawn(return_stop_iteration)

    def starmap(self, function, iterable):
        """This is the same as :func:`itertools.starmap`, except that *func*
        is executed in a separate green thread for each item, with the
        concurrency limited by the pool's size.  In operation, starmap
        consumes a constant amount of memory, proportional to the size of
        the pool, and is thus suited for iterating over extremely long
        input lists.
        """
        if function is None:
            function = lambda *a: a
        gi = GreenMap(self.size)
        greenthread.spawn_n(self._do_map, function, iterable, gi)
        return gi

    def imap(self, function, *iterables):
        """This is the same as :func:`itertools.imap`, and has the same
        concurrency and memory behavior as :meth:`starmap`.

        It's quite convenient for, e.g., farming out jobs from a file::

           def worker(line):
               return do_something(line)

           pool = GreenPool()
           for result in pool.imap(worker, open("filename", 'r')):
               print result
        """
        return self.starmap(function, itertools.izip(*iterables))


def return_stop_iteration():
    return StopIteration()


class GreenPile(object):
    """GreenPile is an abstraction representing a bunch of I/O-related tasks.

    Construct a GreenPile with an existing GreenPool object.  The GreenPile
    will then use that pool's concurrency as it processes its jobs.  There
    can be many GreenPiles associated with a single GreenPool.

    A GreenPile can also be constructed standalone, not associated with any
    GreenPool.
    To do this, construct it with an integer size parameter instead of a
    GreenPool.

    It is not advisable to iterate over a GreenPile in a different greenthread
    than the one which is calling spawn.  The iterator will exit early in that
    situation.
    """
    def __init__(self, size_or_pool=1000):
        if isinstance(size_or_pool, GreenPool):
            self.pool = size_or_pool
        else:
            self.pool = GreenPool(size_or_pool)
        self.waiters = queue.LightQueue()
        self.used = False
        self.counter = 0

    def spawn(self, func, *args, **kw):
        """Runs *func* in its own green thread, with the result available by
        iterating over the GreenPile object."""
        self.used = True
        self.counter += 1
        try:
            gt = self.pool.spawn(func, *args, **kw)
            self.waiters.put(gt)
        except:
            self.counter -= 1
            raise

    def __iter__(self):
        return self

    def next(self):
        """Wait for the next result, suspending the current greenthread until
        it is available.  Raises StopIteration when there are no more
        results."""
        if self.counter == 0 and self.used:
            raise StopIteration()
        try:
            return self.waiters.get().wait()
        finally:
            self.counter -= 1


# this is identical to GreenPile but it blocks on spawn if the results
# aren't consumed, and it doesn't generate its own StopIteration exception,
# instead relying on the spawning process to send one in when it's done
class GreenMap(GreenPile):
    def __init__(self, size_or_pool):
        super(GreenMap, self).__init__(size_or_pool)
        self.waiters = queue.LightQueue(maxsize=self.pool.size)

    def next(self):
        try:
            val = self.waiters.get().wait()
            if isinstance(val, StopIteration):
                raise val
            else:
                return val
        finally:
            self.counter -= 1


eventlet-0.13.0/eventlet/patcher.py

import sys
import imp

__all__ = ['inject', 'import_patched', 'monkey_patch', 'is_monkey_patched']

__exclude = set(('__builtins__', '__file__', '__name__'))


class SysModulesSaver(object):
    """Class that captures some subset of the current state of
    sys.modules.
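GreenMap's end-of-stream handling above relies on a sentinel: the producer enqueues a StopIteration *instance*, and the consumer stops iterating when it dequeues one. A stdlib-only sketch of that pattern (function names are invented, no greenthreads involved):

```python
import collections

def produce(items):
    # mirror _do_map: enqueue each result, then a StopIteration instance
    # as the end-of-stream sentinel
    q = collections.deque()
    for item in items:
        q.append(item * 2)          # stand-in for the worker function
    q.append(StopIteration())
    return q

def consume(q):
    # mirror GreenMap.next(): the sentinel ends iteration,
    # everything else is a result
    out = []
    while True:
        val = q.popleft()
        if isinstance(val, StopIteration):
            break
        out.append(val)
    return out

results = consume(produce([1, 2, 3]))   # [2, 4, 6]
```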
    Pass in an iterator of module names to the constructor."""
    def __init__(self, module_names=()):
        self._saved = {}
        imp.acquire_lock()
        self.save(*module_names)

    def save(self, *module_names):
        """Saves the named modules to the object."""
        for modname in module_names:
            self._saved[modname] = sys.modules.get(modname, None)

    def restore(self):
        """Restores the modules that the saver knows about into
        sys.modules.
        """
        try:
            for modname, mod in self._saved.iteritems():
                if mod is not None:
                    sys.modules[modname] = mod
                else:
                    try:
                        del sys.modules[modname]
                    except KeyError:
                        pass
        finally:
            imp.release_lock()


def inject(module_name, new_globals, *additional_modules):
    """Base method for "injecting" greened modules into an imported module.

    It imports the module specified in *module_name*, arranging things so
    that the already-imported modules in *additional_modules* are used when
    *module_name* makes its imports.

    *new_globals* is either None or a globals dictionary that gets populated
    with the contents of the *module_name* module.  This is useful when
    creating a "green" version of some other module.

    *additional_modules* should be a collection of two-element tuples, of the
    form (<name>, <module>).  If it's not specified, a default selection of
    name/module pairs is used, which should cover all use cases but may be
    slower because there are inevitably redundant or unnecessary imports.
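The save/cover/restore cycle that SysModulesSaver and inject perform can be sketched with nothing but sys.modules manipulation. This toy covers the stdlib json module with a fake and then restores it; the `marker` attribute is invented for illustration:

```python
import sys
import types

# snapshot the current entry (None if json was never imported)
saved = {name: sys.modules.get(name) for name in ('json',)}

# cover the real module with a fake, as inject does with green modules
fake = types.ModuleType('json')
fake.marker = 'patched'
sys.modules['json'] = fake

import json                      # the import machinery now hands back the fake
patched_marker = json.marker

# restore: put saved entries back, deleting ones that did not exist before
for name, mod in saved.items():
    if mod is not None:
        sys.modules[name] = mod
    else:
        sys.modules.pop(name, None)

import json                      # back to the real module
restored = not hasattr(json, 'marker')
```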
    """
    patched_name = '__patched_module_' + module_name
    if patched_name in sys.modules:
        # returning already-patched module so as not to destroy existing
        # references to patched modules
        return sys.modules[patched_name]

    if not additional_modules:
        # supply some defaults
        additional_modules = (
            _green_os_modules() +
            _green_select_modules() +
            _green_socket_modules() +
            _green_thread_modules() +
            _green_time_modules())
            #_green_MySQLdb())  # enable this after a short baking-in period

    # after this we are gonna screw with sys.modules, so capture the
    # state of all the modules we're going to mess with, and lock
    saver = SysModulesSaver([name for name, m in additional_modules])
    saver.save(module_name)

    # Cover the target modules so that when you import the module it
    # sees only the patched versions
    for name, mod in additional_modules:
        sys.modules[name] = mod

    ## Remove the old module from sys.modules and reimport it while
    ## the specified modules are in place
    sys.modules.pop(module_name, None)
    try:
        module = __import__(module_name, {}, {}, module_name.split('.')[:-1])

        if new_globals is not None:
            ## Update the given globals dictionary with everything from
            ## this new module
            for name in dir(module):
                if name not in __exclude:
                    new_globals[name] = getattr(module, name)

        ## Keep a reference to the new module to prevent it from dying
        sys.modules[patched_name] = module
    finally:
        saver.restore()  ## Put the original modules back

    return module


def import_patched(module_name, *additional_modules, **kw_additional_modules):
    """Imports a module in a way that ensures that the module uses "green"
    versions of the standard library modules, so that everything works
    nonblockingly.

    The only required argument is the name of the module to be imported.
    """
    return inject(
        module_name,
        None,
        *additional_modules + tuple(kw_additional_modules.items()))


def patch_function(func, *additional_modules):
    """Decorator that returns a version of the function that patches
    some modules for the duration of the function call.
    This is deeply gross and should only be used for functions that import
    network libraries within their function bodies that there is no way of
    getting around."""
    if not additional_modules:
        # supply some defaults
        additional_modules = (
            _green_os_modules() +
            _green_select_modules() +
            _green_socket_modules() +
            _green_thread_modules() +
            _green_time_modules())

    def patched(*args, **kw):
        saver = SysModulesSaver()
        for name, mod in additional_modules:
            saver.save(name)
            sys.modules[name] = mod
        try:
            return func(*args, **kw)
        finally:
            saver.restore()
    return patched


def _original_patch_function(func, *module_names):
    """Kind of the contrapositive of patch_function: decorates a
    function such that when it's called, sys.modules is populated only
    with the unpatched versions of the specified modules.

    Unlike patch_function, only the names of the modules need be supplied,
    and there are no defaults.  This is a gross hack; tell your kids not
    to import inside function bodies!"""
    def patched(*args, **kw):
        saver = SysModulesSaver(module_names)
        for name in module_names:
            sys.modules[name] = original(name)
        try:
            return func(*args, **kw)
        finally:
            saver.restore()
    return patched


def original(modname):
    """This returns an unpatched version of a module; this is useful for
    Eventlet itself (i.e.
    tpool)."""
    # note that it's not necessary to temporarily install unpatched
    # versions of all patchable modules during the import of the
    # module; this is because none of them import each other, except
    # for threading which imports thread
    original_name = '__original_module_' + modname
    if original_name in sys.modules:
        return sys.modules.get(original_name)

    # re-import the "pure" module and store it in the global _originals
    # dict; be sure to restore whatever module had that name already
    saver = SysModulesSaver((modname,))
    sys.modules.pop(modname, None)
    # some rudimentary dependency checking -- fortunately the modules
    # we're working on don't have many dependencies so we can just do
    # some special-casing here
    deps = {'threading': 'thread', 'Queue': 'threading'}
    if modname in deps:
        dependency = deps[modname]
        saver.save(dependency)
        sys.modules[dependency] = original(dependency)
    try:
        real_mod = __import__(modname, {}, {}, modname.split('.')[:-1])
        if modname == 'Queue' and not hasattr(real_mod, '_threading'):
            # tricky hack: Queue's constructor in <2.7 imports
            # threading on every instantiation; therefore we wrap
            # it so that it always gets the original threading
            real_mod.Queue.__init__ = _original_patch_function(
                real_mod.Queue.__init__,
                'threading')
        # save a reference to the unpatched module so it doesn't get lost
        sys.modules[original_name] = real_mod
    finally:
        saver.restore()

    return sys.modules[original_name]


already_patched = {}


def monkey_patch(**on):
    """Globally patches certain system modules to be greenthread-friendly.

    The keyword arguments afford some control over which modules are patched.
    If no keyword arguments are supplied, all possible modules are patched.
    If keywords are set to True, only the specified modules are patched.
    E.g., ``monkey_patch(socket=True, select=True)`` patches only the select
    and socket modules.  Most arguments patch the single module of the same
    name (os, time, select).
    The exceptions are socket, which also patches the ssl module if present;
    and thread, which patches thread, threading, and Queue.

    It's safe to call monkey_patch multiple times.
    """
    accepted_args = set(('os', 'select', 'socket',
                         'thread', 'time', 'psycopg', 'MySQLdb'))
    default_on = on.pop("all", None)
    for k in on.iterkeys():
        if k not in accepted_args:
            raise TypeError("monkey_patch() got an unexpected "
                            "keyword argument %r" % k)
    if default_on is None:
        default_on = not (True in on.values())
    for modname in accepted_args:
        if modname == 'MySQLdb':
            # MySQLdb is only on when explicitly patched for the moment
            on.setdefault(modname, False)
        on.setdefault(modname, default_on)

    modules_to_patch = []
    if on['os'] and not already_patched.get('os'):
        modules_to_patch += _green_os_modules()
        already_patched['os'] = True
    if on['select'] and not already_patched.get('select'):
        modules_to_patch += _green_select_modules()
        already_patched['select'] = True
    if on['socket'] and not already_patched.get('socket'):
        modules_to_patch += _green_socket_modules()
        already_patched['socket'] = True
    if on['thread'] and not already_patched.get('thread'):
        modules_to_patch += _green_thread_modules()
        already_patched['thread'] = True
    if on['time'] and not already_patched.get('time'):
        modules_to_patch += _green_time_modules()
        already_patched['time'] = True
    if on.get('MySQLdb') and not already_patched.get('MySQLdb'):
        modules_to_patch += _green_MySQLdb()
        already_patched['MySQLdb'] = True
    if on['psycopg'] and not already_patched.get('psycopg'):
        try:
            from eventlet.support import psycopg2_patcher
            psycopg2_patcher.make_psycopg_green()
            already_patched['psycopg'] = True
        except ImportError:
            # note that if we get an importerror from trying to
            # monkeypatch psycopg, we will continually retry it
            # whenever monkey_patch is called; this should not be a
            # performance problem but it allows is_monkey_patched to
            # tell us whether or not we succeeded
            pass

    imp.acquire_lock()
    try:
        for name, mod in modules_to_patch:
            orig_mod = sys.modules.get(name)
            if orig_mod is None:
                orig_mod = __import__(name)
            for attr_name in mod.__patched__:
                patched_attr = getattr(mod, attr_name, None)
                if patched_attr is not None:
                    setattr(orig_mod, attr_name, patched_attr)
    finally:
        imp.release_lock()


def is_monkey_patched(module):
    """Returns True if the given module is monkeypatched currently, False if
    not.  *module* can be either the module itself or its name.

    Based entirely off the name of the module, so if you import a module
    some other way than with the import keyword (including import_patched),
    this might not be correct about that particular module."""
    return module in already_patched or \
        getattr(module, '__name__', None) in already_patched


def _green_os_modules():
    from eventlet.green import os
    return [('os', os)]


def _green_select_modules():
    from eventlet.green import select
    return [('select', select)]


def _green_socket_modules():
    from eventlet.green import socket
    try:
        from eventlet.green import ssl
        return [('socket', socket), ('ssl', ssl)]
    except ImportError:
        return [('socket', socket)]


def _green_thread_modules():
    from eventlet.green import Queue
    from eventlet.green import thread
    from eventlet.green import threading
    return [('Queue', Queue), ('thread', thread), ('threading', threading)]


def _green_time_modules():
    from eventlet.green import time
    return [('time', time)]


def _green_MySQLdb():
    try:
        from eventlet.green import MySQLdb
        return [('MySQLdb', MySQLdb)]
    except ImportError:
        return []


def slurp_properties(source, destination, ignore=[], srckeys=None):
    """Copy properties from *source* (assumed to be a module) to
    *destination* (assumed to be a dict).

    *ignore* lists properties that should not be thusly copied.
    *srckeys* is a list of keys to copy, if the source's __all__ is
    untrustworthy.
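The heart of monkey_patch above is the loop that copies every name in a green module's `__patched__` list onto the real module with setattr. A self-contained sketch of that loop, using invented stub modules (`green_math_stub` and `math_stub` are made-up names, not real eventlet modules):

```python
import sys
import types

# stand-in "green" module advertising which attributes it replaces
green_mod = types.ModuleType('green_math_stub')
green_mod.__patched__ = ['answer']
green_mod.answer = lambda: 42

# stand-in target module to be patched in place
target = types.ModuleType('math_stub')
target.answer = lambda: 0
sys.modules['math_stub'] = target

# the core of the patching loop: copy each listed attribute across
for attr_name in green_mod.__patched__:
    patched_attr = getattr(green_mod, attr_name, None)
    if patched_attr is not None:
        setattr(sys.modules['math_stub'], attr_name, patched_attr)

result = sys.modules['math_stub'].answer()   # now the green version
```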
    """
    if srckeys is None:
        srckeys = source.__all__
    destination.update(dict([
        (name, getattr(source, name))
        for name in srckeys
        if not (name.startswith('__') or name in ignore)
    ]))


if __name__ == "__main__":
    import sys
    sys.argv.pop(0)
    monkey_patch()
    execfile(sys.argv[0])


eventlet-0.13.0/eventlet/util.py

import socket
import warnings


def g_log(*args):
    warnings.warn("eventlet.util.g_log is deprecated because "
                  "we're pretty sure no one uses it.  "
                  "Send mail to eventletdev@lists.secondlife.com "
                  "if you are actually using it.",
                  DeprecationWarning, stacklevel=2)
    import sys
    from eventlet.support import greenlets as greenlet
    g_id = id(greenlet.getcurrent())
    if g_id is None:
        if greenlet.getcurrent().parent is None:
            ident = 'greenlet-main'
        else:
            g_id = id(greenlet.getcurrent())
            if g_id < 0:
                g_id += 1 + ((sys.maxint + 1) << 1)
            ident = '%08X' % (g_id,)
    else:
        ident = 'greenlet-%d' % (g_id,)
    print >> sys.stderr, '[%s] %s' % (ident, ' '.join(map(str, args)))


__original_socket__ = socket.socket


def tcp_socket():
    warnings.warn("eventlet.util.tcp_socket is deprecated. "
                  "Please use the standard socket technique for this "
                  "instead: sock = socket.socket()",
                  DeprecationWarning, stacklevel=2)
    s = __original_socket__(socket.AF_INET, socket.SOCK_STREAM)
    return s


try:
    # if ssl is available, use eventlet.green.ssl for our ssl implementation
    from eventlet.green import ssl

    def wrap_ssl(sock, certificate=None, private_key=None, server_side=False):
        return ssl.wrap_socket(sock,
                               keyfile=private_key,
                               certfile=certificate,
                               server_side=server_side,
                               cert_reqs=ssl.CERT_NONE,
                               ssl_version=ssl.PROTOCOL_SSLv23,
                               ca_certs=None,
                               do_handshake_on_connect=True,
                               suppress_ragged_eofs=True)
except ImportError:
    # if ssl is not available, use PyOpenSSL
    def wrap_ssl(sock, certificate=None, private_key=None, server_side=False):
        try:
            from eventlet.green.OpenSSL import SSL
        except ImportError:
            raise ImportError("To use SSL with Eventlet, "
                              "you must install PyOpenSSL or use "
                              "Python 2.6 or later.")
        context = SSL.Context(SSL.SSLv23_METHOD)
        if certificate is not None:
            context.use_certificate_file(certificate)
        if private_key is not None:
            context.use_privatekey_file(private_key)
        context.set_verify(SSL.VERIFY_NONE, lambda *x: True)
        connection = SSL.Connection(context, sock)
        if server_side:
            connection.set_accept_state()
        else:
            connection.set_connect_state()
        return connection


def wrap_socket_with_coroutine_socket(use_thread_pool=None):
    warnings.warn("eventlet.util.wrap_socket_with_coroutine_socket() is now "
                  "eventlet.patcher.monkey_patch(all=False, socket=True)",
                  DeprecationWarning, stacklevel=2)
    from eventlet import patcher
    patcher.monkey_patch(all=False, socket=True)


def wrap_pipes_with_coroutine_pipes():
    warnings.warn("eventlet.util.wrap_pipes_with_coroutine_pipes() is now "
                  "eventlet.patcher.monkey_patch(all=False, os=True)",
                  DeprecationWarning, stacklevel=2)
    from eventlet import patcher
    patcher.monkey_patch(all=False, os=True)


def wrap_select_with_coroutine_select():
    warnings.warn("eventlet.util.wrap_select_with_coroutine_select() is now "
                  "eventlet.patcher.monkey_patch(all=False, select=True)",
                  DeprecationWarning, stacklevel=2)
    from eventlet import patcher
    patcher.monkey_patch(all=False, select=True)


def wrap_threading_local_with_coro_local():
    """Monkey patch ``threading.local`` with something that is greenlet
    aware.  Since greenlets cannot cross threads, this should be
    semantically identical to ``threadlocal.local``.
    """
    warnings.warn("eventlet.util.wrap_threading_local_with_coro_local() is "
                  "now eventlet.patcher.monkey_patch(all=False, thread=True) "
                  "-- though note that more than just _local is patched now.",
                  DeprecationWarning, stacklevel=2)
    from eventlet import patcher
    patcher.monkey_patch(all=False, thread=True)


def socket_bind_and_listen(descriptor, addr=('', 0), backlog=50):
    warnings.warn("eventlet.util.socket_bind_and_listen is deprecated. "
                  "Please use the standard socket methodology for this "
                  "instead: "
                  "sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1); "
                  "sock.bind(addr); sock.listen(backlog)",
                  DeprecationWarning, stacklevel=2)
    set_reuse_addr(descriptor)
    descriptor.bind(addr)
    descriptor.listen(backlog)
    return descriptor


def set_reuse_addr(descriptor):
    warnings.warn("eventlet.util.set_reuse_addr is deprecated. "
                  "Please use the standard socket methodology for this "
                  "instead: "
                  "sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)",
                  DeprecationWarning, stacklevel=2)
    try:
        descriptor.setsockopt(
            socket.SOL_SOCKET,
            socket.SO_REUSEADDR,
            descriptor.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) | 1)
    except socket.error:
        pass


eventlet-0.13.0/eventlet/backdoor.py

import socket
import sys
import errno
from code import InteractiveConsole

import eventlet
from eventlet import hubs
from eventlet.support import greenlets, get_errno

try:
    sys.ps1
except AttributeError:
    sys.ps1 = '>>> '
try:
    sys.ps2
except AttributeError:
    sys.ps2 = '... '


class FileProxy(object):
    def __init__(self, f):
        self.f = f

    def isatty(self):
        return True

    def flush(self):
        pass

    def write(self, *a, **kw):
        self.f.write(*a, **kw)
        self.f.flush()

    def readline(self, *a):
        return self.f.readline(*a).replace('\r\n', '\n')

    def __getattr__(self, attr):
        return getattr(self.f, attr)


# @@tavis: the `locals` args below mask the built-in function.  Should
# be renamed.
class SocketConsole(greenlets.greenlet):
    def __init__(self, desc, hostport, locals):
        self.hostport = hostport
        self.locals = locals
        # mangle the socket
        self.desc = FileProxy(desc)
        greenlets.greenlet.__init__(self)

    def run(self):
        try:
            console = InteractiveConsole(self.locals)
            console.interact()
        finally:
            self.switch_out()
            self.finalize()

    def switch(self, *args, **kw):
        self.saved = sys.stdin, sys.stderr, sys.stdout
        sys.stdin = sys.stdout = sys.stderr = self.desc
        greenlets.greenlet.switch(self, *args, **kw)

    def switch_out(self):
        sys.stdin, sys.stderr, sys.stdout = self.saved

    def finalize(self):
        # restore the state of the socket
        self.desc = None
        print "backdoor closed to %s:%s" % self.hostport


def backdoor_server(sock, locals=None):
    """Blocking function that runs a backdoor server on the socket *sock*,
    accepting connections and running backdoor consoles for each client that
    connects.

    The *locals* argument is a dictionary that will be included in the
    locals() of the interpreters.  It can be convenient to stick important
    application variables in here.
    """
    print "backdoor server listening on %s:%s" % sock.getsockname()
    try:
        try:
            while True:
                socketpair = sock.accept()
                backdoor(socketpair, locals)
        except socket.error, e:
            # Broken pipe means it was shutdown
            if get_errno(e) != errno.EPIPE:
                raise
    finally:
        sock.close()


def backdoor((conn, addr), locals=None):
    """Sets up an interactive console on a socket with a single connected
    client.  This does not block the caller, as it spawns a new greenlet to
    handle the console.
    This is meant to be called from within an accept loop (such as
    backdoor_server).
    """
    host, port = addr
    print "backdoor to %s:%s" % (host, port)
    fl = conn.makefile("rw")
    console = SocketConsole(fl, (host, port), locals)
    hub = hubs.get_hub()
    hub.schedule_call_global(0, console.switch)


if __name__ == '__main__':
    backdoor_server(eventlet.listen(('127.0.0.1', 9000)), {})


eventlet-0.13.0/LICENSE

Unless otherwise noted, the files in Eventlet are under the following MIT license:

Copyright (c) 2005-2006, Bob Ippolito
Copyright (c) 2007-2010, Linden Research, Inc.
Copyright (c) 2008-2010, Eventlet Contributors (see AUTHORS)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


eventlet-0.13.0/README

Eventlet is a concurrent networking library for Python that allows you to
change how you run your code, not how you write it.
It uses epoll or libevent for highly scalable non-blocking I/O. Coroutines ensure that the developer uses a blocking style of programming that is similar to threading, but provide the benefits of non-blocking I/O. The event dispatch is implicit, which means you can easily use Eventlet from the Python interpreter, or as a small part of a larger application. It's easy to get started using Eventlet, and easy to convert existing applications to use it. Start off by looking at the `examples`_, `common design patterns`_, and the list of `basic API primitives`_. .. _examples: http://eventlet.net/doc/examples.html .. _common design patterns: http://eventlet.net/doc/design_patterns.html .. _basic API primitives: http://eventlet.net/doc/basic_usage.html Quick Example =============== Here's something you can try right on the command line:: % python >>> import eventlet >>> from eventlet.green import urllib2 >>> gt = eventlet.spawn(urllib2.urlopen, 'http://eventlet.net') >>> gt2 = eventlet.spawn(urllib2.urlopen, 'http://secondlife.com') >>> gt2.wait() >>> gt.wait() Getting Eventlet ================== The easiest way to get Eventlet is to use easy_install or pip:: easy_install eventlet pip install eventlet The development `tip`_ is available via easy_install as well:: easy_install 'eventlet==dev' pip install 'eventlet==dev' .. _tip: http://bitbucket.org/eventlet/eventlet/get/tip.zip#egg=eventlet-dev Building the Docs Locally ========================= To build a complete set of HTML documentation, you must have Sphinx, which can be found at http://sphinx.pocoo.org/ (or installed with `easy_install sphinx`) cd doc make html The built html files can be found in doc/_build/html afterward.eventlet-0.13.0/NEWS0000644000175000017500000005551112164577340015000 0ustar temototemoto000000000000000.13 ==== * hubs: kqueue support! 
  Thanks to YAMAMOTO Takashi, Edward George
* greenio: Fix AttributeError on MacOSX; Bitbucket #136; Thanks to Derk Tegeler
* green: subprocess: Fix subprocess.communicate() block on Python 2.7; Thanks to Edward George
* green: select: ensure that hub can .wait() at least once before timeout; Thanks to YAMAMOTO Takashi
* tpool: single request queue to avoid deadlocks; Bitbucket pull requests 31, 32; Thanks to Edward George
* zmq: pyzmq 13.x compatibility; Thanks to Edward George
* green: subprocess: Popen.wait() accepts new `timeout` kwarg; Python 3.3 and RHEL 6.1 compatibility
* hubs: EVENTLET_HUB can point to external modules; Thanks to Edward George
* semaphore: support timeout for acquire(); Thanks to Justin Patrin
* support: do not clear sys.exc_info if it can be preserved (greenlet >= 0.3.2); Thanks to Edward George
* Travis continuous integration; Thanks to Thomas Grainger, Jakub Stasiak
* wsgi: minimum_chunk_size of the last Server altered all previous ones (global variable); Thanks to Jakub Stasiak
* doc: hubs: Point to the correct function in exception message; Thanks to Floris Bruynooghe

0.12
====

* zmq: Fix 100% busy CPU in idle after .bind(PUB) (thanks to Geoff Salmon)
* greenio: Fix socket.settimeout() did not switch back to blocking mode (thanks to Peter Skirko)
* greenio: socket.dup() made excess fcntl syscalls (thanks to Peter Portante)
* setup: Remove legacy --without-greenlet option and unused httplib2 dependency (thanks to Thomas Grainger)
* wsgi: environ[REMOTE_PORT], also available in log_format; log accept event (thanks to Peter Portante)
* tests: Support libzmq 3.0 SNDHWM option (thanks to Geoff Salmon)

0.11
====

* ssl: Fix 100% busy CPU in socket.sendall() (thanks to Raymon Lu)
* zmq: Return linger argument to Socket.close() (thanks to Eric Windisch)
* tests: SSL tests were always skipped due to a bug in the skip_if_no_ssl decorator

0.10
====

* greenio: Fix relative seek() (thanks to AlanP)
* db_pool: Fix pool.put() TypeError with min_size > 1 (thanks to Jessica
  Qi)
* greenthread: Prevent infinite recursion when linking to the current greenthread (thanks to Edward George)
* zmq: getsockopt(EVENTS) wakes correct threads (thanks to Eric Windisch)
* wsgi: Handle client disconnect while sending response (thanks to Clay Gerrard)
* hubs: Ensure that the new hub greenlet is the parent of the old one (thanks to Edward George)
* os: Fix waitpid() returning (0, 0) (thanks to Vishvananda Ishaya)
* tpool: Add set_num_threads() method to set the number of tpool threads (thanks to David Ibarra)
* threading, zmq: Fix Python 2.5 support (thanks to Floris Bruynooghe)
* tests: tox configuration for all supported Python versions (thanks to Floris Bruynooghe)
* tests: Fix zmq._QueueLock test on Python 2.6
* tests: Fix patcher_test on Darwin (/bin/true issue) (thanks to Edward George)
* tests: Skip SSL tests when not available (thanks to Floris Bruynooghe)
* greenio: Remove deprecated GreenPipe.xreadlines() method; it was broken anyway

0.9.17
======

* ZeroMQ: support calling send and recv from multiple greenthreads (thanks to Geoff Salmon)
* SSL: unwrap() sends data, and so it needs trampolining (#104; thanks to Brandon Rhodes)
* hubs.epolls: Fix imports for exception handler (#123; thanks to Johannes Erdfelt)
* db_pool: Fix .clear() when min_size > 0
* db_pool: Add MySQL's insert_id() method (thanks to Peter Scott)
* db_pool: Close connections after timeout; fix get-after-close race condition when using TpooledConnectionPool (thanks to Peter Scott)
* threading monkey patch fixes (#115; thanks to Johannes Erdfelt)
* pools: Better accounting of current_size in pools.Pool (#91; thanks to Brett Hoerner)
* wsgi: environ['RAW_PATH_INFO'] with the request path as received from the client (thanks to dweimer)
* wsgi: log_output flag (thanks to Juan Manuel Garcia)
* wsgi: Limit HTTP header size (thanks to Gregory Holt)
* wsgi: Configurable maximum URL length (thanks to Tomas Sedovic)

0.9.16
======

* SO_REUSEADDR now correctly set.
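The 0.9.16 entry above refers to the standard socket option; a minimal stdlib-only sketch (not eventlet's actual code) of what setting SO_REUSEADDR on a listening socket looks like:

```python
import socket

# Sketch only: setting SO_REUSEADDR before bind() lets a restarted server
# re-bind its port immediately instead of failing while the old socket
# sits in TIME_WAIT.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('127.0.0.1', 0))  # port 0: let the OS choose a free port
sock.listen(50)

# the option reads back as nonzero once set
assert sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) != 0
sock.close()
```

This is roughly the setup a convenience listener helper performs before handing the socket to an accept loop.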
0.9.15
======

* ZeroMQ support without an explicit hub now implemented!  Thanks to Zed Shaw for the patch.
* zmq module supports the NOBLOCK flag, thanks to rfk. (#76)
* eventlet.wsgi has a debug flag which can be set to false to not send tracebacks to the client (per redbo's request)
* Recursive GreenPipe madness forestalled by Soren Hansen (#77)
* eventlet.green.ssl no longer busywaits on send()
* EEXIST ignored in epoll hub (#80)
* eventlet.listen's behavior on Windows improved, thanks to Nick Vatamaniuc (#83)
* Timeouts raised within tpool.execute are propagated back to the caller (thanks again to redbo for being the squeaky wheel)

0.9.14
======

* Many fixes to the ZeroMQ hub, which now requires version 2.0.10 or later.  Thanks to Ben Ford.
* ZeroMQ hub no longer depends on pollhub, and thus works on Windows (thanks, Alexey Borzenkov)
* Better handling of connect errors on Windows, thanks again to Alexey Borzenkov.
* More-robust Event delivery, thanks to Malcolm Cleaton
* wsgi.py now distinguishes between an empty query string ("") and a non-existent query string (no entry in environ).
* wsgi.py handles ipv6 correctly (thanks, redbo)
* Better behavior in tpool when you give it nonsensical numbers, thanks to R. Tyler for the nonsense. :)
* Fixed importing on 2.5 (#73, thanks to Ruijun Luo)
* Hub doesn't hold on to invalid fds (#74, thanks to Edward George)
* Documentation for eventlet.green.zmq, courtesy of Ben Ford

0.9.13
======

* ZeroMQ hub, and eventlet.green.zmq make supersockets green.  Thanks to Ben Ford!
* eventlet.green.MySQLdb added.  It's an interface to MySQLdb that uses tpool to make it appear nonblocking
* Greenthread affinity in tpool.  Each greenthread is assigned to the same thread when using tpool, making it easier to work with non-thread-safe libraries.
* Eventlet now depends on greenlet 0.3 or later.
* Fixed a hang that occurred when using tpool while an import triggers another import.  Thanks to mikepk for tracking that down.
* Improved websocket draft 76 compliance, thanks to Nick V.
* Rare greenthread.kill() bug fixed, which was probably brought about by a bugfix in greenlet 0.3.
* Easy_installing eventlet should no longer print an ImportError about greenlet
* Support for serving up SSL websockets, thanks to chwagssd for reporting #62
* eventlet.wsgi properly sets the 'wsgi.url_scheme' environment variable to 'https', and 'HTTPS' to 'on' if serving over ssl
* Blocking detector uses setitimer on 2.6 or later, allowing for sub-second block detection, thanks to rtyler.
* Blocking detector is documented now, too
* socket.create_connection properly uses dnspython for nonblocking dns.  Thanks to rtyler.
* Removed EVENTLET_TPOOL_DNS, nobody liked that.  But if you were using it, install dnspython instead.  Thanks to pigmej and gholt.
* Removed _main_wrapper from greenthread, thanks to Ambroff adding keyword arguments to switch() in 0.3!

0.9.12
======

* Eventlet no longer uses the Twisted hub if Twisted is imported -- you must call eventlet.hubs.use_hub('twistedr') if you want to use it.  This prevents strange race conditions for those who want to use both Twisted and Eventlet separately.
* Removed circular import in twistedr.py
* Added websocket multi-user chat example
* Not using exec() in green modules anymore.
* eventlet.green.socket now contains all attributes of the stdlib socket module, even those that were left out by bugs.
* Eventlet.wsgi doesn't call print anymore, instead uses the logfiles for everything (it used to print exceptions in one place).
* Eventlet.wsgi properly closes the connection when an error is raised
* Better documentation on eventlet.event.Event.send_exception
* Adding websocket.html to tarball so that you can run the examples without checking out the source

0.9.10
======

* Greendns: if dnspython is installed, Eventlet will automatically use it to provide non-blocking DNS queries.
  Set the environment variable 'EVENTLET_NO_GREENDNS' if you don't want greendns but have dnspython installed.
* Full test suite passes on Python 2.7.
* Tests no longer depend on simplejson for Python 2.6 and later.
* Potential-bug fixes in patcher (thanks to Schmir, and thanks to Hudson)
* Websockets work with query strings (thanks to mcarter)
* WSGI posthooks that get called after the request completed (thanks to gholt, nice docs, too)
* Blocking detector merged -- use it to detect places where your code is not yielding to the hub for > 1 second.
* tpool.Proxy can wrap callables
* Tweaked Timeout class to do something sensible when True is passed to the constructor

0.9.9
=====

* A fix for monkeypatching on systems with psycopg version 2.0.14.
* Improved support for chunked transfers in wsgi, plus a bunch of tests from schmir (ported from gevent by redbo)
* A fix for the twisted hub from Favo Yang

0.9.8
=====

* Support for psycopg2's asynchronous mode, from Daniele Varrazzo
* websocket module is now part of core Eventlet with 100% unit test coverage thanks to Ben Ford.  See its documentation at http://eventlet.net/doc/modules/websocket.html
* Added wrap_ssl convenience method, meaning that we truly no longer need api or util modules.
* Multiple-reader detection code protects against the common mistake of having multiple greenthreads read from the same socket at the same time, which can be overridden if you know what you're doing.
* Cleaner monkey_patch API: the "all" keyword is no longer necessary.
* Pool objects have a more convenient constructor -- no more need to subclass
* amajorek's reimplementation of GreenPipe
* Many bug fixes, major and minor.
0.9.7
=====

* GreenPipe is now a context manager (thanks, quad)
* tpool.Proxy supports iterators properly
* bug fixes in eventlet.green.os (thanks, Benoit)
* much code cleanup from Tavis
* a few more example apps
* multitudinous improvements in Py3k compatibility from amajorek

0.9.6
=====

* new EVENTLET_HUB environment variable allows you to select a hub without code
* improved GreenSocket and GreenPipe compatibility with stdlib
* bugfixes on GreenSocket and GreenPipe objects
* code coverage increased across the board
* Queue resizing
* internal DeprecationWarnings largely eliminated
* tpool is now reentrant (i.e., can call tpool.execute(tpool.execute(foo)))
* more reliable access to unpatched modules reduces some race conditions when monkeypatching
* completely threading-compatible corolocal implementation, plus tests and enthusiastic adoption
* tests stomp on each others' toes less
* performance improvements in timers, hubs, greenpool
* Greenlet-aware profile module courtesy of CCP
* support for select26 module's epoll
* better PEP-8 compliance and import cleanup
* new eventlet.serve convenience function for easy TCP servers

0.9.5
=====

* support psycopg in db_pool
* smart patcher that does the right patching when importing without needing to understand the plumbing of the patched module
* patcher.monkey_patch() method replacing util.wrap_*
* monkeypatch threading support
* removed api.named
* imported timeout module from gevent, replacing exc_after and with_timeout()
* replaced call_after with spawn_after; this is so that users don't see the Timer class
* added cancel() method to GreenThread to support the semantic of "abort if not already in the middle of something"
* eventlet.green.os with patched read() and write(), etc.
* moved stuff from wrap_pipes_with_coroutine_pipe into green.os
* eventlet.green.subprocess instead of eventlet.processes
* improved patching docs, explaining more about patcher and why you'd use eventlet.green
* better documentation on greenpiles
* deprecated api.py completely
* deprecated util.py completely
* deprecated saranwrap
* performance improvements in the hubs
* much better documentation overall
* new convenience functions: eventlet.connect and eventlet.listen.  Thanks, Sergey!

0.9.4
=====

* Deprecated coros.Queue and coros.Channel (use queue.Queue instead)
* Added putting and getting methods to queue.Queue.
* Added eventlet.green.Queue which is a greened clone of stdlib Queue, along with stdlib tests.
* Changed __init__.py so that the version number is readable even if greenlet's not installed.
* Bugfixes in wsgi, greenpool

0.9.3
=====

* Moved primary api module to __init__ from api.  It shouldn't be necessary to import eventlet.api anymore; import eventlet should do the same job.
* Proc module deprecated in favor of greenthread
* New module greenthread, with new class GreenThread.
* New GreenPool class that replaces pool.Pool.
* Deprecated proc module (use greenthread module instead)
* tpooled gethostbyname is configurable via environment variable EVENTLET_TPOOL_GETHOSTBYNAME
* Removed greenio.Green_fileobject and refactored the code therein to be more efficient.  Only call makefile() on sockets now; makeGreenFile() is deprecated.  The main loss here is that of the readuntil method.  Also, Green_fileobjects used to be auto-flushing; flush() must be called explicitly now.
* Added epoll support
* Improved documentation across the board.
* New queue module, API-compatible with stdlib Queue
* New debug module, used for enabling verbosity within Eventlet that can help debug applications or Eventlet itself.
* Bugfixes in tpool, green.select, patcher
* Deprecated coros.execute (use eventlet.spawn instead)
* Deprecated coros.semaphore (use semaphore.Semaphore or semaphore.BoundedSemaphore instead)
* Moved coros.BoundedSemaphore to semaphore.BoundedSemaphore
* Moved coros.Semaphore to semaphore.Semaphore
* Moved coros.event to event.Event
* Deprecated api.tcp_listener, api.connect_tcp, api.ssl_listener
* Moved get_hub, use_hub, get_default_hub from eventlet.api to eventlet.hubs
* Renamed libevent hub to pyevent.
* Removed previously-deprecated features tcp_server, GreenSSL, erpc, and trap_errors.
* Removed saranwrap as an option for making db connections nonblocking in db_pool.

0.9.2
=====

* Bugfix for wsgi.py where it was improperly expecting the environ variable to be a constant when passed to the application.
* Tpool.py now passes its tests on Windows.
* Fixed minor performance issue in wsgi.

0.9.1
=====

* PyOpenSSL is no longer required for Python 2.6: use the eventlet.green.ssl module.  2.5 and 2.4 still require PyOpenSSL.
* Cleaned up the eventlet.green packages and their associated tests; this should result in fewer version-dependent bugs with these modules.
* PyOpenSSL is now fully wrapped in eventlet.green.OpenSSL; using it is therefore more consistent with using other green modules.
* Documentation on using SSL added.
* New green modules: asyncore, asynchat, SimpleHTTPServer, CGIHTTPServer, ftplib.
* Fuller thread/threading compatibility: patching threadlocal with corolocal so coroutines behave even more like threads.
* Improved Windows compatibility for tpool.py
* With-statement compatibility for pools.Pool objects.
* Refactored copyrights in the files, added LICENSE and AUTHORS files.
* Added support for logging the x-forwarded-for header in wsgi.
* api.tcp_server is now deprecated, will be removed in a future release.
* Added instructions on how to generate coverage reports to the documentation.
* Renamed GreenFile to Green_fileobject, to better reflect its purpose.
* Deprecated erpc method in tpool.py
* Bug fixes in: wsgi.py, twistedr.py, poll.py, greenio.py, util.py, select.py, processes.py, selects.py

0.9.0
=====

* Full-duplex sockets (simultaneous readers and writers in the same process).
* Remove modules that distract from the core mission of making it straightforward to write event-driven networking apps: httpd, httpc, channel, greenlib, httpdate, jsonhttp, logutil
* Removed test dependency on sqlite, using nose instead.
* Marked known-broken tests using nose's mechanism (most of these are not broken but are simply run in the incorrect context, such as threading-related tests that are incompatible with the libevent hub).
* Remove copied code from python standard libs (in tests).
* Added eventlet.patcher which can be used to import "greened" modules.

0.8.16
======

* GreenSSLObject properly masks ZeroReturnErrors with an empty read; with unit test.
* Fixed 2.6 SSL compatibility issue.

0.8.15
======

* GreenSSL object no longer converts ZeroReturnErrors into empty reads, because that is more compatible with the underlying SSLConnection object.
* Fixed issue caused by SIGCHLD handler in processes.py
* Stopped supporting string exceptions in saranwrap and fixed a few test failures.

0.8.14
======

* Fixed some more Windows compatibility problems, resolving EVT-37: http://jira.secondlife.com/browse/EVT-37
* waiting() method on Pool class, which was lost when the Pool implementation replaced CoroutinePool.

0.8.13
======

* 2.6 SSL compatibility patch by Marcus Cavanaugh.
* Added greenlet and pyopenssl as dependencies in setup.py.

0.8.12
======

* The ability to resize() pools of coroutines, which was lost when the Pool implementation replaced CoroutinePool.
* Fixed Cesar's issue with SSL connections, and furthermore did a complete overhaul of SSL handling in eventlet so that it's much closer to the behavior of the built-in libraries.
  In particular, users of GreenSSL sockets must now call shutdown() before close(), exactly like SSL.Connection objects.
* A small patch that makes Eventlet work on Windows.  This is the first release of Eventlet that works on Windows.

0.8.11
======

Eventlet can now run on top of the twisted reactor.  The Twisted-based hub is enabled automatically if twisted.internet.reactor is imported.  It is also possible to "embed" eventlet into a twisted application via eventlet.twistedutil.join_reactor.  See the examples for details.

A new package, eventlet.twistedutil, is added that makes integration of twisted and eventlet easier.  It has a block_on function that allows waiting for a Deferred to fire, and it wraps twisted's Protocol in a synchronous interface.  This is similar to and is inspired by Christopher Armstrong's corotwine library.  Thanks to Dan Pascu for reviewing the package.

Another new package, eventlet.green, was added to provide versions of some standard modules that are fixed not to block other greenlets.  This is an alternative to monkey-patching the socket, which is impossible to do if you are running the twisted reactor.  The package includes socket, httplib, and urllib2.

Much of the core functionality has been refactored and cleaned up, including the removal of eventlet.greenlib.  This means that it is now possible to use plain greenlets without modification in eventlet, and to use subclasses of greenlet instead of the old eventlet.greenlib.GreenletContext.  Calling eventlet.api.get_hub().switch() now checks to see whether the current greenlet has a "switch_out" method and calls it if so, providing the same functionality that GreenletContext.swap_out used to.  The swap_in behavior can be duplicated by overriding the switch method, and the finalize functionality can be duplicated by having a try: finally: block around the greenlet's main implementation.
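The finalize-via-try/finally idea described above can be sketched without greenlet installed at all, using a plain generator coroutine as a loose analogy (the names here are illustrative, not eventlet API):

```python
def worker(log):
    # The "finalize" behavior described above: wrap the coroutine's main
    # body in try/finally so cleanup runs whenever the coroutine exits.
    try:
        while True:
            item = yield
            log.append(item * 2)
    finally:
        log.append('finalized')

log = []
w = worker(log)
next(w)      # advance to the first yield
w.send(1)
w.send(2)
w.close()    # raises GeneratorExit inside worker; the finally clause runs
assert log == [2, 4, 'finalized']
```

The same pattern applied around a greenlet's main function gives the cleanup hook that GreenletContext's finalize used to provide.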
The eventlet.backdoor module has been ported to this new scheme, although its signature had to change slightly, so existing code that used the backdoor will have to be modified.

A number of bugs related to improper scheduling of switch calls have been fixed.  The fixed functions and classes include api.trampoline, api.sleep, coros.event, coros.semaphore, and coros.queue.

Many methods of greenio.GreenSocket were fixed to make its behavior more like that of a regular socket.  Thanks to Marcin Bachry for fixing GreenSocket.dup to preserve the timeout.

Added the proc module, which provides an easy way to subscribe to a coroutine's results.  This makes it easy to wait for a single greenlet or for a set of greenlets to complete.

wsgi.py now supports chunked transfer requests (patch by Mike Barton)

The following modules were deprecated or removed because they were broken: hubs.nginx, hubs.libev, support.pycurls, support.twisteds, and the cancel method of the coros.event class.

The following classes are still present but will be removed in a future version:

- channel.channel (use coros.Channel)
- coros.CoroutinePool (use pool.Pool)

saranwrap.py now correctly closes the child process when the referring object is deleted, received some fixes to its detection of child process death, now correctly deals with the `in` keyword, and it is now possible to use coroutines in a non-blocking fashion in the child process.

Time-based expiry added to db_pool.  This adds the ability to expire connections both by idleness and also by total time open.  There is also a connection timeout option.

A small bug in httpd's error method was fixed.

Python 2.3 is no longer supported.

A number of tests were added, along with a script to run all of them for all the configurations.  The script generates an html page with the results.

Thanks to Brian Brunswick for investigation of popen4 badness (eventlet.process)

Thanks to Marcus Cavanaugh for pointing out some coros.queue(0) bugs.
The twisted integration, as well as many other improvements, was funded by AG Projects (http://ag-projects.com), thanks!

0.8.x
=====

Fix a CPU leak that would cause the poll hub to consume 100% CPU in certain conditions, for example the echoserver example. (Donovan Preston)

Fix the libev hub to match libev's callback signature. (Patch by grugq)

Add a backlog argument to api.tcp_listener (Patch by grugq)

0.7.x
=====

Fix a major memory leak when using the libevent or libev hubs.  Timers were not being removed from the hub after they fired. (Thanks Agusto Becciu and the grugq.)

Also, make it possible to call wrap_socket_with_coroutine_socket without using the threadpool to make dns operations non-blocking (thanks to the grugq).

It's now possible to use eventlet's SSL client to talk to eventlet's SSL server. (Thanks to Ryan Williams)

Fixed a major CPU leak when using the select hub.  When adding a descriptor to the hub, entries were made in all three dictionaries -- readers, writers, and exc -- even if the callback was None.  Thus every fd would be passed into all three lists when calling select, regardless of whether there was a callback for that event or not.  When reading the next request out of a keepalive socket, the socket would come back as ready for writing, the hub would notice that the callback is None and ignore it, and then loop as fast as possible, consuming CPU.

0.6.x
=====

Fixes some long-standing bugs where sometimes failures in accept() or connect() would cause the coroutine that was waiting to be double-resumed, most often resulting in SwitchingToDeadGreenlet exceptions as well as weird tuple-unpacking exceptions in the CoroutinePool main loop.

0.6.1: Added eventlet.tpool.killall.  Blocks until all of the threadpool threads have been told to exit and join()ed.  Meant to be used to clean up the threadpool on exit or if calling execv.  Used by Spawning.

0.5.x
=====

"The Pycon 2008 Refactor": The first release which incorporates libevent support.
Also comes with significant refactoring and code cleanup, especially to the eventlet.wsgi http server.  Docstring coverage is much higher and there is new extensive documentation: http://wiki.secondlife.com/wiki/Eventlet/Documentation

The point releases of 0.5.x fixed some bugs in the wsgi server, most notably handling of Transfer-Encoding: chunked; previously, it would happily send chunked encoding to clients which asked for HTTP/1.0, which isn't legal.

0.2
===

Initial re-release of forked linden branch.

eventlet-0.13.0/setup.py

#!/usr/bin/env python
from setuptools import find_packages, setup
from eventlet import __version__
from os import path

setup(
    name='eventlet',
    version=__version__,
    description='Highly concurrent networking library',
    author='Linden Lab',
    author_email='eventletdev@lists.secondlife.com',
    url='http://eventlet.net',
    packages=find_packages(exclude=['tests', 'benchmarks']),
    install_requires=(
        'greenlet >= 0.3',
    ),
    zip_safe=False,
    long_description=open(
        path.join(
            path.dirname(__file__),
            'README'
        )
    ).read(),
    test_suite='nose.collector',
    classifiers=[
        "License :: OSI Approved :: MIT License",
        "Programming Language :: Python",
        "Operating System :: MacOS :: MacOS X",
        "Operating System :: POSIX",
        "Operating System :: Microsoft :: Windows",
        "Programming Language :: Python :: 2.4",
        "Programming Language :: Python :: 2.5",
        "Programming Language :: Python :: 2.6",
        "Programming Language :: Python :: 2.7",
        "Topic :: Internet",
        "Topic :: Software Development :: Libraries :: Python Modules",
        "Intended Audience :: Developers",
        "Development Status :: 4 - Beta",
    ]
)

eventlet-0.13.0/doc/design_patterns.rst

.. _design-patterns:

Design Patterns
===============

There are a bunch of basic patterns that Eventlet usage falls into.  Here are a few examples that show their basic structure.

Client Pattern
--------------

The canonical client-side example is a web crawler.  This use case is given a list of urls and wants to retrieve their bodies for later processing.  Here is a very simple example::

    urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
            "https://wiki.secondlife.com/w/images/secondlife.jpg",
            "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

    import eventlet
    from eventlet.green import urllib2

    def fetch(url):
        return urllib2.urlopen(url).read()

    pool = eventlet.GreenPool()
    for body in pool.imap(fetch, urls):
        print "got body", len(body)

There is a slightly more complex version of this in the :ref:`web crawler example `.  Here's a tour of the interesting lines in this crawler.

``from eventlet.green import urllib2`` is how you import a cooperatively-yielding version of urllib2.  It is the same in all respects as the standard version, except that it uses green sockets for its communication.  This is an example of the :ref:`import-green` pattern.

``pool = eventlet.GreenPool()`` constructs a :class:`GreenPool ` of a thousand green threads.  Using a pool is good practice because it provides an upper limit on the amount of work that this crawler will be doing simultaneously, which comes in handy when the input data changes dramatically.

``for body in pool.imap(fetch, urls):`` iterates over the results of calling the fetch function in parallel.  :meth:`imap ` makes the function calls in parallel, and the results are returned in the order that they were executed.

The key aspect of the client pattern is that it involves collecting the results of each function call; the fact that each fetch is done concurrently is essentially an invisible optimization.
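The ordered-results behavior of ``imap`` can be demonstrated without Eventlet at all; here is a stand-in sketch using the stdlib thread pool (``multiprocessing.dummy``), which offers a similar ``imap``.  The URLs and the ``fetch`` body are placeholders, not real requests:

```python
from multiprocessing.dummy import Pool  # thread-based stand-in for GreenPool

def fetch(url):
    # placeholder for urllib2.urlopen(url).read() -- just echoes its input
    return 'body of ' + url

urls = ['http://a.example', 'http://b.example', 'http://c.example']
pool = Pool(3)

# like GreenPool.imap, results arrive in input order,
# no matter which worker finishes first
bodies = list(pool.imap(fetch, urls))
pool.close()
pool.join()

assert bodies == ['body of http://a.example',
                  'body of http://b.example',
                  'body of http://c.example']
```

The ordering guarantee is what lets the loop body treat concurrency as invisible: results line up with inputs regardless of completion order.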
Note also that imap is memory-bounded and won't consume gigabytes of memory if the list of urls grows to the tens of thousands (yes, we had that problem in production once!).

Server Pattern
--------------

Here's a simple server-side example, a simple echo server::

    import eventlet

    def handle(client):
        while True:
            c = client.recv(1)
            if not c:
                break
            client.sendall(c)

    server = eventlet.listen(('0.0.0.0', 6000))
    pool = eventlet.GreenPool(10000)
    while True:
        new_sock, address = server.accept()
        pool.spawn_n(handle, new_sock)

The file :ref:`echo server example ` contains a somewhat more robust and complex version of this example.

``server = eventlet.listen(('0.0.0.0', 6000))`` uses a convenience function to create a listening socket.

``pool = eventlet.GreenPool(10000)`` creates a pool of green threads that could handle ten thousand clients.

``pool.spawn_n(handle, new_sock)`` launches a green thread to handle the new client.  The accept loop doesn't care about the return value of the ``handle`` function, so it uses :meth:`spawn_n `, instead of :meth:`spawn `.

The difference between the server and the client patterns boils down to the fact that the server has a ``while`` loop calling ``accept()`` repeatedly, and that it hands off the client socket completely to the handle() method, rather than collecting the results.

Dispatch Pattern
----------------

One common use case that Linden Lab runs into all the time is a "dispatch" design pattern.  This is a server that is also a client of some other services.  Proxies, aggregators, job workers, and so on are all terms that apply here.  This is the use case that the :class:`GreenPile ` was designed for.

Here's a somewhat contrived example: a server that receives POSTs from clients that contain a list of urls of RSS feeds.  The server fetches all the feeds concurrently and responds with a list of their titles to the client.
It's easy to imagine it doing something more complex than this, and this could be easily modified to become a Reader-style application::

    import eventlet
    feedparser = eventlet.import_patched('feedparser')

    pool = eventlet.GreenPool()

    def fetch_title(url):
        d = feedparser.parse(url)
        return d.feed.get('title', '')

    def app(environ, start_response):
        pile = eventlet.GreenPile(pool)
        for url in environ['wsgi.input'].readlines():
            pile.spawn(fetch_title, url)
        titles = '\n'.join(pile)
        start_response('200 OK', [('Content-type', 'text/plain')])
        return [titles]

The full version of this example is in the :ref:`feed_scraper_example`, which includes code to start the WSGI server on a particular port.

This example uses a global (gasp) :class:`GreenPool ` to control concurrency.  If we didn't have a global limit on the number of outgoing requests, then a client could cause the server to open tens of thousands of concurrent connections to external servers, thereby getting feedscraper's IP banned, or various other accidental-or-on-purpose bad behavior.  The pool isn't a complete DoS protection, but it's the bare minimum.

.. highlight:: python
   :linenothreshold: 1

The interesting lines are in the app function::

    pile = eventlet.GreenPile(pool)
    for url in environ['wsgi.input'].readlines():
        pile.spawn(fetch_title, url)
    titles = '\n'.join(pile)

.. highlight:: python
   :linenothreshold: 1000

Note that in line 1, the Pile is constructed using the global pool as its argument.  That ties the Pile's concurrency to the global's.  If there are already 1000 concurrent fetches from other clients of feedscraper, this one will block until some of those complete.  Limitations are good!

Line 3 is just a spawn, but note that we don't store any return value from it.  This is because the return value is kept in the Pile itself.  This becomes evident in the next line...

Line 4 is where we use the fact that the Pile is an iterator.
Each element in the iterator is one of the return values from the fetch_title function, which are strings.  We can use a normal Python idiom (:func:`join`) to concatenate these incrementally as they happen.

eventlet-0.13.0/doc/conf.py

# -*- coding: utf-8 -*-
#
# Eventlet documentation build configuration file, created by
# sphinx-quickstart on Sat Jul 4 19:48:27 2009.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys, os

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.append(os.path.abspath('.'))

# -- General configuration -----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.coverage',
              'sphinx.ext.intersphinx']

# If this is True, '.. todo::' and '.. todolist::' produce output, else they
# produce nothing. The default is False.
todo_include_todos = True

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Eventlet' copyright = u'2005-2010, Eventlet Contributors' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # import eventlet # The short X.Y version. version = '%s.%s' % (eventlet.version_info[0], eventlet.version_info[1]) # The full version, including alpha/beta/rc tags. release = eventlet.__version__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # Intersphinx references intersphinx_mapping = {'http://docs.python.org/': None} # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. 
html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_use_modindex = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. 
The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'Eventletdoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'Eventlet.tex', u'Eventlet Documentation', u'', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True eventlet-0.13.0/doc/Makefile0000644000175000017500000000627512164577340016511 0ustar temototemoto00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = PYTHONPATH=../:$(PYTHONPATH) sphinx-build PAPER = # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d _build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @echo " coverage to generate a docstring coverage report" clean: -rm -rf _build/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) _build/html @echo @echo "Build finished. The HTML pages are in _build/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) _build/dirhtml @echo @echo "Build finished. The HTML pages are in _build/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) _build/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) _build/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) _build/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in _build/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) _build/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in _build/qthelp, like this:" @echo "# qcollectiongenerator _build/qthelp/Eventlet.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile _build/qthelp/Eventlet.qhc" latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) _build/latex @echo @echo "Build finished; the LaTeX files are in _build/latex." 
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) _build/changes @echo @echo "The overview file is in _build/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) _build/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in _build/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) _build/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in _build/doctest/output.txt." coverage: $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) _build/coverage @echo "Coverage report finished, look at the " \ "results in _build/coverage/python.txt." eventlet-0.13.0/doc/testing.rst0000644000175000017500000001142212164577340017246 0ustar temototemoto00000000000000Testing Eventlet ================ Eventlet is tested using `Nose `_. To run tests, simply install nose, and then, in the eventlet tree, do: .. code-block:: sh $ python setup.py test If you want access to all the nose plugins via command line, you can run: .. code-block:: sh $ python setup.py nosetests Lastly, you can just use nose directly if you want: .. code-block:: sh $ nosetests That's it! The output from running nose is the same as unittest's output, if the entire directory was one big test file. Many tests are skipped based on environmental factors; for example, it makes no sense to test Twisted-specific functionality when Twisted is not installed. These are printed as S's during execution, and in the summary printed after the tests run it will tell you how many were skipped. .. note:: If running Python version 2.4, use this command instead: ``python tests/nosewrapper.py``. There are several tests which make use of the `with` statement and therefore will cause nose grief when it tries to import them; nosewrapper.py excludes these tests so they are skipped. 
Doctests
--------

To run the doctests included in many of the eventlet modules, use this command:

.. code-block:: sh

  $ nosetests --with-doctest eventlet/*.py

Currently there are 16 doctests.

Standard Library Tests
----------------------

Eventlet provides for the ability to test itself with the standard Python networking tests. This verifies that the libraries it wraps work at least as well as the standard ones do. The directory tests/stdlib contains a bunch of stubs that import the standard lib tests from your system and run them. If you do not have any tests in your python distribution, they'll simply fail to import.

There's a convenience module called all.py designed to handle the impedance mismatch between Nose and the standard tests:

.. code-block:: sh

  $ nosetests tests/stdlib/all.py

That will run all the tests, though the output will be a little weird because it will look like Nose is running about 20 tests, each of which consists of a bunch of sub-tests. Not all test modules are present in all versions of Python, so there will be an occasional printout of "Not importing %s, it doesn't exist in this installation/version of Python".

If you see "Ran 0 tests in 0.001s", it means that your Python installation lacks its own tests. This is usually the case for Linux distributions. One way to get the missing tests is to download a source tarball (of the same version you have installed on your system!) and copy its Lib/test directory into the correct place on your PYTHONPATH.

Testing Eventlet Hubs
---------------------

When you run the tests, Eventlet will use the most appropriate hub for the current platform to do its dispatch. It's sometimes useful when making changes to Eventlet to test those changes on hubs other than the default. You can do this with the ``EVENTLET_HUB`` environment variable.

.. code-block:: sh

 $ EVENTLET_HUB=epolls nosetests

See :ref:`understanding_hubs` for the full list of hubs.
Writing Tests
-------------

What follows are some notes on writing tests, in no particular order.

The filename convention when writing a test for module `foo` is to name the test `foo_test.py`. We don't yet have a convention for tests that are of finer granularity, but a sensible one might be `foo_class_test.py`.

If you are writing a test that involves a client connecting to a spawned server, it is best not to use a hardcoded port because that makes it harder to parallelize tests. Instead bind the server to 0, and then look up its port when connecting the client, like this::

    server_sock = eventlet.listener(('127.0.0.1', 0))
    client_sock = eventlet.connect(('localhost', server_sock.getsockname()[1]))

Coverage
--------

Coverage.py is an awesome tool for evaluating how much code was exercised by unit tests. Nose supports it if both are installed, so it's easy to generate coverage reports for eventlet. Here's how:

.. code-block:: sh

  nosetests --with-coverage --cover-package=eventlet

After running the tests to completion, this will emit a huge wodge of module names and line numbers. For some reason, the ``--cover-inclusive`` option breaks everything rather than serving its purpose of limiting the coverage to the local files, so don't use that.

The html option is quite useful because it generates nicely-formatted HTML files that are much easier to read than line-number soup. Here's a command that generates the annotation, dumping the html files into a directory called "cover":

.. code-block:: sh

  coverage html -d cover --omit='tempmod,,tests'

(``tempmod`` and ``console`` are omitted because they get thrown away at the completion of their unit tests and coverage.py isn't smart enough to detect this.)
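The bind-to-zero advice above can be illustrated with plain standard-library sockets; this sketch is independent of eventlet (``eventlet.listener`` and ``eventlet.connect`` shown above follow the same idea):

```python
import socket

# Bind the server socket to port 0; the OS assigns a free ephemeral port.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(('127.0.0.1', 0))
server_sock.listen(1)

# Look up the assigned port instead of hardcoding one.
port = server_sock.getsockname()[1]
assert port != 0

# The client connects to whatever port the server actually received.
client_sock = socket.create_connection(('127.0.0.1', port))

conn, _addr = server_sock.accept()
conn.sendall(b'ok')
assert client_sock.recv(2) == b'ok'

for s in (conn, client_sock, server_sock):
    s.close()
```

Because nothing is hardcoded, any number of such tests can run in the same process or in parallel without colliding on a port.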
eventlet-0.13.0/doc/basic_usage.rst0000644000175000017500000001330012164577340020033 0ustar temototemoto00000000000000Basic Usage
=============

If it's your first time with Eventlet, you may find the illuminated examples in the :ref:`design-patterns` document to be a good starting point.

Eventlet is built around the concept of green threads (i.e. coroutines, we use the terms interchangeably) that are launched to do network-related work. Green threads differ from normal threads in two main ways:

* Green threads are so cheap they are nearly free. You do not have to conserve green threads like you would normal threads. In general, there will be at least one green thread per network connection.
* Green threads cooperatively yield to each other instead of preemptively being scheduled. The major advantage from this behavior is that shared data structures don't need locks, because only if a yield is explicitly called can another green thread have access to the data structure. It is also possible to inspect primitives such as queues to see if they have any pending data.

Primary API
===========

The design goal for Eventlet's API is simplicity and readability. You should be able to read its code and understand what it's doing. Fewer lines of code are preferred over excessively clever implementations. `Like Python itself `_, there should be one, and only one obvious way to do it in Eventlet!

Though Eventlet has many modules, much of the most-used stuff is accessible simply by doing ``import eventlet``. Here's a quick summary of the functionality available in the ``eventlet`` module, with links to more verbose documentation on each.

Greenthread Spawn
-----------------------

.. function:: eventlet.spawn(func, *args, **kw)

   This launches a greenthread to call *func*. Spawning off multiple greenthreads gets work done in parallel. The return value from ``spawn`` is a :class:`greenthread.GreenThread` object, which can be used to retrieve the return value of *func*.
   See :func:`spawn <eventlet.greenthread.spawn>` for more details.

.. function:: eventlet.spawn_n(func, *args, **kw)

   The same as :func:`spawn`, but it's not possible to know how the function terminated (i.e. no return value or exceptions). This makes execution faster. See :func:`spawn_n <eventlet.greenthread.spawn_n>` for more details.

.. function:: eventlet.spawn_after(seconds, func, *args, **kw)

   Spawns *func* after *seconds* have elapsed; a delayed version of :func:`spawn`. To abort the spawn and prevent *func* from being called, call :meth:`GreenThread.cancel` on the return value of :func:`spawn_after`. See :func:`spawn_after <eventlet.greenthread.spawn_after>` for more details.

Greenthread Control
-----------------------

.. function:: eventlet.sleep(seconds=0)

   Suspends the current greenthread and allows others a chance to process. See :func:`sleep <eventlet.greenthread.sleep>` for more details.

.. class:: eventlet.GreenPool

   Pools control concurrency. It's very common in applications to want to consume only a finite amount of memory, or to restrict the amount of connections that one part of the code holds open so as to leave more for the rest, or to behave consistently in the face of unpredictable input data. GreenPools provide this control. See :class:`GreenPool <eventlet.greenpool.GreenPool>` for more on how to use these.

.. class:: eventlet.GreenPile

   GreenPile objects represent chunks of work. In essence a GreenPile is an iterator that can be stuffed with work, and the results read out later. See :class:`GreenPile <eventlet.greenpool.GreenPile>` for more details.

.. class:: eventlet.Queue

   Queues are a fundamental construct for communicating data between execution units. Eventlet's Queue class is used to communicate between greenthreads, and provides a bunch of useful features for doing that. See :class:`Queue <eventlet.queue.Queue>` for more details.

.. class:: eventlet.Timeout

   This class is a way to add timeouts to anything. It raises *exception* in the current greenthread after *timeout* seconds. When *exception* is omitted or ``None``, the Timeout instance itself is raised.

   Timeout objects are context managers, and so can be used in with statements.
   See :class:`Timeout <eventlet.timeout.Timeout>` for more details.

Patching Functions
---------------------

.. function:: eventlet.import_patched(modulename, *additional_modules, **kw_additional_modules)

   Imports a module in a way that ensures that the module uses "green" versions of the standard library modules, so that everything works nonblockingly. The only required argument is the name of the module to be imported. For more information see :ref:`import-green`.

.. function:: eventlet.monkey_patch(all=True, os=False, select=False, socket=False, thread=False, time=False)

   Globally patches certain system modules to be greenthread-friendly. The keyword arguments afford some control over which modules are patched. If *all* is True, then all modules are patched regardless of the other arguments. If it's False, then the rest of the keyword arguments control patching of specific subsections of the standard library. Most patch the single module of the same name (os, time, select). The exceptions are socket, which also patches the ssl module if present; and thread, which patches thread, threading, and Queue. It's safe to call monkey_patch multiple times. For more information see :ref:`monkey-patch`.

Network Convenience Functions
------------------------------

.. autofunction:: eventlet.connect

.. autofunction:: eventlet.listen

.. autofunction:: eventlet.wrap_ssl

.. autofunction:: eventlet.serve

.. autoclass:: eventlet.StopServe

These are the basic primitives of Eventlet; there are a lot more out there in the other Eventlet modules; check out the :doc:`modules`.
eventlet-0.13.0/doc/authors.rst0000644000175000017500000000005012164577340017251 0ustar temototemoto00000000000000Authors
=======

.. include:: ../AUTHORS
eventlet-0.13.0/doc/index.rst0000644000175000017500000000201312164577340016674 0ustar temototemoto00000000000000Eventlet Documentation
====================================

Code talks!
This is a simple web crawler that fetches a bunch of urls concurrently::

    urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
            "https://wiki.secondlife.com/w/images/secondlife.jpg",
            "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

    import eventlet
    from eventlet.green import urllib2

    def fetch(url):
        return urllib2.urlopen(url).read()

    pool = eventlet.GreenPool()
    for body in pool.imap(fetch, urls):
        print "got body", len(body)

Contents
=========

.. toctree::
   :maxdepth: 2

   basic_usage
   design_patterns
   patching
   examples
   ssl
   threading
   zeromq
   hubs
   testing
   environment
   modules
   authors
   history

License
---------

Eventlet is made available under the terms of the open source `MIT license `_

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
eventlet-0.13.0/doc/history.rst0000644000175000017500000000302312164577340017270 0ustar temototemoto00000000000000History
-------

Eventlet began life as Donovan Preston was talking to Bob Ippolito about coroutine-based non-blocking networking frameworks in Python. Most non-blocking frameworks require you to run the "main loop" in order to perform all network operations, but Donovan wondered if a library written using a trampolining style could get away with transparently running the main loop any time i/o was required, stopping the main loop once no more i/o was scheduled. Bob spent a few days during PyCon 2006 writing a proof-of-concept. He named it eventlet, after the coroutine implementation it used, `greenlet `_. Donovan began using eventlet as a light-weight network library for his spare-time project `Pavel `_, and also began writing some unittests.

* http://svn.red-bean.com/bob/eventlet/trunk/

When Donovan started at Linden Lab in May of 2006, he added eventlet as an svn external in the ``indra/lib/python`` directory, to be a dependency of the yet-to-be-named backbone project (at the time, it was named restserv).
However, including eventlet as an svn external meant that any time the externally hosted project had hosting issues, Linden developers were not able to perform svn updates. Thus, the eventlet source was imported into the linden source tree at the same location, and became a fork.

Bob Ippolito has ceased working on eventlet and has stated his desire for Linden to take its fork forward to the open source world as "the" eventlet.
eventlet-0.13.0/doc/modules/0000755000175000017500000000000012164600754016503 5ustar temototemoto00000000000000eventlet-0.13.0/doc/modules/greenthread.rst0000644000175000017500000000022612164577340021531 0ustar temototemoto00000000000000:mod:`greenthread` -- Green Thread Implementation
==================================================

.. automodule:: eventlet.greenthread
    :members:
eventlet-0.13.0/doc/modules/pools.rst0000644000175000017500000000020112164577340020366 0ustar temototemoto00000000000000:mod:`pools` - Generic pools of resources
==========================================

.. automodule:: eventlet.pools
    :members:
eventlet-0.13.0/doc/modules/wsgi.rst0000644000175000017500000000546612164577340020215 0ustar temototemoto00000000000000:mod:`wsgi` -- WSGI server
===========================

The wsgi module provides a simple and easy way to start an event-driven `WSGI `_ server. This can serve as an embedded web server in an application, or as the basis for a more full-featured web server package. One such package is `Spawning `_.

To launch a wsgi server, simply create a socket and call :func:`eventlet.wsgi.server` with it::

    from eventlet import wsgi
    import eventlet

    def hello_world(env, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello, World!\r\n']

    wsgi.server(eventlet.listen(('', 8090)), hello_world)

You can find a slightly more elaborate version of this code in the file ``examples/wsgi.py``.

.. automodule:: eventlet.wsgi
    :members:

..
_wsgi_ssl:

SSL
---

Creating a secure server is only slightly more involved than the base example. All that's needed is to pass an SSL-wrapped socket to the :func:`~eventlet.wsgi.server` method::

    wsgi.server(eventlet.wrap_ssl(eventlet.listen(('', 8090)),
                                  certfile='cert.crt',
                                  keyfile='private.key',
                                  server_side=True),
                hello_world)

Applications can detect whether they are inside a secure server by the value of the ``env['wsgi.url_scheme']`` environment variable.

Non-Standard Extension to Support Post Hooks
--------------------------------------------

Eventlet's WSGI server supports a non-standard extension to the WSGI specification where :samp:`env['eventlet.posthooks']` contains an array of `post hooks` that will be called after fully sending a response. Each post hook is a tuple of :samp:`(func, args, kwargs)` and the `func` will be called with the WSGI environment dictionary, followed by the `args` and then the `kwargs` in the post hook. For example::

    from eventlet import wsgi
    import eventlet

    def hook(env, arg1, arg2, kwarg3=None, kwarg4=None):
        print 'Hook called: %s %s %s %s %s' % (env, arg1, arg2, kwarg3, kwarg4)

    def hello_world(env, start_response):
        env['eventlet.posthooks'].append(
            (hook, ('arg1', 'arg2'), {'kwarg3': 3, 'kwarg4': 4}))
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello, World!\r\n']

    wsgi.server(eventlet.listen(('', 8090)), hello_world)

The above code will print the WSGI environment and the other passed function arguments for every request processed.

Post hooks are useful when code needs to be executed after a response has been fully sent to the client (or when the client disconnects early). One example is for more accurate logging of bandwidth used, as client disconnects use less bandwidth than the actual Content-Length.
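The calling convention described above can be seen in this self-contained sketch, which walks a posthook list the way a server would after sending a response. The dispatch loop is our illustration of the convention, not eventlet's actual server code, and ``bandwidth_hook`` plus the ``bytes_sent`` environ key are made up for the demo:

```python
log = []

# Hypothetical hook: record how many bytes actually went out for a request.
def bandwidth_hook(env, label, multiplier=1):
    log.append('%s sent %d bytes' % (label, env['bytes_sent'] * multiplier))

# A WSGI environ as it might look after a response has been sent
# ('bytes_sent' is an invented key for this illustration).
env = {'eventlet.posthooks': [], 'bytes_sent': 1234}

# Application code registers the hook as a (func, args, kwargs) tuple.
env['eventlet.posthooks'].append(
    (bandwidth_hook, ('request-1',), {'multiplier': 1}))

# After fully sending the response, the server calls each hook with the
# environ first, followed by the stored args and kwargs.
for func, args, kwargs in env['eventlet.posthooks']:
    func(env, *args, **kwargs)

print(log)  # -> ['request-1 sent 1234 bytes']
```

Because the hooks run only after the response is completely sent (or the client has gone away), this is a natural place for the bandwidth accounting mentioned above.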
eventlet-0.13.0/doc/modules/semaphore.rst0000644000175000017500000000041612164577340021225 0ustar temototemoto00000000000000:mod:`semaphore` -- Semaphore classes ================================================== .. autoclass:: eventlet.semaphore.Semaphore :members: .. autoclass:: eventlet.semaphore.BoundedSemaphore :members: .. autoclass:: eventlet.semaphore.CappedSemaphore :members:eventlet-0.13.0/doc/modules/zmq.rst0000644000175000017500000000163712164577340020057 0ustar temototemoto00000000000000:mod:`eventlet.green.zmq` -- ØMQ support ======================================== .. automodule:: eventlet.green.zmq :show-inheritance: .. currentmodule:: eventlet.green.zmq .. autofunction:: Context .. autoclass:: _Context :show-inheritance: .. automethod:: socket .. autoclass:: Socket :show-inheritance: :inherited-members: .. automethod:: recv .. automethod:: send .. module:: zmq :mod:`zmq` -- The pyzmq ØMQ python bindings =========================================== :mod:`pyzmq ` [1]_ Is a python binding to the C++ ØMQ [2]_ library written in Cython [3]_. The following is auto generated :mod:`pyzmq's ` from documentation. .. autoclass:: zmq.core.context.Context :members: .. autoclass:: zmq.core.socket.Socket .. autoclass:: zmq.core.poll.Poller :members: .. [1] http://github.com/zeromq/pyzmq .. [2] http://www.zeromq.com .. [3] http://www.cython.org eventlet-0.13.0/doc/modules/websocket.rst0000644000175000017500000000210212164577340021222 0ustar temototemoto00000000000000:mod:`websocket` -- Websocket Server ===================================== This module provides a simple way to create a `websocket `_ server. It works with a few tweaks in the :mod:`~eventlet.wsgi` module that allow websockets to coexist with other WSGI applications. 
To create a websocket server, simply decorate a handler method with :class:`WebSocketWSGI` and use it as a wsgi application::

    from eventlet import wsgi, websocket
    import eventlet

    @websocket.WebSocketWSGI
    def hello_world(ws):
        ws.send("hello world")

    wsgi.server(eventlet.listen(('', 8090)), hello_world)

You can find a slightly more elaborate version of this code in the file ``examples/websocket.py``.

As of version 0.9.13, eventlet.websocket supports SSL websockets; all that's necessary is to use an :ref:`SSL wsgi server <wsgi_ssl>`.

.. note :: The web socket spec is still under development, and it will be necessary to change the way that this module works in response to spec changes.

.. automodule:: eventlet.websocket
    :members:
eventlet-0.13.0/doc/modules/timeout.rst0000644000175000017500000000712312164577340020732 0ustar temototemoto00000000000000:mod:`timeout` -- Universal Timeouts
========================================

.. class:: eventlet.timeout.Timeout

    Raises *exception* in the current greenthread after *timeout* seconds::

        timeout = Timeout(seconds, exception)
        try:
            ...  # execution here is limited by timeout
        finally:
            timeout.cancel()

    When *exception* is omitted or ``None``, the :class:`Timeout` instance itself is raised:

        >>> Timeout(0.1)
        >>> eventlet.sleep(0.2)
        Traceback (most recent call last):
         ...
        Timeout: 0.1 seconds

    In Python 2.5 and newer, you can use the ``with`` statement for additional convenience::

        with Timeout(seconds, exception) as timeout:
            pass  # ... code block ...

    This is equivalent to the try/finally block in the first example.

    There is an additional feature when using the ``with`` statement: if *exception* is ``False``, the timeout is still raised, but the with statement suppresses it, so the code outside the with-block won't see it::

        data = None
        with Timeout(5, False):
            data = mysock.makefile().readline()
        if data is None:
            ...  # 5 seconds passed without reading a line
        else:
            ...  # a line was read within 5 seconds

    As a very special case, if *seconds* is ``None``, the timer is not scheduled, and is only useful if you're planning to raise it directly.

    There are two Timeout caveats to be aware of:

    * If the code block in the try/finally or with-block never cooperatively yields, the timeout cannot be raised. In Eventlet, this should rarely be a problem, but be aware that you cannot time out CPU-only operations with this class.
    * If the code block catches and doesn't re-raise :class:`BaseException` (for example, with ``except:``), then it will catch the Timeout exception, and might not abort as intended.

    When catching timeouts, keep in mind that the one you catch may not be the one you have set; if you are going to silence a timeout, always check that it's the same instance that you set::

        timeout = Timeout(1)
        try:
            ...
        except Timeout, t:
            if t is not timeout:
                raise  # not my timeout

    .. automethod:: cancel
    .. autoattribute:: pending

.. function:: eventlet.timeout.with_timeout(seconds, function, *args, **kwds)

    Wrap a call to some (yielding) function with a timeout; if the called function fails to return before the timeout, cancel it and return a flag value.

    :param seconds: seconds before timeout occurs
    :type seconds: int or float
    :param func: the callable to execute with a timeout; it must cooperatively yield, or else the timeout will not be able to trigger
    :param \*args: positional arguments to pass to *func*
    :param \*\*kwds: keyword arguments to pass to *func*
    :param timeout_value: value to return if timeout occurs (by default raises :class:`Timeout`)

    :rtype: Value returned by *func* if *func* returns before *seconds*, else *timeout_value* if provided, else raises :class:`Timeout`.

    :exception Timeout: if *func* times out and no ``timeout_value`` has been provided.
    :exception: Any exception raised by *func*

    Example::

        data = with_timeout(30, urllib2.urlopen, 'http://www.google.com/', timeout_value="")

    Here *data* is either the result of the ``urlopen()`` call, or the empty string if it took too long to return. Any exception raised by the ``urlopen()`` call is passed through to the caller.
eventlet-0.13.0/doc/modules/greenpool.rst0000644000175000017500000000020012164577340021233 0ustar temototemoto00000000000000:mod:`greenpool` -- Green Thread Pools
========================================

.. automodule:: eventlet.greenpool
    :members:
eventlet-0.13.0/doc/modules/backdoor.rst0000644000175000017500000000244712164577340021034 0ustar temototemoto00000000000000:mod:`backdoor` -- Python interactive interpreter within a running process
===============================================================================

The backdoor module is convenient for inspecting the state of a long-running process. It supplies the normal Python interactive interpreter in a way that does not block the normal operation of the application. This can be useful for debugging, performance tuning, or simply learning about how things behave in situ.

In the application, spawn a greenthread running backdoor_server on a listening socket::

    eventlet.spawn(backdoor.backdoor_server, eventlet.listen(('localhost', 3000)))

When this is running, the backdoor is accessible via telnet to the specified port.

.. code-block:: sh

  $ telnet localhost 3000
  Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39)
  [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import myapp
  >>> dir(myapp)
  ['__all__', '__doc__', '__name__', 'myfunc']
  >>>

The backdoor cooperatively yields to the rest of the application between commands, so on a running server continuously serving requests, you can observe the internal state changing between interpreter commands.

..
automodule:: eventlet.backdoor :members: eventlet-0.13.0/doc/modules/event.rst0000644000175000017500000000021212164577340020355 0ustar temototemoto00000000000000:mod:`event` -- Cross-greenthread primitive ================================================== .. automodule:: eventlet.event :members: eventlet-0.13.0/doc/modules/debug.rst0000644000175000017500000000021312164577340020323 0ustar temototemoto00000000000000:mod:`debug` -- Debugging tools for Eventlet ================================================== .. automodule:: eventlet.debug :members: eventlet-0.13.0/doc/modules/queue.rst0000644000175000017500000000016012164577340020362 0ustar temototemoto00000000000000:mod:`queue` -- Queue class ======================================== .. automodule:: eventlet.queue :members: eventlet-0.13.0/doc/modules/db_pool.rst0000644000175000017500000001111612164577340020657 0ustar temototemoto00000000000000:mod:`db_pool` -- DBAPI 2 database connection pooling ======================================================== The db_pool module is useful for managing database connections. It provides three primary benefits: cooperative yielding during database operations, concurrency limiting to a database host, and connection reuse. db_pool is intended to be database-agnostic, compatible with any DB-API 2.0 database module. *It has currently been tested and used with both MySQLdb and psycopg2.* A ConnectionPool object represents a pool of connections open to a particular database. The arguments to the constructor include the database-software-specific module, the host name, and the credentials required for authentication. After construction, the ConnectionPool object decides when to create and sever connections with the target database. 
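The pool's bookkeeping (create a connection on demand, hand idle ones back
out for reuse, sever when the pool is full) can be sketched without a real
database. This is an illustrative, stdlib-only sketch, not eventlet's
implementation; ``MiniPool``, ``create``, and ``FakeConn`` are made-up
names::

    import queue

    class MiniPool:
        """Database-free sketch of the get()/put() reuse cycle.
        `create` stands in for a DB-API connect call."""
        def __init__(self, create, max_size=4):
            self._create = create
            self._idle = queue.Queue(max_size)

        def get(self):
            try:
                return self._idle.get_nowait()   # reuse an idle connection
            except queue.Empty:
                return self._create()            # or open a new one

        def put(self, conn):
            try:
                self._idle.put_nowait(conn)      # keep it around for reuse
            except queue.Full:
                conn.close()                     # pool is full: sever

    made = []
    class FakeConn:
        def __init__(self):
            made.append(self)
        def close(self):
            pass

    pool = MiniPool(FakeConn)
    c1 = pool.get()
    pool.put(c1)
    assert pool.get() is c1   # the same connection is reused
    assert len(made) == 1     # only one connection was ever created

The real ConnectionPool layers cooperative yielding, concurrency limits,
and the idle/age timers described below on top of this basic cycle.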
    >>> import MySQLdb
    >>> cp = ConnectionPool(MySQLdb, host='localhost', user='root', passwd='')

Once you have this pool object, you connect to the database by calling
:meth:`~eventlet.db_pool.ConnectionPool.get` on it:

    >>> conn = cp.get()

This call may either create a new connection or reuse an existing open
connection, depending on whether it has one open already. You can then use
the connection object as normal. When done, you must return the connection
to the pool:

    >>> conn = cp.get()
    >>> try:
    ...     result = conn.cursor().execute('SELECT NOW()')
    ... finally:
    ...     cp.put(conn)

After you've returned a connection object to the pool, it becomes useless
and will raise exceptions if any of its methods are called.

Constructor Arguments
----------------------

In addition to the database credentials, the ConnectionPool constructor
accepts a number of useful keyword arguments.

* min_size, max_size : The normal Pool arguments. max_size is the most
  important constructor argument -- it determines how many concurrent
  connections can be open to the destination database. min_size is not
  very useful.
* max_idle : Connections are only allowed to remain unused in the pool for
  a limited amount of time. An asynchronous timer periodically wakes up
  and closes any connections in the pool that have been idle for longer
  than they are supposed to be. Without this parameter, the pool would
  tend to have a 'high-water mark', where the number of connections open
  at a given time corresponds to the peak historical demand. This number
  only affects the connections in the pool itself -- if you take a
  connection out of the pool, you can hold on to it for as long as you
  want. If this is set to 0, every connection is closed upon its return
  to the pool.
* max_age : The lifespan of a connection. This works much like max_idle,
  but the timer is measured from the connection's creation time, and is
  tracked throughout the connection's life.
  This means that if you take a connection out of the pool and hold on to
  it for some lengthy operation that exceeds max_age, the connection will
  be closed upon being put back into the pool. Like max_idle, max_age will
  not close connections that are taken out of the pool, and, if set to 0,
  will cause every connection to be closed when put back in the pool.
* connect_timeout : How long to wait before raising an exception on
  connect(). If the database module's connect() method takes too long, it
  raises a ConnectTimeout exception from the get() method on the pool.

DatabaseConnector
-----------------

If you want to connect to multiple databases easily (and who doesn't), the
DatabaseConnector is for you. It's a pool of pools, containing a
ConnectionPool for every host you connect to.

The constructor arguments are:

* module : database module, e.g. MySQLdb. This is simply passed through to
  the ConnectionPool.
* credentials : A dictionary, or dictionary-alike, mapping hostname to
  connection-argument-dictionary. This is used for the constructors of the
  ConnectionPool objects. Example:

  >>> dc = DatabaseConnector(MySQLdb,
  ...     {'db.internal.example.com': {'user': 'internal', 'passwd': 's33kr1t'},
  ...      'localhost': {'user': 'root', 'passwd': ''}})

  If the credentials contain a host named 'default', then the value for
  'default' is used whenever trying to connect to a host that has no
  explicit entry in the credentials dictionary. This is useful if there is
  some pool of hosts that share arguments.
* conn_pool : The connection pool class to use. Defaults to
  db_pool.ConnectionPool.

The rest of the arguments to the DatabaseConnector constructor are passed
on to the ConnectionPool.

*Caveat: The DatabaseConnector is a bit unfinished -- it only suits a
subset of use cases.*

.. automodule:: eventlet.db_pool
    :members:
    :undoc-members:

eventlet-0.13.0/doc/modules/corolocal.rst

:mod:`corolocal` -- Coroutine local storage
=============================================

.. automodule:: eventlet.corolocal
    :members:
    :undoc-members:

eventlet-0.13.0/doc/examples.rst

Examples
========

Here are a bunch of small example programs that use Eventlet. All of these
examples can be found in the ``examples`` directory of a source copy of
Eventlet.

.. _web_crawler_example:

Web Crawler
------------

``examples/webcrawler.py``

.. literalinclude:: ../examples/webcrawler.py

.. _wsgi_server_example:

WSGI Server
------------

``examples/wsgi.py``

.. literalinclude:: ../examples/wsgi.py

.. _echo_server_example:

Echo Server
-----------

``examples/echoserver.py``

.. literalinclude:: ../examples/echoserver.py

.. _socket_connect_example:

Socket Connect
--------------

``examples/connect.py``

.. literalinclude:: ../examples/connect.py

.. _chat_server_example:

Multi-User Chat Server
-----------------------

``examples/chat_server.py``

This is a little different from the echo server, in that it broadcasts the
messages to all participants, not just the sender.

.. literalinclude:: ../examples/chat_server.py

.. _feed_scraper_example:

Feed Scraper
-----------------------

``examples/feedscraper.py``

This example requires `Feedparser `_ to be installed or on the PYTHONPATH.

.. literalinclude:: ../examples/feedscraper.py

.. _forwarder_example:

Port Forwarder
-----------------------

``examples/forwarder.py``

.. literalinclude:: ../examples/forwarder.py

.. _recursive_crawler_example:

Recursive Web Crawler
-----------------------------------------

``examples/recursive_crawler.py``

This is an example recursive web crawler that fetches linked pages from a
seed url.

.. literalinclude:: ../examples/recursive_crawler.py

.. _producer_consumer_example:

Producer Consumer Web Crawler
-----------------------------------------

``examples/producer_consumer.py``

This is an example implementation of the producer/consumer pattern, and is
identical in functionality to the recursive web crawler.

.. literalinclude:: ../examples/producer_consumer.py

.. _websocket_example:

Websocket Server Example
--------------------------

``examples/websocket.py``

This exercises some of the features of the websocket server
implementation.

.. literalinclude:: ../examples/websocket.py

.. _websocket_chat_example:

Websocket Multi-User Chat Example
-----------------------------------

``examples/websocket_chat.py``

This is a mashup of the websocket example and the multi-user chat example,
showing how you can do the same sorts of things with websockets that you
can do with regular sockets.

.. literalinclude:: ../examples/websocket_chat.py

eventlet-0.13.0/doc/ssl.rst

Using SSL With Eventlet
========================

Eventlet makes it easy to use non-blocking SSL sockets. If you're using
Python 2.6 or later, you're all set; eventlet wraps the built-in ssl
module. If you're on Python 2.5 or 2.4, you have to install pyOpenSSL_ to
use eventlet's SSL support.

In either case, the ``green`` modules handle SSL sockets transparently,
just like their standard counterparts. As an example,
:mod:`eventlet.green.urllib2` can be used to fetch https urls in as
non-blocking a fashion as you please::

    from eventlet.green import urllib2
    from eventlet import coros
    bodies = [coros.execute(urllib2.urlopen, url)
              for url in ("https://secondlife.com", "https://google.com")]
    for b in bodies:
        print b.wait().read()

With Python 2.6
----------------

To use ssl sockets directly in Python 2.6, use :mod:`eventlet.green.ssl`,
which is a non-blocking wrapper around the standard Python :mod:`ssl`
module, and which has the same interface.
See the standard documentation for instructions on use.

With Python 2.5 or Earlier
---------------------------

Prior to Python 2.6, there is no :mod:`ssl`, so SSL support is much
weaker. Eventlet relies on pyOpenSSL to implement its SSL support on these
older versions, so be sure to install pyOpenSSL, or you'll get an
ImportError whenever your system tries to make an SSL connection.

Once pyOpenSSL is installed, you can then use the ``eventlet.green``
modules, like :mod:`eventlet.green.httplib`, to fetch https urls. You can
also use :func:`eventlet.green.socket.ssl`, which is a nonblocking wrapper
for :func:`socket.ssl`.

PyOpenSSL
----------

:mod:`eventlet.green.OpenSSL` has exactly the same interface as pyOpenSSL_
`(docs) `_, and works in all versions of Python. This module is much more
powerful than :func:`socket.ssl`, and may have some advantages over
:mod:`ssl`, depending on your needs.

Here's an example of a server::

    from eventlet.green import socket
    from eventlet.green.OpenSSL import SSL

    # insecure context, only for example purposes
    context = SSL.Context(SSL.SSLv23_METHOD)
    context.set_verify(SSL.VERIFY_NONE, lambda *x: True)

    # create underlying green socket and wrap it in ssl
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    connection = SSL.Connection(context, sock)

    # configure as server
    connection.set_accept_state()
    connection.bind(('127.0.0.1', 80443))
    connection.listen(50)

    # accept one client connection then close up shop
    client_conn, addr = connection.accept()
    print client_conn.read(100)
    client_conn.shutdown()
    client_conn.close()
    connection.close()

.. _pyOpenSSL: https://launchpad.net/pyopenssl

eventlet-0.13.0/doc/environment.rst

.. _env_vars:

Environment Variables
======================

Eventlet's behavior can be controlled by a few environment variables.
These are only for the advanced user.
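For example, both variables described below can be set for a single run
from the shell. The hub name and pool size here are illustrative values,
and ``myapp.py`` is a stand-in for your program's entry point:

```shell
# Force the poll hub and a 40-thread tpool for this invocation only.
# Both variables must be set before the Python process starts, since
# eventlet reads them at import / first-use time.
EVENTLET_HUB=poll EVENTLET_THREADPOOL_SIZE=40 python myapp.py
```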
EVENTLET_HUB

    Used to force Eventlet to use the specified hub instead of the optimal
    one. See :ref:`understanding_hubs` for the list of acceptable hubs and
    what they mean (note that picking a hub not on the list will silently
    fail). Equivalent to calling :meth:`eventlet.hubs.use_hub` at the
    beginning of the program.

EVENTLET_THREADPOOL_SIZE

    The size of the threadpool in :mod:`~eventlet.tpool`. This is an
    environment variable because tpool constructs its pool on first use,
    so any control of the pool size needs to happen before then.

eventlet-0.13.0/doc/patching.rst

Greening The World
==================

One of the challenges of writing a library like Eventlet is that the
built-in networking libraries don't natively support the sort of
cooperative yielding that we need. What we must do instead is patch
standard library modules in certain key places so that they do
cooperatively yield. We've in the past considered doing this automatically
upon importing Eventlet, but have decided against that course of action
because it is un-Pythonic to change the behavior of module A simply by
importing module B.

Therefore, the application using Eventlet must explicitly green the world
for itself, using one or both of the convenient methods provided.

.. _import-green:

Import Green
--------------

The first way of greening an application is to import networking-related
libraries from the ``eventlet.green`` package. It contains libraries that
have the same interfaces as common standard ones, but they are modified to
behave well with green threads. Using this method is a good engineering
practice, because the true dependencies are apparent in every file::

    from eventlet.green import socket
    from eventlet.green import threading
    from eventlet.green import asyncore

This works best if every library can be imported green in this manner.
If ``eventlet.green`` lacks a module (for example, non-python-standard
modules), the :func:`~eventlet.patcher.import_patched` function can come
to the rescue. It is a replacement for the builtin import statement that
greens any module on import.

.. function:: eventlet.patcher.import_patched(module_name, *additional_modules, **kw_additional_modules)

    Imports a module in a greened manner, so that the module's use of
    networking libraries like socket will use Eventlet's green versions
    instead. The only required argument is the name of the module to be
    imported::

        import eventlet
        httplib2 = eventlet.import_patched('httplib2')

    Under the hood, it works by temporarily swapping out the "normal"
    versions of the libraries in sys.modules for an eventlet.green
    equivalent. When the import of the to-be-patched module completes, the
    state of sys.modules is restored. Therefore, if the patched module
    contains the statement ``import socket``, import_patched will have it
    reference eventlet.green.socket.

    One weakness of this approach is that it doesn't work for late binding
    (i.e. imports that happen during runtime). Late binding of imports is
    fortunately rarely done (it's slow and against `PEP-8 `_), so in most
    cases import_patched will work just fine.

    One other aspect of import_patched is the ability to specify exactly
    which modules are patched. Doing so may provide a slight performance
    benefit, since only the needed modules are imported, whereas
    import_patched with no arguments imports a bunch of modules in case
    they're needed. The *additional_modules* and *kw_additional_modules*
    arguments are both sequences of name/module pairs. Either or both can
    be used::

        from eventlet.green import socket
        from eventlet.green import SocketServer

        BaseHTTPServer = eventlet.import_patched('BaseHTTPServer',
                            ('socket', socket), ('SocketServer', SocketServer))
        BaseHTTPServer = eventlet.import_patched('BaseHTTPServer',
                            socket=socket, SocketServer=SocketServer)

.. _monkey-patch:

Monkeypatching the Standard Library
----------------------------------------

The other way of greening an application is simply to monkeypatch the
standard library. This has the disadvantage of appearing quite magical,
but the advantage of avoiding the late-binding problem.

.. function:: eventlet.patcher.monkey_patch(os=None, select=None, socket=None, thread=None, time=None, psycopg=None)

    This function monkeypatches the key system modules by replacing their
    key elements with green equivalents. If no arguments are specified,
    everything is patched::

        import eventlet
        eventlet.monkey_patch()

    The keyword arguments afford some control over which modules are
    patched, in case that's important. Most patch the single module of the
    same name (e.g. time=True means that the time module is patched
    [time.sleep is patched by eventlet.sleep]). The exceptions to this
    rule are *socket*, which also patches the :mod:`ssl` module if
    present, and *thread*, which patches :mod:`thread`, :mod:`threading`,
    and :mod:`Queue`.

    Here's an example of using monkey_patch to patch only a few modules::

        import eventlet
        eventlet.monkey_patch(socket=True, select=True)

    It is important to call :func:`~eventlet.patcher.monkey_patch` as
    early in the lifetime of the application as possible. Try to do it as
    one of the first lines in the main module. The reason for this is that
    sometimes there is a class that inherits from a class that needs to be
    greened -- e.g. a class that inherits from socket.socket -- and
    inheritance is done at import time, so the monkeypatching should
    happen before the derived class is defined. It's safe to call
    monkey_patch multiple times.

    The psycopg monkeypatching relies on Daniele Varrazzo's green psycopg2
    branch; see `the announcement `_ for more information.

.. function:: eventlet.patcher.is_monkey_patched(module)

    Returns whether or not the specified module is currently
    monkeypatched. *module* can either be the module itself or the
    module's name.
    The check is based entirely on the name of the module, so if you
    import a module some way other than with the import keyword (including
    :func:`~eventlet.patcher.import_patched`), is_monkey_patched might not
    be correct about that particular module.

eventlet-0.13.0/doc/modules.rst

Module Reference
======================

.. toctree::
   :maxdepth: 2

   modules/backdoor
   modules/corolocal
   modules/debug
   modules/db_pool
   modules/event
   modules/greenpool
   modules/greenthread
   modules/pools
   modules/queue
   modules/semaphore
   modules/timeout
   modules/websocket
   modules/wsgi
   modules/zmq

eventlet-0.13.0/doc/common.txt

.. |internal| replace:: This is considered an internal API, and it might
   change unexpectedly without being deprecated first.

eventlet-0.13.0/doc/images/threading_illustration.png

[binary PNG image data omitted]
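The sys.modules swap that the patching section describes for
import_patched can be sketched in a few lines of plain Python. This is an
illustrative reimplementation, not eventlet's actual code; the fake
``binascii`` module below stands in for an ``eventlet.green`` replacement,
and ``base64`` is used only because it happens to import ``binascii``::

    import importlib
    import sys
    import types

    def import_patched_sketch(module_name, **replacements):
        """Sketch of the technique: temporarily install replacement
        modules in sys.modules, re-import the target so it binds to
        them, then restore sys.modules exactly as it was."""
        saved_target = sys.modules.pop(module_name, None)
        saved = {name: sys.modules.get(name) for name in replacements}
        sys.modules.update(replacements)
        try:
            return importlib.import_module(module_name)
        finally:
            # Restore the original cached modules.
            if saved_target is not None:
                sys.modules[module_name] = saved_target
            else:
                sys.modules.pop(module_name, None)
            for name, mod in saved.items():
                if mod is None:
                    sys.modules.pop(name, None)
                else:
                    sys.modules[name] = mod

    import base64  # make sure the original is cached first
    fake_binascii = types.ModuleType("binascii")
    patched_base64 = import_patched_sketch("base64", binascii=fake_binascii)

    assert patched_base64.binascii is fake_binascii  # patched copy uses the fake
    assert base64.binascii is not fake_binascii      # original is untouched

The key property, which eventlet's real import_patched shares, is that the
patched copy is a separate module object: the rest of the program keeps
seeing the unpatched original.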
r9ƒþðWîuiym6®\pÞEb+¤].ØFÞl$–Åí²e¹`_.8ƒ/ìäË'Ù2P.¸Ê;È%Bo¢\ð$®\p×þørÁ~j'_.¸Û‘|^ÿ´“” î…rÁÝFóo;=(ì~‚›ƒ(£I3vV-J@Ì"r¿Èt–¸³†¡>ã”)eˆ×&âr´¢þ׊ùs·†šx:åÜšVСû¨\°Ê‹å‚ú}¹à9¦\p]Z.Øýtw‡¸ŸÚ& 4PdÂ(HqºrQb–æ š2hÖ¯JìêÌÔø&+¥„ Å (´b+Y²ddDoÐìA¦ÆQ60Uc W.x_—áÍA傇ä”rÁýsF¹à©=§«ÏºÊ@QöµmÿãwD+b–¯b–a˜,¨XõrÁ •.— ù /=E a¾ÿþ;xóë­›7o&&&þüóÏW’n}µŠÞ:Q°r]ãS@PÕ˜Hò³w\±pä(bœÊ/Ì•3ÊçŽ|®MŽ(ñü»]ü¾\ðØØ"(L7æô曄” ¾|C 4qI»¢+4IDAT”XEbˆcßjU(oLóæÍóæÍC‹ã"…Æts ÃÄaj–)ØèІb9?¶¯W¯^ü˜1xÃs…ªÕXûHÛx,ÆJ™BŸ+ü¤ÿ–%å‚ûqå‚{ú}¹àž±ÅŠvìû¨\0wnyY&ºCüð.ï‘rÁî§¶;D8™J^eßP±ÇíÛûÊ AˆYÌ2S š,æ†rL¥ H¡)ƒf º–Aç6ØGDDÔ¯WïËqc¦,\Âë35SDZX°ðQ¹`Þ7äKªÆH_8‘ºâox¯‹e…rÁËV>*Lê•?QÖØçå‚•¢ìkňþÕä OÖ¯h#—f ,8 ®±™Éi!e° d lÐM3Vƒ FsîÔÉØÒeT’«ÂvJyå‚'æ˜rÁ_ú}¹àoI¹àÙÊ‹'‰I.qPˆPY–ï%­” „”‰öy¹`I• ž-îÏŸ9ý×–? Æ, az¯„æ § XȦ hBŠ¢é Y¯KÁó§³'Ž5ýKݱ( K¬Ž$¿mÿØïËϦå‚ûærÁCü½\ð”q˜2çœrÁ^ŸçJʾzlhCgOŒEœÒÿñˆ\Ä/]d ³ “ùL-RK°ŠßÌ:NvkXÅváÂ…ñ&üð÷þü…ÄUvÞŽ²èòç¦M¿|Ô¹3+ÚßÚË—.¬Q¥Ê/¿ýÞ¹G/³µgù‚ÙÕ+WB¹à.½°pk÷Í´êã~Ù²½ë€þfkÊW+ÇA¸³]:å…–pdùdËŸ+XÊRè·¯_½ÒòÉz}ûöÁ»š±ê‰-** éC\ø¤s “YÃBÊ€Y¸o‚”['øW['wïÞ¥ï^¤YcÙwß•©ZãÓÑÜ«MÉ ¼ªäNRռܩb++Ÿk³Q;ÊœXI ß•Q±ô\›ˆÒxŒ Gtñ9œôäÚ•`R‹Ã0]b"ו`B‹? ]–N#B(ZBˆ Æ4KšLWËN ‡T!@‘'–w $ëË–u¨Öh+É’? 
]^וÁXˆ !Wʘ'QI-)eH0ªRÙFdù8~ØàSÿú_ëÖ4_àF R}‹"^z„+¤ \›˜”k–ŽŠ£×&Є¹ ç`­QÆß/N¸rñ¢IݦÈà7Ÿ¹Î”Âÿ#D–œ¦I¬‰0é%oùMª‘È ]‰Î“–ya 4O Sdô´c‰_o4W/]Z³t b“)]Ë  Ÿ4S˜_ø¤j¼MÐDk°GÅŽÇk×׿ÂzÒÁÉü¨CçW(Ÿ}Ó*£ÊLÙ‚º¬Ð!èòÕ1'ÚÌ]œ‘ ¾ðýXÆ øøãµ›bœ"f±‰“‹ÌMt`ÐAŸH§ ¨­iܸñ?ÛÿÜ·ó/Áá7ß»Î#KŒ™Ü uèɰD¬'È}úÑ£¯o\³úGÍNOK°Ø¨Å?8ÐŽµnÞ«é*Y•ÿ‡ ¤6‰²q6¯ÿfwm v¯b°Ð@‰Š}J¾ eËVS”!§'?ALˆL( æÿüá‡ÿ°û©…K—iñ) ÖÌbÙ+ÿ°jSy4¬\:Ã4hèÂsé‚ÀÅHARDbÞTÇ )¬Ê.u,]Z¾1ETD ûÂ=U—vO4Q£²ÂhüížÃÿ|é««Veù§اrÕ³X‰wOL(¼"4)4P¡±BE„[Ùoìíݲþý§Ok“:*»ìŒ¥S€ù.—Ή›QÒ‘ŽçÒe€º—€‰¹L¬QGŠè}}wj ø2§”é!ÆÖ!!mA3QéÛ”6ëØDÇY7ý}}÷|÷;7.^Œ}‡Ý‡Æ[RJ nØÂ|µ4KÄÐ×/‚ÆÐ’” »»»qùŠ›¾sáB=wÖçÎ,‘ôi±]™KêAr‘ËOd¬ø:¥6+^ ˜BñÅu•½"Ó½˜Òf½š¨æ¸Y|™Oàx{±9{gpðö›—àÞpì¸:]4(ƒ×ÁØ­À6Ê€oD€–,4pn"UCÿœ9;vÍ­·à£ó¶K–‚:©Tfg@]Rs!y‚È‘R¾8H‰r:L_´Ž¬,³ó©”ð­îó!g­S¾ZãEGM9ÎÆUPB_É1Ûœá²ÀÚÛnùåØkÂè`'&¯b2°aÀù{MM¦;0–°øŒW4Xkà©a\À¢… Ïž>½å[Í=?BÍ’‹63Ôåè{•ä…Ìl) Ó0|ÌÈXÁáx.K—/×¼":ÅD-h *}—êK_NÄî¶õÎõÿé?³ðúë¹Ý°ïxJB¾À®”£™ú‚1|hãÆîh¼Z!¡ðÓ%8²áh¹rÒ¤göì~ÿƒzf]ëÅT \isé(‡È…äÒ9 ­*O—ZABVHÿïb|lc÷’$ÄTw¤KCð\6&ª9vÍÍ«Kaǃ”̸iÝÚûŸçã0ðà6|®Ä¨2J)1BñwL’ñ…8Oâá#¼ÐaÃeK—þì¾ïãêèò_INwõý9ó[¸ðEçñ¨Æß)©¨¯8‚2Q°‘?`¯…‚ªˆ¼0~ƒÖ *~G9¨|÷®{ßµzõj|ÏŠ 6”hF‰Ê¿ëJÁ¢ª øJ†Â>ލ2 ª6¢Ÿ(¦Nzÿ–Í»ì²Ov–+H%†¹ß˜`º\:+ Caų*`V•,RX•V8»ÂŠeUرœO—:”&ª9€31ã±)¶ƒá«Ï¶mݼråJ|ûPy1õ…”¸JPîY b)‡2$L‘\ °„è£úÕÕjµ¿Ü*ä3×ÍN7ú®œ¹t „Àž Ì¥ „o˜yÀ<êNHO‹¤º&SµÑát© *T Q(šÃÎD5ÇŽ©ùT)àx’ÀíØþàŽmܼlÙĉU¾@•A¾Û·Pý&¥Q]I¹!ÊÉ裃•\ÕÕõäÎGßî;={î¼TŒž´yÔ)(÷À…äÒ¹QM­ É¥3qBÆV¼HaU† 6:–.5¦æš¨æ87 :AGÕ¥*@¡‰šã$éåG›î~úñ_ãúø‚Ïþk$ë ž•ðvOž’[2‰–«_ε ºdL|÷Ô†Ã-ºþ™h4¾eùòßìÞsa``ÃîÇc}A›™öÙÛõ.$—ÎŽhÑè`ºÔ ¶âÅ «>Û4±bY&BøXTDáxVKÕ['æR˜¨ñØ`^ºxñ‘Ÿnî÷¿Ã?~¼0…qÃ8)‹/`¾'Œ¬(z 9h¢~Ñ3úú‚K—°Æ‹ø×ÁÁA>‘ß犎x.ùŸöí;70ðà¯vMë¹Z/R 2’Kg³‹]`.Ñ®±âüÛeõ¤;Ò¥V¯BÇÓ¥^0‚ªˆ< êp4H AêfG^9¼îöÛÆŒ…ïUŧÎÈ8ò‰á` ¼K‚û’§$ ‹&oÄ0,ùÄèä3¥#BéP‹S,œ¡àÃt?¹ïÞ÷ß{oæu³9e(ĉàOŒGÆrެHP`•VµTSZ‘b…U¯yd –"ò€T·Æ‘‚ªˆ †œœf¢FcS–´wöñgxǶïÛëf]sÍœ9sƦ†Sƒ/äã$åòb,¿ÊˆòRoxW å?ØŠZ…Ë ¨5 Aëïïÿó³ÏŽÅÖ‡~1¹Vs¦.Òõú¸ty|Ôm]`.]yŽ"/ez²bY¹× :ž.-èB¦™¨æX ›í˜ÀÑØ”åñqâøñ»ÖÜqîìÙùóçáYá<á• °ÉG~–ï’ðÆ­r/aH¼-¡Œ(A Öà‡âAsæLTõû³"¾`k, o_ --¤  «¬ ¢$()„¥ÇÁC‡Ž92å“S—|éË nZ2zì8ÿK¨¼l2É¥£ÐަKC1³vV¼XaÕg¡¼K—zÁ< ª"ò€„¨MTs‚á·1Qã±)öÅçÎíÛûÌÓO<öæëG§M›vuO¨—3…/@  ! 
Èù~jê †ØZÊ€ƒ5p’"—6Ȥ ôA8â䬆þ±cÇÞ:qòäÉW|bÂì¹s§ÎøÔä)µ+k5ähÔè1(½|d ¿pÚ+ìó¨5D»Ì Öìß.Ó¥îH—šsÃÇ:ž. GÕ-TE¤ÏÍ+MGƒ”À‡†+}ƒøUGAG_{íï/>ÿö™3xÔv×d\Ó«‘ðÛŽ"}Р tÐäâNFZ}>"«i9eDy¬·úÅÐèj(/mÔi!zÿ•4A¾@ŸCÐ ð-)¦¯¯ïÔ©Sxo m=)ðÝwñ&‹,£êTqÀ3#p;÷‡?òÑ1cÇ lÀ{¸/k„ |ª–ä rXƒC¡EcqÑÒëFb˼•Ë€–!ß:Š‚zã;¬òyy¬–ËF¦ØP ‹²hµ«« ¤#w„…F2GU§ÊÀˆÈ·BE {õv„ìðù‚d!:ÔÒÙ¶¥Ihe058baÌ‘¹TfŠY@FÀHŠd ”'¬SpS€;XQFK9*ðe@ö6ÈýxÇQø ;‚í€&ÈA°ÁÆA#ãf‹bØ6QüaI82Gè ³"_Xv’\‘œ•H•AÊÀ ÈÄÁBƒ¬£¬¤êTAà€-€À]€#ÿ|’2x_€#ÐÇ‘fœ"Šû« he`1Xö<Öɵa(…r†¤€,À` txáG›œ›Aj6dªrQe ” àw8ÜüÉ#ö?軀¬Qß1M`_)p”ªF¨Rb iÇåÏl<¡Í?ÄÑ•Q6©,8^ÎH–,4ªr#›äJÒ _ÈQö`_ package, and Linux. This is the fastest pure-Python hub. **poll** On platforms that support it **selects** Lowest-common-denominator, available everywhere. **pyevent** This is a libevent-based backend and is thus the fastest. It's disabled by default, because it does not support native threads, but you can enable it yourself if your use case doesn't require them. (You have to install pyevent, too.) If the selected hub is not ideal for the application, another can be selected. You can make the selection either with the environment variable :ref:`EVENTLET_HUB `, or with use_hub. .. function:: eventlet.hubs.use_hub(hub=None) Use this to control which hub Eventlet selects. Call it with the name of the desired hub module. Make sure to do this before the application starts doing any I/O! Calling use_hub completely eliminates the old hub, and any file descriptors or timers that it had been managing will be forgotten. Put the call as one of the first lines in the main module.:: """ This is the main module """ from eventlet import hubs hubs.use_hub("pyevent") Hubs are implemented as thread-local class instances. :func:`eventlet.hubs.use_hub` only operates on the current thread. When using multiple threads that each need their own hub, call :func:`eventlet.hubs.use_hub` at the beginning of each thread function that needs a specific hub. 
In practice, it may not be necessary to specify a hub in each thread; it
works to use one special hub for the main thread, and let other threads use
the default hub; this hybrid hub configuration will work fine.

It is also possible to use a third-party hub module in place of one of the
built-in ones. Simply pass the module itself to
:func:`eventlet.hubs.use_hub`. The task of writing such a hub is a little
beyond the scope of this document; it's probably a good idea to simply
inspect the code of the existing hubs to see how they work::

    from eventlet import hubs
    from mypackage import myhub
    hubs.use_hub(myhub)

Supplying None as the argument to :func:`eventlet.hubs.use_hub` causes it
to select the default hub.

How the Hubs Work
-----------------

The hub has a main greenlet, MAINLOOP. When one of the running coroutines
needs to do some I/O, it registers a listener with the hub (so that the hub
knows when to wake it up again), and then switches to MAINLOOP (via
``get_hub().switch()``). If there are other coroutines that are ready to
run, MAINLOOP switches to them, and when they complete or need to do more
I/O, they switch back to the MAINLOOP. In this manner, MAINLOOP ensures
that every coroutine gets scheduled when it has some work to do.

MAINLOOP is launched only when the first I/O operation happens, and it is
not the same greenlet that __main__ is running in. This lazy launching is
why it's not necessary to explicitly call a dispatch() method the way other
frameworks require, which in turn means that code can start using Eventlet
without needing to be substantially restructured.

More Hub-Related Functions
---------------------------

.. autofunction:: eventlet.hubs.get_hub
.. autofunction:: eventlet.hubs.get_default_hub
.. autofunction:: eventlet.hubs.trampoline

eventlet-0.13.0/doc/threading.rst0000644000175000017500000000320512164577340017536 0ustar temototemoto00000000000000Threads
========

Eventlet is thread-safe and can be used in conjunction with normal Python
threads.
The way this works is that coroutines are confined to their 'parent' Python
thread. It's like each thread contains its own little world of coroutines
that can switch between themselves but not between coroutines in other
threads.

.. image:: /images/threading_illustration.png

You can only communicate cross-thread using the "real" thread primitives
and pipes. Fortunately, there's little reason to use threads for
concurrency when you're already using coroutines. The vast majority of the
times you'll want to use threads are to wrap some operation that is not
"green", such as a C library that uses its own OS calls to do socket
operations. The :mod:`~eventlet.tpool` module is provided to make these
uses simpler.

The optional :ref:`pyevent hub` is not compatible with threads.

Tpool - Simple thread pool
---------------------------

The simplest thing to do with :mod:`~eventlet.tpool` is to
:func:`~eventlet.tpool.execute` a function with it. The function will be
run in a random thread in the pool, while the calling coroutine blocks on
its completion::

    >>> import thread
    >>> from eventlet import tpool
    >>> def my_func(starting_ident):
    ...     print "running in new thread:", starting_ident != thread.get_ident()
    ...
    >>> tpool.execute(my_func, thread.get_ident())
    running in new thread: True

By default there are 20 threads in the pool, but you can configure this by
setting the environment variable ``EVENTLET_THREADPOOL_SIZE`` to the
desired pool size before importing tpool.

.. automodule:: eventlet.tpool
    :members:

eventlet-0.13.0/README.twisted0000644000175000017500000001711512164577340016641 0ustar temototemoto00000000000000--work in progress--

Introduction
------------

Twisted provides a solid foundation for asynchronous programming in Python.
Eventlet makes asynchronous programming look synchronous, thus achieving a
higher signal-to-noise ratio than traditional twisted programs have.
Eventlet on top of twisted provides:

* stable twisted
* usable and readable synchronous style
* existing twisted code can be used without any changes
* existing blocking code can be used after trivial changes are applied

NOTE: the maintainer of Eventlet's Twisted support no longer supports it;
it still exists but may have had some breakage along the way. Please treat
it as experimental, and if you'd like to maintain it, please do!

Eventlet features:

* utilities for spawning and controlling greenlet execution:
  api.spawn, api.kill, proc module
* utilities for communicating between greenlets:
  event.Event, queue.Queue, semaphore.Semaphore
* standard Python modules that won't block the reactor:
  eventlet.green package
* utilities specific to twisted hub:
  eventlet.twistedutil package

Getting started with eventlet on twisted
----------------------------------------

This section will only mention stuff that may be useful but it won't
explain in detail how to use it. For that, refer to the docstrings of the
modules and the examples.

There are 2 ways of using twisted with eventlet, one that is familiar to
twisted developers and another that is familiar to eventlet developers:

1. explicitly start the main loop in the main greenlet;
2. implicitly start the main loop in a dedicated greenlet.

To enable (1), add this line at the top of your program:

  from eventlet.twistedutil import join_reactor

then start the reactor as you would do in a regular twisted application.
For (2) just make sure that you have the reactor installed before using
any of eventlet's functions. Otherwise a non-twisted hub will be selected
and twisted code won't work.

Most of examples/twisted_* use twisted style, with the exception of
twisted_client.py and twisted_srvconnector.py. All of the non-twisted
examples in the examples directory use eventlet style (they work with any
of eventlet's hubs, not just the twisted-based one).
Eventlet implements "blocking" operations by switching to the main loop
greenlet, thus it's impossible to call a blocking function when you are
already in the main loop. Therefore one must be cautious in a twisted
callback, calling only the non-blocking subset of the eventlet API there.
The following functions won't unschedule the current greenlet and are safe
to call from anywhere:

1. Greenlet creation functions: api.spawn, proc.spawn,
   twistedutil.deferToGreenThread and others based on api.spawn.

2. send(), send_exception(), poll(), ready() methods of event.Event
   and queue.Queue.

3. wait(timeout=0) is identical to poll(). Currently only Proc.wait
   supports the timeout parameter.

4. Proc.link/link_value/link_exception

Other classes that use these names should follow the convention.

For an example of how to take advantage of eventlet in a twisted
application using deferToGreenThread, see examples/twisted_http_proxy.py.

Although eventlet provides the eventlet.green.socket module that implements
the interface of the standard Python socket, there's also a way to use
twisted's network code in a synchronous fashion via the GreenTransport
class. A GreenTransport interface is reminiscent of a socket but it's not a
drop-in replacement. It combines features of TCPTransport and Protocol in a
single object:

* all of the transport methods (like getPeer()) are available directly on a
  GreenTransport instance; in addition, the underlying transport object is
  available via the 'transport' attribute;
* the write method is overridden: it may block if the transport write
  buffer is full;
* read() and recv() methods are provided to retrieve the data from the
  protocol synchronously.

To make a GreenTransport instance, use
twistedutil.protocol.GreenClientCreator (usage is similar to that of
twisted.internet.protocol.ClientCreator).

For an example of how to get a connected GreenTransport instance, see
twisted_client.py, twisted_srvconnect.py or twisted_portforward.py.
For an example of how to use GreenTransport for incoming connections, see
twisted_server.py and twisted_portforward.py.

also

* twistedutil.block_on - wait for a deferred to fire
  block_on(reactor.callInThread(func, args))
* twistedutil.protocol.basic.LineOnlyReceiverTransport - a green transport
  variant built on top of the LineOnlyReceiver protocol. Demonstrates how
  to convert a protocol to a synchronous mode.

Coroutines
----------

To understand how eventlet works, one has to understand how to use
greenlet: http://codespeak.net/py/dist/greenlet.html

Essential points

* There always exists a MAIN greenlet
* Every greenlet except MAIN has a parent. MAIN therefore could be detected
  as g.parent is None
* When a greenlet is finished, its return value is propagated to the parent
  (i.e. the switch() call in the parent greenlet returns it)
* When an exception leaves a greenlet, it's propagated to the parent (i.e.
  switch() in the parent re-raises it) unless it's a subclass of
  GreenletExit, which is returned as a value.
* parent can be reassigned (by simply setting the 'parent' attribute). A
  cycle would be detected and rejected with ValueError

Note that there's no scheduler of any sort; if a coroutine wants to be
scheduled again it must take care of that itself. As an application
developer, however, you don't need to worry about it, as that's what
eventlet does behind the scenes. The cost of that is that you should not
use greenlet's switch() and throw() methods; they will likely leave the
current greenlet unscheduled forever. Eventlet also takes advantage of
greenlet's 'parent' attribute, so you should not meddle with it either.

How does eventlet work
----------------------

Twisted's reactor and eventlet's hub are very similar in what they do.
Both continuously perform polling on the list of registered descriptors
and each time a specific event is fired, the associated callback function
is called. In addition, both maintain a list of scheduled calls.
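The propagation rules in the essential points above can be demonstrated
with raw greenlet alone; a minimal sketch (the `child` function is made up
for illustration):

```python
import greenlet

def child(x):
    # When this function returns, the greenlet is finished and its
    # return value is propagated to the parent's switch() call.
    return x + 1

g = greenlet.greenlet(child)
assert g.parent is greenlet.getcurrent()  # MAIN is the parent here
result = g.switch(41)  # runs child(41) to completion
print(result)          # 42
```

If `child` raised an ordinary exception instead of returning, the
`g.switch(41)` call in the parent would re-raise it; a GreenletExit would
come back as a plain return value.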
Polling is performed by the main loop - a function that both reactor and
hub have. When twisted calls a user's callback it's expected to return
almost immediately, without any blocking I/O calls.

Eventlet runs the main loop in a dedicated greenlet (MAIN_LOOP). It is the
same greenlet as MAIN if you use join_reactor. Otherwise it's a separate
greenlet started implicitly. The execution is organized in such a way that
the switching always involves MAIN_LOOP.

All of the functions in eventlet that appear "blocking" use the following
algorithm:

1. register a callback that switches back to the current greenlet when an
   event of interest happens
2. switch to the MAIN_LOOP

For example, here's what eventlet's socket recv() does:

  = blocking operation RECV on socket d =

  user's greenlet (USER)           main loop's greenlet (MAIN_LOOP)
    |
    | (inside d.recv() call)
    |
    | add_descriptor(d, RECV)
    |
    | data=MAIN_LOOP.switch() ----> poll for events
    ^                                |
    |                               ...  # may execute other greenlets here
    |                                |
    |                               event RECV on descriptor d?
    |                                |
    |                               d.remove_descriptor(d, RECV)
    |                                |
    |                               data = d.recv()  # calling blocking op
    |                                |               # that will return
    |                                |               # immediately
    \------------------------------ USER.switch(data)
    |        # argument data here becomes return value in user's switch
    |
    return data

eventlet-0.13.0/tests/0000755000175000017500000000000012164600754015430 5ustar temototemoto00000000000000eventlet-0.13.0/tests/subprocess_test.py0000644000175000017500000000334112164577340021236 0ustar temototemoto00000000000000import eventlet
from eventlet.green import subprocess
import eventlet.patcher
from nose.plugins.skip import SkipTest
import os
import sys
import time

original_subprocess = eventlet.patcher.original('subprocess')


def test_subprocess_wait():
    # https://bitbucket.org/eventlet/eventlet/issue/89
    # In Python 3.3 subprocess.Popen.wait() method acquired `timeout`
    # argument.
    # RHEL backported it to their Python 2.6 package.
    p = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(0.5)"])
    ok = False
    t1 = time.time()
    try:
        p.wait(timeout=0.1)
    except subprocess.TimeoutExpired:
        ok = True
    tdiff = time.time() - t1
    assert ok == True, 'did not raise subprocess.TimeoutExpired'
    assert 0.1 <= tdiff <= 0.2, 'did not stop within allowed time'


def test_communicate_with_poll():
    # https://github.com/eventlet/eventlet/pull/24
    # `eventlet.green.subprocess.Popen.communicate()` was broken
    # in Python 2.7 because the usage of the `select` module was moved from
    # `_communicate` into two other methods `_communicate_with_select`
    # and `_communicate_with_poll`. Link to 2.7's implementation:
    # http://hg.python.org/cpython/file/2145593d108d/Lib/subprocess.py#l1255
    if getattr(original_subprocess.Popen, '_communicate_with_poll', None) is None:
        raise SkipTest('original subprocess.Popen does not have _communicate_with_poll')
    p = subprocess.Popen(
        [sys.executable, '-c', 'import time; time.sleep(0.5)'],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    t1 = time.time()
    eventlet.with_timeout(0.1, p.communicate, timeout_value=True)
    tdiff = time.time() - t1
    assert 0.1 <= tdiff <= 0.2, 'did not stop within allowed time'

eventlet-0.13.0/tests/wsgi_test.py0000644000175000017500000014031512164577340020022 0ustar temototemoto00000000000000import cgi
from eventlet import greenthread
import eventlet
import errno
import os
import socket
import sys

from tests import skipped, LimitedTestCase, skip_with_pyevent, skip_if_no_ssl
from unittest import main

from eventlet import greenio
from eventlet import event
from eventlet.green import socket as greensocket
from eventlet import wsgi
from eventlet.support import get_errno

from tests import find_command

httplib = eventlet.import_patched('httplib')

certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt')
private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key')

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO


def hello_world(env, start_response):
    if env['PATH_INFO'] == 'notexist':
        start_response('404 Not Found', [('Content-type', 'text/plain')])
        return ["not found"]

    start_response('200 OK', [('Content-type', 'text/plain')])
    return ["hello world"]


def chunked_app(env, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    yield "this"
    yield "is"
    yield "chunked"


def chunked_fail_app(environ, start_response):
    """http://rhodesmill.org/brandon/2013/chunked-wsgi/
    """
    headers = [('Content-Type', 'text/plain')]
    start_response('200 OK', headers)

    # We start streaming data just fine.
    yield "The dwarves of yore made mighty spells,"
    yield "While hammers fell like ringing bells"

    # Then the back-end fails!
    try:
        1 / 0
    except Exception:
        start_response('500 Error', headers, sys.exc_info())
        return

    # So rest of the response data is not available.
    yield "In places deep, where dark things sleep,"
    yield "In hollow halls beneath the fells."


def big_chunks(env, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    line = 'a' * 8192
    for x in range(10):
        yield line


def use_write(env, start_response):
    if env['PATH_INFO'] == '/a':
        write = start_response('200 OK', [('Content-type', 'text/plain'),
                                          ('Content-Length', '5')])
        write('abcde')
    if env['PATH_INFO'] == '/b':
        write = start_response('200 OK', [('Content-type', 'text/plain')])
        write('abcde')
    return []


def chunked_post(env, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    if env['PATH_INFO'] == '/a':
        return [env['wsgi.input'].read()]
    elif env['PATH_INFO'] == '/b':
        return [x for x in iter(lambda: env['wsgi.input'].read(4096), '')]
    elif env['PATH_INFO'] == '/c':
        return [x for x in iter(lambda: env['wsgi.input'].read(1), '')]


def already_handled(env, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    return wsgi.ALREADY_HANDLED


class Site(object):
    def __init__(self):
        self.application = hello_world

    def __call__(self, env, start_response):
        return self.application(env, start_response)


class IterableApp(object):

    def __init__(self, send_start_response=False, return_val=wsgi.ALREADY_HANDLED):
        self.send_start_response = send_start_response
        self.return_val = return_val
        self.env = {}

    def __call__(self, env, start_response):
        self.env = env
        if self.send_start_response:
            start_response('200 OK', [('Content-type', 'text/plain')])
        return self.return_val


class IterableSite(Site):
    def __call__(self, env, start_response):
        it = self.application(env, start_response)
        for i in it:
            yield i


CONTENT_LENGTH = 'content-length'


"""
HTTP/1.1 200 OK
Date: foo
Content-length: 11

hello world
"""


class ConnectionClosed(Exception):
    pass


def read_http(sock):
    fd = sock.makefile()
    try:
        response_line = fd.readline()
    except socket.error, exc:
        if get_errno(exc) == 10053:
            raise ConnectionClosed
        raise
    if not response_line:
        raise ConnectionClosed(response_line)

    header_lines = []
    while True:
        line = fd.readline()
        if line == '\r\n':
            break
        else:
            header_lines.append(line)
    headers = dict()
    for x in header_lines:
        x = x.strip()
        if not x:
            continue
        key, value = x.split(': ', 1)
        assert key.lower() not in headers, "%s header duplicated" % key
        headers[key.lower()] = value

    if CONTENT_LENGTH in headers:
        num = int(headers[CONTENT_LENGTH])
        body = fd.read(num)
    else:
        # read until EOF
        body = fd.read()

    return response_line, headers, body


class _TestBase(LimitedTestCase):
    def setUp(self):
        super(_TestBase, self).setUp()
        self.logfile = StringIO()
        self.site = Site()
        self.killer = None
        self.set_site()
        self.spawn_server()

    def tearDown(self):
        greenthread.kill(self.killer)
        eventlet.sleep(0)
        super(_TestBase, self).tearDown()

    def spawn_server(self, **kwargs):
        """Spawns a new wsgi server with the given arguments.
        Sets self.port to the port of the server, and self.killer is the
        greenlet running it.
        Kills any previously-running server."""
        eventlet.sleep(0)  # give previous server a chance to start
        if self.killer:
            greenthread.kill(self.killer)

        new_kwargs = dict(max_size=128,
                          log=self.logfile,
                          site=self.site)
        new_kwargs.update(kwargs)

        if 'sock' not in new_kwargs:
            new_kwargs['sock'] = eventlet.listen(('localhost', 0))

        self.port = new_kwargs['sock'].getsockname()[1]
        self.killer = eventlet.spawn_n(
            wsgi.server, **new_kwargs)

    def set_site(self):
        raise NotImplementedError


class TestHttpd(_TestBase):
    def set_site(self):
        self.site = Site()

    def test_001_server(self):
        sock = eventlet.connect(
            ('localhost', self.port))

        fd = sock.makefile('rw')
        fd.write('GET / HTTP/1.0\r\nHost: localhost\r\n\r\n')
        fd.flush()
        result = fd.read()
        fd.close()
        ## The server responds with the maximum version it supports
        self.assert_(result.startswith('HTTP'), result)
        self.assert_(result.endswith('hello world'))

    def test_002_keepalive(self):
        sock = eventlet.connect(
            ('localhost', self.port))

        fd = sock.makefile('w')
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
        fd.flush()
        read_http(sock)
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
        fd.flush()
        read_http(sock)
        fd.close()

    def test_003_passing_non_int_to_read(self):
        # This should go in greenio_test
        sock = eventlet.connect(
            ('localhost', self.port))

        fd = sock.makefile('rw')
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
        fd.flush()
        cancel = eventlet.Timeout(1, RuntimeError)
        self.assertRaises(TypeError, fd.read, "This shouldn't work")
        cancel.cancel()
        fd.close()

    def test_004_close_keepalive(self):
        sock = eventlet.connect(
            ('localhost', self.port))

        fd = sock.makefile('w')
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
        fd.flush()
        read_http(sock)
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n')
        fd.flush()
        read_http(sock)
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
        fd.flush()
        self.assertRaises(ConnectionClosed, read_http, sock)
        fd.close()

    @skipped
    def test_005_run_apachebench(self):
        url = 'http://localhost:12346/'
        # ab is apachebench
        from eventlet.green import subprocess
        subprocess.call([find_command('ab'),
                         '-c', '64', '-n', '1024', '-k', url],
                        stdout=subprocess.PIPE)

    def test_006_reject_long_urls(self):
        sock = eventlet.connect(
            ('localhost', self.port))
        path_parts = []
        for ii in range(3000):
            path_parts.append('path')
        path = '/'.join(path_parts)
        request = 'GET /%s HTTP/1.0\r\nHost: localhost\r\n\r\n' % path
        fd = sock.makefile('rw')
        fd.write(request)
        fd.flush()
        result = fd.readline()
        if result:
            # windows closes the socket before the data is flushed,
            # so we never get anything back
            status = result.split(' ')[1]
            self.assertEqual(status, '414')
        fd.close()

    def test_007_get_arg(self):
        # define a new handler that does a get_arg as well as a read_body
        def new_app(env, start_response):
            body = env['wsgi.input'].read()
            a = cgi.parse_qs(body).get('a', [1])[0]
            start_response('200 OK', [('Content-type', 'text/plain')])
            return ['a is %s, body is %s' % (a, body)]

        self.site.application = new_app
        sock = eventlet.connect(
            ('localhost', self.port))
        request = '\r\n'.join((
            'POST / HTTP/1.0',
            'Host: localhost',
            'Content-Length: 3',
            '',
            'a=a'))
        fd = sock.makefile('w')
        fd.write(request)
        fd.flush()

        # send some junk after the actual request
        fd.write('01234567890123456789')
        reqline, headers, body = read_http(sock)
        self.assertEqual(body, 'a is a, body is a=a')
        fd.close()

    def test_008_correctresponse(self):
        sock = eventlet.connect(
            ('localhost', self.port))

        fd = sock.makefile('w')
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
        fd.flush()
        response_line_200, _, _ = read_http(sock)
        fd.write('GET /notexist HTTP/1.1\r\nHost: localhost\r\n\r\n')
        fd.flush()
        response_line_404, _, _ = read_http(sock)
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
        fd.flush()
        response_line_test, _, _ = read_http(sock)
        self.assertEqual(response_line_200, response_line_test)
        fd.close()

    def test_009_chunked_response(self):
        self.site.application = chunked_app
        sock = eventlet.connect(
            ('localhost', self.port))

        fd = sock.makefile('rw')
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n')
        fd.flush()
        self.assert_('Transfer-Encoding: chunked' in fd.read())

    def test_010_no_chunked_http_1_0(self):
        self.site.application = chunked_app
        sock = eventlet.connect(
            ('localhost', self.port))

        fd = sock.makefile('rw')
        fd.write('GET / HTTP/1.0\r\nHost: localhost\r\nConnection: close\r\n\r\n')
        fd.flush()
        self.assert_('Transfer-Encoding: chunked' not in fd.read())

    def test_011_multiple_chunks(self):
        self.site.application = big_chunks
        sock = eventlet.connect(
            ('localhost', self.port))

        fd = sock.makefile('rw')
        fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n')
        fd.flush()
        headers = ''
        while True:
            line = fd.readline()
            if line == '\r\n':
                break
            else:
                headers += line
        self.assert_('Transfer-Encoding: chunked' in headers)
        chunks = 0
        chunklen = int(fd.readline(), 16)
        while chunklen:
            chunks += 1
            chunk = fd.read(chunklen)
            fd.readline()  # CRLF
            chunklen = int(fd.readline(), 16)
        self.assert_(chunks > 1)
        response = fd.read()
        # Require a CRLF to close the message body
        self.assertEqual(response, '\r\n')

    @skip_if_no_ssl
    def test_012_ssl_server(self):
        def wsgi_app(environ, start_response):
            start_response('200 OK', {})
            return [environ['wsgi.input'].read()]

        certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt')
        private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key')

        server_sock = eventlet.wrap_ssl(eventlet.listen(('localhost', 0)),
                                        certfile=certificate_file,
                                        keyfile=private_key_file,
                                        server_side=True)
        self.spawn_server(sock=server_sock, site=wsgi_app)

        sock = eventlet.connect(('localhost', self.port))
        sock = eventlet.wrap_ssl(sock)
        sock.write('POST /foo HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\nContent-length:3\r\n\r\nabc')
        result = sock.read(8192)
        self.assertEquals(result[-3:], 'abc')

    @skip_if_no_ssl
    def test_013_empty_return(self):
        def wsgi_app(environ, start_response):
            start_response("200 OK", [])
return [""] certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt') private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key') server_sock = eventlet.wrap_ssl(eventlet.listen(('localhost', 0)), certfile=certificate_file, keyfile=private_key_file, server_side=True) self.spawn_server(sock=server_sock, site=wsgi_app) sock = eventlet.connect(('localhost', server_sock.getsockname()[1])) sock = eventlet.wrap_ssl(sock) sock.write('GET /foo HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') result = sock.read(8192) self.assertEquals(result[-4:], '\r\n\r\n') def test_014_chunked_post(self): self.site.application = chunked_post sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('PUT /a HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' 'Transfer-Encoding: chunked\r\n\r\n' '2\r\noh\r\n4\r\n hai\r\n0\r\n\r\n') fd.flush() while True: if fd.readline() == '\r\n': break response = fd.read() self.assert_(response == 'oh hai', 'invalid response %s' % response) sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('PUT /b HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' 'Transfer-Encoding: chunked\r\n\r\n' '2\r\noh\r\n4\r\n hai\r\n0\r\n\r\n') fd.flush() while True: if fd.readline() == '\r\n': break response = fd.read() self.assert_(response == 'oh hai', 'invalid response %s' % response) sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('PUT /c HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' 'Transfer-Encoding: chunked\r\n\r\n' '2\r\noh\r\n4\r\n hai\r\n0\r\n\r\n') fd.flush() while True: if fd.readline() == '\r\n': break response = fd.read(8192) self.assert_(response == 'oh hai', 'invalid response %s' % response) def test_015_write(self): self.site.application = use_write sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('w') fd.write('GET /a HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') 
fd.flush() response_line, headers, body = read_http(sock) self.assert_('content-length' in headers) sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('w') fd.write('GET /b HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') fd.flush() response_line, headers, body = read_http(sock) self.assert_('transfer-encoding' in headers) self.assert_(headers['transfer-encoding'] == 'chunked') def test_016_repeated_content_length(self): """ content-length header was being doubled up if it was set in start_response and could also be inferred from the iterator """ def wsgi_app(environ, start_response): start_response('200 OK', [('Content-Length', '7')]) return ['testing'] self.site.application = wsgi_app sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET /a HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') fd.flush() header_lines = [] while True: line = fd.readline() if line == '\r\n': break else: header_lines.append(line) self.assertEquals(1, len([l for l in header_lines if l.lower().startswith('content-length')])) @skip_if_no_ssl def test_017_ssl_zeroreturnerror(self): def server(sock, site, log): try: serv = wsgi.Server(sock, sock.getsockname(), site, log) client_socket = sock.accept() serv.process_request(client_socket) return True except: import traceback traceback.print_exc() return False def wsgi_app(environ, start_response): start_response('200 OK', []) return [environ['wsgi.input'].read()] certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt') private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key') sock = eventlet.wrap_ssl(eventlet.listen(('localhost', 0)), certfile=certificate_file, keyfile=private_key_file, server_side=True) server_coro = eventlet.spawn(server, sock, wsgi_app, self.logfile) client = eventlet.connect(('localhost', sock.getsockname()[1])) client = eventlet.wrap_ssl(client) client.write('X') # non-empty payload so that SSL handshake occurs 
        greenio.shutdown_safe(client)
        client.close()
        success = server_coro.wait()
        self.assert_(success)

    def test_018_http_10_keepalive(self):
        # verify that if an http/1.0 client sends connection: keep-alive
        # that we don't close the connection
        sock = eventlet.connect(('localhost', self.port))
        fd = sock.makefile('w')
        fd.write('GET / HTTP/1.0\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n')
        fd.flush()
        response_line, headers, body = read_http(sock)
        self.assert_('connection' in headers)
        self.assertEqual('keep-alive', headers['connection'])
        # repeat request to verify connection is actually still open
        fd.write('GET / HTTP/1.0\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n')
        fd.flush()
        response_line, headers, body = read_http(sock)
        self.assert_('connection' in headers)
        self.assertEqual('keep-alive', headers['connection'])

    def test_019_fieldstorage_compat(self):
        def use_fieldstorage(environ, start_response):
            import cgi
            fs = cgi.FieldStorage(fp=environ['wsgi.input'],
                                  environ=environ)
            start_response('200 OK', [('Content-type', 'text/plain')])
            return ['hello!']

        self.site.application = use_fieldstorage
        sock = eventlet.connect(('localhost', self.port))
        fd = sock.makefile('rw')
        fd.write('POST / HTTP/1.1\r\n'
                 'Host: localhost\r\n'
                 'Connection: close\r\n'
                 'Transfer-Encoding: chunked\r\n\r\n'
                 '2\r\noh\r\n'
                 '4\r\n hai\r\n0\r\n\r\n')
        fd.flush()
        self.assert_('hello!'
in fd.read()) def test_020_x_forwarded_for(self): sock = eventlet.connect(('localhost', self.port)) sock.sendall('GET / HTTP/1.1\r\nHost: localhost\r\nX-Forwarded-For: 1.2.3.4, 5.6.7.8\r\n\r\n') sock.recv(1024) sock.close() self.assert_('1.2.3.4,5.6.7.8,127.0.0.1' in self.logfile.getvalue()) # turning off the option should work too self.logfile = StringIO() self.spawn_server(log_x_forwarded_for=False) sock = eventlet.connect(('localhost', self.port)) sock.sendall('GET / HTTP/1.1\r\nHost: localhost\r\nX-Forwarded-For: 1.2.3.4, 5.6.7.8\r\n\r\n') sock.recv(1024) sock.close() self.assert_('1.2.3.4' not in self.logfile.getvalue()) self.assert_('5.6.7.8' not in self.logfile.getvalue()) self.assert_('127.0.0.1' in self.logfile.getvalue()) def test_socket_remains_open(self): greenthread.kill(self.killer) server_sock = eventlet.listen(('localhost', 0)) server_sock_2 = server_sock.dup() self.spawn_server(sock=server_sock_2) # do a single req/response to verify it's up sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET / HTTP/1.0\r\nHost: localhost\r\n\r\n') fd.flush() result = fd.read(1024) fd.close() self.assert_(result.startswith('HTTP'), result) self.assert_(result.endswith('hello world')) # shut down the server and verify the server_socket fd is still open, # but the actual socketobject passed in to wsgi.server is closed greenthread.kill(self.killer) eventlet.sleep(0) # make the kill go through try: server_sock_2.accept() # shouldn't be able to use this one anymore except socket.error, exc: self.assertEqual(get_errno(exc), errno.EBADF) self.spawn_server(sock=server_sock) sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET / HTTP/1.0\r\nHost: localhost\r\n\r\n') fd.flush() result = fd.read(1024) fd.close() self.assert_(result.startswith('HTTP'), result) self.assert_(result.endswith('hello world')) def test_021_environ_clobbering(self): def clobberin_time(environ, start_response): for environ_var in 
['wsgi.version', 'wsgi.url_scheme', 'wsgi.input', 'wsgi.errors', 'wsgi.multithread', 'wsgi.multiprocess', 'wsgi.run_once', 'REQUEST_METHOD', 'SCRIPT_NAME', 'RAW_PATH_INFO', 'PATH_INFO', 'QUERY_STRING', 'CONTENT_TYPE', 'CONTENT_LENGTH', 'SERVER_NAME', 'SERVER_PORT', 'SERVER_PROTOCOL']: environ[environ_var] = None start_response('200 OK', [('Content-type', 'text/plain')]) return [] self.site.application = clobberin_time sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET / HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' '\r\n\r\n') fd.flush() self.assert_('200 OK' in fd.read()) def test_022_custom_pool(self): # just test that it accepts the parameter for now # TODO: test that it uses the pool and that you can waitall() to # ensure that all clients finished from eventlet import greenpool p = greenpool.GreenPool(5) self.spawn_server(custom_pool=p) # this stuff is copied from test_001_server, could be better factored sock = eventlet.connect( ('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET / HTTP/1.0\r\nHost: localhost\r\n\r\n') fd.flush() result = fd.read() fd.close() self.assert_(result.startswith('HTTP'), result) self.assert_(result.endswith('hello world')) def test_023_bad_content_length(self): sock = eventlet.connect( ('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET / HTTP/1.0\r\nHost: localhost\r\nContent-length: argh\r\n\r\n') fd.flush() result = fd.read() fd.close() self.assert_(result.startswith('HTTP'), result) self.assert_('400 Bad Request' in result) self.assert_('500' not in result) def test_024_expect_100_continue(self): def wsgi_app(environ, start_response): if int(environ['CONTENT_LENGTH']) > 1024: start_response('417 Expectation Failed', [('Content-Length', '7')]) return ['failure'] else: text = environ['wsgi.input'].read() start_response('200 OK', [('Content-Length', str(len(text)))]) return [text] self.site.application = wsgi_app sock = eventlet.connect(('localhost', 
self.port)) fd = sock.makefile('rw') fd.write('PUT / HTTP/1.1\r\nHost: localhost\r\nContent-length: 1025\r\nExpect: 100-continue\r\n\r\n') fd.flush() response_line, headers, body = read_http(sock) self.assert_(response_line.startswith('HTTP/1.1 417 Expectation Failed')) self.assertEquals(body, 'failure') fd.write('PUT / HTTP/1.1\r\nHost: localhost\r\nContent-length: 7\r\nExpect: 100-continue\r\n\r\ntesting') fd.flush() header_lines = [] while True: line = fd.readline() if line == '\r\n': break else: header_lines.append(line) self.assert_(header_lines[0].startswith('HTTP/1.1 100 Continue')) header_lines = [] while True: line = fd.readline() if line == '\r\n': break else: header_lines.append(line) self.assert_(header_lines[0].startswith('HTTP/1.1 200 OK')) self.assertEquals(fd.read(7), 'testing') fd.close() def test_025_accept_errors(self): from eventlet import debug debug.hub_exceptions(True) listener = greensocket.socket() listener.bind(('localhost', 0)) # NOT calling listen, to trigger the error self.logfile = StringIO() self.spawn_server(sock=listener) old_stderr = sys.stderr try: sys.stderr = self.logfile eventlet.sleep(0) # need to enter server loop try: eventlet.connect(('localhost', self.port)) self.fail("Didn't expect to connect") except socket.error, exc: self.assertEquals(get_errno(exc), errno.ECONNREFUSED) self.assert_('Invalid argument' in self.logfile.getvalue(), self.logfile.getvalue()) finally: sys.stderr = old_stderr debug.hub_exceptions(False) def test_026_log_format(self): self.spawn_server(log_format="HI %(request_line)s HI") sock = eventlet.connect(('localhost', self.port)) sock.sendall('GET /yo! HTTP/1.1\r\nHost: localhost\r\n\r\n') sock.recv(1024) sock.close() self.assert_('\nHI GET /yo! 
HTTP/1.1 HI\n' in self.logfile.getvalue(), self.logfile.getvalue()) def test_close_chunked_with_1_0_client(self): # verify that if we return a generator from our app # and we're not speaking with a 1.1 client, that we # close the connection self.site.application = chunked_app sock = eventlet.connect(('localhost', self.port)) sock.sendall('GET / HTTP/1.0\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n') response_line, headers, body = read_http(sock) self.assertEqual(headers['connection'], 'close') self.assertNotEqual(headers.get('transfer-encoding'), 'chunked') self.assertEquals(body, "thisischunked") def test_minimum_chunk_size_parameter_leaves_httpprotocol_class_member_intact(self): start_size = wsgi.HttpProtocol.minimum_chunk_size self.spawn_server(minimum_chunk_size=start_size * 2) sock = eventlet.connect(('localhost', self.port)) sock.sendall('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') read_http(sock) self.assertEqual(wsgi.HttpProtocol.minimum_chunk_size, start_size) def test_error_in_chunked_closes_connection(self): # From http://rhodesmill.org/brandon/2013/chunked-wsgi/ self.spawn_server(minimum_chunk_size=1) self.site.application = chunked_fail_app sock = eventlet.connect(('localhost', self.port)) sock.sendall('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') response_line, headers, body = read_http(sock) self.assertEqual(response_line, 'HTTP/1.1 200 OK\r\n') self.assertEqual(headers.get('transfer-encoding'), 'chunked') self.assertEqual(body, '27\r\nThe dwarves of yore made mighty spells,\r\n25\r\nWhile hammers fell like ringing bells\r\n') # verify that socket is closed by server self.assertEqual(sock.recv(1), '') def test_026_http_10_nokeepalive(self): # verify that if an http/1.0 client sends connection: keep-alive # and the server doesn't accept keep-alives, we close the connection self.spawn_server(keepalive=False) sock = eventlet.connect( ('localhost', self.port)) sock.sendall('GET / HTTP/1.0\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n') 
response_line, headers, body = read_http(sock) self.assertEqual(headers['connection'], 'close') def test_027_keepalive_chunked(self): self.site.application = chunked_post sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('w') fd.write('PUT /a HTTP/1.1\r\nHost: localhost\r\nTransfer-Encoding: chunked\r\n\r\n10\r\n0123456789abcdef\r\n0\r\n\r\n') fd.flush() read_http(sock) fd.write('PUT /b HTTP/1.1\r\nHost: localhost\r\nTransfer-Encoding: chunked\r\n\r\n10\r\n0123456789abcdef\r\n0\r\n\r\n') fd.flush() read_http(sock) fd.write('PUT /c HTTP/1.1\r\nHost: localhost\r\nTransfer-Encoding: chunked\r\n\r\n10\r\n0123456789abcdef\r\n0\r\n\r\n') fd.flush() read_http(sock) fd.write('PUT /a HTTP/1.1\r\nHost: localhost\r\nTransfer-Encoding: chunked\r\n\r\n10\r\n0123456789abcdef\r\n0\r\n\r\n') fd.flush() read_http(sock) @skip_if_no_ssl def test_028_ssl_handshake_errors(self): errored = [False] def server(sock): try: wsgi.server(sock=sock, site=hello_world, log=self.logfile) errored[0] = 'SSL handshake error caused wsgi.server to exit.' except greenthread.greenlet.GreenletExit: pass except Exception, e: errored[0] = 'SSL handshake error raised exception %s.' 
% e for data in ('', 'GET /non-ssl-request HTTP/1.0\r\n\r\n'): srv_sock = eventlet.wrap_ssl(eventlet.listen(('localhost', 0)), certfile=certificate_file, keyfile=private_key_file, server_side=True) port = srv_sock.getsockname()[1] g = eventlet.spawn_n(server, srv_sock) client = eventlet.connect(('localhost', port)) if data: # send non-ssl request client.sendall(data) else: # close sock prematurely client.close() eventlet.sleep(0) # let context switch back to server self.assert_(not errored[0], errored[0]) # make another request to ensure the server's still alive try: from eventlet.green import ssl client = ssl.wrap_socket(eventlet.connect(('localhost', port))) client.write('GET / HTTP/1.0\r\nHost: localhost\r\n\r\n') result = client.read() self.assert_(result.startswith('HTTP'), result) self.assert_(result.endswith('hello world')) except ImportError: pass # TODO: should test with OpenSSL greenthread.kill(g) def test_029_posthooks(self): posthook1_count = [0] posthook2_count = [0] def posthook1(env, value, multiplier=1): self.assertEquals(env['local.test'], 'test_029_posthooks') posthook1_count[0] += value * multiplier def posthook2(env, value, divisor=1): self.assertEquals(env['local.test'], 'test_029_posthooks') posthook2_count[0] += value / divisor def one_posthook_app(env, start_response): env['local.test'] = 'test_029_posthooks' if 'eventlet.posthooks' not in env: start_response('500 eventlet.posthooks not supported', [('Content-Type', 'text/plain')]) else: env['eventlet.posthooks'].append( (posthook1, (2,), {'multiplier': 3})) start_response('200 OK', [('Content-Type', 'text/plain')]) yield '' self.site.application = one_posthook_app sock = eventlet.connect(('localhost', self.port)) fp = sock.makefile('rw') fp.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') fp.flush() self.assertEquals(fp.readline(), 'HTTP/1.1 200 OK\r\n') fp.close() sock.close() self.assertEquals(posthook1_count[0], 6) self.assertEquals(posthook2_count[0], 0) def two_posthook_app(env, 
start_response): env['local.test'] = 'test_029_posthooks' if 'eventlet.posthooks' not in env: start_response('500 eventlet.posthooks not supported', [('Content-Type', 'text/plain')]) else: env['eventlet.posthooks'].append( (posthook1, (4,), {'multiplier': 5})) env['eventlet.posthooks'].append( (posthook2, (100,), {'divisor': 4})) start_response('200 OK', [('Content-Type', 'text/plain')]) yield '' self.site.application = two_posthook_app sock = eventlet.connect(('localhost', self.port)) fp = sock.makefile('rw') fp.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') fp.flush() self.assertEquals(fp.readline(), 'HTTP/1.1 200 OK\r\n') fp.close() sock.close() self.assertEquals(posthook1_count[0], 26) self.assertEquals(posthook2_count[0], 25) def test_030_reject_long_header_lines(self): sock = eventlet.connect(('localhost', self.port)) request = 'GET / HTTP/1.0\r\nHost: localhost\r\nLong: %s\r\n\r\n' % \ ('a' * 10000) fd = sock.makefile('rw') fd.write(request) fd.flush() response_line, headers, body = read_http(sock) self.assertEquals(response_line, 'HTTP/1.0 400 Header Line Too Long\r\n') fd.close() def test_031_reject_large_headers(self): sock = eventlet.connect(('localhost', self.port)) headers = 'Name: Value\r\n' * 5050 request = 'GET / HTTP/1.0\r\nHost: localhost\r\n%s\r\n\r\n' % headers fd = sock.makefile('rw') fd.write(request) fd.flush() response_line, headers, body = read_http(sock) self.assertEquals(response_line, 'HTTP/1.0 400 Headers Too Large\r\n') fd.close() def test_zero_length_chunked_response(self): def zero_chunked_app(env, start_response): start_response('200 OK', [('Content-type', 'text/plain')]) yield "" self.site.application = zero_chunked_app sock = eventlet.connect( ('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') fd.flush() response = fd.read().split('\r\n') headers = [] while True: h = response.pop(0) headers.append(h) if h == '': break self.assert_('Transfer-Encoding: 
chunked' in ''.join(headers)) # should only be one chunk of zero size with two blank lines # (one terminates the chunk, one terminates the body) self.assertEqual(response, ['0', '', '']) def test_configurable_url_length_limit(self): self.spawn_server(url_length_limit=20000) sock = eventlet.connect( ('localhost', self.port)) path = 'x' * 15000 request = 'GET /%s HTTP/1.0\r\nHost: localhost\r\n\r\n' % path fd = sock.makefile('rw') fd.write(request) fd.flush() result = fd.readline() if result: # windows closes the socket before the data is flushed, # so we never get anything back status = result.split(' ')[1] self.assertEqual(status, '200') fd.close() def test_aborted_chunked_post(self): read_content = event.Event() blew_up = [False] def chunk_reader(env, start_response): try: content = env['wsgi.input'].read(1024) except IOError: blew_up[0] = True content = 'ok' read_content.send(content) start_response('200 OK', [('Content-Type', 'text/plain')]) return [content] self.site.application = chunk_reader expected_body = 'a bunch of stuff' data = "\r\n".join(['PUT /somefile HTTP/1.0', 'Transfer-Encoding: chunked', '', 'def', expected_body]) # start PUT-ing some chunked data but close prematurely sock = eventlet.connect(('127.0.0.1', self.port)) sock.sendall(data) sock.close() # the test passes if we successfully get here, and read all the data # in spite of the early close self.assertEqual(read_content.wait(), 'ok') self.assert_(blew_up[0]) def test_exceptions_close_connection(self): def wsgi_app(environ, start_response): raise RuntimeError("intentional error") self.site.application = wsgi_app sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') fd.flush() response_line, headers, body = read_http(sock) self.assert_(response_line.startswith('HTTP/1.1 500 Internal Server Error')) self.assertEqual(headers['connection'], 'close') self.assert_('transfer-encoding' not in headers) def 
test_unicode_raises_error(self): def wsgi_app(environ, start_response): start_response("200 OK", []) yield u"oh hai" yield u"non-encodable unicode: \u0230" self.site.application = wsgi_app sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') fd.flush() response_line, headers, body = read_http(sock) self.assert_(response_line.startswith('HTTP/1.1 500 Internal Server Error')) self.assertEqual(headers['connection'], 'close') self.assert_('unicode' in body) def test_path_info_decoding(self): def wsgi_app(environ, start_response): start_response("200 OK", []) yield "decoded: %s" % environ['PATH_INFO'] yield "raw: %s" % environ['RAW_PATH_INFO'] self.site.application = wsgi_app sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET /a*b@%40%233 HTTP/1.1\r\nHost: localhost\r\nConnection: '\ 'close\r\n\r\n') fd.flush() response_line, headers, body = read_http(sock) self.assert_(response_line.startswith('HTTP/1.1 200')) self.assert_('decoded: /a*b@@#3' in body) self.assert_('raw: /a*b@%40%233' in body) def test_ipv6(self): try: sock = eventlet.listen(('::1', 0), family=socket.AF_INET6) except (socket.gaierror, socket.error): # probably no ipv6 return log = StringIO() # first thing the server does is try to log the IP it's bound to def run_server(): try: server = wsgi.server(sock=sock, log=log, site=Site()) except ValueError: log.write('broked') eventlet.spawn_n(run_server) logval = log.getvalue() while not logval: eventlet.sleep(0.0) logval = log.getvalue() if 'broked' in logval: self.fail('WSGI server raised exception with ipv6 socket') def test_debug(self): self.spawn_server(debug=False) def crasher(env, start_response): raise RuntimeError("intentional crash") self.site.application = crasher sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('w') fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') fd.flush() response_line, 
headers, body = read_http(sock) self.assert_(response_line.startswith('HTTP/1.1 500 Internal Server Error')) self.assertEqual(body, '') self.assertEqual(headers['connection'], 'close') self.assert_('transfer-encoding' not in headers) # verify traceback when debugging enabled self.spawn_server(debug=True) self.site.application = crasher sock = eventlet.connect(('localhost', self.port)) fd = sock.makefile('w') fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') fd.flush() response_line, headers, body = read_http(sock) self.assert_(response_line.startswith('HTTP/1.1 500 Internal Server Error')) self.assert_('intentional crash' in body) self.assert_('RuntimeError' in body) self.assert_('Traceback' in body) self.assertEqual(headers['connection'], 'close') self.assert_('transfer-encoding' not in headers) def test_client_disconnect(self): """Issue #95 Server must handle disconnect from client in the middle of response """ def long_response(environ, start_response): start_response('200 OK', [('Content-Length', '9876')]) yield 'a' * 9876 server_sock = eventlet.listen(('localhost', 0)) self.port = server_sock.getsockname()[1] server = wsgi.Server(server_sock, server_sock.getsockname(), long_response, log=self.logfile) def make_request(): sock = eventlet.connect(server_sock.getsockname()) sock.send('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') sock.close() request_thread = eventlet.spawn(make_request) server_conn = server_sock.accept() # Next line must not raise IOError -32 Broken pipe server.process_request(server_conn) request_thread.wait() server_sock.close() def read_headers(sock): fd = sock.makefile() try: response_line = fd.readline() except socket.error, exc: if get_errno(exc) == 10053: raise ConnectionClosed raise if not response_line: raise ConnectionClosed header_lines = [] while True: line = fd.readline() if line == '\r\n': break else: header_lines.append(line) headers = dict() for x in header_lines: x = x.strip() if not x: continue key, value = x.split(': ', 1) 
assert key.lower() not in headers, "%s header duplicated" % key headers[key.lower()] = value return response_line, headers class IterableAlreadyHandledTest(_TestBase): def set_site(self): self.site = IterableSite() def get_app(self): return IterableApp(True) def test_iterable_app_keeps_socket_open_unless_connection_close_sent(self): self.site.application = self.get_app() sock = eventlet.connect( ('localhost', self.port)) fd = sock.makefile('rw') fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') fd.flush() response_line, headers = read_headers(sock) self.assertEqual(response_line, 'HTTP/1.1 200 OK\r\n') self.assert_('connection' not in headers) fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') fd.flush() response_line, headers, body = read_http(sock) self.assertEqual(response_line, 'HTTP/1.1 200 OK\r\n') self.assertEqual(headers.get('transfer-encoding'), 'chunked') self.assertEqual(body, '0\r\n\r\n') # Still coming back chunked class ProxiedIterableAlreadyHandledTest(IterableAlreadyHandledTest): # same thing as the previous test but ensuring that it works with tpooled # results as well as regular ones @skip_with_pyevent def get_app(self): from eventlet import tpool return tpool.Proxy(super(ProxiedIterableAlreadyHandledTest, self).get_app()) def tearDown(self): from eventlet import tpool tpool.killall() super(ProxiedIterableAlreadyHandledTest, self).tearDown() class TestChunkedInput(_TestBase): dirt = "" validator = None def application(self, env, start_response): input = env['wsgi.input'] response = [] pi = env["PATH_INFO"] if pi=="/short-read": d=input.read(10) response = [d] elif pi=="/lines": for x in input: response.append(x) elif pi=="/ping": input.read() response.append("pong") else: raise RuntimeError("bad path") start_response('200 OK', [('Content-Type', 'text/plain')]) return response def connect(self): return eventlet.connect(('localhost', self.port)) def set_site(self): self.site = Site() self.site.application = 
        self.application

    def chunk_encode(self, chunks, dirt=None):
        if dirt is None:
            dirt = self.dirt
        b = ""
        for c in chunks:
            b += "%x%s\r\n%s\r\n" % (len(c), dirt, c)
        return b

    def body(self, dirt=None):
        return self.chunk_encode(["this", " is ", "chunked", "\nline",
                                  " 2", "\n", "line3", ""], dirt=dirt)

    def ping(self, fd):
        fd.sendall("GET /ping HTTP/1.1\r\n\r\n")
        self.assertEquals(read_http(fd)[-1], "pong")

    def test_short_read_with_content_length(self):
        body = self.body()
        req = "POST /short-read HTTP/1.1\r\ntransfer-encoding: Chunked\r\nContent-Length:1000\r\n\r\n" + body
        fd = self.connect()
        fd.sendall(req)
        self.assertEquals(read_http(fd)[-1], "this is ch")
        self.ping(fd)

    def test_short_read_with_zero_content_length(self):
        body = self.body()
        req = "POST /short-read HTTP/1.1\r\ntransfer-encoding: Chunked\r\nContent-Length:0\r\n\r\n" + body
        fd = self.connect()
        fd.sendall(req)
        self.assertEquals(read_http(fd)[-1], "this is ch")
        self.ping(fd)

    def test_short_read(self):
        body = self.body()
        req = "POST /short-read HTTP/1.1\r\ntransfer-encoding: Chunked\r\n\r\n" + body
        fd = self.connect()
        fd.sendall(req)
        self.assertEquals(read_http(fd)[-1], "this is ch")
        self.ping(fd)

    def test_dirt(self):
        body = self.body(dirt="; here is dirt\0bla")
        req = "POST /ping HTTP/1.1\r\ntransfer-encoding: Chunked\r\n\r\n" + body
        fd = self.connect()
        fd.sendall(req)
        self.assertEquals(read_http(fd)[-1], "pong")
        self.ping(fd)

    def test_chunked_readline(self):
        body = self.body()
        req = "POST /lines HTTP/1.1\r\nContent-Length: %s\r\ntransfer-encoding: Chunked\r\n\r\n%s" % (len(body), body)
        fd = self.connect()
        fd.sendall(req)
        self.assertEquals(read_http(fd)[-1], 'this is chunked\nline 2\nline3')

    def test_close_before_finished(self):
        import signal
        got_signal = []

        def handler(*args):
            got_signal.append(1)
            raise KeyboardInterrupt()

        signal.signal(signal.SIGALRM, handler)
        signal.alarm(1)
        try:
            body = '4\r\nthi'
            req = "POST /short-read HTTP/1.1\r\ntransfer-encoding: Chunked\r\n\r\n" + body
            fd = self.connect()
            fd.sendall(req)
            fd.close()
            eventlet.sleep(0.0)
        finally:
            signal.alarm(0)
            signal.signal(signal.SIGALRM, signal.SIG_DFL)
        assert not got_signal, "caught alarm signal. infinite loop detected."

if __name__ == '__main__':
    main()

eventlet-0.13.0/tests/backdoor_test.py

import eventlet
from eventlet import backdoor
from eventlet.green import socket

from tests import LimitedTestCase, main

class BackdoorTest(LimitedTestCase):
    def test_server(self):
        listener = socket.socket()
        listener.bind(('localhost', 0))
        listener.listen(50)
        serv = eventlet.spawn(backdoor.backdoor_server, listener)
        client = socket.socket()
        client.connect(('localhost', listener.getsockname()[1]))
        f = client.makefile('rw')
        self.assert_('Python' in f.readline())
        f.readline()  # build info
        f.readline()  # help info
        self.assert_('InteractiveConsole' in f.readline())
        self.assertEquals('>>> ', f.read(4))
        f.write('print("hi")\n')
        f.flush()
        self.assertEquals('hi\n', f.readline())
        self.assertEquals('>>> ', f.read(4))
        f.close()
        client.close()
        serv.kill()
        # wait for the console to discover that it's dead
        eventlet.sleep(0.1)

if __name__ == '__main__':
    main()

eventlet-0.13.0/tests/debug_test.py

import sys

import eventlet
from eventlet import debug
from tests import LimitedTestCase, main, s2b
from unittest import TestCase

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

class TestSpew(TestCase):
    def setUp(self):
        self.orig_trace = sys.settrace
        sys.settrace = self._settrace
        self.tracer = None

    def tearDown(self):
        sys.settrace = self.orig_trace
        sys.stdout = sys.__stdout__

    def _settrace(self, cb):
        self.tracer = cb

    def test_spew(self):
        debug.spew()
        self.failUnless(isinstance(self.tracer, debug.Spew))

    def test_unspew(self):
        debug.spew()
        debug.unspew()
        self.failUnlessEqual(self.tracer, None)

    def test_line(self):
        sys.stdout = StringIO()
        s = debug.Spew()
f = sys._getframe() s(f, "line", None) lineno = f.f_lineno - 1 # -1 here since we called with frame f in the line above output = sys.stdout.getvalue() self.failUnless("%s:%i" % (__name__, lineno) in output, "Didn't find line %i in %s" % (lineno, output)) self.failUnless("f== previtem) # make sure the tick happened at least a few times so that we know # that our iterations in foo() were actually tpooled self.assert_(counter[0] > 10, counter[0]) gt.kill() @skip_with_pyevent def test_raising_exceptions(self): prox = tpool.Proxy(re) def nofunc(): prox.never_name_a_function_like_this() self.assertRaises(AttributeError, nofunc) from tests import tpool_test prox = tpool.Proxy(tpool_test) self.assertRaises(RuntimeError, prox.raise_exception) @skip_with_pyevent def test_variable_and_keyword_arguments_with_function_calls(self): import optparse parser = tpool.Proxy(optparse.OptionParser()) z = parser.add_option('-n', action='store', type='string', dest='n') opts,args = parser.parse_args(["-nfoo"]) self.assertEqual(opts.n, 'foo') @skip_with_pyevent def test_contention(self): from tests import tpool_test prox = tpool.Proxy(tpool_test) pile = eventlet.GreenPile(4) pile.spawn(lambda: self.assertEquals(prox.one, 1)) pile.spawn(lambda: self.assertEquals(prox.two, 2)) pile.spawn(lambda: self.assertEquals(prox.three, 3)) results = list(pile) self.assertEquals(len(results), 3) @skip_with_pyevent def test_timeout(self): import time eventlet.Timeout(0.1, eventlet.TimeoutError()) self.assertRaises(eventlet.TimeoutError, tpool.execute, time.sleep, 0.3) @skip_with_pyevent def test_killall(self): tpool.killall() tpool.setup() @skip_with_pyevent def test_autowrap(self): x = tpool.Proxy({'a':1, 'b':2}, autowrap=(int,)) self.assert_(isinstance(x.get('a'), tpool.Proxy)) self.assert_(not isinstance(x.items(), tpool.Proxy)) # attributes as well as callables from tests import tpool_test x = tpool.Proxy(tpool_test, autowrap=(int,)) self.assert_(isinstance(x.one, tpool.Proxy)) self.assert_(not 
isinstance(x.none, tpool.Proxy)) @skip_with_pyevent def test_autowrap_names(self): x = tpool.Proxy({'a':1, 'b':2}, autowrap_names=('get',)) self.assert_(isinstance(x.get('a'), tpool.Proxy)) self.assert_(not isinstance(x.items(), tpool.Proxy)) from tests import tpool_test x = tpool.Proxy(tpool_test, autowrap_names=('one',)) self.assert_(isinstance(x.one, tpool.Proxy)) self.assert_(not isinstance(x.two, tpool.Proxy)) @skip_with_pyevent def test_autowrap_both(self): from tests import tpool_test x = tpool.Proxy(tpool_test, autowrap=(int,), autowrap_names=('one',)) self.assert_(isinstance(x.one, tpool.Proxy)) # violating the abstraction to check that we didn't double-wrap self.assert_(not isinstance(x._obj, tpool.Proxy)) @skip_with_pyevent def test_callable(self): def wrapped(arg): return arg x = tpool.Proxy(wrapped) self.assertEquals(4, x(4)) # verify that it wraps return values if specified x = tpool.Proxy(wrapped, autowrap_names=('__call__',)) self.assert_(isinstance(x(4), tpool.Proxy)) self.assertEquals("4", str(x(4))) @skip_with_pyevent def test_callable_iterator(self): def wrapped(arg): yield arg yield arg yield arg x = tpool.Proxy(wrapped, autowrap_names=('__call__',)) for r in x(3): self.assertEquals(3, r) @skip_with_pyevent def test_eventlet_timeout(self): def raise_timeout(): raise eventlet.Timeout() self.assertRaises(eventlet.Timeout, tpool.execute, raise_timeout) @skip_with_pyevent def test_tpool_set_num_threads(self): tpool.set_num_threads(5) self.assertEquals(5, tpool._nthreads) class TpoolLongTests(LimitedTestCase): TEST_TIMEOUT=60 @skip_with_pyevent def test_a_buncha_stuff(self): assert_ = self.assert_ class Dummy(object): def foo(self,when,token=None): assert_(token is not None) time.sleep(random.random()/200.0) return token def sender_loop(loopnum): obj = tpool.Proxy(Dummy()) count = 100 for n in xrange(count): eventlet.sleep(random.random()/200.0) now = time.time() token = loopnum * count + n rv = obj.foo(now,token=token) self.assertEquals(token, rv) 
eventlet.sleep(random.random()/200.0) cnt = 10 pile = eventlet.GreenPile(cnt) for i in xrange(cnt): pile.spawn(sender_loop,i) results = list(pile) self.assertEquals(len(results), cnt) tpool.killall() @skipped def test_benchmark(self): """ Benchmark computing the amount of overhead tpool adds to function calls.""" iterations = 10000 import timeit imports = """ from tests.tpool_test import noop from eventlet.tpool import execute """ t = timeit.Timer("noop()", imports) results = t.repeat(repeat=3, number=iterations) best_normal = min(results) t = timeit.Timer("execute(noop)", imports) results = t.repeat(repeat=3, number=iterations) best_tpool = min(results) tpool_overhead = (best_tpool-best_normal)/iterations print "%s iterations\nTpool overhead is %s seconds per call. Normal: %s; Tpool: %s" % ( iterations, tpool_overhead, best_normal, best_tpool) tpool.killall() @skip_with_pyevent def test_leakage_from_tracebacks(self): tpool.execute(noop) # get it started gc.collect() initial_objs = len(gc.get_objects()) for i in xrange(10): self.assertRaises(RuntimeError, tpool.execute, raise_exception) gc.collect() middle_objs = len(gc.get_objects()) # some objects will inevitably be created by the previous loop # now we test to ensure that running the loop an order of # magnitude more doesn't generate additional objects for i in xrange(100): self.assertRaises(RuntimeError, tpool.execute, raise_exception) first_created = middle_objs - initial_objs gc.collect() second_created = len(gc.get_objects()) - middle_objs self.assert_(second_created - first_created < 10, "first loop: %s, second loop: %s" % (first_created, second_created)) tpool.killall() if __name__ == '__main__': main() eventlet-0.13.0/tests/nosewrapper.py0000644000175000017500000000134312164577340020354 0ustar temototemoto00000000000000""" This script simply gets the paths correct for testing eventlet with the hub extension for Nose.""" import nose from os.path import dirname, realpath, abspath import sys parent_dir = 
dirname(dirname(realpath(abspath(__file__))))
if parent_dir not in sys.path:
    sys.path.insert(0, parent_dir)

# hacky hacks: skip test__api_timeout when under 2.4 because otherwise it SyntaxErrors
if sys.version_info < (2, 5):
    argv = sys.argv + ["--exclude=.*_with_statement.*"]
else:
    argv = sys.argv

# hudson does a better job printing the test results if the exit value is 0
zero_status = '--force-zero-status'
if zero_status in argv:
    argv.remove(zero_status)
    launch = nose.run
else:
    launch = nose.main

launch(argv=argv)

eventlet-0.13.0/tests/semaphore_test.py

import time
import unittest

import eventlet
from eventlet import semaphore
from tests import LimitedTestCase


class TestSemaphore(LimitedTestCase):
    def test_bounded(self):
        sem = semaphore.CappedSemaphore(2, limit=3)
        self.assertEqual(sem.acquire(), True)
        self.assertEqual(sem.acquire(), True)
        gt1 = eventlet.spawn(sem.release)
        self.assertEqual(sem.acquire(), True)
        self.assertEqual(-3, sem.balance)
        sem.release()
        sem.release()
        sem.release()
        gt2 = eventlet.spawn(sem.acquire)
        sem.release()
        self.assertEqual(3, sem.balance)
        gt1.wait()
        gt2.wait()

    def test_bounded_with_zero_limit(self):
        sem = semaphore.CappedSemaphore(0, 0)
        gt = eventlet.spawn(sem.acquire)
        sem.release()
        gt.wait()

    def test_non_blocking(self):
        sem = semaphore.Semaphore(0)
        self.assertEqual(sem.acquire(blocking=False), False)

    def test_timeout(self):
        sem = semaphore.Semaphore(0)
        start = time.time()
        self.assertEqual(sem.acquire(timeout=0.1), False)
        self.assertTrue(time.time() - start >= 0.1)

    def test_timeout_non_blocking(self):
        sem = semaphore.Semaphore()
        self.assertRaises(ValueError, sem.acquire, blocking=False, timeout=1)

if __name__ == '__main__':
    unittest.main()

eventlet-0.13.0/tests/api_test.py

import os
import os.path
import socket
from unittest import TestCase, main
import warnings

import eventlet
warnings.simplefilter('ignore', DeprecationWarning) from eventlet import api warnings.simplefilter('default', DeprecationWarning) from eventlet import greenio, util, hubs, greenthread, spawn from tests import skip_if_no_ssl def check_hub(): # Clear through the descriptor queue api.sleep(0) api.sleep(0) hub = hubs.get_hub() for nm in 'get_readers', 'get_writers': dct = getattr(hub, nm)() assert not dct, "hub.%s not empty: %s" % (nm, dct) # Stop the runloop (unless it's twistedhub which does not support that) if not getattr(hub, 'uses_twisted_reactor', None): hub.abort(True) assert not hub.running class TestApi(TestCase): certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt') private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key') def test_tcp_listener(self): socket = eventlet.listen(('0.0.0.0', 0)) assert socket.getsockname()[0] == '0.0.0.0' socket.close() check_hub() def test_connect_tcp(self): def accept_once(listenfd): try: conn, addr = listenfd.accept() fd = conn.makefile(mode='w') conn.close() fd.write('hello\n') fd.close() finally: listenfd.close() server = eventlet.listen(('0.0.0.0', 0)) api.spawn(accept_once, server) client = eventlet.connect(('127.0.0.1', server.getsockname()[1])) fd = client.makefile() client.close() assert fd.readline() == 'hello\n' assert fd.read() == '' fd.close() check_hub() @skip_if_no_ssl def test_connect_ssl(self): def accept_once(listenfd): try: conn, addr = listenfd.accept() conn.write('hello\r\n') greenio.shutdown_safe(conn) conn.close() finally: greenio.shutdown_safe(listenfd) listenfd.close() server = api.ssl_listener(('0.0.0.0', 0), self.certificate_file, self.private_key_file) api.spawn(accept_once, server) raw_client = eventlet.connect(('127.0.0.1', server.getsockname()[1])) client = util.wrap_ssl(raw_client) fd = socket._fileobject(client, 'rb', 8192) assert fd.readline() == 'hello\r\n' try: self.assertEquals('', fd.read(10)) except greenio.SSL.ZeroReturnError: # if it's a 
GreenSSL object it'll do this pass greenio.shutdown_safe(client) client.close() check_hub() def test_001_trampoline_timeout(self): from eventlet import coros server_sock = eventlet.listen(('127.0.0.1', 0)) bound_port = server_sock.getsockname()[1] def server(sock): client, addr = sock.accept() api.sleep(0.1) server_evt = spawn(server, server_sock) api.sleep(0) try: desc = eventlet.connect(('127.0.0.1', bound_port)) api.trampoline(desc, read=True, write=False, timeout=0.001) except api.TimeoutError: pass # test passed else: assert False, "Didn't timeout" server_evt.wait() check_hub() def test_timeout_cancel(self): server = eventlet.listen(('0.0.0.0', 0)) bound_port = server.getsockname()[1] done = [False] def client_closer(sock): while True: (conn, addr) = sock.accept() conn.close() def go(): desc = eventlet.connect(('127.0.0.1', bound_port)) try: api.trampoline(desc, read=True, timeout=0.1) except api.TimeoutError: assert False, "Timed out" server.close() desc.close() done[0] = True greenthread.spawn_after_local(0, go) server_coro = api.spawn(client_closer, server) while not done[0]: api.sleep(0) api.kill(server_coro) check_hub() def test_named(self): named_foo = api.named('tests.api_test.Foo') self.assertEquals( named_foo.__name__, "Foo") def test_naming_missing_class(self): self.assertRaises( ImportError, api.named, 'this_name_should_hopefully_not_exist.Foo') def test_killing_dormant(self): DELAY = 0.1 state = [] def test(): try: state.append('start') api.sleep(DELAY) except: state.append('except') # catching GreenletExit pass # when switching to hub, hub makes itself the parent of this greenlet, # thus after the function's done, the control will go to the parent api.sleep(0) state.append('finished') g = api.spawn(test) api.sleep(DELAY/2) self.assertEquals(state, ['start']) api.kill(g) # will not get there, unless switching is explicitly scheduled by kill self.assertEquals(state,['start', 'except']) api.sleep(DELAY) self.assertEquals(state, ['start', 'except', 
'finished']) def test_nested_with_timeout(self): def func(): return api.with_timeout(0.2, api.sleep, 2, timeout_value=1) self.assertRaises(api.TimeoutError, api.with_timeout, 0.1, func) class Foo(object): pass if __name__ == '__main__': main() eventlet-0.13.0/tests/processes_test.py0000644000175000017500000000550312164577340021056 0ustar temototemoto00000000000000import sys import warnings from tests import LimitedTestCase, main, skip_on_windows warnings.simplefilter('ignore', DeprecationWarning) from eventlet import processes, api warnings.simplefilter('default', DeprecationWarning) class TestEchoPool(LimitedTestCase): def setUp(self): super(TestEchoPool, self).setUp() self.pool = processes.ProcessPool('echo', ["hello"]) @skip_on_windows def test_echo(self): result = None proc = self.pool.get() try: result = proc.read() finally: self.pool.put(proc) self.assertEquals(result, 'hello\n') @skip_on_windows def test_read_eof(self): proc = self.pool.get() try: proc.read() self.assertRaises(processes.DeadProcess, proc.read) finally: self.pool.put(proc) @skip_on_windows def test_empty_echo(self): p = processes.Process('echo', ['-n']) self.assertEquals('', p.read()) self.assertRaises(processes.DeadProcess, p.read) class TestCatPool(LimitedTestCase): def setUp(self): super(TestCatPool, self).setUp() api.sleep(0) self.pool = processes.ProcessPool('cat') @skip_on_windows def test_cat(self): result = None proc = self.pool.get() try: proc.write('goodbye') proc.close_stdin() result = proc.read() finally: self.pool.put(proc) self.assertEquals(result, 'goodbye') @skip_on_windows def test_write_to_dead(self): result = None proc = self.pool.get() try: proc.write('goodbye') proc.close_stdin() result = proc.read() self.assertRaises(processes.DeadProcess, proc.write, 'foo') finally: self.pool.put(proc) @skip_on_windows def test_close(self): result = None proc = self.pool.get() try: proc.write('hello') proc.close() self.assertRaises(processes.DeadProcess, proc.write, 'goodbye') finally: 
self.pool.put(proc) class TestDyingProcessesLeavePool(LimitedTestCase): def setUp(self): super(TestDyingProcessesLeavePool, self).setUp() self.pool = processes.ProcessPool('echo', ['hello'], max_size=1) @skip_on_windows def test_dead_process_not_inserted_into_pool(self): proc = self.pool.get() try: try: result = proc.read() self.assertEquals(result, 'hello\n') result = proc.read() except processes.DeadProcess: pass finally: self.pool.put(proc) proc2 = self.pool.get() self.assert_(proc is not proc2) if __name__ == '__main__': main() eventlet-0.13.0/tests/test__twistedutil.py0000644000175000017500000000235112164577340021566 0ustar temototemoto00000000000000from tests import requires_twisted import unittest try: from twisted.internet import reactor from twisted.internet.error import DNSLookupError from twisted.internet import defer from twisted.python.failure import Failure from eventlet.twistedutil import block_on except ImportError: pass class Test(unittest.TestCase): @requires_twisted def test_block_on_success(self): from twisted.internet import reactor d = reactor.resolver.getHostByName('www.google.com') ip = block_on(d) assert len(ip.split('.'))==4, ip ip2 = block_on(d) assert ip == ip2, (ip, ip2) @requires_twisted def test_block_on_fail(self): from twisted.internet import reactor d = reactor.resolver.getHostByName('xxx') self.assertRaises(DNSLookupError, block_on, d) @requires_twisted def test_block_on_already_succeed(self): d = defer.succeed('hey corotwine') res = block_on(d) assert res == 'hey corotwine', repr(res) @requires_twisted def test_block_on_already_failed(self): d = defer.fail(Failure(ZeroDivisionError())) self.assertRaises(ZeroDivisionError, block_on, d) if __name__=='__main__': unittest.main() eventlet-0.13.0/tests/db_pool_test.py0000644000175000017500000005213312164577340020467 0ustar temototemoto00000000000000"Test cases for db_pool" import sys import os import traceback from unittest import TestCase, main from tests import skipped, skip_unless, 
skip_with_pyevent, get_database_auth from eventlet import event from eventlet import db_pool import eventlet class DBTester(object): __test__ = False # so that nose doesn't try to execute this directly def setUp(self): self.create_db() self.connection = None connection = self._dbmodule.connect(**self._auth) cursor = connection.cursor() cursor.execute("""CREATE TABLE gargleblatz ( a INTEGER );""") connection.commit() cursor.close() def tearDown(self): if self.connection: self.connection.close() self.drop_db() def set_up_dummy_table(self, connection=None): close_connection = False if connection is None: close_connection = True if self.connection is None: connection = self._dbmodule.connect(**self._auth) else: connection = self.connection cursor = connection.cursor() cursor.execute(self.dummy_table_sql) connection.commit() cursor.close() if close_connection: connection.close() # silly mock class class Mock(object): pass class DBConnectionPool(DBTester): __test__ = False # so that nose doesn't try to execute this directly def setUp(self): super(DBConnectionPool, self).setUp() self.pool = self.create_pool() self.connection = self.pool.get() def tearDown(self): if self.connection: self.pool.put(self.connection) self.pool.clear() super(DBConnectionPool, self).tearDown() def assert_cursor_works(self, cursor): cursor.execute("select 1") rows = cursor.fetchall() self.assert_(rows) def test_connecting(self): self.assert_(self.connection is not None) def test_create_cursor(self): cursor = self.connection.cursor() cursor.close() def test_run_query(self): cursor = self.connection.cursor() self.assert_cursor_works(cursor) cursor.close() def test_run_bad_query(self): cursor = self.connection.cursor() try: cursor.execute("garbage blah blah") self.assert_(False) except AssertionError: raise except Exception: pass cursor.close() def test_put_none(self): # the pool is of size 1, and its only connection is out self.assert_(self.pool.free() == 0) self.pool.put(None) # ha ha we fooled it 
into thinking that we had a dead process self.assert_(self.pool.free() == 1) conn2 = self.pool.get() self.assert_(conn2 is not None) self.assert_(conn2.cursor) self.pool.put(conn2) def test_close_does_a_put(self): self.assert_(self.pool.free() == 0) self.connection.close() self.assert_(self.pool.free() == 1) self.assertRaises(AttributeError, self.connection.cursor) @skipped def test_deletion_does_a_put(self): # doing a put on del causes some issues if __del__ is called in the # main coroutine, so, not doing that for now self.assert_(self.pool.free() == 0) self.connection = None self.assert_(self.pool.free() == 1) def test_put_doesnt_double_wrap(self): self.pool.put(self.connection) conn = self.pool.get() self.assert_(not isinstance(conn._base, db_pool.PooledConnectionWrapper)) self.pool.put(conn) def test_bool(self): self.assert_(self.connection) self.connection.close() self.assert_(not self.connection) def fill_up_table(self, conn): curs = conn.cursor() for i in range(1000): curs.execute('insert into test_table (value_int) values (%s)' % i) conn.commit() def test_returns_immediately(self): self.pool = self.create_pool() conn = self.pool.get() self.set_up_dummy_table(conn) self.fill_up_table(conn) curs = conn.cursor() results = [] SHORT_QUERY = "select * from test_table" evt = event.Event() def a_query(): self.assert_cursor_works(curs) curs.execute(SHORT_QUERY) results.append(2) evt.send() eventlet.spawn(a_query) results.append(1) self.assertEqual([1], results) evt.wait() self.assertEqual([1, 2], results) self.pool.put(conn) def test_connection_is_clean_after_put(self): self.pool = self.create_pool() conn = self.pool.get() self.set_up_dummy_table(conn) curs = conn.cursor() for i in range(10): curs.execute('insert into test_table (value_int) values (%s)' % i) # do not commit :-) self.pool.put(conn) del conn conn2 = self.pool.get() curs2 = conn2.cursor() for i in range(10): curs2.execute('insert into test_table (value_int) values (%s)' % i) conn2.commit() 
curs2.execute("select * from test_table") # we should have only inserted them once self.assertEqual(10, curs2.rowcount) self.pool.put(conn2) def test_visibility_from_other_connections(self): self.pool = self.create_pool(3) conn = self.pool.get() conn2 = self.pool.get() curs = conn.cursor() try: curs2 = conn2.cursor() curs2.execute("insert into gargleblatz (a) values (%s)" % (314159)) self.assertEqual(curs2.rowcount, 1) conn2.commit() selection_query = "select * from gargleblatz" curs2.execute(selection_query) self.assertEqual(curs2.rowcount, 1) del curs2 self.pool.put(conn2) # create a new connection, it should see the addition conn3 = self.pool.get() curs3 = conn3.cursor() curs3.execute(selection_query) self.assertEqual(curs3.rowcount, 1) # now, does the already-open connection see it? curs.execute(selection_query) self.assertEqual(curs.rowcount, 1) self.pool.put(conn3) finally: # clean up my litter curs.execute("delete from gargleblatz where a=314159") conn.commit() self.pool.put(conn) @skipped def test_two_simultaneous_connections(self): # timing-sensitive test, disabled until we come up with a better # way to do this self.pool = self.create_pool(2) conn = self.pool.get() self.set_up_dummy_table(conn) self.fill_up_table(conn) curs = conn.cursor() conn2 = self.pool.get() self.set_up_dummy_table(conn2) self.fill_up_table(conn2) curs2 = conn2.cursor() results = [] LONG_QUERY = "select * from test_table" SHORT_QUERY = "select * from test_table where row_id <= 20" evt = event.Event() def long_running_query(): self.assert_cursor_works(curs) curs.execute(LONG_QUERY) results.append(1) evt.send() evt2 = event.Event() def short_running_query(): self.assert_cursor_works(curs2) curs2.execute(SHORT_QUERY) results.append(2) evt2.send() eventlet.spawn(long_running_query) eventlet.spawn(short_running_query) evt.wait() evt2.wait() results.sort() self.assertEqual([1, 2], results) def test_clear(self): self.pool = self.create_pool() self.pool.put(self.connection) self.pool.clear() 
self.assertEqual(len(self.pool.free_items), 0) def test_clear_warmup(self): """Clear implicitly created connections (min_size > 0)""" self.pool = self.create_pool(min_size=1) self.pool.clear() self.assertEqual(len(self.pool.free_items), 0) def test_unwrap_connection(self): self.assert_(isinstance(self.connection, db_pool.GenericConnectionWrapper)) conn = self.pool._unwrap_connection(self.connection) self.assert_(not isinstance(conn, db_pool.GenericConnectionWrapper)) self.assertEquals(None, self.pool._unwrap_connection(None)) self.assertEquals(None, self.pool._unwrap_connection(1)) # testing duck typing here -- as long as the connection has a # _base attribute, it should be unwrappable x = Mock() x._base = 'hi' self.assertEquals('hi', self.pool._unwrap_connection(x)) conn.close() def test_safe_close(self): self.pool._safe_close(self.connection, quiet=True) self.assertEquals(len(self.pool.free_items), 1) self.pool._safe_close(None) self.pool._safe_close(1) # now we're really going for 100% coverage x = Mock() def fail(): raise KeyboardInterrupt() x.close = fail self.assertRaises(KeyboardInterrupt, self.pool._safe_close, x) x = Mock() def fail2(): raise RuntimeError("if this line has been printed, the test succeeded") x.close = fail2 self.pool._safe_close(x, quiet=False) def test_zero_max_idle(self): self.pool.put(self.connection) self.pool.clear() self.pool = self.create_pool(max_size=2, max_idle=0) self.connection = self.pool.get() self.connection.close() self.assertEquals(len(self.pool.free_items), 0) def test_zero_max_age(self): self.pool.put(self.connection) self.pool.clear() self.pool = self.create_pool(max_size=2, max_age=0) self.connection = self.pool.get() self.connection.close() self.assertEquals(len(self.pool.free_items), 0) @skipped def test_max_idle(self): # This test is timing-sensitive. Rename the function without # the "dont" to run it, but beware that it could fail or take # a while. 
self.pool = self.create_pool(max_size=2, max_idle=0.02) self.connection = self.pool.get() self.connection.close() self.assertEquals(len(self.pool.free_items), 1) eventlet.sleep(0.01) # not long enough to trigger the idle timeout self.assertEquals(len(self.pool.free_items), 1) self.connection = self.pool.get() self.connection.close() self.assertEquals(len(self.pool.free_items), 1) eventlet.sleep(0.01) # idle timeout should have fired but done nothing self.assertEquals(len(self.pool.free_items), 1) self.connection = self.pool.get() self.connection.close() self.assertEquals(len(self.pool.free_items), 1) eventlet.sleep(0.03) # long enough to trigger idle timeout for real self.assertEquals(len(self.pool.free_items), 0) @skipped def test_max_idle_many(self): # This test is timing-sensitive. Rename the function without # the "dont" to run it, but beware that it could fail or take # a while. self.pool = self.create_pool(max_size=2, max_idle=0.02) self.connection, conn2 = self.pool.get(), self.pool.get() self.connection.close() eventlet.sleep(0.01) self.assertEquals(len(self.pool.free_items), 1) conn2.close() self.assertEquals(len(self.pool.free_items), 2) eventlet.sleep(0.02) # trigger cleanup of conn1 but not conn2 self.assertEquals(len(self.pool.free_items), 1) @skipped def test_max_age(self): # This test is timing-sensitive. Rename the function without # the "dont" to run it, but beware that it could fail or take # a while. 
self.pool = self.create_pool(max_size=2, max_age=0.05) self.connection = self.pool.get() self.connection.close() self.assertEquals(len(self.pool.free_items), 1) eventlet.sleep(0.01) # not long enough to trigger the age timeout self.assertEquals(len(self.pool.free_items), 1) self.connection = self.pool.get() self.connection.close() self.assertEquals(len(self.pool.free_items), 1) eventlet.sleep(0.05) # long enough to trigger age timeout self.assertEquals(len(self.pool.free_items), 0) @skipped def test_max_age_many(self): # This test is timing-sensitive. Rename the function without # the "dont" to run it, but beware that it could fail or take # a while. self.pool = self.create_pool(max_size=2, max_age=0.15) self.connection, conn2 = self.pool.get(), self.pool.get() self.connection.close() self.assertEquals(len(self.pool.free_items), 1) eventlet.sleep(0) # not long enough to trigger the age timeout self.assertEquals(len(self.pool.free_items), 1) eventlet.sleep(0.2) # long enough to trigger age timeout self.assertEquals(len(self.pool.free_items), 0) conn2.close() # should not be added to the free items self.assertEquals(len(self.pool.free_items), 0) def test_waiters_get_woken(self): # verify that when there's someone waiting on an empty pool # and someone puts an immediately-closed connection back in # the pool that the waiter gets woken self.pool.put(self.connection) self.pool.clear() self.pool = self.create_pool(max_size=1, max_age=0) self.connection = self.pool.get() self.assertEquals(self.pool.free(), 0) self.assertEquals(self.pool.waiting(), 0) e = event.Event() def retrieve(pool, ev): c = pool.get() ev.send(c) eventlet.spawn(retrieve, self.pool, e) eventlet.sleep(0) # these two sleeps should advance the retrieve eventlet.sleep(0) # coroutine until it's waiting in get() self.assertEquals(self.pool.free(), 0) self.assertEquals(self.pool.waiting(), 1) self.pool.put(self.connection) timer = eventlet.Timeout(1) conn = e.wait() timer.cancel() 
self.assertEquals(self.pool.free(), 0) self.assertEquals(self.pool.waiting(), 0) self.pool.put(conn) @skipped def test_0_straight_benchmark(self): """ Benchmark; don't run unless you want to wait a while.""" import time iterations = 20000 c = self.connection.cursor() self.connection.commit() def bench(c): for i in xrange(iterations): c.execute('select 1') bench(c) # warm-up results = [] for i in xrange(3): start = time.time() bench(c) end = time.time() results.append(end-start) print "\n%u iterations took an average of %f seconds, (%s) in %s\n" % ( iterations, sum(results)/len(results), results, type(self)) def test_raising_create(self): # if the create() method raises an exception the pool should # not lose any connections self.pool = self.create_pool(max_size=1, module=RaisingDBModule()) self.assertRaises(RuntimeError, self.pool.get) self.assertEquals(self.pool.free(), 1) class DummyConnection(object): pass class DummyDBModule(object): def connect(self, *args, **kwargs): return DummyConnection() class RaisingDBModule(object): def connect(self, *args, **kw): raise RuntimeError() class TpoolConnectionPool(DBConnectionPool): __test__ = False # so that nose doesn't try to execute this directly def create_pool(self, min_size=0, max_size=1, max_idle=10, max_age=10, connect_timeout=0.5, module=None): if module is None: module = self._dbmodule return db_pool.TpooledConnectionPool(module, min_size=min_size, max_size=max_size, max_idle=max_idle, max_age=max_age, connect_timeout = connect_timeout, **self._auth) @skip_with_pyevent def setUp(self): super(TpoolConnectionPool, self).setUp() def tearDown(self): super(TpoolConnectionPool, self).tearDown() from eventlet import tpool tpool.killall() class RawConnectionPool(DBConnectionPool): __test__ = False # so that nose doesn't try to execute this directly def create_pool(self, min_size=0, max_size=1, max_idle=10, max_age=10, connect_timeout=0.5, module=None): if module is None: module = self._dbmodule return 
db_pool.RawConnectionPool(module, min_size=min_size, max_size=max_size, max_idle=max_idle, max_age=max_age, connect_timeout=connect_timeout, **self._auth) class TestRawConnectionPool(TestCase): def test_issue_125(self): # pool = self.create_pool(min_size=3, max_size=5) pool = db_pool.RawConnectionPool(DummyDBModule(), dsn="dbname=test user=jessica port=5433", min_size=3, max_size=5) conn = pool.get() pool.put(conn) get_auth = get_database_auth def mysql_requirement(_f): verbose = os.environ.get('eventlet_test_mysql_verbose') try: import MySQLdb try: auth = get_auth()['MySQLdb'].copy() MySQLdb.connect(**auth) return True except MySQLdb.OperationalError: if verbose: print >> sys.stderr, ">> Skipping mysql tests, error when connecting:" traceback.print_exc() return False except ImportError: if verbose: print >> sys.stderr, ">> Skipping mysql tests, MySQLdb not importable" return False class MysqlConnectionPool(object): dummy_table_sql = """CREATE TEMPORARY TABLE test_table ( row_id INTEGER PRIMARY KEY AUTO_INCREMENT, value_int INTEGER, value_float FLOAT, value_string VARCHAR(200), value_uuid CHAR(36), value_binary BLOB, value_binary_string VARCHAR(200) BINARY, value_enum ENUM('Y','N'), created TIMESTAMP ) ENGINE=InnoDB;""" @skip_unless(mysql_requirement) def setUp(self): import MySQLdb self._dbmodule = MySQLdb self._auth = get_auth()['MySQLdb'] super(MysqlConnectionPool, self).setUp() def tearDown(self): super(MysqlConnectionPool, self).tearDown() def create_db(self): auth = self._auth.copy() try: self.drop_db() except Exception: pass dbname = 'test%s' % os.getpid() db = self._dbmodule.connect(**auth).cursor() db.execute("create database "+dbname) db.close() self._auth['db'] = dbname del db def drop_db(self): db = self._dbmodule.connect(**self._auth).cursor() db.execute("drop database "+self._auth['db']) db.close() del db class Test01MysqlTpool(MysqlConnectionPool, TpoolConnectionPool, TestCase): __test__ = True class Test02MysqlRaw(MysqlConnectionPool, 
RawConnectionPool, TestCase): __test__ = True def postgres_requirement(_f): try: import psycopg2 try: auth = get_auth()['psycopg2'].copy() psycopg2.connect(**auth) return True except psycopg2.OperationalError: print "Skipping postgres tests, error when connecting" return False except ImportError: print "Skipping postgres tests, psycopg2 not importable" return False class Psycopg2ConnectionPool(object): dummy_table_sql = """CREATE TEMPORARY TABLE test_table ( row_id SERIAL PRIMARY KEY, value_int INTEGER, value_float FLOAT, value_string VARCHAR(200), value_uuid CHAR(36), value_binary BYTEA, value_binary_string BYTEA, created TIMESTAMP );""" @skip_unless(postgres_requirement) def setUp(self): import psycopg2 self._dbmodule = psycopg2 self._auth = get_auth()['psycopg2'] super(Psycopg2ConnectionPool, self).setUp() def tearDown(self): super(Psycopg2ConnectionPool, self).tearDown() def create_db(self): dbname = 'test%s' % os.getpid() self._auth['database'] = dbname try: self.drop_db() except Exception: pass auth = self._auth.copy() auth.pop('database') # can't create if you're connecting to it conn = self._dbmodule.connect(**auth) conn.set_isolation_level(0) db = conn.cursor() db.execute("create database "+dbname) db.close() del db def drop_db(self): auth = self._auth.copy() auth.pop('database') # can't drop database we connected to conn = self._dbmodule.connect(**auth) conn.set_isolation_level(0) db = conn.cursor() db.execute("drop database "+self._auth['database']) db.close() del db class Test01Psycopg2Tpool(Psycopg2ConnectionPool, TpoolConnectionPool, TestCase): __test__ = True class Test02Psycopg2Raw(Psycopg2ConnectionPool, RawConnectionPool, TestCase): __test__ = True if __name__ == '__main__': main() eventlet-0.13.0/tests/zmq_test.py0000644000175000017500000003673212164577340017667 0ustar temototemoto00000000000000from __future__ import with_statement from eventlet import event, spawn, sleep, patcher, semaphore from eventlet.hubs import get_hub, _threadlocal, 
use_hub from nose.tools import * from tests import check_idle_cpu_usage, mock, LimitedTestCase, using_pyevent, skip_unless from unittest import TestCase from threading import Thread try: from eventlet.green import zmq except ImportError: zmq = {} # for systems lacking zmq, skips tests instead of barfing def zmq_supported(_): try: import zmq except ImportError: return False return not using_pyevent(_) class TestUpstreamDownStream(LimitedTestCase): @skip_unless(zmq_supported) def setUp(self): super(TestUpstreamDownStream, self).setUp() self.context = zmq.Context() self.sockets = [] @skip_unless(zmq_supported) def tearDown(self): self.clear_up_sockets() super(TestUpstreamDownStream, self).tearDown() def create_bound_pair(self, type1, type2, interface='tcp://127.0.0.1'): """Create a bound socket pair using a random port.""" s1 = self.context.socket(type1) port = s1.bind_to_random_port(interface) s2 = self.context.socket(type2) s2.connect('%s:%s' % (interface, port)) self.sockets.append(s1) self.sockets.append(s2) return s1, s2, port def clear_up_sockets(self): for sock in self.sockets: sock.close() self.sockets = None self.context.destroy(0) def assertRaisesErrno(self, errno, func, *args): try: func(*args) except zmq.ZMQError, e: self.assertEqual(e.errno, errno, "wrong error raised, expected '%s' \ got '%s'" % (zmq.ZMQError(errno), zmq.ZMQError(e.errno))) else: self.fail("Function did not raise any error") @skip_unless(zmq_supported) def test_close_linger(self): """Socket.close() must support linger argument. 
https://github.com/eventlet/eventlet/issues/9 """ sock1, sock2, _ = self.create_bound_pair(zmq.PAIR, zmq.PAIR) sock1.close(1) sock2.close(linger=0) @skip_unless(zmq_supported) def test_recv_spawned_before_send_is_non_blocking(self): req, rep, port = self.create_bound_pair(zmq.PAIR, zmq.PAIR) # req.connect(ipc) # rep.bind(ipc) sleep() msg = dict(res=None) done = event.Event() def rx(): msg['res'] = rep.recv() done.send('done') spawn(rx) req.send('test') done.wait() self.assertEqual(msg['res'], 'test') @skip_unless(zmq_supported) def test_close_socket_raises_enotsup(self): req, rep, port = self.create_bound_pair(zmq.PAIR, zmq.PAIR) rep.close() req.close() self.assertRaisesErrno(zmq.ENOTSUP, rep.recv) self.assertRaisesErrno(zmq.ENOTSUP, req.send, 'test') @skip_unless(zmq_supported) def test_close_xsocket_raises_enotsup(self): req, rep, port = self.create_bound_pair(zmq.XREQ, zmq.XREP) rep.close() req.close() self.assertRaisesErrno(zmq.ENOTSUP, rep.recv) self.assertRaisesErrno(zmq.ENOTSUP, req.send, 'test') @skip_unless(zmq_supported) def test_send_1k_req_rep(self): req, rep, port = self.create_bound_pair(zmq.REQ, zmq.REP) sleep() done = event.Event() def tx(): tx_i = 0 req.send(str(tx_i)) while req.recv() != 'done': tx_i += 1 req.send(str(tx_i)) done.send(0) def rx(): while True: rx_i = rep.recv() if rx_i == "1000": rep.send('done') break rep.send('i') spawn(tx) spawn(rx) final_i = done.wait() self.assertEqual(final_i, 0) @skip_unless(zmq_supported) def test_send_1k_push_pull(self): down, up, port = self.create_bound_pair(zmq.PUSH, zmq.PULL) sleep() done = event.Event() def tx(): tx_i = 0 while tx_i <= 1000: tx_i += 1 down.send(str(tx_i)) def rx(): while True: rx_i = up.recv() if rx_i == "1000": done.send(0) break spawn(tx) spawn(rx) final_i = done.wait() self.assertEqual(final_i, 0) @skip_unless(zmq_supported) def test_send_1k_pub_sub(self): pub, sub_all, port = self.create_bound_pair(zmq.PUB, zmq.SUB) sub1 = self.context.socket(zmq.SUB) sub2 = 
self.context.socket(zmq.SUB) self.sockets.extend([sub1, sub2]) addr = 'tcp://127.0.0.1:%s' % port sub1.connect(addr) sub2.connect(addr) sub_all.setsockopt(zmq.SUBSCRIBE, '') sub1.setsockopt(zmq.SUBSCRIBE, 'sub1') sub2.setsockopt(zmq.SUBSCRIBE, 'sub2') sub_all_done = event.Event() sub1_done = event.Event() sub2_done = event.Event() sleep(0.2) def rx(sock, done_evt, msg_count=10000): count = 0 while count < msg_count: msg = sock.recv() sleep() if 'LAST' in msg: break count += 1 done_evt.send(count) def tx(sock): for i in range(1, 1001): msg = "sub%s %s" % ([2,1][i % 2], i) sock.send(msg) sleep() sock.send('sub1 LAST') sock.send('sub2 LAST') spawn(rx, sub_all, sub_all_done) spawn(rx, sub1, sub1_done) spawn(rx, sub2, sub2_done) spawn(tx, pub) sub1_count = sub1_done.wait() sub2_count = sub2_done.wait() sub_all_count = sub_all_done.wait() self.assertEqual(sub1_count, 500) self.assertEqual(sub2_count, 500) self.assertEqual(sub_all_count, 1000) @skip_unless(zmq_supported) def test_change_subscription(self): pub, sub, port = self.create_bound_pair(zmq.PUB, zmq.SUB) sub.setsockopt(zmq.SUBSCRIBE, 'test') sleep(0.2) sub_done = event.Event() def rx(sock, done_evt): count = 0 sub = 'test' while True: msg = sock.recv() sleep() if 'DONE' in msg: break if 'LAST' in msg and sub == 'test': sock.setsockopt(zmq.UNSUBSCRIBE, 'test') sock.setsockopt(zmq.SUBSCRIBE, 'done') sub = 'done' count += 1 done_evt.send(count) def tx(sock): for i in range(1, 101): msg = "test %s" % i if i != 50: sock.send(msg) else: sock.send('test LAST') sleep() sock.send('done DONE') spawn(rx, sub, sub_done) spawn(tx, pub) rx_count = sub_done.wait() self.assertEqual(rx_count, 50) @skip_unless(zmq_supported) def test_recv_multipart_bug68(self): req, rep, port = self.create_bound_pair(zmq.REQ, zmq.REP) msg = [''] req.send_multipart(msg) recieved_msg = rep.recv_multipart() self.assertEqual(recieved_msg, msg) # Send a message back the other way msg2 = [""] rep.send_multipart(msg2, copy=False) # When receiving a copy 
it's a zmq.core.message.Message you get back recieved_msg = req.recv_multipart(copy=False) # So it needs to be converted to a string # I'm calling str(m) consciously here; Message has a .data attribute # but it's private __str__ appears to be the way to go self.assertEqual([str(m) for m in recieved_msg], msg2) @skip_unless(zmq_supported) def test_recv_noblock_bug76(self): req, rep, port = self.create_bound_pair(zmq.REQ, zmq.REP) self.assertRaisesErrno(zmq.EAGAIN, rep.recv, zmq.NOBLOCK) self.assertRaisesErrno(zmq.EAGAIN, rep.recv, zmq.NOBLOCK, True) @skip_unless(zmq_supported) def test_send_during_recv(self): sender, receiver, port = self.create_bound_pair(zmq.XREQ, zmq.XREQ) sleep() num_recvs = 30 done_evts = [event.Event() for _ in range(num_recvs)] def slow_rx(done, msg): self.assertEqual(sender.recv(), msg) done.send(0) def tx(): tx_i = 0 while tx_i <= 1000: sender.send(str(tx_i)) tx_i += 1 def rx(): while True: rx_i = receiver.recv() if rx_i == "1000": for i in range(num_recvs): receiver.send('done%d' % i) sleep() return for i in range(num_recvs): spawn(slow_rx, done_evts[i], "done%d" % i) spawn(tx) spawn(rx) for evt in done_evts: self.assertEqual(evt.wait(), 0) @skip_unless(zmq_supported) def test_send_during_recv_multipart(self): sender, receiver, port = self.create_bound_pair(zmq.XREQ, zmq.XREQ) sleep() num_recvs = 30 done_evts = [event.Event() for _ in range(num_recvs)] def slow_rx(done, msg): self.assertEqual(sender.recv_multipart(), msg) done.send(0) def tx(): tx_i = 0 while tx_i <= 1000: sender.send_multipart([str(tx_i), '1', '2', '3']) tx_i += 1 def rx(): while True: rx_i = receiver.recv_multipart() if rx_i == ["1000", '1', '2', '3']: for i in range(num_recvs): receiver.send_multipart(['done%d' % i, 'a', 'b', 'c']) sleep() return for i in range(num_recvs): spawn(slow_rx, done_evts[i], ["done%d" % i, 'a', 'b', 'c']) spawn(tx) spawn(rx) for i in range(num_recvs): final_i = done_evts[i].wait() self.assertEqual(final_i, 0) # Need someway to ensure a thread 
is blocked on send... This isn't working @skip_unless(zmq_supported) def test_recv_during_send(self): sender, receiver, port = self.create_bound_pair(zmq.XREQ, zmq.XREQ) sleep() num_recvs = 30 done = event.Event() try: SNDHWM = zmq.SNDHWM except AttributeError: # ZeroMQ <3.0 SNDHWM = zmq.HWM sender.setsockopt(SNDHWM, 10) sender.setsockopt(zmq.SNDBUF, 10) receiver.setsockopt(zmq.RCVBUF, 10) def tx(): tx_i = 0 while tx_i <= 1000: sender.send(str(tx_i)) tx_i += 1 done.send(0) spawn(tx) final_i = done.wait() self.assertEqual(final_i, 0) @skip_unless(zmq_supported) def test_close_during_recv(self): sender, receiver, port = self.create_bound_pair(zmq.XREQ, zmq.XREQ) sleep() done1 = event.Event() done2 = event.Event() def rx(e): self.assertRaisesErrno(zmq.ENOTSUP, receiver.recv) e.send() spawn(rx, done1) spawn(rx, done2) sleep() receiver.close() done1.wait() done2.wait() @skip_unless(zmq_supported) def test_getsockopt_events(self): sock1, sock2, _port = self.create_bound_pair(zmq.DEALER, zmq.DEALER) sleep() poll_out = zmq.Poller() poll_out.register(sock1, zmq.POLLOUT) sock_map = poll_out.poll(100) self.assertEqual(len(sock_map), 1) events = sock1.getsockopt(zmq.EVENTS) self.assertEqual(events & zmq.POLLOUT, zmq.POLLOUT) sock1.send('') poll_in = zmq.Poller() poll_in.register(sock2, zmq.POLLIN) sock_map = poll_in.poll(100) self.assertEqual(len(sock_map), 1) events = sock2.getsockopt(zmq.EVENTS) self.assertEqual(events & zmq.POLLIN, zmq.POLLIN) @skip_unless(zmq_supported) def test_cpu_usage_after_bind(self): """zmq eats CPU after PUB socket .bind() https://bitbucket.org/eventlet/eventlet/issue/128 According to the ZeroMQ documentation, the socket file descriptor can be readable without any pending messages. So we need to ensure that Eventlet wraps around ZeroMQ sockets do not create busy loops. A naive way to test it is to measure resource usage. This will require some tuning to set appropriate acceptable limits. 
""" sock = self.context.socket(zmq.PUB) self.sockets.append(sock) sock.bind_to_random_port("tcp://127.0.0.1") sleep() check_idle_cpu_usage(0.2, 0.1) @skip_unless(zmq_supported) def test_cpu_usage_after_pub_send_or_dealer_recv(self): """zmq eats CPU after PUB send or DEALER recv. Same https://bitbucket.org/eventlet/eventlet/issue/128 """ pub, sub, _port = self.create_bound_pair(zmq.PUB, zmq.SUB) sub.setsockopt(zmq.SUBSCRIBE, "") sleep() pub.send('test_send') check_idle_cpu_usage(0.2, 0.1) sender, receiver, _port = self.create_bound_pair(zmq.DEALER, zmq.DEALER) sleep() sender.send('test_recv') msg = receiver.recv() self.assertEqual(msg, 'test_recv') check_idle_cpu_usage(0.2, 0.1) class TestQueueLock(LimitedTestCase): @skip_unless(zmq_supported) def test_queue_lock_order(self): q = zmq._QueueLock() s = semaphore.Semaphore(0) results = [] def lock(x): with q: results.append(x) s.release() q.acquire() spawn(lock, 1) sleep() spawn(lock, 2) sleep() spawn(lock, 3) sleep() self.assertEquals(results, []) q.release() s.acquire() s.acquire() s.acquire() self.assertEquals(results, [1,2,3]) @skip_unless(zmq_supported) def test_count(self): q = zmq._QueueLock() self.assertFalse(q) q.acquire() self.assertTrue(q) q.release() self.assertFalse(q) with q: self.assertTrue(q) self.assertFalse(q) @skip_unless(zmq_supported) def test_errors(self): q = zmq._QueueLock() self.assertRaises(zmq.LockReleaseError, q.release) q.acquire() q.release() self.assertRaises(zmq.LockReleaseError, q.release) @skip_unless(zmq_supported) def test_nested_acquire(self): q = zmq._QueueLock() self.assertFalse(q) q.acquire() q.acquire() s = semaphore.Semaphore(0) results = [] def lock(x): with q: results.append(x) s.release() spawn(lock, 1) sleep() self.assertEquals(results, []) q.release() sleep() self.assertEquals(results, []) self.assertTrue(q) q.release() s.acquire() self.assertEquals(results, [1]) class TestBlockedThread(LimitedTestCase): @skip_unless(zmq_supported) def test_block(self): e = 
zmq._BlockedThread() done = event.Event() self.assertFalse(e) def block(): e.block() done.send(1) spawn(block) sleep() self.assertFalse(done.has_result()) e.wake() done.wait() eventlet-0.13.0/tests/greenpool_test.py0000644000175000017500000003612312164577340021044 0ustar temototemoto00000000000000import gc import itertools import os import random import eventlet from eventlet import debug from eventlet import hubs, greenpool, coros, event from eventlet.support import greenlets as greenlet import tests def passthru(a): eventlet.sleep(0.01) return a def passthru2(a, b): eventlet.sleep(0.01) return a,b def raiser(exc): raise exc class GreenPool(tests.LimitedTestCase): def test_spawn(self): p = greenpool.GreenPool(4) waiters = [] for i in xrange(10): waiters.append(p.spawn(passthru, i)) results = [waiter.wait() for waiter in waiters] self.assertEquals(results, list(xrange(10))) def test_spawn_n(self): p = greenpool.GreenPool(4) results_closure = [] def do_something(a): eventlet.sleep(0.01) results_closure.append(a) for i in xrange(10): p.spawn(do_something, i) p.waitall() self.assertEquals(results_closure, range(10)) def test_waiting(self): pool = greenpool.GreenPool(1) done = event.Event() def consume(): done.wait() def waiter(pool): gt = pool.spawn(consume) gt.wait() waiters = [] self.assertEqual(pool.running(), 0) waiters.append(eventlet.spawn(waiter, pool)) eventlet.sleep(0) self.assertEqual(pool.waiting(), 0) waiters.append(eventlet.spawn(waiter, pool)) eventlet.sleep(0) self.assertEqual(pool.waiting(), 1) waiters.append(eventlet.spawn(waiter, pool)) eventlet.sleep(0) self.assertEqual(pool.waiting(), 2) self.assertEqual(pool.running(), 1) done.send(None) for w in waiters: w.wait() self.assertEqual(pool.waiting(), 0) self.assertEqual(pool.running(), 0) def test_multiple_coros(self): evt = event.Event() results = [] def producer(): results.append('prod') evt.send() def consumer(): results.append('cons1') evt.wait() results.append('cons2') pool = 
greenpool.GreenPool(2) done = pool.spawn(consumer) pool.spawn_n(producer) done.wait() self.assertEquals(['cons1', 'prod', 'cons2'], results) def test_timer_cancel(self): # this test verifies that local timers are not fired # outside of the context of the spawn timer_fired = [] def fire_timer(): timer_fired.append(True) def some_work(): hubs.get_hub().schedule_call_local(0, fire_timer) pool = greenpool.GreenPool(2) worker = pool.spawn(some_work) worker.wait() eventlet.sleep(0) eventlet.sleep(0) self.assertEquals(timer_fired, []) def test_reentrant(self): pool = greenpool.GreenPool(1) def reenter(): waiter = pool.spawn(lambda a: a, 'reenter') self.assertEqual('reenter', waiter.wait()) outer_waiter = pool.spawn(reenter) outer_waiter.wait() evt = event.Event() def reenter_async(): pool.spawn_n(lambda a: a, 'reenter') evt.send('done') pool.spawn_n(reenter_async) self.assertEquals('done', evt.wait()) def assert_pool_has_free(self, pool, num_free): self.assertEquals(pool.free(), num_free) def wait_long_time(e): e.wait() timer = eventlet.Timeout(1) try: evt = event.Event() for x in xrange(num_free): pool.spawn(wait_long_time, evt) # if the pool has fewer free than we expect, # then we'll hit the timeout error finally: timer.cancel() # if the runtime error is not raised it means the pool had # some unexpected free items timer = eventlet.Timeout(0, RuntimeError) try: self.assertRaises(RuntimeError, pool.spawn, wait_long_time, evt) finally: timer.cancel() # clean up by causing all the wait_long_time functions to return evt.send(None) eventlet.sleep(0) eventlet.sleep(0) def test_resize(self): pool = greenpool.GreenPool(2) evt = event.Event() def wait_long_time(e): e.wait() pool.spawn(wait_long_time, evt) pool.spawn(wait_long_time, evt) self.assertEquals(pool.free(), 0) self.assertEquals(pool.running(), 2) self.assert_pool_has_free(pool, 0) # verify that the pool discards excess items put into it pool.resize(1) # cause the wait_long_time functions to return, which will # 
trigger puts to the pool
        evt.send(None)
        eventlet.sleep(0)
        eventlet.sleep(0)
        self.assertEquals(pool.free(), 1)
        self.assertEquals(pool.running(), 0)
        self.assert_pool_has_free(pool, 1)

        # resize larger and assert that there are more free items
        pool.resize(2)
        self.assertEquals(pool.free(), 2)
        self.assertEquals(pool.running(), 0)
        self.assert_pool_has_free(pool, 2)

    def test_pool_smash(self):
        # The premise is that a coroutine in a Pool tries to get a token out
        # of a token pool but times out before getting the token.  We verify
        # that neither pool is adversely affected by this situation.
        from eventlet import pools
        pool = greenpool.GreenPool(1)
        tp = pools.TokenPool(max_size=1)
        token = tp.get()  # empty out the pool

        def do_receive(tp):
            timer = eventlet.Timeout(0, RuntimeError())
            try:
                t = tp.get()
                self.fail("Shouldn't have received anything from the pool")
            except RuntimeError:
                return 'timed out'
            else:
                timer.cancel()

        # the spawn makes the token pool expect that coroutine, but then
        # immediately cuts bait
        e1 = pool.spawn(do_receive, tp)
        self.assertEquals(e1.wait(), 'timed out')

        # the pool can get some random item back
        def send_wakeup(tp):
            tp.put('wakeup')
        gt = eventlet.spawn(send_wakeup, tp)

        # now we ask the pool to run something else, which should not
        # be affected by the previous send at all
        def resume():
            return 'resumed'
        e2 = pool.spawn(resume)
        self.assertEquals(e2.wait(), 'resumed')

        # we should be able to get out the thing we put in there, too
        self.assertEquals(tp.get(), 'wakeup')
        gt.wait()

    def test_spawn_n_2(self):
        p = greenpool.GreenPool(2)
        self.assertEqual(p.free(), 2)
        r = []

        def foo(a):
            r.append(a)

        gt = p.spawn(foo, 1)
        self.assertEqual(p.free(), 1)
        gt.wait()
        self.assertEqual(r, [1])
        eventlet.sleep(0)
        self.assertEqual(p.free(), 2)

        # Once the pool is exhausted, spawning forces a yield.
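The GreenPool accounting exercised above -- free() shrinking as tasks start, and spawning blocking once the pool is exhausted -- can be sketched with only the standard library. The following is an illustrative model using OS threads and a counting semaphore (TinyPool is a made-up name; eventlet's actual GreenPool uses green threads and a different implementation):

```python
import threading


class TinyPool(object):
    """Minimal bounded task pool modeling GreenPool's free()/running()
    accounting.  A sketch only: real GreenPool schedules coroutines."""

    def __init__(self, size):
        self.size = size
        self._sem = threading.Semaphore(size)  # counts free slots
        self._running = 0
        self._lock = threading.Lock()
        self._threads = []

    def free(self):
        with self._lock:
            return self.size - self._running

    def spawn(self, func, *args):
        self._sem.acquire()  # blocks the caller when the pool is exhausted
        with self._lock:
            self._running += 1

        def run():
            try:
                func(*args)
            finally:
                with self._lock:
                    self._running -= 1
                self._sem.release()  # slot becomes free again

        t = threading.Thread(target=run)
        t.start()
        self._threads.append(t)
        return t

    def waitall(self):
        for t in self._threads:
            t.join()


pool = TinyPool(2)
results = []
for i in range(5):
    pool.spawn(results.append, i)
pool.waitall()
```

With a pool of 2, the third spawn call blocks until one of the first two tasks finishes, which is the same back-pressure behavior the tests above assert via free().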
p.spawn_n(foo, 2) self.assertEqual(1, p.free()) self.assertEqual(r, [1]) p.spawn_n(foo, 3) self.assertEqual(0, p.free()) self.assertEqual(r, [1]) p.spawn_n(foo, 4) self.assertEqual(set(r), set([1,2,3])) eventlet.sleep(0) self.assertEqual(set(r), set([1,2,3,4])) def test_exceptions(self): p = greenpool.GreenPool(2) for m in (p.spawn, p.spawn_n): self.assert_pool_has_free(p, 2) m(raiser, RuntimeError()) self.assert_pool_has_free(p, 1) p.waitall() self.assert_pool_has_free(p, 2) m(raiser, greenlet.GreenletExit) self.assert_pool_has_free(p, 1) p.waitall() self.assert_pool_has_free(p, 2) def test_imap(self): p = greenpool.GreenPool(4) result_list = list(p.imap(passthru, xrange(10))) self.assertEquals(result_list, list(xrange(10))) def test_empty_imap(self): p = greenpool.GreenPool(4) result_iter = p.imap(passthru, []) self.assertRaises(StopIteration, result_iter.next) def test_imap_nonefunc(self): p = greenpool.GreenPool(4) result_list = list(p.imap(None, xrange(10))) self.assertEquals(result_list, [(x,) for x in xrange(10)]) def test_imap_multi_args(self): p = greenpool.GreenPool(4) result_list = list(p.imap(passthru2, xrange(10), xrange(10, 20))) self.assertEquals(result_list, list(itertools.izip(xrange(10), xrange(10,20)))) def test_imap_raises(self): # testing the case where the function raises an exception; # both that the caller sees that exception, and that the iterator # continues to be usable to get the rest of the items p = greenpool.GreenPool(4) def raiser(item): if item == 1 or item == 7: raise RuntimeError("intentional error") else: return item it = p.imap(raiser, xrange(10)) results = [] while True: try: results.append(it.next()) except RuntimeError: results.append('r') except StopIteration: break self.assertEquals(results, [0,'r',2,3,4,5,6,'r',8,9]) def test_starmap(self): p = greenpool.GreenPool(4) result_list = list(p.starmap(passthru, [(x,) for x in xrange(10)])) self.assertEquals(result_list, range(10)) def test_waitall_on_nothing(self): p = 
greenpool.GreenPool() p.waitall() def test_recursive_waitall(self): p = greenpool.GreenPool() gt = p.spawn(p.waitall) self.assertRaises(AssertionError, gt.wait) class GreenPile(tests.LimitedTestCase): def test_pile(self): p = greenpool.GreenPile(4) for i in xrange(10): p.spawn(passthru, i) result_list = list(p) self.assertEquals(result_list, list(xrange(10))) def test_pile_spawn_times_out(self): p = greenpool.GreenPile(4) for i in xrange(4): p.spawn(passthru, i) # now it should be full and this should time out eventlet.Timeout(0) self.assertRaises(eventlet.Timeout, p.spawn, passthru, "time out") # verify that the spawn breakage didn't interrupt the sequence # and terminates properly for i in xrange(4,10): p.spawn(passthru, i) self.assertEquals(list(p), list(xrange(10))) def test_constructing_from_pool(self): pool = greenpool.GreenPool(2) pile1 = greenpool.GreenPile(pool) pile2 = greenpool.GreenPile(pool) def bunch_of_work(pile, unique): for i in xrange(10): pile.spawn(passthru, i + unique) eventlet.spawn(bunch_of_work, pile1, 0) eventlet.spawn(bunch_of_work, pile2, 100) eventlet.sleep(0) self.assertEquals(list(pile2), list(xrange(100,110))) self.assertEquals(list(pile1), list(xrange(10))) class StressException(Exception): pass r = random.Random(0) def pressure(arg): while r.random() < 0.5: eventlet.sleep(r.random() * 0.001) if r.random() < 0.8: return arg else: raise StressException(arg) def passthru(arg): while r.random() < 0.5: eventlet.sleep(r.random() * 0.001) return arg class Stress(tests.LimitedTestCase): # tests will take extra-long TEST_TIMEOUT=60 @tests.skip_unless(os.environ.get('RUN_STRESS_TESTS') == 'YES') def spawn_order_check(self, concurrency): # checks that piles are strictly ordered p = greenpool.GreenPile(concurrency) def makework(count, unique): for i in xrange(count): token = (unique, i) p.spawn(pressure, token) iters = 1000 eventlet.spawn(makework, iters, 1) eventlet.spawn(makework, iters, 2) eventlet.spawn(makework, iters, 3) p.spawn(pressure, 
(0,0)) latest = [-1] * 4 received = 0 it = iter(p) while True: try: i = it.next() except StressException, exc: i = exc.args[0] except StopIteration: break received += 1 if received % 5 == 0: eventlet.sleep(0.0001) unique, order = i self.assert_(latest[unique] < order) latest[unique] = order for l in latest[1:]: self.assertEquals(l, iters - 1) @tests.skip_unless(os.environ.get('RUN_STRESS_TESTS') == 'YES') def test_ordering_5(self): self.spawn_order_check(5) @tests.skip_unless(os.environ.get('RUN_STRESS_TESTS') == 'YES') def test_ordering_50(self): self.spawn_order_check(50) def imap_memory_check(self, concurrency): # checks that imap is strictly # ordered and consumes a constant amount of memory p = greenpool.GreenPool(concurrency) count = 1000 it = p.imap(passthru, xrange(count)) latest = -1 while True: try: i = it.next() except StopIteration: break if latest == -1: gc.collect() initial_obj_count = len(gc.get_objects()) self.assert_(i > latest) latest = i if latest % 5 == 0: eventlet.sleep(0.001) if latest % 10 == 0: gc.collect() objs_created = len(gc.get_objects()) - initial_obj_count self.assert_(objs_created < 25 * concurrency, objs_created) # make sure we got to the end self.assertEquals(latest, count - 1) @tests.skip_unless(os.environ.get('RUN_STRESS_TESTS') == 'YES') def test_imap_50(self): self.imap_memory_check(50) @tests.skip_unless(os.environ.get('RUN_STRESS_TESTS') == 'YES') def test_imap_500(self): self.imap_memory_check(500) @tests.skip_unless(os.environ.get('RUN_STRESS_TESTS') == 'YES') def test_with_intpool(self): from eventlet import pools class IntPool(pools.Pool): def create(self): self.current_integer = getattr(self, 'current_integer', 0) + 1 return self.current_integer def subtest(intpool_size, pool_size, num_executes): def run(int_pool): token = int_pool.get() eventlet.sleep(0.0001) int_pool.put(token) return token int_pool = IntPool(max_size=intpool_size) pool = greenpool.GreenPool(pool_size) for ix in xrange(num_executes): pool.spawn(run, 
int_pool) pool.waitall() subtest(4, 7, 7) subtest(50, 75, 100) for isize in (10, 20, 30, 40, 50): for psize in (5, 25, 35, 50): subtest(isize, psize, psize) eventlet-0.13.0/tests/hub_test.py0000644000175000017500000003260012164577340017624 0ustar temototemoto00000000000000from __future__ import with_statement import sys from tests import LimitedTestCase, main, skip_with_pyevent, skip_if_no_itimer, skip_unless from tests.patcher_test import ProcessBase import time import eventlet from eventlet import hubs from eventlet.green import socket from eventlet.event import Event from eventlet.semaphore import Semaphore from eventlet.support import greenlets DELAY = 0.001 def noop(): pass class TestTimerCleanup(LimitedTestCase): TEST_TIMEOUT = 2 @skip_with_pyevent def test_cancel_immediate(self): hub = hubs.get_hub() stimers = hub.get_timers_count() scanceled = hub.timers_canceled for i in xrange(2000): t = hubs.get_hub().schedule_call_global(60, noop) t.cancel() self.assert_less_than_equal(hub.timers_canceled, hub.get_timers_count() + 1) # there should be fewer than 1000 new timers and canceled self.assert_less_than_equal(hub.get_timers_count(), 1000 + stimers) self.assert_less_than_equal(hub.timers_canceled, 1000) @skip_with_pyevent def test_cancel_accumulated(self): hub = hubs.get_hub() stimers = hub.get_timers_count() scanceled = hub.timers_canceled for i in xrange(2000): t = hubs.get_hub().schedule_call_global(60, noop) eventlet.sleep() self.assert_less_than_equal(hub.timers_canceled, hub.get_timers_count() + 1) t.cancel() self.assert_less_than_equal(hub.timers_canceled, hub.get_timers_count() + 1, hub.timers) # there should be fewer than 1000 new timers and canceled self.assert_less_than_equal(hub.get_timers_count(), 1000 + stimers) self.assert_less_than_equal(hub.timers_canceled, 1000) @skip_with_pyevent def test_cancel_proportion(self): # if fewer than half the pending timers are canceled, it should # not clean them out hub = hubs.get_hub() uncanceled_timers = [] 
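The TestTimerCleanup cases above assert that canceled timers never accumulate past the number of live ones plus a small bound. The underlying idea -- mark the entry on cancel, and purge the heap only once dead entries dominate -- can be sketched as follows (the class and method names here are illustrative, not eventlet's actual hub API):

```python
import heapq
import itertools


class TimerHeap(object):
    """Sketch of lazy timer cancellation: cancel() only marks an entry,
    and the heap is rebuilt once more than half its entries are dead."""

    def __init__(self):
        self.heap = []
        self.canceled = 0
        self._tiebreak = itertools.count()  # keeps heap comparisons stable

    def schedule(self, deadline, cb):
        entry = [deadline, next(self._tiebreak), cb, False]
        heapq.heappush(self.heap, entry)
        return entry

    def cancel(self, entry):
        if entry[3]:
            return  # already canceled
        entry[3] = True
        self.canceled += 1
        # purge only when at least half the heap is dead weight,
        # so each individual cancel stays cheap (O(1) amortized)
        if self.canceled > len(self.heap) // 2:
            self.heap = [e for e in self.heap if not e[3]]
            heapq.heapify(self.heap)
            self.canceled = 0


th = TimerHeap()
live = [th.schedule(60, None) for _ in range(10)]
dead = [th.schedule(60, None) for _ in range(11)]
for e in dead:
    th.cancel(e)  # the 11th cancel tips past 50% and triggers the purge
```

This is the proportionality property test_cancel_proportion checks: with fewer than half the timers canceled, no cleanup happens; past that threshold the dead entries are swept out in one pass.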
stimers = hub.get_timers_count() scanceled = hub.timers_canceled for i in xrange(1000): # 2/3rds of new timers are uncanceled t = hubs.get_hub().schedule_call_global(60, noop) t2 = hubs.get_hub().schedule_call_global(60, noop) t3 = hubs.get_hub().schedule_call_global(60, noop) eventlet.sleep() self.assert_less_than_equal(hub.timers_canceled, hub.get_timers_count() + 1) t.cancel() self.assert_less_than_equal(hub.timers_canceled, hub.get_timers_count() + 1) uncanceled_timers.append(t2) uncanceled_timers.append(t3) # 3000 new timers, plus a few extras self.assert_less_than_equal(stimers + 3000, stimers + hub.get_timers_count()) self.assertEqual(hub.timers_canceled, 1000) for t in uncanceled_timers: t.cancel() self.assert_less_than_equal(hub.timers_canceled, hub.get_timers_count()) eventlet.sleep() class TestScheduleCall(LimitedTestCase): def test_local(self): lst = [1] eventlet.spawn(hubs.get_hub().schedule_call_local, DELAY, lst.pop) eventlet.sleep(0) eventlet.sleep(DELAY * 2) assert lst == [1], lst def test_global(self): lst = [1] eventlet.spawn(hubs.get_hub().schedule_call_global, DELAY, lst.pop) eventlet.sleep(0) eventlet.sleep(DELAY * 2) assert lst == [], lst def test_ordering(self): lst = [] hubs.get_hub().schedule_call_global(DELAY * 2, lst.append, 3) hubs.get_hub().schedule_call_global(DELAY, lst.append, 1) hubs.get_hub().schedule_call_global(DELAY, lst.append, 2) while len(lst) < 3: eventlet.sleep(DELAY) self.assertEquals(lst, [1, 2, 3]) class TestDebug(LimitedTestCase): def test_debug_listeners(self): hubs.get_hub().set_debug_listeners(True) hubs.get_hub().set_debug_listeners(False) def test_timer_exceptions(self): hubs.get_hub().set_timer_exceptions(True) hubs.get_hub().set_timer_exceptions(False) class TestExceptionInMainloop(LimitedTestCase): def test_sleep(self): # even if there was an error in the mainloop, the hub should continue # to work start = time.time() eventlet.sleep(DELAY) delay = time.time() - start assert delay >= DELAY * \ 0.9, 'sleep 
returned after %s seconds (was scheduled for %s)' % ( delay, DELAY) def fail(): 1 // 0 hubs.get_hub().schedule_call_global(0, fail) start = time.time() eventlet.sleep(DELAY) delay = time.time() - start assert delay >= DELAY * \ 0.9, 'sleep returned after %s seconds (was scheduled for %s)' % ( delay, DELAY) class TestExceptionInGreenthread(LimitedTestCase): @skip_unless(greenlets.preserves_excinfo) def test_exceptionpreservation(self): # events for controlling execution order gt1event = Event() gt2event = Event() def test_gt1(): try: raise KeyError() except KeyError: gt1event.send('exception') gt2event.wait() assert sys.exc_info()[0] is KeyError gt1event.send('test passed') def test_gt2(): gt1event.wait() gt1event.reset() assert sys.exc_info()[0] is None try: raise ValueError() except ValueError: gt2event.send('exception') gt1event.wait() assert sys.exc_info()[0] is ValueError g1 = eventlet.spawn(test_gt1) g2 = eventlet.spawn(test_gt2) try: g1.wait() g2.wait() finally: g1.kill() g2.kill() def test_exceptionleaks(self): # tests expected behaviour with all versions of greenlet def test_gt(sem): try: raise KeyError() except KeyError: sem.release() hubs.get_hub().switch() # semaphores for controlling execution order sem = Semaphore() sem.acquire() g = eventlet.spawn(test_gt, sem) try: sem.acquire() assert sys.exc_info()[0] is None finally: g.kill() class TestHubSelection(LimitedTestCase): def test_explicit_hub(self): if getattr(hubs.get_hub(), 'uses_twisted_reactor', None): # doesn't work with twisted return oldhub = hubs.get_hub() try: hubs.use_hub(Foo) self.assert_(isinstance(hubs.get_hub(), Foo), hubs.get_hub()) finally: hubs._threadlocal.hub = oldhub class TestHubBlockingDetector(LimitedTestCase): TEST_TIMEOUT = 10 @skip_with_pyevent def test_block_detect(self): def look_im_blocking(): import time time.sleep(2) from eventlet import debug debug.hub_blocking_detection(True) gt = eventlet.spawn(look_im_blocking) self.assertRaises(RuntimeError, gt.wait) 
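TestExceptionInGreenthread above checks that one green thread's in-flight exception state is invisible to another. The same per-context isolation of sys.exc_info() can be demonstrated for OS threads with just the standard library:

```python
import sys
import threading

results = {}


def worker():
    # a freshly spawned thread starts with clean exception state,
    # even while another thread is inside an except block
    results['worker'] = sys.exc_info()[0]


try:
    raise KeyError()
except KeyError:
    # this thread's exc_info is KeyError right now...
    t = threading.Thread(target=worker)
    t.start()
    t.join()
    # ...but the worker saw no active exception at all
    results['main'] = sys.exc_info()[0]
```

The eventlet tests guard the greenlet version of this behind greenlets.preserves_excinfo because old greenlet releases did leak exception state across switches; for OS threads the isolation always holds.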
debug.hub_blocking_detection(False) @skip_with_pyevent @skip_if_no_itimer def test_block_detect_with_itimer(self): def look_im_blocking(): import time time.sleep(0.5) from eventlet import debug debug.hub_blocking_detection(True, resolution=0.1) gt = eventlet.spawn(look_im_blocking) self.assertRaises(RuntimeError, gt.wait) debug.hub_blocking_detection(False) class TestSuspend(LimitedTestCase): TEST_TIMEOUT = 3 def test_suspend_doesnt_crash(self): import errno import os import shutil import signal import subprocess import sys import tempfile self.tempdir = tempfile.mkdtemp('test_suspend') filename = os.path.join(self.tempdir, 'test_suspend.py') fd = open(filename, "w") fd.write("""import eventlet eventlet.Timeout(0.5) try: eventlet.listen(("127.0.0.1", 0)).accept() except eventlet.Timeout: print "exited correctly" """) fd.close() python_path = os.pathsep.join(sys.path + [self.tempdir]) new_env = os.environ.copy() new_env['PYTHONPATH'] = python_path p = subprocess.Popen([sys.executable, os.path.join(self.tempdir, filename)], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=new_env) eventlet.sleep(0.4) # wait for process to hit accept os.kill(p.pid, signal.SIGSTOP) # suspend and resume to generate EINTR os.kill(p.pid, signal.SIGCONT) output, _ = p.communicate() lines = [l for l in output.split("\n") if l] self.assert_("exited correctly" in lines[-1]) shutil.rmtree(self.tempdir) class TestBadFilenos(LimitedTestCase): @skip_with_pyevent def test_repeated_selects(self): from eventlet.green import select self.assertRaises(ValueError, select.select, [-1], [], []) self.assertRaises(ValueError, select.select, [-1], [], []) class TestFork(ProcessBase): @skip_with_pyevent def test_fork(self): new_mod = """ import os import eventlet server = eventlet.listen(('localhost', 12345)) t = eventlet.Timeout(0.01) try: new_sock, address = server.accept() except eventlet.Timeout, t: pass pid = os.fork() if not pid: t = eventlet.Timeout(0.1) try: new_sock, address = server.accept() 
except eventlet.Timeout, t: print "accept blocked" else: kpid, status = os.wait() assert kpid == pid assert status == 0 print "child died ok" """ self.write_to_tempfile("newmod", new_mod) output, lines = self.launch_subprocess('newmod.py') self.assertEqual(len(lines), 3, output) self.assert_("accept blocked" in lines[0]) self.assert_("child died ok" in lines[1]) class TestDeadRunLoop(LimitedTestCase): TEST_TIMEOUT = 2 class CustomException(Exception): pass def test_kill(self): """ Checks that killing a process after the hub runloop dies does not immediately return to hub greenlet's parent and schedule a redundant timer. """ hub = hubs.get_hub() def dummyproc(): hub.switch() g = eventlet.spawn(dummyproc) eventlet.sleep(0) # let dummyproc run assert hub.greenlet.parent == eventlet.greenthread.getcurrent() self.assertRaises(KeyboardInterrupt, hub.greenlet.throw, KeyboardInterrupt()) # kill dummyproc, this schedules a timer to return execution to # this greenlet before throwing an exception in dummyproc. # it is from this timer that execution should be returned to this # greenlet, and not by propogating of the terminating greenlet. g.kill() with eventlet.Timeout(0.5, self.CustomException()): # we now switch to the hub, there should be no existing timers # that switch back to this greenlet and so this hub.switch() # call should block indefinately. self.assertRaises(self.CustomException, hub.switch) def test_parent(self): """ Checks that a terminating greenthread whose parent was a previous, now-defunct hub greenlet returns execution to the hub runloop and not the hub greenlet's parent. 
""" hub = hubs.get_hub() def dummyproc(): pass g = eventlet.spawn(dummyproc) assert hub.greenlet.parent == eventlet.greenthread.getcurrent() self.assertRaises(KeyboardInterrupt, hub.greenlet.throw, KeyboardInterrupt()) assert not g.dead # check dummyproc hasn't completed with eventlet.Timeout(0.5, self.CustomException()): # we now switch to the hub which will allow # completion of dummyproc. # this should return execution back to the runloop and not # this greenlet so that hub.switch() would block indefinately. self.assertRaises(self.CustomException, hub.switch) assert g.dead # sanity check that dummyproc has completed class Foo(object): pass class TestDefaultHub(ProcessBase): def test_kqueue_unsupported(self): # https://github.com/eventlet/eventlet/issues/38 # get_hub on windows broken by kqueue module_source = r''' # Simulate absence of kqueue even on platforms that support it. import select try: del select.kqueue except AttributeError: pass import __builtin__ original_import = __builtin__.__import__ def fail_import(name, *args, **kwargs): if 'epoll' in name: raise ImportError('disabled for test') if 'kqueue' in name: print('kqueue tried') return original_import(name, *args, **kwargs) __builtin__.__import__ = fail_import import eventlet.hubs eventlet.hubs.get_default_hub() print('ok') ''' self.write_to_tempfile('newmod', module_source) output, _ = self.launch_subprocess('newmod.py') self.assertEqual(output, 'kqueue tried\nok\n') if __name__ == '__main__': main() eventlet-0.13.0/tests/greenthread_test.py0000644000175000017500000001106612164577340021341 0ustar temototemoto00000000000000from tests import LimitedTestCase from eventlet import greenthread from eventlet.support import greenlets as greenlet _g_results = [] def passthru(*args, **kw): _g_results.append((args, kw)) return args, kw def waiter(a): greenthread.sleep(0.1) return a class Asserts(object): def assert_dead(self, gt): if hasattr(gt, 'wait'): self.assertRaises(greenlet.GreenletExit, gt.wait) 
self.assert_(gt.dead) self.assert_(not gt) class Spawn(LimitedTestCase, Asserts): def tearDown(self): global _g_results super(Spawn, self).tearDown() _g_results = [] def test_simple(self): gt = greenthread.spawn(passthru, 1, b=2) self.assertEquals(gt.wait(), ((1,),{'b':2})) self.assertEquals(_g_results, [((1,),{'b':2})]) def test_n(self): gt = greenthread.spawn_n(passthru, 2, b=3) self.assert_(not gt.dead) greenthread.sleep(0) self.assert_(gt.dead) self.assertEquals(_g_results, [((2,),{'b':3})]) def test_kill(self): gt = greenthread.spawn(passthru, 6) greenthread.kill(gt) self.assert_dead(gt) greenthread.sleep(0.001) self.assertEquals(_g_results, []) greenthread.kill(gt) self.assert_dead(gt) def test_kill_meth(self): gt = greenthread.spawn(passthru, 6) gt.kill() self.assert_dead(gt) greenthread.sleep(0.001) self.assertEquals(_g_results, []) gt.kill() self.assert_dead(gt) def test_kill_n(self): gt = greenthread.spawn_n(passthru, 7) greenthread.kill(gt) self.assert_dead(gt) greenthread.sleep(0.001) self.assertEquals(_g_results, []) greenthread.kill(gt) self.assert_dead(gt) def test_link(self): results = [] def link_func(g, *a, **kw): results.append(g) results.append(a) results.append(kw) gt = greenthread.spawn(passthru, 5) gt.link(link_func, 4, b=5) self.assertEquals(gt.wait(), ((5,), {})) self.assertEquals(results, [gt, (4,), {'b':5}]) def test_link_after_exited(self): results = [] def link_func(g, *a, **kw): results.append(g) results.append(a) results.append(kw) gt = greenthread.spawn(passthru, 5) self.assertEquals(gt.wait(), ((5,), {})) gt.link(link_func, 4, b=5) self.assertEquals(results, [gt, (4,), {'b':5}]) def test_link_relinks(self): # test that linking in a linked func doesn't cause infinite recursion. 
called = [] def link_func(g): g.link(link_func_pass) def link_func_pass(g): called.append(True) gt = greenthread.spawn(passthru) gt.link(link_func) gt.wait() self.assertEquals(called, [True]) class SpawnAfter(LimitedTestCase, Asserts): def test_basic(self): gt = greenthread.spawn_after(0.1, passthru, 20) self.assertEquals(gt.wait(), ((20,), {})) def test_cancel(self): gt = greenthread.spawn_after(0.1, passthru, 21) gt.cancel() self.assert_dead(gt) def test_cancel_already_started(self): gt = greenthread.spawn_after(0, waiter, 22) greenthread.sleep(0) gt.cancel() self.assertEquals(gt.wait(), 22) def test_kill_already_started(self): gt = greenthread.spawn_after(0, waiter, 22) greenthread.sleep(0) gt.kill() self.assert_dead(gt) class SpawnAfterLocal(LimitedTestCase, Asserts): def setUp(self): super(SpawnAfterLocal, self).setUp() self.lst = [1] def test_timer_fired(self): def func(): greenthread.spawn_after_local(0.1, self.lst.pop) greenthread.sleep(0.2) greenthread.spawn(func) assert self.lst == [1], self.lst greenthread.sleep(0.3) assert self.lst == [], self.lst def test_timer_cancelled_upon_greenlet_exit(self): def func(): greenthread.spawn_after_local(0.1, self.lst.pop) greenthread.spawn(func) assert self.lst == [1], self.lst greenthread.sleep(0.2) assert self.lst == [1], self.lst def test_spawn_is_not_cancelled(self): def func(): greenthread.spawn(self.lst.pop) # exiting immediately, but self.lst.pop must be called greenthread.spawn(func) greenthread.sleep(0.1) assert self.lst == [], self.lst eventlet-0.13.0/tests/convenience_test.py0000644000175000017500000001131412164577340021341 0ustar temototemoto00000000000000import os import eventlet from eventlet import event from eventlet.green import socket from tests import LimitedTestCase, s2b, skip_if_no_ssl certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt') private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key') class TestServe(LimitedTestCase): def setUp(self):
super(TestServe, self).setUp() from eventlet import debug debug.hub_exceptions(False) def tearDown(self): super(TestServe, self).tearDown() from eventlet import debug debug.hub_exceptions(True) def test_exiting_server(self): # tests that the server closes the client sock on handle() exit def closer(sock,addr): pass l = eventlet.listen(('localhost', 0)) gt = eventlet.spawn(eventlet.serve, l, closer) client = eventlet.connect(('localhost', l.getsockname()[1])) client.sendall(s2b('a')) self.assertFalse(client.recv(100)) gt.kill() def test_excepting_server(self): # tests that the server closes the client sock on handle() exception def crasher(sock,addr): sock.recv(1024) 0//0 l = eventlet.listen(('localhost', 0)) gt = eventlet.spawn(eventlet.serve, l, crasher) client = eventlet.connect(('localhost', l.getsockname()[1])) client.sendall(s2b('a')) self.assertRaises(ZeroDivisionError, gt.wait) self.assertFalse(client.recv(100)) def test_excepting_server_already_closed(self): # same as above but with explicit close before crash def crasher(sock,addr): sock.recv(1024) sock.close() 0//0 l = eventlet.listen(('localhost', 0)) gt = eventlet.spawn(eventlet.serve, l, crasher) client = eventlet.connect(('localhost', l.getsockname()[1])) client.sendall(s2b('a')) self.assertRaises(ZeroDivisionError, gt.wait) self.assertFalse(client.recv(100)) def test_called_for_each_connection(self): hits = [0] def counter(sock, addr): hits[0]+=1 l = eventlet.listen(('localhost', 0)) gt = eventlet.spawn(eventlet.serve, l, counter) for i in xrange(100): client = eventlet.connect(('localhost', l.getsockname()[1])) self.assertFalse(client.recv(100)) gt.kill() self.assertEqual(100, hits[0]) def test_blocking(self): l = eventlet.listen(('localhost', 0)) x = eventlet.with_timeout(0.01, eventlet.serve, l, lambda c,a: None, timeout_value="timeout") self.assertEqual(x, "timeout") def test_raising_stopserve(self): def stopit(conn, addr): raise eventlet.StopServe() l = eventlet.listen(('localhost', 0)) #
connect to trigger a call to stopit gt = eventlet.spawn(eventlet.connect, ('localhost', l.getsockname()[1])) eventlet.serve(l, stopit) gt.wait() def test_concurrency(self): evt = event.Event() def waiter(sock, addr): sock.sendall(s2b('hi')) evt.wait() l = eventlet.listen(('localhost', 0)) gt = eventlet.spawn(eventlet.serve, l, waiter, 5) def test_client(): c = eventlet.connect(('localhost', l.getsockname()[1])) # verify the client is connected by getting data self.assertEquals(s2b('hi'), c.recv(2)) return c clients = [test_client() for i in xrange(5)] # very next client should not get anything x = eventlet.with_timeout(0.01, test_client, timeout_value="timed out") self.assertEquals(x, "timed out") @skip_if_no_ssl def test_wrap_ssl(self): server = eventlet.wrap_ssl(eventlet.listen(('localhost', 0)), certfile=certificate_file, keyfile=private_key_file, server_side=True) port = server.getsockname()[1] def handle(sock,addr): sock.sendall(sock.recv(1024)) raise eventlet.StopServe() eventlet.spawn(eventlet.serve, server, handle) client = eventlet.wrap_ssl(eventlet.connect(('localhost', port))) client.sendall("echo") self.assertEquals("echo", client.recv(1024)) def test_socket_reuse(self): lsock1 = eventlet.listen(('localhost',0)) port = lsock1.getsockname()[1] def same_socket(): return eventlet.listen(('localhost',port)) self.assertRaises(socket.error,same_socket) lsock1.close() assert same_socket() eventlet-0.13.0/tests/__init__.py0000644000175000017500000002264412164577340017555 0ustar temototemoto00000000000000# package is named tests, not test, so it won't be confused with test in stdlib import errno import os import resource import signal import unittest import warnings import eventlet from eventlet import debug, hubs # convenience for importers main = unittest.main def s2b(s): """portable way to convert string to bytes. In 3.x socket.send and recv require bytes""" return s.encode() def skipped(func): """ Decorator that marks a function as skipped. 
Uses nose's SkipTest exception if installed. Without nose, this will count skipped tests as passing tests.""" try: from nose.plugins.skip import SkipTest def skipme(*a, **k): raise SkipTest() skipme.__name__ = func.__name__ return skipme except ImportError: # no nose, we'll just skip the test ourselves def skipme(*a, **k): print "Skipping", func.__name__ skipme.__name__ = func.__name__ return skipme def skip_if(condition): """ Decorator that skips a test if the *condition* evaluates True. *condition* can be a boolean or a callable that accepts one argument. The callable will be called with the function to be decorated, and should return True to skip the test. """ def skipped_wrapper(func): def wrapped(*a, **kw): if isinstance(condition, bool): result = condition else: result = condition(func) if result: return skipped(func)(*a, **kw) else: return func(*a, **kw) wrapped.__name__ = func.__name__ return wrapped return skipped_wrapper def skip_unless(condition): """ Decorator that skips a test if the *condition* does not return True. *condition* can be a boolean or a callable that accepts one argument. The callable will be called with the function to be decorated, and should return True if the condition is satisfied. 
""" def skipped_wrapper(func): def wrapped(*a, **kw): if isinstance(condition, bool): result = condition else: result = condition(func) if not result: return skipped(func)(*a, **kw) else: return func(*a, **kw) wrapped.__name__ = func.__name__ return wrapped return skipped_wrapper def requires_twisted(func): """ Decorator that skips a test if Twisted is not present.""" def requirement(_f): from eventlet.hubs import get_hub try: return 'Twisted' in type(get_hub()).__name__ except Exception: return False return skip_unless(requirement)(func) def using_pyevent(_f): from eventlet.hubs import get_hub return 'pyevent' in type(get_hub()).__module__ def skip_with_pyevent(func): """ Decorator that skips a test if we're using the pyevent hub.""" return skip_if(using_pyevent)(func) def skip_on_windows(func): """ Decorator that skips a test on Windows.""" import sys return skip_if(sys.platform.startswith('win'))(func) def skip_if_no_itimer(func): """ Decorator that skips a test if the `itimer` module isn't found """ has_itimer = False try: import itimer has_itimer = True except ImportError: pass return skip_unless(has_itimer)(func) def skip_if_no_ssl(func): """ Decorator that skips a test if SSL is not available.""" try: import eventlet.green.ssl return func except ImportError: try: import eventlet.green.OpenSSL return func except ImportError: return skipped(func) class TestIsTakingTooLong(Exception): """ Custom exception class to be raised when a test's runtime exceeds a limit. """ pass class LimitedTestCase(unittest.TestCase): """ Unittest subclass that adds a timeout to all tests. Subclasses must be sure to call the LimitedTestCase setUp and tearDown methods. 
The default timeout is 1 second, change it by setting TEST_TIMEOUT to the desired quantity.""" TEST_TIMEOUT = 1 def setUp(self): self.previous_alarm = None self.timer = eventlet.Timeout(self.TEST_TIMEOUT, TestIsTakingTooLong(self.TEST_TIMEOUT)) def reset_timeout(self, new_timeout): """Changes the timeout duration; only has effect during one test. `new_timeout` can be int or float. """ self.timer.cancel() self.timer = eventlet.Timeout(new_timeout, TestIsTakingTooLong(new_timeout)) def set_alarm(self, new_timeout): """Call this in the beginning of your test if you expect busy loops. Only has effect during one test. `new_timeout` must be int. """ def sig_alarm_handler(sig, frame): # Could arm previous alarm but test is failed anyway # seems to be no point in restoring previous state. raise TestIsTakingTooLong(new_timeout) self.previous_alarm = ( signal.signal(signal.SIGALRM, sig_alarm_handler), signal.alarm(new_timeout), ) def tearDown(self): self.timer.cancel() if self.previous_alarm: signal.signal(signal.SIGALRM, self.previous_alarm[0]) signal.alarm(self.previous_alarm[1]) try: hub = hubs.get_hub() num_readers = len(hub.get_readers()) num_writers = len(hub.get_writers()) assert num_readers == num_writers == 0 except AssertionError: print "ERROR: Hub not empty" print debug.format_hub_timers() print debug.format_hub_listeners() def assert_less_than(self, a,b,msg=None): if msg: self.assert_(a 0, counter[0]) gt.kill() def assert_cursor_works(self, cursor): cursor.execute("select 1") rows = cursor.fetchall() self.assertEqual(rows, ((1L,),)) self.assert_cursor_yields(cursor) def assert_connection_works(self, conn): curs = conn.cursor() self.assert_cursor_works(curs) def test_module_attributes(self): import MySQLdb as orig for key in dir(orig): if key not in ('__author__', '__path__', '__revision__', '__version__', '__loader__'): self.assert_(hasattr(MySQLdb, key), "%s %s" % (key, getattr(orig, key))) def test_connecting(self): self.assert_(self.connection is not None) def 
test_connecting_annoyingly(self): self.assert_connection_works(MySQLdb.Connect(**self._auth)) self.assert_connection_works(MySQLdb.Connection(**self._auth)) self.assert_connection_works(MySQLdb.connections.Connection(**self._auth)) def test_create_cursor(self): cursor = self.connection.cursor() cursor.close() def test_run_query(self): cursor = self.connection.cursor() self.assert_cursor_works(cursor) cursor.close() def test_run_bad_query(self): cursor = self.connection.cursor() try: cursor.execute("garbage blah blah") self.assert_(False) except AssertionError: raise except Exception: pass cursor.close() def fill_up_table(self, conn): curs = conn.cursor() for i in range(1000): curs.execute('insert into test_table (value_int) values (%s)' % i) conn.commit() def test_yields(self): conn = self.connection self.set_up_dummy_table(conn) self.fill_up_table(conn) curs = conn.cursor() results = [] SHORT_QUERY = "select * from test_table" evt = event.Event() def a_query(): self.assert_cursor_works(curs) curs.execute(SHORT_QUERY) results.append(2) evt.send() eventlet.spawn(a_query) results.append(1) self.assertEqual([1], results) evt.wait() self.assertEqual([1, 2], results) def test_visibility_from_other_connections(self): conn = MySQLdb.connect(**self._auth) conn2 = MySQLdb.connect(**self._auth) curs = conn.cursor() try: curs2 = conn2.cursor() curs2.execute("insert into gargleblatz (a) values (%s)" % (314159)) self.assertEqual(curs2.rowcount, 1) conn2.commit() selection_query = "select * from gargleblatz" curs2.execute(selection_query) self.assertEqual(curs2.rowcount, 1) del curs2, conn2 # create a new connection, it should see the addition conn3 = MySQLdb.connect(**self._auth) curs3 = conn3.cursor() curs3.execute(selection_query) self.assertEqual(curs3.rowcount, 1) # now, does the already-open connection see it? 
curs.execute(selection_query) self.assertEqual(curs.rowcount, 1) del curs3, conn3 finally: # clean up my litter curs.execute("delete from gargleblatz where a=314159") conn.commit() from tests import patcher_test class MonkeyPatchTester(patcher_test.ProcessBase): @skip_unless(mysql_requirement) def test_monkey_patching(self): output, lines = self.run_script(""" from eventlet import patcher import MySQLdb as m from eventlet.green import MySQLdb as gm patcher.monkey_patch(all=True, MySQLdb=True) print "mysqltest", ",".join(sorted(patcher.already_patched.keys())) print "connect", m.connect == gm.connect """) self.assertEqual(len(lines), 3) self.assertEqual(lines[0].replace("psycopg,", ""), 'mysqltest MySQLdb,os,select,socket,thread,time') self.assertEqual(lines[1], "connect True") eventlet-0.13.0/tests/stdlib/0000755000175000017500000000000012164600754016711 5ustar temototemoto00000000000000eventlet-0.13.0/tests/stdlib/test_urllib2_localnet.py0000644000175000017500000000063212164577340023563 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import BaseHTTPServer from eventlet.green import threading from eventlet.green import socket from eventlet.green import urllib2 patcher.inject('test.test_urllib2_localnet', globals(), ('BaseHTTPServer', BaseHTTPServer), ('threading', threading), ('socket', socket), ('urllib2', urllib2)) if __name__ == "__main__": test_main()eventlet-0.13.0/tests/stdlib/test_socket.py0000644000175000017500000000077512164577340021627 0ustar temototemoto00000000000000#!/usr/bin/env python from eventlet import patcher from eventlet.green import socket from eventlet.green import select from eventlet.green import time from eventlet.green import thread from eventlet.green import threading patcher.inject('test.test_socket', globals(), ('socket', socket), ('select', select), ('time', time), ('thread', thread), ('threading', threading)) # TODO: fix TCPTimeoutTest.testInterruptedTimeout = lambda *a: None if __name__ == 
"__main__": test_main() eventlet-0.13.0/tests/stdlib/all_monkey.py0000644000175000017500000000134012164577340021417 0ustar temototemoto00000000000000import eventlet eventlet.sleep(0) from eventlet import patcher patcher.monkey_patch() def assimilate_real(name): print "Assimilating", name try: modobj = __import__('test.' + name, globals(), locals(), ['test_main']) except ImportError: print "Not importing %s, it doesn't exist in this installation/version of Python" % name return else: method_name = name + "_test_main" try: globals()[method_name] = modobj.test_main modobj.test_main.__name__ = name + '.test_main' except AttributeError: print "No test_main for %s, assuming it tests on import" % name import all_modules for m in all_modules.get_modules(): assimilate_real(m) eventlet-0.13.0/tests/stdlib/test_select.py0000644000175000017500000000035712164577340021612 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import select patcher.inject('test.test_select', globals(), ('select', select)) if __name__ == "__main__": try: test_main() except NameError: pass # 2.5eventlet-0.13.0/tests/stdlib/test_httpservers.py0000644000175000017500000000106512164577340022721 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import BaseHTTPServer from eventlet.green import SimpleHTTPServer from eventlet.green import CGIHTTPServer from eventlet.green import urllib from eventlet.green import httplib from eventlet.green import threading patcher.inject('test.test_httpservers', globals(), ('BaseHTTPServer', BaseHTTPServer), ('SimpleHTTPServer', SimpleHTTPServer), ('CGIHTTPServer', CGIHTTPServer), ('urllib', urllib), ('httplib', httplib), ('threading', threading)) if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/test_os.py0000644000175000017500000000024612164577340020751 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import os patcher.inject('test.test_os', globals(), ('os', 
os)) if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/test_subprocess.py0000644000175000017500000000037212164577340022520 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import subprocess from eventlet.green import time patcher.inject('test.test_subprocess', globals(), ('subprocess', subprocess), ('time', time)) if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/test_threading_local.py0000644000175000017500000000062512164577340023450 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import thread from eventlet.green import threading from eventlet.green import time # hub requires initialization before test can run from eventlet import hubs hubs.get_hub() patcher.inject('test.test_threading_local', globals(), ('time', time), ('thread', thread), ('threading', threading)) if __name__ == '__main__': test_main()eventlet-0.13.0/tests/stdlib/test_urllib2.py0000644000175000017500000000123512164577340021702 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import socket from eventlet.green import urllib2 patcher.inject('test.test_urllib2', globals(), ('socket', socket), ('urllib2', urllib2)) HandlerTests.test_file = patcher.patch_function(HandlerTests.test_file, ('socket', socket)) HandlerTests.test_cookie_redirect = patcher.patch_function(HandlerTests.test_cookie_redirect, ('urllib2', urllib2)) try: OpenerDirectorTests.test_badly_named_methods = patcher.patch_function(OpenerDirectorTests.test_badly_named_methods, ('urllib2', urllib2)) except AttributeError: pass # 2.4 doesn't have this test method if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/test_asyncore.py0000644000175000017500000000366112164577340022157 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import asyncore from eventlet.green import select from eventlet.green import socket from eventlet.green import threading from 
eventlet.green import time patcher.inject("test.test_asyncore", globals()) def new_closeall_check(self, usedefault): # Check that close_all() closes everything in a given map l = [] testmap = {} for i in range(10): c = dummychannel() l.append(c) self.assertEqual(c.socket.closed, False) testmap[i] = c if usedefault: # the only change we make is to not assign to asyncore.socket_map # because doing so fails to assign to the real asyncore's socket_map # and thus the test fails socketmap = asyncore.socket_map.copy() try: asyncore.socket_map.clear() asyncore.socket_map.update(testmap) asyncore.close_all() finally: testmap = asyncore.socket_map.copy() asyncore.socket_map.clear() asyncore.socket_map.update(socketmap) else: asyncore.close_all(testmap) self.assertEqual(len(testmap), 0) for c in l: self.assertEqual(c.socket.closed, True) HelperFunctionTests.closeall_check = new_closeall_check try: # Eventlet's select() emulation doesn't support the POLLPRI flag, # which this test relies on. Therefore, nuke it! 
BaseTestAPI.test_handle_expt = lambda *a, **kw: None except NameError: pass try: # temporarily disabling these tests in the python2.7/pyevent configuration from tests import using_pyevent import sys if using_pyevent(None) and sys.version_info >= (2, 7): TestAPI_UseSelect.test_handle_accept = lambda *a, **kw: None TestAPI_UseSelect.test_handle_close = lambda *a, **kw: None TestAPI_UseSelect.test_handle_read = lambda *a, **kw: None except NameError: pass if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/test_socketserver.py0000644000175000017500000000165712164577340023056 0ustar temototemoto00000000000000#!/usr/bin/env python from eventlet import patcher from eventlet.green import SocketServer from eventlet.green import socket from eventlet.green import select from eventlet.green import time from eventlet.green import threading # to get past the silly 'requires' check from test import test_support test_support.use_resources = ['network'] patcher.inject('test.test_socketserver', globals(), ('SocketServer', SocketServer), ('socket', socket), ('select', select), ('time', time), ('threading', threading)) # only a problem with pyevent from eventlet import tests if tests.using_pyevent(): try: SocketServerTest.test_ForkingUDPServer = lambda *a, **kw: None SocketServerTest.test_ForkingTCPServer = lambda *a, **kw: None SocketServerTest.test_ForkingUnixStreamServer = lambda *a, **kw: None except (NameError, AttributeError): pass if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/all_modules.py0000644000175000017500000000206712164577340021574 0ustar temototemoto00000000000000def get_modules(): test_modules = [ 'test_select', 'test_SimpleHTTPServer', 'test_asynchat', 'test_asyncore', 'test_ftplib', 'test_httplib', 'test_os', 'test_queue', 'test_socket_ssl', 'test_socketserver', # 'test_subprocess', 'test_thread', 'test_threading', 'test_threading_local', 'test_urllib', 'test_urllib2_localnet'] network_modules = [ 'test_httpservers', 
'test_socket', 'test_ssl', 'test_timeout', 'test_urllib2'] # quick and dirty way of testing whether we can access # remote hosts; any tests that try internet connections # will fail if we cannot import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.settimeout(0.5) s.connect(('eventlet.net', 80)) s.close() test_modules = test_modules + network_modules except socket.error, e: print "Skipping network tests" return test_modules eventlet-0.13.0/tests/stdlib/all.py0000644000175000017500000000340712164577340020043 0ustar temototemoto00000000000000""" Convenience module for running standard library tests with nose. The standard tests are not especially homogeneous, but they mostly expose a test_main method that does the work of selecting which tests to run based on what is supported by the platform. On its own, Nose would run all possible tests and many would fail; therefore we collect all of the test_main methods here in one module and Nose can run it. Hopefully in the future the standard tests get rewritten to be more nosey. Many of these tests make connections to external servers, and all.py tries to skip these tests rather than failing them, so you can get some work done on a plane. 
""" from eventlet import debug debug.hub_prevent_multiple_readers(False) def restart_hub(): from eventlet import hubs hub = hubs.get_hub() hub_shortname = hub.__module__.split('.')[-1] # don't restart the pyevent hub; it's not necessary if hub_shortname != 'pyevent': hub.abort() hubs.use_hub(hub_shortname) def assimilate_patched(name): try: modobj = __import__(name, globals(), locals(), ['test_main']) restart_hub() except ImportError: print "Not importing %s, it doesn't exist in this installation/version of Python" % name return else: method_name = name + "_test_main" try: test_method = modobj.test_main def test_main(): restart_hub() test_method() restart_hub() globals()[method_name] = test_main test_main.__name__ = name + '.test_main' except AttributeError: print "No test_main for %s, assuming it tests on import" % name import all_modules for m in all_modules.get_modules(): assimilate_patched(m) eventlet-0.13.0/tests/stdlib/test_SimpleHTTPServer.py0000644000175000017500000000034112164577340023444 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import SimpleHTTPServer patcher.inject('test.test_SimpleHTTPServer', globals(), ('SimpleHTTPServer', SimpleHTTPServer)) if __name__ == "__main__": test_main()eventlet-0.13.0/tests/stdlib/test_thread__boundedsem.py0000644000175000017500000000105712164577340024144 0ustar temototemoto00000000000000"""Test that BoundedSemaphore with a very high bound is as good as unbounded one""" from eventlet import coros from eventlet.green import thread def allocate_lock(): return coros.semaphore(1, 9999) original_allocate_lock = thread.allocate_lock thread.allocate_lock = allocate_lock original_LockType = thread.LockType thread.LockType = coros.CappedSemaphore try: import os.path execfile(os.path.join(os.path.dirname(__file__), 'test_thread.py')) finally: thread.allocate_lock = original_allocate_lock thread.LockType = original_LockType 
eventlet-0.13.0/tests/stdlib/test_socket_ssl.py0000644000175000017500000000157312164577340022505 0ustar temototemoto00000000000000#!/usr/bin/env python from eventlet import patcher from eventlet.green import socket # enable network resource import test.test_support i_r_e = test.test_support.is_resource_enabled def is_resource_enabled(resource): if resource == 'network': return True else: return i_r_e(resource) test.test_support.is_resource_enabled = is_resource_enabled try: socket.ssl socket.sslerror except AttributeError: raise ImportError("Socket module doesn't support ssl") patcher.inject('test.test_socket_ssl', globals()) test_basic = patcher.patch_function(test_basic) test_rude_shutdown = patcher.patch_function(test_rude_shutdown) def test_main(): if not hasattr(socket, "ssl"): raise test_support.TestSkipped("socket module has no ssl support") test_rude_shutdown() test_basic() test_timeout() if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/test_ftplib.py0000644000175000017500000000070312164577340021606 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import asyncore from eventlet.green import ftplib from eventlet.green import threading from eventlet.green import socket patcher.inject('test.test_ftplib', globals()) # this test only fails on python2.7/pyevent/--with-xunit; screw that try: TestTLS_FTPClass.test_data_connection = lambda *a, **kw: None except (AttributeError, NameError): pass if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/test_ssl.py0000644000175000017500000000335412164577340021134 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import asyncore from eventlet.green import BaseHTTPServer from eventlet.green import select from eventlet.green import socket from eventlet.green import SocketServer from eventlet.green import SimpleHTTPServer from eventlet.green import ssl from eventlet.green import threading from eventlet.green import urllib # stupid 
test_support messing with our mojo import test.test_support i_r_e = test.test_support.is_resource_enabled def is_resource_enabled(resource): if resource == 'network': return True else: return i_r_e(resource) test.test_support.is_resource_enabled = is_resource_enabled patcher.inject('test.test_ssl', globals(), ('asyncore', asyncore), ('BaseHTTPServer', BaseHTTPServer), ('select', select), ('socket', socket), ('SocketServer', SocketServer), ('ssl', ssl), ('threading', threading), ('urllib', urllib)) # TODO svn.python.org stopped serving up the cert that these tests expect; # presumably they've updated svn trunk but the tests in released versions will # probably break forever. This is why you don't write tests that connect to # external servers. NetworkedTests.testConnect = lambda s: None NetworkedTests.testFetchServerCert = lambda s: None NetworkedTests.test_algorithms = lambda s: None # these don't pass because nonblocking ssl sockets don't report # when the socket is closed uncleanly, per the docstring on # eventlet.green.GreenSSLSocket # *TODO: fix and restore these tests ThreadedTests.testProtocolSSL2 = lambda s: None ThreadedTests.testProtocolSSL3 = lambda s: None ThreadedTests.testProtocolTLS1 = lambda s: None ThreadedTests.testSocketServer = lambda s: None if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/test_httplib.py0000644000175000017500000000036712164577340022002 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import httplib from eventlet.green import socket patcher.inject('test.test_httplib', globals(), ('httplib', httplib), ('socket', socket)) if __name__ == "__main__": test_main()eventlet-0.13.0/tests/stdlib/test_queue.py0000644000175000017500000000045112164577340021452 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import Queue from eventlet.green import threading from eventlet.green import time patcher.inject('test.test_queue', globals(), ('Queue', Queue), 
('threading', threading), ('time', time)) if __name__ == "__main__": test_main() eventlet-0.13.0/tests/stdlib/test_thread.py0000644000175000017500000000075512164577340021604 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import thread from eventlet.green import time # necessary to initialize the hub before running on 2.5 from eventlet import hubs hubs.get_hub() patcher.inject('test.test_thread', globals()) try: # this is a new test in 2.7 that we don't support yet TestForkInThread.test_forkinthread = lambda *a, **kw: None except NameError: pass if __name__ == "__main__": try: test_main() except NameError: pass # 2.5 eventlet-0.13.0/tests/stdlib/test_timeout.py0000644000175000017500000000053312164577340022015 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import socket from eventlet.green import time patcher.inject('test.test_timeout', globals(), ('socket', socket), ('time', time)) # to get past the silly 'requires' check from test import test_support test_support.use_resources = ['network'] if __name__ == "__main__": test_main()eventlet-0.13.0/tests/stdlib/test_asynchat.py0000644000175000017500000000075412164577340022146 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import asyncore from eventlet.green import asynchat from eventlet.green import socket from eventlet.green import thread from eventlet.green import threading from eventlet.green import time patcher.inject("test.test_asynchat", globals(), ('asyncore', asyncore), ('asynchat', asynchat), ('socket', socket), ('thread', thread), ('threading', threading), ('time', time)) if __name__ == "__main__": test_main()eventlet-0.13.0/tests/stdlib/test_urllib.py0000644000175000017500000000036612164577340021624 0ustar temototemoto00000000000000from eventlet import patcher from eventlet.green import httplib from eventlet.green import urllib patcher.inject('test.test_urllib', globals(), ('httplib', httplib), ('urllib', 
    urllib))

if __name__ == "__main__":
    test_main()

eventlet-0.13.0/tests/stdlib/test_threading.py0000644000175000017500000000320612164577340022274 0ustar temototemoto00000000000000
from eventlet import patcher
from eventlet.green import threading
from eventlet.green import thread
from eventlet.green import time

# *NOTE: doesn't test as much of the threading api as we'd like because many of
# the tests are launched via subprocess and therefore don't get patched
patcher.inject('test.test_threading', globals())

# "PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently)
# exposed at the Python level. This test relies on ctypes to get at it."
# Therefore it's also disabled when testing eventlet, as it's not emulated.
try:
    ThreadTests.test_PyThreadState_SetAsyncExc = lambda s: None
except (AttributeError, NameError):
    pass

# disabling this test because it fails when run in Hudson even though it always
# succeeds when run manually
try:
    ThreadJoinOnShutdown.test_3_join_in_forked_from_thread = lambda *a, **kw: None
except (AttributeError, NameError):
    pass

# disabling this test because it relies on dorking with the hidden
# innards of the threading module in a way that doesn't appear to work
# when patched
try:
    ThreadTests.test_limbo_cleanup = lambda *a, **kw: None
except (AttributeError, NameError):
    pass

# this test has nothing to do with Eventlet; if it fails it's not
# because of patching (which it does, grump grump)
try:
    ThreadTests.test_finalize_runnning_thread = lambda *a, **kw: None
    # it's misspelled in the stdlib, silencing this version as well because
    # inevitably someone will correct the error
    ThreadTests.test_finalize_running_thread = lambda *a, **kw: None
except (AttributeError, NameError):
    pass

if __name__ == "__main__":
    test_main()

eventlet-0.13.0/tests/parse_results.py0000644000175000017500000000712712164577340020700 0ustar temototemoto00000000000000
import sys
import os
import traceback
try:
    import sqlite3
except ImportError:
    import pysqlite2.dbapi2 as sqlite3
import re
import glob

def parse_stdout(s):
    argv = re.search('^===ARGV=(.*?)$', s, re.M).group(1)
    argv = argv.split()
    testname = argv[-1]
    del argv[-1]
    hub = None
    reactor = None
    while argv:
        if argv[0] == '--hub':
            hub = argv[1]
            del argv[0]
            del argv[0]
        elif argv[0] == '--reactor':
            reactor = argv[1]
            del argv[0]
            del argv[0]
        else:
            del argv[0]
    if reactor is not None:
        hub += '/%s' % reactor
    return testname, hub

unittest_delim = '----------------------------------------------------------------------'

def parse_unittest_output(s):
    s = s[s.rindex(unittest_delim) + len(unittest_delim):]
    num = int(re.search('^Ran (\d+) test.*?$', s, re.M).group(1))
    ok = re.search('^OK$', s, re.M)
    error, fail, timeout = 0, 0, 0
    failed_match = re.search(
        r'^FAILED \((?:failures=(?P<f>\d+))?,? ?(?:errors=(?P<e>\d+))?\)$',
        s, re.M)
    ok_match = re.search('^OK$', s, re.M)
    if failed_match:
        assert not ok_match, (ok_match, s)
        fail = failed_match.group('f')
        error = failed_match.group('e')
        fail = int(fail or '0')
        error = int(error or '0')
    else:
        assert ok_match, repr(s)
    timeout_match = re.search('^===disabled because of timeout: (\d+)$', s, re.M)
    if timeout_match:
        timeout = int(timeout_match.group(1))
    return num, error, fail, timeout

def main(db):
    c = sqlite3.connect(db)
    c.execute('''create table if not exists parsed_command_record
        (id integer not null unique,
         testname text,
         hub text,
         runs integer,
         errors integer,
         fails integer,
         timeouts integer,
         error_names text,
         fail_names text,
         timeout_names text)''')
    c.commit()

    parse_error = 0

    SQL = ('select command_record.id, command, stdout, exitcode from command_record '
           'where not exists (select * from parsed_command_record where '
           'parsed_command_record.id=command_record.id)')
    for row in c.execute(SQL).fetchall():
        id, command, stdout, exitcode = row
        try:
            testname, hub = parse_stdout(stdout)
            if unittest_delim in stdout:
                runs, errors, fails, timeouts = parse_unittest_output(stdout)
            else:
                if exitcode == 0:
                    runs, errors, fails, timeouts = 1, 0, 0, 0
                if exitcode == 7:
                    runs, errors, fails, timeouts = 0, 0, 0, 1
                elif exitcode:
                    runs, errors, fails, timeouts = 1, 1, 0, 0
        except Exception:
            parse_error += 1
            sys.stderr.write('Failed to parse id=%s\n' % id)
            print repr(stdout)
            traceback.print_exc()
        else:
            print id, hub, testname, runs, errors, fails, timeouts
            c.execute('insert into parsed_command_record '
                      '(id, testname, hub, runs, errors, fails, timeouts) '
                      'values (?, ?, ?, ?, ?, ?, ?)',
                      (id, testname, hub, runs, errors, fails, timeouts))
    c.commit()

if __name__ == '__main__':
    if not sys.argv[1:]:
        latest_db = sorted(glob.glob('results.*.db'),
                           key=lambda f: os.stat(f).st_mtime)[-1]
        print latest_db
        sys.argv.append(latest_db)
    for db in sys.argv[1:]:
        main(db)
    execfile('generate_report.py')

eventlet-0.13.0/tests/test__event.py0000644000175000017500000000221412164577340020324 0ustar temototemoto00000000000000
import unittest
from eventlet.event import Event
from eventlet.api import spawn, sleep, with_timeout
import eventlet
from tests import LimitedTestCase

DELAY = 0.01

class TestEvent(LimitedTestCase):
    def test_send_exc(self):
        log = []
        e = Event()

        def waiter():
            try:
                result = e.wait()
                log.append(('received', result))
            except Exception, ex:
                log.append(('catched', ex))
        spawn(waiter)
        sleep(0)  # let waiter to block on e.wait()
        obj = Exception()
        e.send(exc=obj)
        sleep(0)
        sleep(0)
        assert log == [('catched', obj)], log

    def test_send(self):
        event1 = Event()
        event2 = Event()

        spawn(event1.send, 'hello event1')
        eventlet.Timeout(0, ValueError('interrupted'))
        try:
            result = event1.wait()
        except ValueError:
            X = object()
            result = with_timeout(DELAY, event2.wait, timeout_value=X)
            assert result is X, 'Nobody sent anything to event2 yet it received %r' % (result, )

if __name__ == '__main__':
    unittest.main()

eventlet-0.13.0/tests/mock.py0000644000175000017500000002034712164577340016745 0ustar temototemoto00000000000000
# mock.py
# Test tools for mocking and patching.
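The parse_unittest_output function above scrapes run/failure/error counts out of unittest's summary block with regular expressions. A minimal stand-alone sketch of the same extraction, against a made-up sample of that output format:

```python
import re

# Hypothetical tail of a unittest run, in the format parse_unittest_output expects.
sample = '''\
----------------------------------------------------------------------
Ran 3 tests in 0.002s

FAILED (failures=1, errors=2)
'''

num = int(re.search(r'^Ran (\d+) test.*?$', sample, re.M).group(1))
failed = re.search(
    r'^FAILED \((?:failures=(?P<f>\d+))?,? ?(?:errors=(?P<e>\d+))?\)$',
    sample, re.M)
# either named group may be absent, hence the "or '0'" fallback
fails = int(failed.group('f') or '0')
errors = int(failed.group('e') or '0')
# num == 3, fails == 1, errors == 2
```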
# Copyright (C) 2007-2009 Michael Foord
# E-mail: fuzzyman AT voidspace DOT org DOT uk

# mock 0.6.0
# http://www.voidspace.org.uk/python/mock/

# Released subject to the BSD License
# Please see http://www.voidspace.org.uk/python/license.shtml

# Scripts maintained at http://www.voidspace.org.uk/python/index.shtml
# Comments, suggestions and bug reports welcome.

__all__ = (
    'Mock',
    'patch',
    'patch_object',
    'sentinel',
    'DEFAULT'
)

__version__ = '0.6.0'

class SentinelObject(object):
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return '<SentinelObject "%s">' % self.name

class Sentinel(object):
    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        return self._sentinels.setdefault(name, SentinelObject(name))

sentinel = Sentinel()
DEFAULT = sentinel.DEFAULT

class OldStyleClass:
    pass
ClassType = type(OldStyleClass)

def _is_magic(name):
    return '__%s__' % name[2:-2] == name

def _copy(value):
    if type(value) in (dict, list, tuple, set):
        return type(value)(value)
    return value

class Mock(object):
    def __init__(self, spec=None, side_effect=None, return_value=DEFAULT,
                 name=None, parent=None, wraps=None):
        self._parent = parent
        self._name = name
        if spec is not None and not isinstance(spec, list):
            spec = [member for member in dir(spec) if not _is_magic(member)]
        self._methods = spec
        self._children = {}
        self._return_value = return_value
        self.side_effect = side_effect
        self._wraps = wraps
        self.reset_mock()

    def reset_mock(self):
        self.called = False
        self.call_args = None
        self.call_count = 0
        self.call_args_list = []
        self.method_calls = []
        for child in self._children.itervalues():
            child.reset_mock()
        if isinstance(self._return_value, Mock):
            self._return_value.reset_mock()

    def __get_return_value(self):
        if self._return_value is DEFAULT:
            self._return_value = Mock()
        return self._return_value

    def __set_return_value(self, value):
        self._return_value = value

    return_value = property(__get_return_value, __set_return_value)

    def __call__(self, *args, **kwargs):
        self.called = True
self.call_count += 1 self.call_args = (args, kwargs) self.call_args_list.append((args, kwargs)) parent = self._parent name = self._name while parent is not None: parent.method_calls.append((name, args, kwargs)) if parent._parent is None: break name = parent._name + '.' + name parent = parent._parent ret_val = DEFAULT if self.side_effect is not None: if (isinstance(self.side_effect, Exception) or isinstance(self.side_effect, (type, ClassType)) and issubclass(self.side_effect, Exception)): raise self.side_effect ret_val = self.side_effect(*args, **kwargs) if ret_val is DEFAULT: ret_val = self.return_value if self._wraps is not None and self._return_value is DEFAULT: return self._wraps(*args, **kwargs) if ret_val is DEFAULT: ret_val = self.return_value return ret_val def __getattr__(self, name): if self._methods is not None: if name not in self._methods: raise AttributeError("Mock object has no attribute '%s'" % name) elif _is_magic(name): raise AttributeError(name) if name not in self._children: wraps = None if self._wraps is not None: wraps = getattr(self._wraps, name) self._children[name] = Mock(parent=self, name=name, wraps=wraps) return self._children[name] def assert_called_with(self, *args, **kwargs): assert self.call_args == (args, kwargs), 'Expected: %s\nCalled with: %s' % ((args, kwargs), self.call_args) def _dot_lookup(thing, comp, import_path): try: return getattr(thing, comp) except AttributeError: __import__(import_path) return getattr(thing, comp) def _importer(target): components = target.split('.') import_path = components.pop(0) thing = __import__(import_path) for comp in components: import_path += ".%s" % comp thing = _dot_lookup(thing, comp, import_path) return thing class _patch(object): def __init__(self, target, attribute, new, spec, create): self.target = target self.attribute = attribute self.new = new self.spec = spec self.create = create self.has_local = False def __call__(self, func): if hasattr(func, 'patchings'): 
func.patchings.append(self) return func def patched(*args, **keywargs): # don't use a with here (backwards compatability with 2.5) extra_args = [] for patching in patched.patchings: arg = patching.__enter__() if patching.new is DEFAULT: extra_args.append(arg) args += tuple(extra_args) try: return func(*args, **keywargs) finally: for patching in getattr(patched, 'patchings', []): patching.__exit__() patched.patchings = [self] patched.__name__ = func.__name__ patched.compat_co_firstlineno = getattr(func, "compat_co_firstlineno", func.func_code.co_firstlineno) return patched def get_original(self): target = self.target name = self.attribute create = self.create original = DEFAULT if _has_local_attr(target, name): try: original = target.__dict__[name] except AttributeError: # for instances of classes with slots, they have no __dict__ original = getattr(target, name) elif not create and not hasattr(target, name): raise AttributeError("%s does not have the attribute %r" % (target, name)) return original def __enter__(self): new, spec, = self.new, self.spec original = self.get_original() if new is DEFAULT: # XXXX what if original is DEFAULT - shouldn't use it as a spec inherit = False if spec == True: # set spec to the object we are replacing spec = original if isinstance(spec, (type, ClassType)): inherit = True new = Mock(spec=spec) if inherit: new.return_value = Mock(spec=spec) self.temp_original = original setattr(self.target, self.attribute, new) return new def __exit__(self, *_): if self.temp_original is not DEFAULT: setattr(self.target, self.attribute, self.temp_original) else: delattr(self.target, self.attribute) del self.temp_original def patch_object(target, attribute, new=DEFAULT, spec=None, create=False): return _patch(target, attribute, new, spec, create) def patch(target, new=DEFAULT, spec=None, create=False): try: target, attribute = target.rsplit('.', 1) except (TypeError, ValueError): raise TypeError("Need a valid target to patch. 
You supplied: %r" % (target,)) target = _importer(target) return _patch(target, attribute, new, spec, create) def _has_local_attr(obj, name): try: return name in vars(obj) except TypeError: # objects without a __dict__ return hasattr(obj, name) eventlet-0.13.0/tests/test__twistedutil_protocol.py0000644000175000017500000001745012164577340023515 0ustar temototemoto00000000000000from tests import requires_twisted import unittest try: from twisted.internet import reactor from twisted.internet.error import ConnectionDone import eventlet.twistedutil.protocol as pr from eventlet.twistedutil.protocols.basic import LineOnlyReceiverTransport except ImportError: # stub out some of the twisted dependencies so it at least imports class dummy(object): pass pr = dummy() pr.UnbufferedTransport = None pr.GreenTransport = None pr.GreenClientCreator = lambda *a, **k: None class reactor(object): pass from eventlet import spawn, sleep, with_timeout, spawn_after from eventlet.coros import Event try: from eventlet.green import socket except SyntaxError: socket = None DELAY=0.01 if socket is not None: def setup_server_socket(self, delay=DELAY, port=0): s = socket.socket() s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind(('127.0.0.1', port)) port = s.getsockname()[1] s.listen(5) s.settimeout(delay*3) def serve(): conn, addr = s.accept() conn.settimeout(delay+1) try: hello = conn.makefile().readline()[:-2] except socket.timeout: return conn.sendall('you said %s. ' % hello) sleep(delay) conn.sendall('BYE') sleep(delay) #conn.close() spawn(serve) return port def setup_server_SpawnFactory(self, delay=DELAY, port=0): def handle(conn): port.stopListening() try: hello = conn.readline() except ConnectionDone: return conn.write('you said %s. 
' % hello) sleep(delay) conn.write('BYE') sleep(delay) conn.loseConnection() port = reactor.listenTCP(0, pr.SpawnFactory(handle, LineOnlyReceiverTransport)) return port.getHost().port class TestCase(unittest.TestCase): transportBufferSize = None @property def connector(self): return pr.GreenClientCreator(reactor, self.gtransportClass, self.transportBufferSize) @requires_twisted def setUp(self): port = self.setup_server() self.conn = self.connector.connectTCP('127.0.0.1', port) if self.transportBufferSize is not None: self.assertEqual(self.transportBufferSize, self.conn.transport.bufferSize) class TestUnbufferedTransport(TestCase): gtransportClass = pr.UnbufferedTransport setup_server = setup_server_SpawnFactory @requires_twisted def test_full_read(self): self.conn.write('hello\r\n') self.assertEqual(self.conn.read(), 'you said hello. BYE') self.assertEqual(self.conn.read(), '') self.assertEqual(self.conn.read(), '') @requires_twisted def test_iterator(self): self.conn.write('iterator\r\n') self.assertEqual('you said iterator. BYE', ''.join(self.conn)) class TestUnbufferedTransport_bufsize1(TestUnbufferedTransport): transportBufferSize = 1 setup_server = setup_server_SpawnFactory class TestGreenTransport(TestUnbufferedTransport): gtransportClass = pr.GreenTransport setup_server = setup_server_SpawnFactory @requires_twisted def test_read(self): self.conn.write('hello\r\n') self.assertEqual(self.conn.read(9), 'you said ') self.assertEqual(self.conn.read(999), 'hello. BYE') self.assertEqual(self.conn.read(9), '') self.assertEqual(self.conn.read(1), '') self.assertEqual(self.conn.recv(9), '') self.assertEqual(self.conn.recv(1), '') @requires_twisted def test_read2(self): self.conn.write('world\r\n') self.assertEqual(self.conn.read(), 'you said world. BYE') self.assertEqual(self.conn.read(), '') self.assertEqual(self.conn.recv(), '') @requires_twisted def test_iterator(self): self.conn.write('iterator\r\n') self.assertEqual('you said iterator. 
BYE', ''.join(self.conn)) _tests = [x for x in locals().keys() if x.startswith('test_')] @requires_twisted def test_resume_producing(self): for test in self._tests: self.setUp() self.conn.resumeProducing() getattr(self, test)() @requires_twisted def test_pause_producing(self): self.conn.pauseProducing() self.conn.write('hi\r\n') result = with_timeout(DELAY*10, self.conn.read, timeout_value='timed out') self.assertEqual('timed out', result) @requires_twisted def test_pauseresume_producing(self): self.conn.pauseProducing() spawn_after(DELAY*5, self.conn.resumeProducing) self.conn.write('hi\r\n') result = with_timeout(DELAY*10, self.conn.read, timeout_value='timed out') self.assertEqual('you said hi. BYE', result) class TestGreenTransport_bufsize1(TestGreenTransport): transportBufferSize = 1 # class TestGreenTransportError(TestCase): # setup_server = setup_server_SpawnFactory # gtransportClass = pr.GreenTransport # # def test_read_error(self): # self.conn.write('hello\r\n') # sleep(DELAY*1.5) # make sure the rest of data arrives # try: # 1//0 # except: # #self.conn.loseConnection(failure.Failure()) # does not work, why? # spawn(self.conn._queue.send_exception, *sys.exc_info()) # self.assertEqual(self.conn.read(9), 'you said ') # self.assertEqual(self.conn.read(7), 'hello. ') # self.assertEqual(self.conn.read(9), 'BYE') # self.assertRaises(ZeroDivisionError, self.conn.read, 9) # self.assertEqual(self.conn.read(1), '') # self.assertEqual(self.conn.read(1), '') # # def test_recv_error(self): # self.conn.write('hello') # self.assertEqual('you said hello. ', self.conn.recv()) # sleep(DELAY*1.5) # make sure the rest of data arrives # try: # 1//0 # except: # #self.conn.loseConnection(failure.Failure()) # does not work, why? 
# spawn(self.conn._queue.send_exception, *sys.exc_info()) # self.assertEqual('BYE', self.conn.recv()) # self.assertRaises(ZeroDivisionError, self.conn.recv, 9) # self.assertEqual('', self.conn.recv(1)) # self.assertEqual('', self.conn.recv()) # if socket is not None: class TestUnbufferedTransport_socketserver(TestUnbufferedTransport): setup_server = setup_server_socket class TestUnbufferedTransport_socketserver_bufsize1(TestUnbufferedTransport): transportBufferSize = 1 setup_server = setup_server_socket class TestGreenTransport_socketserver(TestGreenTransport): setup_server = setup_server_socket class TestGreenTransport_socketserver_bufsize1(TestGreenTransport): transportBufferSize = 1 setup_server = setup_server_socket class TestTLSError(unittest.TestCase): @requires_twisted def test_server_connectionMade_never_called(self): # trigger case when protocol instance is created, # but it's connectionMade is never called from gnutls.interfaces.twisted import X509Credentials from gnutls.errors import GNUTLSError cred = X509Credentials(None, None) ev = Event() def handle(conn): ev.send("handle must not be called") s = reactor.listenTLS(0, pr.SpawnFactory(handle, LineOnlyReceiverTransport), cred) creator = pr.GreenClientCreator(reactor, LineOnlyReceiverTransport) try: conn = creator.connectTLS('127.0.0.1', s.getHost().port, cred) except GNUTLSError: pass assert ev.poll() is None, repr(ev.poll()) try: import gnutls.interfaces.twisted except ImportError: del TestTLSError @requires_twisted def main(): unittest.main() if __name__=='__main__': main() eventlet-0.13.0/tests/test__socket_errors.py0000644000175000017500000000352012164577340022070 0ustar temototemoto00000000000000import unittest import socket as _original_sock from eventlet import api from eventlet.green import socket class TestSocketErrors(unittest.TestCase): def test_connection_refused(self): # open and close a dummy server to find an unused port server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 
server.bind(('127.0.0.1', 0)) server.listen(1) port = server.getsockname()[1] server.close() del server s = socket.socket() try: s.connect(('127.0.0.1', port)) self.fail("Shouldn't have connected") except socket.error, ex: code, text = ex.args assert code in [111, 61, 10061], (code, text) assert 'refused' in text.lower(), (code, text) def test_timeout_real_socket(self): """ Test underlying socket behavior to ensure correspondence between green sockets and the underlying socket module. """ return self.test_timeout(socket=_original_sock) def test_timeout(self, socket=socket): """ Test that the socket timeout exception works correctly. """ server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.bind(('127.0.0.1', 0)) server.listen(1) port = server.getsockname()[1] s = socket.socket() s.connect(('127.0.0.1', port)) cs, addr = server.accept() cs.settimeout(1) try: try: cs.recv(1024) self.fail("Should have timed out") except socket.timeout, ex: assert hasattr(ex, 'args') assert len(ex.args) == 1 assert ex.args[0] == 'timed out' finally: s.close() cs.close() server.close() if __name__=='__main__': unittest.main() eventlet-0.13.0/tests/queue_test.py0000644000175000017500000002331112164577340020171 0ustar temototemoto00000000000000from tests import LimitedTestCase, main import eventlet from eventlet import event def do_bail(q): eventlet.Timeout(0, RuntimeError()) try: result = q.get() return result except RuntimeError: return 'timed out' class TestQueue(LimitedTestCase): def test_send_first(self): q = eventlet.Queue() q.put('hi') self.assertEquals(q.get(), 'hi') def test_send_last(self): q = eventlet.Queue() def waiter(q): self.assertEquals(q.get(), 'hi2') gt = eventlet.spawn(eventlet.with_timeout, 0.1, waiter, q) eventlet.sleep(0) eventlet.sleep(0) q.put('hi2') gt.wait() def test_max_size(self): q = eventlet.Queue(2) results = [] def putter(q): q.put('a') results.append('a') q.put('b') results.append('b') q.put('c') results.append('c') gt = eventlet.spawn(putter, 
q) eventlet.sleep(0) self.assertEquals(results, ['a', 'b']) self.assertEquals(q.get(), 'a') eventlet.sleep(0) self.assertEquals(results, ['a', 'b', 'c']) self.assertEquals(q.get(), 'b') self.assertEquals(q.get(), 'c') gt.wait() def test_zero_max_size(self): q = eventlet.Queue(0) def sender(evt, q): q.put('hi') evt.send('done') def receiver(q): x = q.get() return x evt = event.Event() gt = eventlet.spawn(sender, evt, q) eventlet.sleep(0) self.assert_(not evt.ready()) gt2 = eventlet.spawn(receiver, q) self.assertEquals(gt2.wait(),'hi') self.assertEquals(evt.wait(),'done') gt.wait() def test_resize_up(self): q = eventlet.Queue(0) def sender(evt, q): q.put('hi') evt.send('done') evt = event.Event() gt = eventlet.spawn(sender, evt, q) eventlet.sleep(0) self.assert_(not evt.ready()) q.resize(1) eventlet.sleep(0) self.assert_(evt.ready()) gt.wait() def test_resize_down(self): size = 5 q = eventlet.Queue(5) for i in range(5): q.put(i) self.assertEquals(list(q.queue), range(5)) q.resize(1) eventlet.sleep(0) self.assertEquals(list(q.queue), range(5)) def test_resize_to_Unlimited(self): q = eventlet.Queue(0) def sender(evt, q): q.put('hi') evt.send('done') evt = event.Event() gt = eventlet.spawn(sender, evt, q) eventlet.sleep() self.assertFalse(evt.ready()) q.resize(None) eventlet.sleep() self.assertTrue(evt.ready()) gt.wait() def test_multiple_waiters(self): # tests that multiple waiters get their results back q = eventlet.Queue() sendings = ['1', '2', '3', '4'] gts = [eventlet.spawn(q.get) for x in sendings] eventlet.sleep(0.01) # get 'em all waiting q.put(sendings[0]) q.put(sendings[1]) q.put(sendings[2]) q.put(sendings[3]) results = set() for i, gt in enumerate(gts): results.add(gt.wait()) self.assertEquals(results, set(sendings)) def test_waiters_that_cancel(self): q = eventlet.Queue() gt = eventlet.spawn(do_bail, q) self.assertEquals(gt.wait(), 'timed out') q.put('hi') self.assertEquals(q.get(), 'hi') def test_getting_before_sending(self): q = eventlet.Queue() gt = 
eventlet.spawn(q.put, 'sent') self.assertEquals(q.get(), 'sent') gt.wait() def test_two_waiters_one_dies(self): def waiter(q): return q.get() q = eventlet.Queue() dying = eventlet.spawn(do_bail, q) waiting = eventlet.spawn(waiter, q) eventlet.sleep(0) q.put('hi') self.assertEquals(dying.wait(), 'timed out') self.assertEquals(waiting.wait(), 'hi') def test_two_bogus_waiters(self): q = eventlet.Queue() gt1 = eventlet.spawn(do_bail, q) gt2 = eventlet.spawn(do_bail, q) eventlet.sleep(0) q.put('sent') self.assertEquals(gt1.wait(), 'timed out') self.assertEquals(gt2.wait(), 'timed out') self.assertEquals(q.get(), 'sent') def test_waiting(self): q = eventlet.Queue() gt1 = eventlet.spawn(q.get) eventlet.sleep(0) self.assertEquals(1, q.getting()) q.put('hi') eventlet.sleep(0) self.assertEquals(0, q.getting()) self.assertEquals('hi', gt1.wait()) self.assertEquals(0, q.getting()) def test_channel_send(self): channel = eventlet.Queue(0) events = [] def another_greenlet(): events.append(channel.get()) events.append(channel.get()) gt = eventlet.spawn(another_greenlet) events.append('sending') channel.put('hello') events.append('sent hello') channel.put('world') events.append('sent world') self.assertEqual(['sending', 'hello', 'sent hello', 'world', 'sent world'], events) def test_channel_wait(self): channel = eventlet.Queue(0) events = [] def another_greenlet(): events.append('sending hello') channel.put('hello') events.append('sending world') channel.put('world') events.append('sent world') gt = eventlet.spawn(another_greenlet) events.append('waiting') events.append(channel.get()) events.append(channel.get()) self.assertEqual(['waiting', 'sending hello', 'hello', 'sending world', 'world'], events) eventlet.sleep(0) self.assertEqual(['waiting', 'sending hello', 'hello', 'sending world', 'world', 'sent world'], events) def test_channel_waiters(self): c = eventlet.Queue(0) w1 = eventlet.spawn(c.get) w2 = eventlet.spawn(c.get) w3 = eventlet.spawn(c.get) eventlet.sleep(0) 
self.assertEquals(c.getting(), 3) s1 = eventlet.spawn(c.put, 1) s2 = eventlet.spawn(c.put, 2) s3 = eventlet.spawn(c.put, 3) s1.wait() s2.wait() s3.wait() self.assertEquals(c.getting(), 0) # NOTE: we don't guarantee that waiters are served in order results = sorted([w1.wait(), w2.wait(), w3.wait()]) self.assertEquals(results, [1,2,3]) def test_channel_sender_timing_out(self): from eventlet import queue c = eventlet.Queue(0) self.assertRaises(queue.Full, c.put, "hi", timeout=0.001) self.assertRaises(queue.Empty, c.get_nowait) def test_task_done(self): from eventlet import queue, debug channel = queue.Queue(0) X = object() gt = eventlet.spawn(channel.put, X) result = channel.get() assert result is X, (result, X) assert channel.unfinished_tasks == 1, channel.unfinished_tasks channel.task_done() assert channel.unfinished_tasks == 0, channel.unfinished_tasks gt.wait() def store_result(result, func, *args): try: result.append(func(*args)) except Exception, exc: result.append(exc) class TestNoWait(LimitedTestCase): def test_put_nowait_simple(self): from eventlet import hubs,queue hub = hubs.get_hub() result = [] q = eventlet.Queue(1) hub.schedule_call_global(0, store_result, result, q.put_nowait, 2) hub.schedule_call_global(0, store_result, result, q.put_nowait, 3) eventlet.sleep(0) eventlet.sleep(0) assert len(result)==2, result assert result[0]==None, result assert isinstance(result[1], queue.Full), result def test_get_nowait_simple(self): from eventlet import hubs,queue hub = hubs.get_hub() result = [] q = queue.Queue(1) q.put(4) hub.schedule_call_global(0, store_result, result, q.get_nowait) hub.schedule_call_global(0, store_result, result, q.get_nowait) eventlet.sleep(0) assert len(result)==2, result assert result[0]==4, result assert isinstance(result[1], queue.Empty), result # get_nowait must work from the mainloop def test_get_nowait_unlock(self): from eventlet import hubs,queue hub = hubs.get_hub() result = [] q = queue.Queue(0) p = eventlet.spawn(q.put, 5) assert 
q.empty(), q assert q.full(), q eventlet.sleep(0) assert q.empty(), q assert q.full(), q hub.schedule_call_global(0, store_result, result, q.get_nowait) eventlet.sleep(0) assert q.empty(), q assert q.full(), q assert result == [5], result # TODO add ready to greenthread #assert p.ready(), p assert p.dead, p assert q.empty(), q # put_nowait must work from the mainloop def test_put_nowait_unlock(self): from eventlet import hubs,queue hub = hubs.get_hub() result = [] q = queue.Queue(0) p = eventlet.spawn(q.get) assert q.empty(), q assert q.full(), q eventlet.sleep(0) assert q.empty(), q assert q.full(), q hub.schedule_call_global(0, store_result, result, q.put_nowait, 10) # TODO ready method on greenthread #assert not p.ready(), p eventlet.sleep(0) assert result == [None], result # TODO ready method # assert p.ready(), p assert q.full(), q assert q.empty(), q if __name__=='__main__': main() eventlet-0.13.0/tests/saranwrap_test.py0000644000175000017500000002764712164577340021063 0ustar temototemoto00000000000000import warnings warnings.simplefilter('ignore', DeprecationWarning) from eventlet import saranwrap warnings.simplefilter('default', DeprecationWarning) from eventlet import greenpool, sleep import os import eventlet import sys import tempfile import time from tests import LimitedTestCase, main, skip_on_windows, skip_with_pyevent import re import StringIO # random test stuff def list_maker(): return [0,1,2] one = 1 two = 2 three = 3 class CoroutineCallingClass(object): def __init__(self): self._my_dict = {} def run_coroutine(self): eventlet.spawn_n(self._add_random_key) def _add_random_key(self): self._my_dict['random'] = 'yes, random' def get_dict(self): return self._my_dict class TestSaranwrap(LimitedTestCase): TEST_TIMEOUT=8 def assert_server_exists(self, prox): self.assert_(saranwrap.status(prox)) prox.foo = 0 self.assertEqual(0, prox.foo) @skip_on_windows @skip_with_pyevent def test_wrap_tuple(self): my_tuple = (1, 2) prox = saranwrap.wrap(my_tuple) 
self.assertEqual(prox[0], 1) self.assertEqual(prox[1], 2) self.assertEqual(len(my_tuple), 2) @skip_on_windows @skip_with_pyevent def test_wrap_string(self): my_object = "whatever" prox = saranwrap.wrap(my_object) self.assertEqual(str(my_object), str(prox)) self.assertEqual(len(my_object), len(prox)) self.assertEqual(my_object.join(['a', 'b']), prox.join(['a', 'b'])) @skip_on_windows @skip_with_pyevent def test_wrap_uniterable(self): # here we're treating the exception as just a normal class prox = saranwrap.wrap(FloatingPointError()) def index(): prox[0] def key(): prox['a'] self.assertRaises(IndexError, index) self.assertRaises(TypeError, key) @skip_on_windows @skip_with_pyevent def test_wrap_dict(self): my_object = {'a':1} prox = saranwrap.wrap(my_object) self.assertEqual('a', prox.keys()[0]) self.assertEqual(1, prox['a']) self.assertEqual(str(my_object), str(prox)) self.assertEqual('saran:' + repr(my_object), repr(prox)) @skip_on_windows @skip_with_pyevent def test_wrap_module_class(self): prox = saranwrap.wrap(re) self.assertEqual(saranwrap.Proxy, type(prox)) exp = prox.compile('.') self.assertEqual(exp.flags, 0) self.assert_(repr(prox.compile)) @skip_on_windows @skip_with_pyevent def test_wrap_eq(self): prox = saranwrap.wrap(re) exp1 = prox.compile('.') exp2 = prox.compile(exp1.pattern) self.assertEqual(exp1, exp2) exp3 = prox.compile('/') self.assert_(exp1 != exp3) @skip_on_windows @skip_with_pyevent def test_wrap_nonzero(self): prox = saranwrap.wrap(re) exp1 = prox.compile('.') self.assert_(bool(exp1)) prox2 = saranwrap.Proxy([1, 2, 3]) self.assert_(bool(prox2)) @skip_on_windows @skip_with_pyevent def test_multiple_wraps(self): prox1 = saranwrap.wrap(re) prox2 = saranwrap.wrap(re) x1 = prox1.compile('.') x2 = prox1.compile('.') del x2 x3 = prox2.compile('.') @skip_on_windows @skip_with_pyevent def test_dict_passthru(self): prox = saranwrap.wrap(StringIO) x = prox.StringIO('a') self.assertEqual(type(x.__dict__), saranwrap.ObjectProxy) # try it all on one line 
just for the sake of it self.assertEqual(type(saranwrap.wrap(StringIO).StringIO('a').__dict__), saranwrap.ObjectProxy) @skip_on_windows @skip_with_pyevent def test_is_value(self): server = saranwrap.Server(None, None, None) self.assert_(server.is_value(None)) @skip_on_windows @skip_with_pyevent def test_wrap_getitem(self): prox = saranwrap.wrap([0,1,2]) self.assertEqual(prox[0], 0) @skip_on_windows @skip_with_pyevent def test_wrap_setitem(self): prox = saranwrap.wrap([0,1,2]) prox[1] = 2 self.assertEqual(prox[1], 2) @skip_on_windows @skip_with_pyevent def test_raising_exceptions(self): prox = saranwrap.wrap(re) def nofunc(): prox.never_name_a_function_like_this() self.assertRaises(AttributeError, nofunc) @skip_on_windows @skip_with_pyevent def test_unpicklable_server_exception(self): prox = saranwrap.wrap(saranwrap) def unpickle(): prox.raise_an_unpicklable_error() self.assertRaises(saranwrap.UnrecoverableError, unpickle) # It's basically dead #self.assert_server_exists(prox) @skip_on_windows @skip_with_pyevent def test_pickleable_server_exception(self): prox = saranwrap.wrap(saranwrap) def fperror(): prox.raise_standard_error() self.assertRaises(FloatingPointError, fperror) self.assert_server_exists(prox) @skip_on_windows @skip_with_pyevent def test_print_does_not_break_wrapper(self): prox = saranwrap.wrap(saranwrap) prox.print_string('hello') self.assert_server_exists(prox) @skip_on_windows @skip_with_pyevent def test_stderr_does_not_break_wrapper(self): prox = saranwrap.wrap(saranwrap) prox.err_string('goodbye') self.assert_server_exists(prox) @skip_on_windows @skip_with_pyevent def test_status(self): prox = saranwrap.wrap(time) a = prox.gmtime(0) status = saranwrap.status(prox) self.assertEqual(status['object_count'], 1) self.assertEqual(status['next_id'], 2) self.assert_(status['pid']) # can't guess what it will be # status of an object should be the same as the module self.assertEqual(saranwrap.status(a), status) # create a new one then immediately delete it 
prox.gmtime(1) is_id = prox.ctime(1) # sync up deletes status = saranwrap.status(prox) self.assertEqual(status['object_count'], 1) self.assertEqual(status['next_id'], 3) prox2 = saranwrap.wrap(re) self.assert_(status['pid'] != saranwrap.status(prox2)['pid']) @skip_on_windows @skip_with_pyevent def test_del(self): prox = saranwrap.wrap(time) delme = prox.gmtime(0) status_before = saranwrap.status(prox) #print status_before['objects'] del delme # need to do an access that doesn't create an object # in order to sync up the deleted objects prox.ctime(1) status_after = saranwrap.status(prox) #print status_after['objects'] self.assertLessThan(status_after['object_count'], status_before['object_count']) @skip_on_windows @skip_with_pyevent def test_contains(self): prox = saranwrap.wrap({'a':'b'}) self.assert_('a' in prox) self.assert_('x' not in prox) @skip_on_windows @skip_with_pyevent def test_variable_and_keyword_arguments_with_function_calls(self): import optparse prox = saranwrap.wrap(optparse) parser = prox.OptionParser() z = parser.add_option('-n', action='store', type='string', dest='n') opts,args = parser.parse_args(["-nfoo"]) self.assertEqual(opts.n, 'foo') @skip_on_windows @skip_with_pyevent def test_original_proxy_going_out_of_scope(self): def make_re(): prox = saranwrap.wrap(re) # after this function returns, prox should fall out of scope return prox.compile('.') tid = make_re() self.assertEqual(tid.flags, 0) def make_list(): from tests import saranwrap_test prox = saranwrap.wrap(saranwrap_test.list_maker) # after this function returns, prox should fall out of scope return prox() proxl = make_list() self.assertEqual(proxl[2], 2) def test_status_of_none(self): try: saranwrap.status(None) self.assert_(False) except AttributeError, e: pass @skip_on_windows @skip_with_pyevent def test_not_inheriting_pythonpath(self): # construct a fake module in the temp directory temp_dir = tempfile.mkdtemp("saranwrap_test") fp = open(os.path.join(temp_dir, "tempmod.py"), "w") 
fp.write("""import os, sys pypath = os.environ['PYTHONPATH'] sys_path = sys.path""") fp.close() # this should fail because we haven't stuck the temp_dir in our path yet prox = saranwrap.wrap_module('tempmod') try: prox.pypath self.fail() except ImportError: pass # now try to saranwrap it sys.path.append(temp_dir) try: import tempmod prox = saranwrap.wrap(tempmod) self.assert_(prox.pypath.count(temp_dir)) self.assert_(prox.sys_path.count(temp_dir)) finally: import shutil shutil.rmtree(temp_dir) sys.path.remove(temp_dir) @skip_on_windows @skip_with_pyevent def test_contention(self): from tests import saranwrap_test prox = saranwrap.wrap(saranwrap_test) pool = greenpool.GreenPool(4) pool.spawn_n(lambda: self.assertEquals(prox.one, 1)) pool.spawn_n(lambda: self.assertEquals(prox.two, 2)) pool.spawn_n(lambda: self.assertEquals(prox.three, 3)) pool.waitall() @skip_on_windows @skip_with_pyevent def test_copy(self): import copy compound_object = {'a':[1,2,3]} prox = saranwrap.wrap(compound_object) def make_assertions(copied): self.assert_(isinstance(copied, dict)) self.assert_(isinstance(copied['a'], list)) self.assertEquals(copied, compound_object) self.assertNotEqual(id(compound_object), id(copied)) make_assertions(copy.copy(prox)) make_assertions(copy.deepcopy(prox)) @skip_on_windows @skip_with_pyevent def test_list_of_functions(self): return # this test is known to fail, we can implement it sometime in the future if we wish from tests import saranwrap_test prox = saranwrap.wrap([saranwrap_test.list_maker]) self.assertEquals(list_maker(), prox[0]()) @skip_on_windows @skip_with_pyevent def test_under_the_hood_coroutines(self): # so, we want to write a class which uses a coroutine to call # a function. 
Then we want to saranwrap that class, have # the object call the coroutine and verify that it ran from tests import saranwrap_test mod_proxy = saranwrap.wrap(saranwrap_test) obj_proxy = mod_proxy.CoroutineCallingClass() obj_proxy.run_coroutine() # sleep for a bit to make sure out coroutine ran by the time # we check the assert below sleep(0.1) self.assert_( 'random' in obj_proxy.get_dict(), 'Coroutine in saranwrapped object did not run') @skip_on_windows @skip_with_pyevent def test_child_process_death(self): prox = saranwrap.wrap({}) pid = saranwrap.getpid(prox) self.assertEqual(os.kill(pid, 0), None) # assert that the process is running del prox # removing all references to the proxy should kill the child process sleep(0.1) # need to let the signal handler run self.assertRaises(OSError, os.kill, pid, 0) # raises OSError if pid doesn't exist @skip_on_windows @skip_with_pyevent def test_detection_of_server_crash(self): # make the server crash here pass @skip_on_windows @skip_with_pyevent def test_equality_with_local_object(self): # we'll implement this if there's a use case for it pass @skip_on_windows @skip_with_pyevent def test_non_blocking(self): # here we test whether it's nonblocking pass if __name__ == '__main__': main() eventlet-0.13.0/tests/event_test.py0000644000175000017500000000450512164577340020172 0ustar temototemoto00000000000000import eventlet from eventlet import event from tests import LimitedTestCase class TestEvent(LimitedTestCase): def test_waiting_for_event(self): evt = event.Event() value = 'some stuff' def send_to_event(): evt.send(value) eventlet.spawn_n(send_to_event) self.assertEqual(evt.wait(), value) def test_multiple_waiters(self): self._test_multiple_waiters(False) def test_multiple_waiters_with_exception(self): self._test_multiple_waiters(True) def _test_multiple_waiters(self, exception): evt = event.Event() value = 'some stuff' results = [] def wait_on_event(i_am_done): evt.wait() results.append(True) i_am_done.send() if exception: 
raise Exception() waiters = [] count = 5 for i in range(count): waiters.append(event.Event()) eventlet.spawn_n(wait_on_event, waiters[-1]) eventlet.sleep() # allow spawns to start executing evt.send() for w in waiters: w.wait() self.assertEqual(len(results), count) def test_reset(self): evt = event.Event() # calling reset before send should throw self.assertRaises(AssertionError, evt.reset) value = 'some stuff' def send_to_event(): evt.send(value) eventlet.spawn_n(send_to_event) self.assertEqual(evt.wait(), value) # now try it again, and we should get the same exact value, # and we shouldn't be allowed to resend without resetting value2 = 'second stuff' self.assertRaises(AssertionError, evt.send, value2) self.assertEqual(evt.wait(), value) # reset and everything should be happy evt.reset() def send_to_event2(): evt.send(value2) eventlet.spawn_n(send_to_event2) self.assertEqual(evt.wait(), value2) def test_double_exception(self): evt = event.Event() # send an exception through the event evt.send(exc=RuntimeError('from test_double_exception')) self.assertRaises(RuntimeError, evt.wait) evt.reset() # shouldn't see the RuntimeError again eventlet.Timeout(0.001) self.assertRaises(eventlet.Timeout, evt.wait) eventlet-0.13.0/tests/env_test.py0000644000175000017500000000650712164577340017645 0ustar temototemoto00000000000000import os from tests.patcher_test import ProcessBase from tests import skip_with_pyevent class Socket(ProcessBase): def test_patched_thread(self): new_mod = """from eventlet.green import socket socket.gethostbyname('localhost') socket.getaddrinfo('localhost', 80) """ os.environ['EVENTLET_TPOOL_DNS'] = 'yes' try: self.write_to_tempfile("newmod", new_mod) output, lines = self.launch_subprocess('newmod.py') self.assertEqual(len(lines), 1, lines) finally: del os.environ['EVENTLET_TPOOL_DNS'] class Tpool(ProcessBase): @skip_with_pyevent def test_tpool_size(self): expected = "40" normal = "20" new_mod = """from eventlet import tpool import eventlet import time 
current = [0] highwater = [0] def count(): current[0] += 1 time.sleep(0.1) if current[0] > highwater[0]: highwater[0] = current[0] current[0] -= 1 expected = %s normal = %s p = eventlet.GreenPool() for i in xrange(expected*2): p.spawn(tpool.execute, count) p.waitall() assert highwater[0] > 20, "Highwater %%s <= %%s" %% (highwater[0], normal) """ os.environ['EVENTLET_THREADPOOL_SIZE'] = expected try: self.write_to_tempfile("newmod", new_mod % (expected, normal)) output, lines = self.launch_subprocess('newmod.py') self.assertEqual(len(lines), 1, lines) finally: del os.environ['EVENTLET_THREADPOOL_SIZE'] def test_tpool_negative(self): new_mod = """from eventlet import tpool import eventlet import time def do(): print "should not get here" try: tpool.execute(do) except AssertionError: print "success" """ os.environ['EVENTLET_THREADPOOL_SIZE'] = "-1" try: self.write_to_tempfile("newmod", new_mod) output, lines = self.launch_subprocess('newmod.py') self.assertEqual(len(lines), 2, lines) self.assertEqual(lines[0], "success", output) finally: del os.environ['EVENTLET_THREADPOOL_SIZE'] def test_tpool_zero(self): new_mod = """from eventlet import tpool import eventlet import time def do(): print "ran it" tpool.execute(do) """ os.environ['EVENTLET_THREADPOOL_SIZE'] = "0" try: self.write_to_tempfile("newmod", new_mod) output, lines = self.launch_subprocess('newmod.py') self.assertEqual(len(lines), 4, lines) self.assertEqual(lines[-2], 'ran it', lines) self.assert_('Warning' in lines[1] or 'Warning' in lines[0], lines) finally: del os.environ['EVENTLET_THREADPOOL_SIZE'] class Hub(ProcessBase): def setUp(self): super(Hub, self).setUp() self.old_environ = os.environ.get('EVENTLET_HUB') os.environ['EVENTLET_HUB'] = 'selects' def tearDown(self): if self.old_environ: os.environ['EVENTLET_HUB'] = self.old_environ else: del os.environ['EVENTLET_HUB'] super(Hub, self).tearDown() def test_eventlet_hub(self): new_mod = """from eventlet import hubs print hubs.get_hub() """ 
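The Hub and Tpool tests here drive behavior entirely through environment variables (`EVENTLET_HUB`, `EVENTLET_THREADPOOL_SIZE`), spawning a subprocess and asserting on its output. A minimal, self-contained sketch of that env-var-driven selection pattern follows; the registry, names, and function are illustrative only, not eventlet's real hub table or API:

```python
import os

# Hypothetical registry standing in for eventlet's hub implementations.
HUBS = {'selects': 'select()-based hub', 'poll': 'poll()-based hub'}

def get_hub_name(default='poll'):
    # Mirror the contract the tests exercise: EVENTLET_HUB, when set,
    # overrides the default; unknown names are rejected loudly.
    name = os.environ.get('EVENTLET_HUB', default)
    if name not in HUBS:
        raise ValueError('unknown hub: %r' % name)
    return name
```

This is why the `Hub` test class saves and restores the old `EVENTLET_HUB` value in `setUp`/`tearDown`: the selection is read from process-global state, so leaking it would affect every later test.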
self.write_to_tempfile("newmod", new_mod) output, lines = self.launch_subprocess('newmod.py') self.assertEqual(len(lines), 2, "\n".join(lines)) self.assert_("selects" in lines[0]) eventlet-0.13.0/tests/coros_test.py0000644000175000017500000000705012164577340020174 0ustar temototemoto00000000000000from unittest import main from tests import LimitedTestCase, silence_warnings import eventlet from eventlet import coros from eventlet import event from eventlet import greenthread class IncrActor(coros.Actor): def received(self, evt): self.value = getattr(self, 'value', 0) + 1 if evt: evt.send() class TestActor(LimitedTestCase): mode = 'static' @silence_warnings def setUp(self): super(TestActor, self).setUp() self.actor = IncrActor() def tearDown(self): super(TestActor, self).tearDown() greenthread.kill(self.actor._killer) def test_cast(self): evt = event.Event() self.actor.cast(evt) evt.wait() evt.reset() self.assertEqual(self.actor.value, 1) self.actor.cast(evt) evt.wait() self.assertEqual(self.actor.value, 2) def test_cast_multi_1(self): # make sure that both messages make it in there evt = event.Event() evt1 = event.Event() self.actor.cast(evt) self.actor.cast(evt1) evt.wait() evt1.wait() self.assertEqual(self.actor.value, 2) def test_cast_multi_2(self): # the actor goes through a slightly different code path if it # is forced to enter its event loop prior to any cast()s eventlet.sleep(0) self.test_cast_multi_1() def test_sleeping_during_received(self): # ensure that even if the received method cooperatively # yields, eventually all messages are delivered msgs = [] waiters = [] def received( (message, evt) ): eventlet.sleep(0) msgs.append(message) evt.send() self.actor.received = received waiters.append(event.Event()) self.actor.cast( (1, waiters[-1])) eventlet.sleep(0) waiters.append(event.Event()) self.actor.cast( (2, waiters[-1]) ) waiters.append(event.Event()) self.actor.cast( (3, waiters[-1]) ) eventlet.sleep(0) waiters.append(event.Event()) self.actor.cast( (4, 
waiters[-1]) ) waiters.append(event.Event()) self.actor.cast( (5, waiters[-1]) ) for evt in waiters: evt.wait() self.assertEqual(msgs, [1,2,3,4,5]) def test_raising_received(self): msgs = [] def received( (message, evt) ): evt.send() if message == 'fail': raise RuntimeError() else: msgs.append(message) self.actor.received = received evt = event.Event() self.actor.cast( ('fail', evt) ) evt.wait() evt.reset() self.actor.cast( ('should_appear', evt) ) evt.wait() self.assertEqual(['should_appear'], msgs) @silence_warnings def test_multiple(self): self.actor = IncrActor(concurrency=2) total = [0] def received( (func, ev, value) ): func() total[0] += value ev.send() self.actor.received = received def onemoment(): eventlet.sleep(0.1) evt = event.Event() evt1 = event.Event() self.actor.cast( (onemoment, evt, 1) ) self.actor.cast( (lambda: None, evt1, 2) ) evt1.wait() self.assertEqual(total[0], 2) eventlet.sleep(0) self.assertEqual(self.actor._pool.free(), 1) evt.wait() self.assertEqual(total[0], 3) eventlet.sleep(0) self.assertEqual(self.actor._pool.free(), 2) if __name__ == '__main__': main() eventlet-0.13.0/tests/websocket_test.py0000644000175000017500000005460512164577340021045 0ustar temototemoto00000000000000import socket import errno import eventlet from eventlet.green import urllib2 from eventlet.green import httplib from eventlet.websocket import WebSocket, WebSocketWSGI from eventlet import wsgi from eventlet import event from eventlet import greenio from tests import mock, LimitedTestCase, certificate_file, private_key_file from tests import skip_if_no_ssl from tests.wsgi_test import _TestBase # demo app def handle(ws): if ws.path == '/echo': while True: m = ws.wait() if m is None: break ws.send(m) elif ws.path == '/range': for i in xrange(10): ws.send("msg %d" % i) eventlet.sleep(0.01) elif ws.path == '/error': # some random socket error that we shouldn't normally get raise socket.error(errno.ENOTSOCK) else: ws.close() wsapp = WebSocketWSGI(handle) class 
TestWebSocket(_TestBase): TEST_TIMEOUT = 5 def set_site(self): self.site = wsapp def test_incorrect_headers(self): def raiser(): try: urllib2.urlopen("http://localhost:%s/echo" % self.port) except urllib2.HTTPError, e: self.assertEqual(e.code, 400) raise self.assertRaises(urllib2.HTTPError, raiser) def test_incomplete_headers_75(self): headers = dict(kv.split(': ') for kv in [ "Upgrade: WebSocket", # NOTE: intentionally no connection header "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "WebSocket-Protocol: ws", ]) http = httplib.HTTPConnection('localhost', self.port) http.request("GET", "/echo", headers=headers) resp = http.getresponse() self.assertEqual(resp.status, 400) self.assertEqual(resp.getheader('connection'), 'close') self.assertEqual(resp.read(), '') def test_incomplete_headers_76(self): # First test: Missing Connection: headers = dict(kv.split(': ') for kv in [ "Upgrade: WebSocket", # NOTE: intentionally no connection header "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", ]) http = httplib.HTTPConnection('localhost', self.port) http.request("GET", "/echo", headers=headers) resp = http.getresponse() self.assertEqual(resp.status, 400) self.assertEqual(resp.getheader('connection'), 'close') self.assertEqual(resp.read(), '') # Now, miss off key2 headers = dict(kv.split(': ') for kv in [ "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", # NOTE: Intentionally no Key2 header ]) http = httplib.HTTPConnection('localhost', self.port) http.request("GET", "/echo", headers=headers) resp = http.getresponse() self.assertEqual(resp.status, 400) self.assertEqual(resp.getheader('connection'), 'close') self.assertEqual(resp.read(), '') def test_correct_upgrade_request_75(self): connect = [ "GET /echo HTTP/1.1", "Upgrade: WebSocket", 
"Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "WebSocket-Protocol: ws", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n') result = sock.recv(1024) ## The server responds the correct Websocket handshake self.assertEqual(result, '\r\n'.join(['HTTP/1.1 101 Web Socket Protocol Handshake', 'Upgrade: WebSocket', 'Connection: Upgrade', 'WebSocket-Origin: http://localhost:%s' % self.port, 'WebSocket-Location: ws://localhost:%s/echo\r\n\r\n' % self.port])) def test_correct_upgrade_request_76(self): connect = [ "GET /echo HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') result = sock.recv(1024) ## The server responds the correct Websocket handshake self.assertEqual(result, '\r\n'.join(['HTTP/1.1 101 WebSocket Protocol Handshake', 'Upgrade: WebSocket', 'Connection: Upgrade', 'Sec-WebSocket-Origin: http://localhost:%s' % self.port, 'Sec-WebSocket-Protocol: ws', 'Sec-WebSocket-Location: ws://localhost:%s/echo\r\n\r\n8jKS\'y:G*Co,Wxa-' % self.port])) def test_query_string(self): # verify that the query string comes out the other side unscathed connect = [ "GET /echo?query_string HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') result = sock.recv(1024) self.assertEqual(result, '\r\n'.join(['HTTP/1.1 101 WebSocket Protocol Handshake', 'Upgrade: WebSocket', 'Connection: 
Upgrade', 'Sec-WebSocket-Origin: http://localhost:%s' % self.port, 'Sec-WebSocket-Protocol: ws', 'Sec-WebSocket-Location: ws://localhost:%s/echo?query_string\r\n\r\n8jKS\'y:G*Co,Wxa-' % self.port])) def test_empty_query_string(self): # verify that a single trailing ? doesn't get nuked connect = [ "GET /echo? HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') result = sock.recv(1024) self.assertEqual(result, '\r\n'.join(['HTTP/1.1 101 WebSocket Protocol Handshake', 'Upgrade: WebSocket', 'Connection: Upgrade', 'Sec-WebSocket-Origin: http://localhost:%s' % self.port, 'Sec-WebSocket-Protocol: ws', 'Sec-WebSocket-Location: ws://localhost:%s/echo?\r\n\r\n8jKS\'y:G*Co,Wxa-' % self.port])) def test_sending_messages_to_websocket_75(self): connect = [ "GET /echo HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "WebSocket-Protocol: ws", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n') first_resp = sock.recv(1024) sock.sendall('\x00hello\xFF') result = sock.recv(1024) self.assertEqual(result, '\x00hello\xff') sock.sendall('\x00start') eventlet.sleep(0.001) sock.sendall(' end\xff') result = sock.recv(1024) self.assertEqual(result, '\x00start end\xff') sock.shutdown(socket.SHUT_RDWR) sock.close() eventlet.sleep(0.01) def test_sending_messages_to_websocket_76(self): connect = [ "GET /echo HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = 
eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') first_resp = sock.recv(1024) sock.sendall('\x00hello\xFF') result = sock.recv(1024) self.assertEqual(result, '\x00hello\xff') sock.sendall('\x00start') eventlet.sleep(0.001) sock.sendall(' end\xff') result = sock.recv(1024) self.assertEqual(result, '\x00start end\xff') sock.shutdown(socket.SHUT_RDWR) sock.close() eventlet.sleep(0.01) def test_getting_messages_from_websocket_75(self): connect = [ "GET /range HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "WebSocket-Protocol: ws", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n') resp = sock.recv(1024) headers, result = resp.split('\r\n\r\n') msgs = [result.strip('\x00\xff')] cnt = 10 while cnt: msgs.append(sock.recv(20).strip('\x00\xff')) cnt -= 1 # Last item in msgs is an empty string self.assertEqual(msgs[:-1], ['msg %d' % i for i in range(10)]) def test_getting_messages_from_websocket_76(self): connect = [ "GET /range HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') resp = sock.recv(1024) headers, result = resp.split('\r\n\r\n') msgs = [result[16:].strip('\x00\xff')] cnt = 10 while cnt: msgs.append(sock.recv(20).strip('\x00\xff')) cnt -= 1 # Last item in msgs is an empty string self.assertEqual(msgs[:-1], ['msg %d' % i for i in range(10)]) def test_breaking_the_connection_75(self): error_detected = [False] done_with_request = event.Event() site = self.site def error_detector(environ, start_response): try: try: return site(environ, start_response) except: error_detected[0] = True 
raise finally: done_with_request.send(True) self.site = error_detector self.spawn_server() connect = [ "GET /range HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "WebSocket-Protocol: ws", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n') resp = sock.recv(1024) # get the headers sock.close() # close while the app is running done_with_request.wait() self.assert_(not error_detected[0]) def test_breaking_the_connection_76(self): error_detected = [False] done_with_request = event.Event() site = self.site def error_detector(environ, start_response): try: try: return site(environ, start_response) except: error_detected[0] = True raise finally: done_with_request.send(True) self.site = error_detector self.spawn_server() connect = [ "GET /range HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') resp = sock.recv(1024) # get the headers sock.close() # close while the app is running done_with_request.wait() self.assert_(not error_detected[0]) def test_client_closing_connection_76(self): error_detected = [False] done_with_request = event.Event() site = self.site def error_detector(environ, start_response): try: try: return site(environ, start_response) except: error_detected[0] = True raise finally: done_with_request.send(True) self.site = error_detector self.spawn_server() connect = [ "GET /echo HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock 
= eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') resp = sock.recv(1024) # get the headers sock.sendall('\xff\x00') # "Close the connection" packet. done_with_request.wait() self.assert_(not error_detected[0]) def test_client_invalid_packet_76(self): error_detected = [False] done_with_request = event.Event() site = self.site def error_detector(environ, start_response): try: try: return site(environ, start_response) except: error_detected[0] = True raise finally: done_with_request.send(True) self.site = error_detector self.spawn_server() connect = [ "GET /echo HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') resp = sock.recv(1024) # get the headers sock.sendall('\xef\x00') # Weird packet. done_with_request.wait() self.assert_(error_detected[0]) def test_server_closing_connect_76(self): connect = [ "GET / HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') resp = sock.recv(1024) headers, result = resp.split('\r\n\r\n') # The remote server should have immediately closed the connection. 
self.assertEqual(result[16:], '\xff\x00') def test_app_socket_errors_75(self): error_detected = [False] done_with_request = event.Event() site = self.site def error_detector(environ, start_response): try: try: return site(environ, start_response) except: error_detected[0] = True raise finally: done_with_request.send(True) self.site = error_detector self.spawn_server() connect = [ "GET /error HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "WebSocket-Protocol: ws", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n') resp = sock.recv(1024) done_with_request.wait() self.assert_(error_detected[0]) def test_app_socket_errors_76(self): error_detected = [False] done_with_request = event.Event() site = self.site def error_detector(environ, start_response): try: try: return site(environ, start_response) except: error_detected[0] = True raise finally: done_with_request.send(True) self.site = error_detector self.spawn_server() connect = [ "GET /error HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", "Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = eventlet.connect( ('localhost', self.port)) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') resp = sock.recv(1024) done_with_request.wait() self.assert_(error_detected[0]) class TestWebSocketSSL(_TestBase): def set_site(self): self.site = wsapp @skip_if_no_ssl def test_ssl_sending_messages(self): s = eventlet.wrap_ssl(eventlet.listen(('localhost', 0)), certfile=certificate_file, keyfile=private_key_file, server_side=True) self.spawn_server(sock=s) connect = [ "GET /echo HTTP/1.1", "Upgrade: WebSocket", "Connection: Upgrade", "Host: localhost:%s" % self.port, "Origin: http://localhost:%s" % self.port, "Sec-WebSocket-Protocol: ws", 
"Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5", "Sec-WebSocket-Key2: 12998 5 Y3 1 .P00", ] sock = eventlet.wrap_ssl(eventlet.connect( ('localhost', self.port))) sock.sendall('\r\n'.join(connect) + '\r\n\r\n^n:ds[4U') first_resp = sock.recv(1024) # make sure it sets the wss: protocol on the location header loc_line = [x for x in first_resp.split("\r\n") if x.lower().startswith('sec-websocket-location')][0] self.assert_("wss://localhost" in loc_line, "Expecting wss protocol in location: %s" % loc_line) sock.sendall('\x00hello\xFF') result = sock.recv(1024) self.assertEqual(result, '\x00hello\xff') sock.sendall('\x00start') eventlet.sleep(0.001) sock.sendall(' end\xff') result = sock.recv(1024) self.assertEqual(result, '\x00start end\xff') greenio.shutdown_safe(sock) sock.close() eventlet.sleep(0.01) class TestWebSocketObject(LimitedTestCase): def setUp(self): self.mock_socket = s = mock.Mock() self.environ = env = dict(HTTP_ORIGIN='http://localhost', HTTP_WEBSOCKET_PROTOCOL='ws', PATH_INFO='test') self.test_ws = WebSocket(s, env) super(TestWebSocketObject, self).setUp() def test_recieve(self): ws = self.test_ws ws.socket.recv.return_value = '\x00hello\xFF' self.assertEqual(ws.wait(), 'hello') self.assertEqual(ws._buf, '') self.assertEqual(len(ws._msgs), 0) ws.socket.recv.return_value = '' self.assertEqual(ws.wait(), None) self.assertEqual(ws._buf, '') self.assertEqual(len(ws._msgs), 0) def test_send_to_ws(self): ws = self.test_ws ws.send(u'hello') self.assert_(ws.socket.sendall.called_with("\x00hello\xFF")) ws.send(10) self.assert_(ws.socket.sendall.called_with("\x0010\xFF")) def test_close_ws(self): ws = self.test_ws ws.close() self.assert_(ws.socket.shutdown.called_with(True)) eventlet-0.13.0/tests/patcher_psycopg_test.py0000644000175000017500000000342312164577340022241 0ustar temototemoto00000000000000import os from tests import patcher_test, skip_unless from tests import get_database_auth from tests.db_pool_test import postgres_requirement psycopg_test_file = """ 
import os
import sys
import eventlet
eventlet.monkey_patch()
from eventlet import patcher
if not patcher.is_monkey_patched('psycopg'):
    print "Psycopg not monkeypatched"
    sys.exit(0)

count = [0]
def tick(totalseconds, persecond):
    for i in xrange(totalseconds*persecond):
        count[0] += 1
        eventlet.sleep(1.0/persecond)

dsn = os.environ['PSYCOPG_TEST_DSN']
import psycopg2
def fetch(num, secs):
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    for i in range(num):
        cur.execute("select pg_sleep(%s)", (secs,))

f = eventlet.spawn(fetch, 2, 1)
t = eventlet.spawn(tick, 2, 100)
f.wait()
assert count[0] > 100, count[0]
print "done"
"""

class PatchingPsycopg(patcher_test.ProcessBase):
    @skip_unless(postgres_requirement)
    def test_psycopg_patched(self):
        if 'PSYCOPG_TEST_DSN' not in os.environ:
            # construct a non-json dsn for the subprocess
            psycopg_auth = get_database_auth()['psycopg2']
            if isinstance(psycopg_auth, str):
                dsn = psycopg_auth
            else:
                dsn = " ".join(["%s=%s" % (k, v) for k, v in psycopg_auth.iteritems()])
            os.environ['PSYCOPG_TEST_DSN'] = dsn
        self.write_to_tempfile("psycopg_patcher", psycopg_test_file)
        output, lines = self.launch_subprocess('psycopg_patcher.py')
        if lines[0].startswith('Psycopg not monkeypatched'):
            print "Can't test psycopg2 patching; it's not installed."
return # if there's anything wrong with the test program it'll have a stack trace self.assert_(lines[0].startswith('done'), output) eventlet-0.13.0/tests/test__coros_queue.py0000644000175000017500000001543412164577340021544 0ustar temototemoto00000000000000from tests import LimitedTestCase, silence_warnings from unittest import main import eventlet from eventlet import coros, spawn, sleep from eventlet.event import Event class TestQueue(LimitedTestCase): @silence_warnings def test_send_first(self): q = coros.queue() q.send('hi') self.assertEquals(q.wait(), 'hi') @silence_warnings def test_send_exception_first(self): q = coros.queue() q.send(exc=RuntimeError()) self.assertRaises(RuntimeError, q.wait) @silence_warnings def test_send_last(self): q = coros.queue() def waiter(q): timer = eventlet.Timeout(0.1) self.assertEquals(q.wait(), 'hi2') timer.cancel() spawn(waiter, q) sleep(0) sleep(0) q.send('hi2') @silence_warnings def test_max_size(self): q = coros.queue(2) results = [] def putter(q): q.send('a') results.append('a') q.send('b') results.append('b') q.send('c') results.append('c') spawn(putter, q) sleep(0) self.assertEquals(results, ['a', 'b']) self.assertEquals(q.wait(), 'a') sleep(0) self.assertEquals(results, ['a', 'b', 'c']) self.assertEquals(q.wait(), 'b') self.assertEquals(q.wait(), 'c') @silence_warnings def test_zero_max_size(self): q = coros.queue(0) def sender(evt, q): q.send('hi') evt.send('done') def receiver(evt, q): x = q.wait() evt.send(x) e1 = Event() e2 = Event() spawn(sender, e1, q) sleep(0) self.assert_(not e1.ready()) spawn(receiver, e2, q) self.assertEquals(e2.wait(),'hi') self.assertEquals(e1.wait(),'done') @silence_warnings def test_multiple_waiters(self): # tests that multiple waiters get their results back q = coros.queue() sendings = ['1', '2', '3', '4'] gts = [eventlet.spawn(q.wait) for x in sendings] eventlet.sleep(0.01) # get 'em all waiting q.send(sendings[0]) q.send(sendings[1]) q.send(sendings[2]) q.send(sendings[3]) results = 
set() for i, gt in enumerate(gts): results.add(gt.wait()) self.assertEquals(results, set(sendings)) @silence_warnings def test_waiters_that_cancel(self): q = coros.queue() def do_receive(q, evt): eventlet.Timeout(0, RuntimeError()) try: result = q.wait() evt.send(result) except RuntimeError: evt.send('timed out') evt = Event() spawn(do_receive, q, evt) self.assertEquals(evt.wait(), 'timed out') q.send('hi') self.assertEquals(q.wait(), 'hi') @silence_warnings def test_senders_that_die(self): q = coros.queue() def do_send(q): q.send('sent') spawn(do_send, q) self.assertEquals(q.wait(), 'sent') @silence_warnings def test_two_waiters_one_dies(self): def waiter(q, evt): evt.send(q.wait()) def do_receive(q, evt): eventlet.Timeout(0, RuntimeError()) try: result = q.wait() evt.send(result) except RuntimeError: evt.send('timed out') q = coros.queue() dying_evt = Event() waiting_evt = Event() spawn(do_receive, q, dying_evt) spawn(waiter, q, waiting_evt) sleep(0) q.send('hi') self.assertEquals(dying_evt.wait(), 'timed out') self.assertEquals(waiting_evt.wait(), 'hi') @silence_warnings def test_two_bogus_waiters(self): def do_receive(q, evt): eventlet.Timeout(0, RuntimeError()) try: result = q.wait() evt.send(result) except RuntimeError: evt.send('timed out') q = coros.queue() e1 = Event() e2 = Event() spawn(do_receive, q, e1) spawn(do_receive, q, e2) sleep(0) q.send('sent') self.assertEquals(e1.wait(), 'timed out') self.assertEquals(e2.wait(), 'timed out') self.assertEquals(q.wait(), 'sent') @silence_warnings def test_waiting(self): def do_wait(q, evt): result = q.wait() evt.send(result) q = coros.queue() e1 = Event() spawn(do_wait, q, e1) sleep(0) self.assertEquals(1, q.waiting()) q.send('hi') sleep(0) self.assertEquals(0, q.waiting()) self.assertEquals('hi', e1.wait()) self.assertEquals(0, q.waiting()) class TestChannel(LimitedTestCase): @silence_warnings def test_send(self): sleep(0.1) channel = coros.queue(0) events = [] def another_greenlet(): 
events.append(channel.wait()) events.append(channel.wait()) spawn(another_greenlet) events.append('sending') channel.send('hello') events.append('sent hello') channel.send('world') events.append('sent world') self.assertEqual(['sending', 'hello', 'sent hello', 'world', 'sent world'], events) @silence_warnings def test_wait(self): sleep(0.1) channel = coros.queue(0) events = [] def another_greenlet(): events.append('sending hello') channel.send('hello') events.append('sending world') channel.send('world') events.append('sent world') spawn(another_greenlet) events.append('waiting') events.append(channel.wait()) events.append(channel.wait()) self.assertEqual(['waiting', 'sending hello', 'hello', 'sending world', 'world'], events) sleep(0) self.assertEqual(['waiting', 'sending hello', 'hello', 'sending world', 'world', 'sent world'], events) @silence_warnings def test_waiters(self): c = coros.Channel() w1 = eventlet.spawn(c.wait) w2 = eventlet.spawn(c.wait) w3 = eventlet.spawn(c.wait) sleep(0) self.assertEquals(c.waiting(), 3) s1 = eventlet.spawn(c.send, 1) s2 = eventlet.spawn(c.send, 2) s3 = eventlet.spawn(c.send, 3) sleep(0) # this gets all the sends into a waiting state self.assertEquals(c.waiting(), 0) s1.wait() s2.wait() s3.wait() # NOTE: we don't guarantee that waiters are served in order results = sorted([w1.wait(), w2.wait(), w3.wait()]) self.assertEquals(results, [1,2,3]) if __name__=='__main__': main() eventlet-0.13.0/tests/timeout_test.py0000644000175000017500000000336712164577340020544 0ustar temototemoto00000000000000from tests import LimitedTestCase from eventlet import timeout from eventlet import greenthread DELAY = 0.01 class TestDirectRaise(LimitedTestCase): def test_direct_raise_class(self): try: raise timeout.Timeout except timeout.Timeout, t: assert not t.pending, repr(t) def test_direct_raise_instance(self): tm = timeout.Timeout() try: raise tm except timeout.Timeout, t: assert tm is t, (tm, t) assert not t.pending, repr(t) def test_repr(self): # 
just verify these don't crash tm = timeout.Timeout(1) greenthread.sleep(0) repr(tm) str(tm) tm.cancel() tm = timeout.Timeout(None, RuntimeError) repr(tm) str(tm) tm = timeout.Timeout(None, False) repr(tm) str(tm) class TestWithTimeout(LimitedTestCase): def test_with_timeout(self): self.assertRaises(timeout.Timeout, timeout.with_timeout, DELAY, greenthread.sleep, DELAY*10) X = object() r = timeout.with_timeout(DELAY, greenthread.sleep, DELAY*10, timeout_value=X) self.assert_(r is X, (r, X)) r = timeout.with_timeout(DELAY*10, greenthread.sleep, DELAY, timeout_value=X) self.assert_(r is None, r) def test_with_outer_timer(self): def longer_timeout(): # this should not catch the outer timeout's exception return timeout.with_timeout(DELAY * 10, greenthread.sleep, DELAY * 20, timeout_value='b') self.assertRaises(timeout.Timeout, timeout.with_timeout, DELAY, longer_timeout) eventlet-0.13.0/tests/timeout_test_with_statement.py0000644000175000017500000001007512164577340023655 0ustar temototemoto00000000000000""" Tests with-statement behavior of Timeout class. Don't import when using Python 2.4. 
""" from __future__ import with_statement import sys import unittest import weakref import time from eventlet import sleep from eventlet.timeout import Timeout from tests import LimitedTestCase DELAY = 0.01 class Error(Exception): pass class Test(LimitedTestCase): def test_cancellation(self): # Nothing happens if with-block finishes before the timeout expires t = Timeout(DELAY*2) sleep(0) # make it pending assert t.pending, repr(t) with t: assert t.pending, repr(t) sleep(DELAY) # check if timer was actually cancelled assert not t.pending, repr(t) sleep(DELAY*2) def test_raising_self(self): # An exception will be raised if it's not try: with Timeout(DELAY) as t: sleep(DELAY*2) except Timeout, ex: assert ex is t, (ex, t) else: raise AssertionError('must raise Timeout') def test_raising_self_true(self): # specifying True as the exception raises self as well try: with Timeout(DELAY, True) as t: sleep(DELAY*2) except Timeout, ex: assert ex is t, (ex, t) else: raise AssertionError('must raise Timeout') def test_raising_custom_exception(self): # You can customize the exception raised: try: with Timeout(DELAY, IOError("Operation takes way too long")): sleep(DELAY*2) except IOError, ex: assert str(ex)=="Operation takes way too long", repr(ex) def test_raising_exception_class(self): # Providing classes instead of values should be possible too: try: with Timeout(DELAY, ValueError): sleep(DELAY*2) except ValueError: pass def test_raising_exc_tuple(self): try: 1//0 except: try: with Timeout(DELAY, sys.exc_info()[0]): sleep(DELAY*2) raise AssertionError('should not get there') raise AssertionError('should not get there') except ZeroDivisionError: pass else: raise AssertionError('should not get there') def test_cancel_timer_inside_block(self): # It's possible to cancel the timer inside the block: with Timeout(DELAY) as timer: timer.cancel() sleep(DELAY*2) def test_silent_block(self): # To silence the exception before exiting the block, pass # False as second parameter. 
XDELAY=0.1 start = time.time() with Timeout(XDELAY, False): sleep(XDELAY*2) delta = (time.time()-start) assert delta>> import socket >>> sock = bufsized(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) """ sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, size) sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, size) return sock def min_buf_size(): """Return the minimum buffer size that the platform supports.""" test_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) test_sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1) return test_sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF) def using_epoll_hub(_f): try: return 'epolls' in type(get_hub()).__module__ except Exception: return False def using_kqueue_hub(_f): try: return 'kqueue' in type(get_hub()).__module__ except Exception: return False class TestGreenSocket(LimitedTestCase): def assertWriteToClosedFileRaises(self, fd): if sys.version_info[0] < 3: # 2.x socket._fileobjects are odd: writes don't check # whether the socket is closed or not, and you get an # AttributeError during flush if it is closed fd.write('a') self.assertRaises(Exception, fd.flush) else: # 3.x io write to closed file-like pbject raises ValueError self.assertRaises(ValueError, fd.write, 'a') def test_connect_timeout(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.settimeout(0.1) gs = greenio.GreenSocket(s) try: gs.connect(('192.0.2.1', 80)) self.fail("socket.timeout not raised") except socket.timeout, e: self.assert_(hasattr(e, 'args')) self.assertEqual(e.args[0], 'timed out') except socket.error, e: # unreachable is also a valid outcome if not get_errno(e) in (errno.EHOSTUNREACH, errno.ENETUNREACH): raise def test_accept_timeout(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(('', 0)) s.listen(50) s.settimeout(0.1) gs = greenio.GreenSocket(s) try: gs.accept() self.fail("socket.timeout not raised") except socket.timeout, e: self.assert_(hasattr(e, 'args')) self.assertEqual(e.args[0], 
'timed out') def test_connect_ex_timeout(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.settimeout(0.1) gs = greenio.GreenSocket(s) e = gs.connect_ex(('192.0.2.1', 80)) if not e in (errno.EHOSTUNREACH, errno.ENETUNREACH): self.assertEquals(e, errno.EAGAIN) def test_recv_timeout(self): listener = greenio.GreenSocket(socket.socket()) listener.bind(('', 0)) listener.listen(50) evt = event.Event() def server(): # accept the connection in another greenlet sock, addr = listener.accept() evt.wait() gt = eventlet.spawn(server) addr = listener.getsockname() client = greenio.GreenSocket(socket.socket()) client.settimeout(0.1) client.connect(addr) try: client.recv(8192) self.fail("socket.timeout not raised") except socket.timeout, e: self.assert_(hasattr(e, 'args')) self.assertEqual(e.args[0], 'timed out') evt.send() gt.wait() def test_recvfrom_timeout(self): gs = greenio.GreenSocket( socket.socket(socket.AF_INET, socket.SOCK_DGRAM)) gs.settimeout(.1) gs.bind(('', 0)) try: gs.recvfrom(8192) self.fail("socket.timeout not raised") except socket.timeout, e: self.assert_(hasattr(e, 'args')) self.assertEqual(e.args[0], 'timed out') def test_recvfrom_into_timeout(self): buf = buffer(array.array('B')) gs = greenio.GreenSocket( socket.socket(socket.AF_INET, socket.SOCK_DGRAM)) gs.settimeout(.1) gs.bind(('', 0)) try: gs.recvfrom_into(buf) self.fail("socket.timeout not raised") except socket.timeout, e: self.assert_(hasattr(e, 'args')) self.assertEqual(e.args[0], 'timed out') def test_recv_into_timeout(self): buf = buffer(array.array('B')) listener = greenio.GreenSocket(socket.socket()) listener.bind(('', 0)) listener.listen(50) evt = event.Event() def server(): # accept the connection in another greenlet sock, addr = listener.accept() evt.wait() gt = eventlet.spawn(server) addr = listener.getsockname() client = greenio.GreenSocket(socket.socket()) client.settimeout(0.1) client.connect(addr) try: client.recv_into(buf) self.fail("socket.timeout not raised") except 
socket.timeout, e: self.assert_(hasattr(e, 'args')) self.assertEqual(e.args[0], 'timed out') evt.send() gt.wait() def test_send_timeout(self): self.reset_timeout(2) listener = bufsized(eventlet.listen(('', 0))) evt = event.Event() def server(): # accept the connection in another greenlet sock, addr = listener.accept() sock = bufsized(sock) evt.wait() gt = eventlet.spawn(server) addr = listener.getsockname() client = bufsized(greenio.GreenSocket(socket.socket())) client.connect(addr) try: client.settimeout(0.00001) msg = s2b("A") * 100000 # large enough number to overwhelm most buffers total_sent = 0 # want to exceed the size of the OS buffer so it'll block in a # single send for x in range(10): total_sent += client.send(msg) self.fail("socket.timeout not raised") except socket.timeout, e: self.assert_(hasattr(e, 'args')) self.assertEqual(e.args[0], 'timed out') evt.send() gt.wait() def test_sendall_timeout(self): listener = greenio.GreenSocket(socket.socket()) listener.bind(('', 0)) listener.listen(50) evt = event.Event() def server(): # accept the connection in another greenlet sock, addr = listener.accept() evt.wait() gt = eventlet.spawn(server) addr = listener.getsockname() client = greenio.GreenSocket(socket.socket()) client.settimeout(0.1) client.connect(addr) try: msg = s2b("A") * (8 << 20) # want to exceed the size of the OS buffer so it'll block client.sendall(msg) self.fail("socket.timeout not raised") except socket.timeout, e: self.assert_(hasattr(e, 'args')) self.assertEqual(e.args[0], 'timed out') evt.send() gt.wait() def test_close_with_makefile(self): def accept_close_early(listener): # verify that the makefile and the socket are truly independent # by closing the socket prior to using the made file try: conn, addr = listener.accept() fd = conn.makefile('w') conn.close() fd.write('hello\n') fd.close() self.assertWriteToClosedFileRaises(fd) self.assertRaises(socket.error, conn.send, s2b('b')) finally: listener.close() def accept_close_late(listener): # 
verify that the makefile and the socket are truly independent # by closing the made file and then sending a character try: conn, addr = listener.accept() fd = conn.makefile('w') fd.write('hello') fd.close() conn.send(s2b('\n')) conn.close() self.assertWriteToClosedFileRaises(fd) self.assertRaises(socket.error, conn.send, s2b('b')) finally: listener.close() def did_it_work(server): client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('127.0.0.1', server.getsockname()[1])) fd = client.makefile() client.close() assert fd.readline() == 'hello\n' assert fd.read() == '' fd.close() server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) server.bind(('0.0.0.0', 0)) server.listen(50) killer = eventlet.spawn(accept_close_early, server) did_it_work(server) killer.wait() server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) server.bind(('0.0.0.0', 0)) server.listen(50) killer = eventlet.spawn(accept_close_late, server) did_it_work(server) killer.wait() def test_del_closes_socket(self): def accept_once(listener): # delete/overwrite the original conn # object, only keeping the file object around # closing the file object should close everything try: conn, addr = listener.accept() conn = conn.makefile('w') conn.write('hello\n') conn.close() self.assertWriteToClosedFileRaises(conn) finally: listener.close() server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) server.bind(('127.0.0.1', 0)) server.listen(50) killer = eventlet.spawn(accept_once, server) client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('127.0.0.1', server.getsockname()[1])) fd = client.makefile() client.close() assert fd.read() == 'hello\n' assert fd.read() == '' killer.wait() def test_full_duplex(self): large_data = s2b('*') * 10 * min_buf_size() listener = 
socket.socket(socket.AF_INET, socket.SOCK_STREAM) listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) listener.bind(('127.0.0.1', 0)) listener.listen(50) bufsized(listener) def send_large(sock): sock.sendall(large_data) def read_large(sock): result = sock.recv(len(large_data)) while len(result) < len(large_data): result += sock.recv(len(large_data)) self.assertEquals(result, large_data) def server(): (sock, addr) = listener.accept() sock = bufsized(sock) send_large_coro = eventlet.spawn(send_large, sock) eventlet.sleep(0) result = sock.recv(10) expected = s2b('hello world') while len(result) < len(expected): result += sock.recv(10) self.assertEquals(result, expected) send_large_coro.wait() server_evt = eventlet.spawn(server) client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('127.0.0.1', listener.getsockname()[1])) bufsized(client) large_evt = eventlet.spawn(read_large, client) eventlet.sleep(0) client.sendall(s2b('hello world')) server_evt.wait() large_evt.wait() client.close() def test_sendall(self): # test adapted from Marcus Cavanaugh's email # it may legitimately take a while, but will eventually complete self.timer.cancel() second_bytes = 10 def test_sendall_impl(many_bytes): bufsize = max(many_bytes // 15, 2) def sender(listener): (sock, addr) = listener.accept() sock = bufsized(sock, size=bufsize) sock.sendall(s2b('x') * many_bytes) sock.sendall(s2b('y') * second_bytes) listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM) listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) listener.bind(("", 0)) listener.listen(50) sender_coro = eventlet.spawn(sender, listener) client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('127.0.0.1', listener.getsockname()[1])) bufsized(client, size=bufsize) total = 0 while total < many_bytes: data = client.recv(min(many_bytes - total, many_bytes // 10)) if not data: break total += len(data) total2 = 0 while total < second_bytes: data = 
client.recv(second_bytes) if not data: break total2 += len(data) sender_coro.wait() client.close() for how_many in (1000, 10000, 100000, 1000000): test_sendall_impl(how_many) def test_wrap_socket(self): try: import ssl except ImportError: pass # pre-2.6 else: sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(('127.0.0.1', 0)) sock.listen(50) ssl.wrap_socket(sock) def test_timeout_and_final_write(self): # This test verifies that a write on a socket that we've # stopped listening for doesn't result in an incorrect switch server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) server.bind(('127.0.0.1', 0)) server.listen(50) bound_port = server.getsockname()[1] def sender(evt): s2, addr = server.accept() wrap_wfile = s2.makefile('w') eventlet.sleep(0.02) wrap_wfile.write('hi') s2.close() evt.send('sent via event') evt = event.Event() eventlet.spawn(sender, evt) eventlet.sleep(0) # lets the socket enter accept mode, which # is necessary for connect to succeed on windows try: # try and get some data off of this pipe # but bail before any is sent eventlet.Timeout(0.01) client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('127.0.0.1', bound_port)) wrap_rfile = client.makefile() wrap_rfile.read(1) self.fail() except eventlet.TimeoutError: pass result = evt.wait() self.assertEquals(result, 'sent via event') server.close() client.close() @skip_with_pyevent def test_raised_multiple_readers(self): debug.hub_prevent_multiple_readers(True) def handle(sock, addr): sock.recv(1) sock.sendall("a") raise eventlet.StopServe() listener = eventlet.listen(('127.0.0.1', 0)) eventlet.spawn(eventlet.serve, listener, handle) def reader(s): s.recv(1) s = eventlet.connect(('127.0.0.1', listener.getsockname()[1])) a = eventlet.spawn(reader, s) eventlet.sleep(0) self.assertRaises(RuntimeError, s.recv, 1) s.sendall('b') a.wait() 
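Several of the timeout tests above assert that a blocked `recv()` raises `socket.timeout` with args `('timed out',)`. That behaviour comes from the stdlib socket layer and can be demonstrated without eventlet; a Python 3 sketch using `socketpair` (available on POSIX) so no listening port is needed:

```python
import socket

# A blocking recv() on a socket with a timeout set raises
# socket.timeout('timed out') when no data arrives in time --
# the same exception shape the GreenSocket tests assert on.
a, b = socket.socketpair()
a.settimeout(0.05)
try:
    a.recv(1)
    raised = None
except socket.timeout as e:
    raised = e.args[0]
finally:
    a.close()
    b.close()
print(raised)  # timed out
```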
@skip_with_pyevent @skip_if(using_epoll_hub) @skip_if(using_kqueue_hub) def test_closure(self): def spam_to_me(address): sock = eventlet.connect(address) while True: try: sock.sendall('hello world') except socket.error, e: if get_errno(e) == errno.EPIPE: return raise server = eventlet.listen(('127.0.0.1', 0)) sender = eventlet.spawn(spam_to_me, server.getsockname()) client, address = server.accept() server.close() def reader(): try: while True: data = client.recv(1024) self.assert_(data) except socket.error, e: # we get an EBADF because client is closed in the same process # (but a different greenthread) if get_errno(e) != errno.EBADF: raise def closer(): client.close() reader = eventlet.spawn(reader) eventlet.spawn_n(closer) reader.wait() sender.wait() def test_invalid_connection(self): # find an unused port by creating a socket then closing it port = eventlet.listen(('127.0.0.1', 0)).getsockname()[1] self.assertRaises(socket.error, eventlet.connect, ('127.0.0.1', port)) def test_zero_timeout_and_back(self): listen = eventlet.listen(('', 0)) # Keep reference to server side of socket server = eventlet.spawn(listen.accept) client = eventlet.connect(listen.getsockname()) client.settimeout(0.05) # Now must raise socket.timeout self.assertRaises(socket.timeout, client.recv, 1) client.settimeout(0) # Now must raise socket.error with EAGAIN try: client.recv(1) assert False except socket.error, e: assert get_errno(e) == errno.EAGAIN client.settimeout(0.05) # Now socket.timeout again self.assertRaises(socket.timeout, client.recv, 1) server.wait() def test_default_nonblocking(self): sock1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM) flags = fcntl.fcntl(sock1.fd.fileno(), fcntl.F_GETFL) assert flags & os.O_NONBLOCK sock2 = socket.socket(sock1.fd) flags = fcntl.fcntl(sock2.fd.fileno(), fcntl.F_GETFL) assert flags & os.O_NONBLOCK def test_dup_nonblocking(self): sock1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM) flags = fcntl.fcntl(sock1.fd.fileno(), 
fcntl.F_GETFL) assert flags & os.O_NONBLOCK sock2 = sock1.dup() flags = fcntl.fcntl(sock2.fd.fileno(), fcntl.F_GETFL) assert flags & os.O_NONBLOCK def test_skip_nonblocking(self): sock1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM) fd = sock1.fd.fileno() flags = fcntl.fcntl(fd, fcntl.F_GETFL) flags = fcntl.fcntl(fd, fcntl.F_SETFL, flags & ~os.O_NONBLOCK) assert flags & os.O_NONBLOCK == 0 sock2 = socket.socket(sock1.fd, set_nonblocking=False) flags = fcntl.fcntl(sock2.fd.fileno(), fcntl.F_GETFL) assert flags & os.O_NONBLOCK == 0 def test_sockopt_interface(self): sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) assert sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) == 0 assert sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) == '\000' sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) def test_socketpair_select(self): # https://github.com/eventlet/eventlet/pull/25 s1, s2 = socket.socketpair() assert select.select([], [s1], [], 0) == ([], [s1], []) assert select.select([], [s1], [], 0) == ([], [s1], []) class TestGreenPipe(LimitedTestCase): @skip_on_windows def setUp(self): super(self.__class__, self).setUp() self.tempdir = tempfile.mkdtemp('_green_pipe_test') def tearDown(self): shutil.rmtree(self.tempdir) super(self.__class__, self).tearDown() def test_pipe(self): r, w = os.pipe() rf = greenio.GreenPipe(r, 'r') wf = greenio.GreenPipe(w, 'w', 0) def sender(f, content): for ch in content: eventlet.sleep(0.0001) f.write(ch) f.close() one_line = "12345\n" eventlet.spawn(sender, wf, one_line * 5) for i in xrange(5): line = rf.readline() eventlet.sleep(0.01) self.assertEquals(line, one_line) self.assertEquals(rf.readline(), '') def test_pipe_read(self): # ensure that 'readline' works properly on GreenPipes when data is not # immediately available (fd is nonblocking, was raising EAGAIN) # also ensures that readline() terminates on '\n' and '\r\n' r, w = os.pipe() r = greenio.GreenPipe(r) w = greenio.GreenPipe(w, 'w') def writer(): 
eventlet.sleep(.1) w.write('line\n') w.flush() w.write('line\r\n') w.flush() gt = eventlet.spawn(writer) eventlet.sleep(0) line = r.readline() self.assertEquals(line, 'line\n') line = r.readline() self.assertEquals(line, 'line\r\n') gt.wait() def test_pipe_writes_large_messages(self): r, w = os.pipe() r = greenio.GreenPipe(r) w = greenio.GreenPipe(w, 'w') large_message = "".join([1024 * chr(i) for i in xrange(65)]) def writer(): w.write(large_message) w.close() gt = eventlet.spawn(writer) for i in xrange(65): buf = r.read(1024) expected = 1024 * chr(i) self.assertEquals(buf, expected, "expected=%r..%r, found=%r..%r iter=%d" % (expected[:4], expected[-4:], buf[:4], buf[-4:], i)) gt.wait() def test_seek_on_buffered_pipe(self): f = greenio.GreenPipe(self.tempdir + "/TestFile", 'w+', 1024) self.assertEquals(f.tell(), 0) f.seek(0, 2) self.assertEquals(f.tell(), 0) f.write('1234567890') f.seek(0, 2) self.assertEquals(f.tell(), 10) f.seek(0) value = f.read(1) self.assertEqual(value, '1') self.assertEquals(f.tell(), 1) value = f.read(1) self.assertEqual(value, '2') self.assertEquals(f.tell(), 2) f.seek(0, 1) self.assertEqual(f.readline(), '34567890') f.seek(-5, 1) self.assertEqual(f.readline(), '67890') f.seek(0) self.assertEqual(f.readline(), '1234567890') f.seek(0, 2) self.assertEqual(f.readline(), '') def test_truncate(self): f = greenio.GreenPipe(self.tempdir + "/TestFile", 'w+', 1024) f.write('1234567890') f.truncate(9) self.assertEquals(f.tell(), 9) class TestGreenIoLong(LimitedTestCase): TEST_TIMEOUT = 10 # the test here might take a while depending on the OS @skip_with_pyevent def test_multiple_readers(self, clibufsize=False): debug.hub_prevent_multiple_readers(False) recvsize = 2 * min_buf_size() sendsize = 10 * recvsize # test that we can have multiple coroutines reading # from the same fd. 
We make no guarantees about which one gets which # bytes, but they should both get at least some def reader(sock, results): while True: data = sock.recv(recvsize) if not data: break results.append(data) results1 = [] results2 = [] listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM) listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) listener.bind(('127.0.0.1', 0)) listener.listen(50) def server(): (sock, addr) = listener.accept() sock = bufsized(sock) try: c1 = eventlet.spawn(reader, sock, results1) c2 = eventlet.spawn(reader, sock, results2) try: c1.wait() c2.wait() finally: c1.kill() c2.kill() finally: sock.close() server_coro = eventlet.spawn(server) client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('127.0.0.1', listener.getsockname()[1])) if clibufsize: bufsized(client, size=sendsize) else: bufsized(client) client.sendall(s2b('*') * sendsize) client.close() server_coro.wait() listener.close() self.assert_(len(results1) > 0) self.assert_(len(results2) > 0) debug.hub_prevent_multiple_readers() @skipped # by rdw because it fails but it's not clear how to make it pass @skip_with_pyevent def test_multiple_readers2(self): self.test_multiple_readers(clibufsize=True) class TestGreenIoStarvation(LimitedTestCase): # fixme: this doesn't succeed, because of eventlet's predetermined # ordering. two processes, one with server, one with client eventlets # might be more reliable? 
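The multiple-readers test above makes no guarantee about which reader receives which bytes, only that nothing is lost. The same property can be sketched with plain stdlib threads draining one shared queue (names like `reader` here are illustrative, not eventlet APIs):

```python
import threading
import queue

# Two consumers draining one shared source: no guarantee about who
# gets which item, but every item is delivered exactly once.
src = queue.Queue()
for i in range(1000):
    src.put(i)
results1, results2 = [], []

def reader(results):
    while True:
        try:
            results.append(src.get_nowait())
        except queue.Empty:
            return

t1 = threading.Thread(target=reader, args=(results1,))
t2 = threading.Thread(target=reader, args=(results2,))
t1.start(); t2.start()
t1.join(); t2.join()
# completeness: the union of both readers' results is the whole input
assert sorted(results1 + results2) == list(range(1000))
```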
TEST_TIMEOUT = 300 # the test here might take a while depending on the OS @skipped # by rdw, because it fails but it's not clear how to make it pass @skip_with_pyevent def test_server_starvation(self, sendloops=15): recvsize = 2 * min_buf_size() sendsize = 10000 * recvsize results = [[] for i in xrange(5)] listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM) listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) listener.bind(('127.0.0.1', 0)) port = listener.getsockname()[1] listener.listen(50) base_time = time.time() def server(my_results): sock, addr = listener.accept() datasize = 0 t1 = None t2 = None try: while True: data = sock.recv(recvsize) if not t1: t1 = time.time() - base_time if not data: t2 = time.time() - base_time my_results.append(datasize) my_results.append((t1, t2)) break datasize += len(data) finally: sock.close() def client(): pid = os.fork() if pid: return pid client = _orig_sock.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('127.0.0.1', port)) bufsized(client, size=sendsize) for i in range(sendloops): client.sendall(s2b('*') * sendsize) client.close() os._exit(0) clients = [] servers = [] for r in results: servers.append(eventlet.spawn(server, r)) for r in results: clients.append(client()) for s in servers: s.wait() for c in clients: os.waitpid(c, 0) listener.close() # now test that all of the server receive intervals overlap, and # that there were no errors. 
for r in results: assert len(r) == 2, "length is %d not 2!: %s\n%s" % (len(r), r, results) assert r[0] == sendsize * sendloops assert len(r[1]) == 2 assert r[1][0] is not None assert r[1][1] is not None starttimes = sorted(r[1][0] for r in results) endtimes = sorted(r[1][1] for r in results) runlengths = sorted(r[1][1] - r[1][0] for r in results) # assert that the last task started before the first task ended # (our no-starvation condition) assert starttimes[-1] < endtimes[0], "Not overlapping: starts %s ends %s" % (starttimes, endtimes) maxstartdiff = starttimes[-1] - starttimes[0] assert maxstartdiff * 2 < runlengths[0], "Largest difference in starting times more than twice the shortest running time!" assert runlengths[0] * 2 > runlengths[-1], "Longest runtime more than twice as long as shortest!" def test_set_nonblocking(): sock = _orig_sock.socket(socket.AF_INET, socket.SOCK_DGRAM) fileno = sock.fileno() orig_flags = fcntl.fcntl(fileno, fcntl.F_GETFL) assert orig_flags & os.O_NONBLOCK == 0 greenio.set_nonblocking(sock) new_flags = fcntl.fcntl(fileno, fcntl.F_GETFL) assert new_flags == (orig_flags | os.O_NONBLOCK) if __name__ == '__main__': main() eventlet-0.13.0/tests/fork_test.py0000644000175000017500000000233712164577340020013 0ustar temototemoto00000000000000from tests.patcher_test import ProcessBase class ForkTest(ProcessBase): def test_simple(self): newmod = ''' import eventlet import os import sys import signal mydir = %r signal_file = os.path.join(mydir, "output.txt") pid = os.fork() if (pid != 0): eventlet.Timeout(10) try: port = None while True: try: contents = open(signal_file, "rb").read() port = int(contents.split()[0]) break except (IOError, IndexError, ValueError, TypeError): eventlet.sleep(0.1) eventlet.connect(('127.0.0.1', port)) while True: try: contents = open(signal_file, "rb").read() result = contents.split()[1] break except (IOError, IndexError): eventlet.sleep(0.1) print 'result', result finally: os.kill(pid, signal.SIGTERM) else: try: s 
= eventlet.listen(('', 0)) fd = open(signal_file, "wb") fd.write(str(s.getsockname()[1])) fd.write("\\n") fd.flush() s.accept() fd.write("done") fd.flush() finally: fd.close() ''' self.write_to_tempfile("newmod", newmod % self.tempdir) output, lines = self.launch_subprocess('newmod.py') self.assertEqual(lines[0], "result done", output) eventlet-0.13.0/tests/test__refcount.py0000644000175000017500000000436212164577340021036 0ustar temototemoto00000000000000"""This test checks that socket instances (not GreenSockets but underlying sockets) are not leaked by the hub. """ import sys import unittest from pprint import pformat from eventlet.support import clear_sys_exc_info from eventlet.green import socket from eventlet.green.thread import start_new_thread from eventlet.green.time import sleep import weakref import gc SOCKET_TIMEOUT = 0.1 def init_server(): s = socket.socket() s.settimeout(SOCKET_TIMEOUT) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind(('localhost', 0)) s.listen(5) return s, s.getsockname()[1] def handle_request(s, raise_on_timeout): try: conn, address = s.accept() except socket.timeout: if raise_on_timeout: raise else: return #print 'handle_request - accepted' res = conn.recv(100) assert res == 'hello', repr(res) #print 'handle_request - recvd %r' % res res = conn.send('bye') #print 'handle_request - sent %r' % res #print 'handle_request - conn refcount: %s' % sys.getrefcount(conn) #conn.close() def make_request(port): #print 'make_request' s = socket.socket() s.connect(('localhost', port)) #print 'make_request - connected' res = s.send('hello') #print 'make_request - sent %s' % res res = s.recv(100) assert res == 'bye', repr(res) #print 'make_request - recvd %r' % res #s.close() def run_interaction(run_client): s, port = init_server() start_new_thread(handle_request, (s, run_client)) if run_client: start_new_thread(make_request, (port,)) sleep(0.1+SOCKET_TIMEOUT) #print sys.getrefcount(s.fd) #s.close() return weakref.ref(s.fd) def 
run_and_check(run_client): w = run_interaction(run_client=run_client) clear_sys_exc_info() if w(): print pformat(gc.get_referrers(w())) for x in gc.get_referrers(w()): print pformat(x) for y in gc.get_referrers(x): print '-', pformat(y) raise AssertionError('server should be dead by now') def test_clean_exit(): run_and_check(True) run_and_check(True) def test_timeout_exit(): run_and_check(False) run_and_check(False) if __name__=='__main__': unittest.main() eventlet-0.13.0/tests/greenpipe_test_with_statement.py0000644000175000017500000000101112164577340024133 0ustar temototemoto00000000000000from __future__ import with_statement import os from tests import LimitedTestCase from eventlet import greenio class TestGreenPipeWithStatement(LimitedTestCase): def test_pipe_context(self): # ensure using a pipe as a context actually closes it. r, w = os.pipe() r = greenio.GreenPipe(r) w = greenio.GreenPipe(w, 'w') with r: pass assert r.closed and not w.closed with w as f: assert f == w assert r.closed and w.closed eventlet-0.13.0/tests/test_server.crt0000644000175000017500000000156712164577340020524 0ustar temototemoto00000000000000-----BEGIN CERTIFICATE----- MIICYzCCAcwCCQD5jx1Aa0dytjANBgkqhkiG9w0BAQQFADB2MQswCQYDVQQGEwJU UzENMAsGA1UECBMEVGVzdDENMAsGA1UEBxMEVGVzdDEWMBQGA1UEChMNVGVzdCBF dmVudGxldDENMAsGA1UECxMEVGVzdDENMAsGA1UEAxMEVGVzdDETMBEGCSqGSIb3 DQEJARYEVGVzdDAeFw0wODA3MDgyMTExNDJaFw0xMDAyMDgwODE1MTBaMHYxCzAJ BgNVBAYTAlRTMQ0wCwYDVQQIEwRUZXN0MQ0wCwYDVQQHEwRUZXN0MRYwFAYDVQQK Ew1UZXN0IEV2ZW50bGV0MQ0wCwYDVQQLEwRUZXN0MQ0wCwYDVQQDEwRUZXN0MRMw EQYJKoZIhvcNAQkBFgRUZXN0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDM WcyeIiHQuEGQxgTIvu0aOW4iRFAyUEi8pLWNCxMEHglF8k6OxFVq7XWZMDnDFVnb ZjmQh5Tc21Ae6cXzxXln578fROXHEzXo3Is8HUlq3ug1yYOGHjxw++Opjf1uoHwP EBUKsz/flS7knuscgFM9FO05KSPn2wHnZeIDta4yTwIDAQABMA0GCSqGSIb3DQEB BAUAA4GBAKM71aP0r26gEEEBzovfXm1IwKav6R9/xiWsJ4pFsUXVotcaIjcVBDG1 Z7tz688hokb+GNxsTI2gNfqanqUnfP9wZxnKRmfTSOvb5aWHIiaiMXSgjiPlqBcm 6mnSeEbSMM9cw479wWhh1YqY8tf3gYJa+sxznVWLSfVLpsjRMphe 
-----END CERTIFICATE-----

eventlet-0.13.0/tests/patcher_test.py

import os
import shutil
import subprocess
import sys
import tempfile

from tests import LimitedTestCase, main, skip_with_pyevent

base_module_contents = """
import socket
import urllib
print "base", socket, urllib
"""

patching_module_contents = """
from eventlet.green import socket
from eventlet.green import urllib
from eventlet import patcher
print 'patcher', socket, urllib
patcher.inject('base', globals(), ('socket', socket), ('urllib', urllib))
del patcher
"""

import_module_contents = """
import patching
import socket
print "importing", patching, socket, patching.socket, patching.urllib
"""


class ProcessBase(LimitedTestCase):
    TEST_TIMEOUT = 3  # starting processes is time-consuming

    def setUp(self):
        self._saved_syspath = sys.path
        self.tempdir = tempfile.mkdtemp('_patcher_test')

    def tearDown(self):
        sys.path = self._saved_syspath
        shutil.rmtree(self.tempdir)

    def write_to_tempfile(self, name, contents):
        filename = os.path.join(self.tempdir, name + '.py')
        fd = open(filename, "w")
        fd.write(contents)
        fd.close()

    def launch_subprocess(self, filename):
        python_path = os.pathsep.join(sys.path + [self.tempdir])
        new_env = os.environ.copy()
        new_env['PYTHONPATH'] = python_path
        if not filename.endswith('.py'):
            filename = filename + '.py'
        p = subprocess.Popen([sys.executable, os.path.join(self.tempdir, filename)],
                             stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT,
                             env=new_env)
        output, _ = p.communicate()
        lines = output.split("\n")
        return output, lines

    def run_script(self, contents, modname=None):
        if modname is None:
            modname = "testmod"
        self.write_to_tempfile(modname, contents)
        return self.launch_subprocess(modname)


class ImportPatched(ProcessBase):
    def test_patch_a_module(self):
        self.write_to_tempfile("base", base_module_contents)
        self.write_to_tempfile("patching", patching_module_contents)
        self.write_to_tempfile("importing",
                               import_module_contents)
        output, lines = self.launch_subprocess('importing.py')
        self.assert_(lines[0].startswith('patcher'), repr(output))
        self.assert_(lines[1].startswith('base'), repr(output))
        self.assert_(lines[2].startswith('importing'), repr(output))
        self.assert_('eventlet.green.socket' in lines[1], repr(output))
        self.assert_('eventlet.green.urllib' in lines[1], repr(output))
        self.assert_('eventlet.green.socket' in lines[2], repr(output))
        self.assert_('eventlet.green.urllib' in lines[2], repr(output))
        self.assert_('eventlet.green.httplib' not in lines[2], repr(output))

    def test_import_patched_defaults(self):
        self.write_to_tempfile("base", base_module_contents)
        new_mod = """
from eventlet import patcher
base = patcher.import_patched('base')
print "newmod", base, base.socket, base.urllib.socket.socket
"""
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod.py')
        self.assert_(lines[0].startswith('base'), repr(output))
        self.assert_(lines[1].startswith('newmod'), repr(output))
        self.assert_('eventlet.green.socket' in lines[1], repr(output))
        self.assert_('GreenSocket' in lines[1], repr(output))


class MonkeyPatch(ProcessBase):
    def test_patched_modules(self):
        new_mod = """
from eventlet import patcher
patcher.monkey_patch()
import socket
import urllib
print "newmod", socket.socket, urllib.socket.socket
"""
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod.py')
        self.assert_(lines[0].startswith('newmod'), repr(output))
        self.assertEqual(lines[0].count('GreenSocket'), 2, repr(output))

    def test_early_patching(self):
        new_mod = """
from eventlet import patcher
patcher.monkey_patch()
import eventlet
eventlet.sleep(0.01)
print "newmod"
"""
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod.py')
        self.assertEqual(len(lines), 2, repr(output))
        self.assert_(lines[0].startswith('newmod'), repr(output))

    def test_late_patching(self):
        new_mod = """
import eventlet
eventlet.sleep(0.01)
from eventlet import patcher
patcher.monkey_patch()
eventlet.sleep(0.01)
print "newmod"
"""
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod.py')
        self.assertEqual(len(lines), 2, repr(output))
        self.assert_(lines[0].startswith('newmod'), repr(output))

    def test_typeerror(self):
        new_mod = """
from eventlet import patcher
patcher.monkey_patch(finagle=True)
"""
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod.py')
        self.assert_(lines[-2].startswith('TypeError'), repr(output))
        self.assert_('finagle' in lines[-2], repr(output))

    def assert_boolean_logic(self, call, expected, not_expected=''):
        expected_list = ", ".join(['"%s"' % x for x in expected.split(',') if len(x)])
        not_expected_list = ", ".join(['"%s"' % x for x in not_expected.split(',') if len(x)])
        new_mod = """
from eventlet import patcher
%s
for mod in [%s]:
    assert patcher.is_monkey_patched(mod), mod
for mod in [%s]:
    assert not patcher.is_monkey_patched(mod), mod
print "already_patched", ",".join(sorted(patcher.already_patched.keys()))
""" % (call, expected_list, not_expected_list)
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod.py')
        ap = 'already_patched'
        self.assert_(lines[0].startswith(ap), repr(output))
        patched_modules = lines[0][len(ap):].strip()
        # psycopg might or might not be patched based on installed modules
        patched_modules = patched_modules.replace("psycopg,", "")
        # ditto for MySQLdb
        patched_modules = patched_modules.replace("MySQLdb,", "")
        self.assertEqual(patched_modules, expected,
                         "Logic:%s\nExpected: %s != %s" % (call, expected, patched_modules))

    def test_boolean(self):
        self.assert_boolean_logic("patcher.monkey_patch()",
                                  'os,select,socket,thread,time')

    def test_boolean_all(self):
        self.assert_boolean_logic("patcher.monkey_patch(all=True)",
                                  'os,select,socket,thread,time')

    def test_boolean_all_single(self):
        self.assert_boolean_logic("patcher.monkey_patch(all=True, "
                                  "socket=True)",
                                  'os,select,socket,thread,time')

    def test_boolean_all_negative(self):
        self.assert_boolean_logic("patcher.monkey_patch(all=False, "\
                                  "socket=False, select=True)",
                                  'select')

    def test_boolean_single(self):
        self.assert_boolean_logic("patcher.monkey_patch(socket=True)",
                                  'socket')

    def test_boolean_double(self):
        self.assert_boolean_logic("patcher.monkey_patch(socket=True,"\
                                  " select=True)",
                                  'select,socket')

    def test_boolean_negative(self):
        self.assert_boolean_logic("patcher.monkey_patch(socket=False)",
                                  'os,select,thread,time')

    def test_boolean_negative2(self):
        self.assert_boolean_logic("patcher.monkey_patch(socket=False,"\
                                  "time=False)",
                                  'os,select,thread')

    def test_conflicting_specifications(self):
        self.assert_boolean_logic("patcher.monkey_patch(socket=False, "\
                                  "select=True)",
                                  'select')


test_monkey_patch_threading = """
def test_monkey_patch_threading():
    tickcount = [0]
    def tick():
        for i in xrange(1000):
            tickcount[0] += 1
            eventlet.sleep()
    def do_sleep():
        tpool.execute(time.sleep, 0.5)
    eventlet.spawn(tick)
    w1 = eventlet.spawn(do_sleep)
    w1.wait()
    print tickcount[0]
    assert tickcount[0] > 900
    tpool.killall()
"""


class Tpool(ProcessBase):
    TEST_TIMEOUT = 3

    @skip_with_pyevent
    def test_simple(self):
        new_mod = """
import eventlet
from eventlet import patcher
patcher.monkey_patch()
from eventlet import tpool
print "newmod", tpool.execute(len, "hi")
print "newmod", tpool.execute(len, "hi2")
tpool.killall()
"""
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod.py')
        self.assertEqual(len(lines), 3, output)
        self.assert_(lines[0].startswith('newmod'), repr(output))
        self.assert_('2' in lines[0], repr(output))
        self.assert_('3' in lines[1], repr(output))

    @skip_with_pyevent
    def test_unpatched_thread(self):
        new_mod = """import eventlet
eventlet.monkey_patch(time=False, thread=False)
from eventlet import tpool
import time
"""
        new_mod += test_monkey_patch_threading
        new_mod += "\ntest_monkey_patch_threading()\n"
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod.py')
        self.assertEqual(len(lines), 2, lines)

    @skip_with_pyevent
    def test_patched_thread(self):
        new_mod = """import eventlet
eventlet.monkey_patch(time=False, thread=True)
from eventlet import tpool
import time
"""
        new_mod += test_monkey_patch_threading
        new_mod += "\ntest_monkey_patch_threading()\n"
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod.py')
        self.assertEqual(len(lines), 2, "\n".join(lines))


class Subprocess(ProcessBase):
    def test_monkeypatched_subprocess(self):
        new_mod = """import eventlet
eventlet.monkey_patch()
from eventlet.green import subprocess

subprocess.Popen(['true'], stdin=subprocess.PIPE)
print "done"
"""
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod')
        self.assertEqual(output, "done\n", output)


class Threading(ProcessBase):
    def test_orig_thread(self):
        new_mod = """import eventlet
eventlet.monkey_patch()
from eventlet import patcher
import threading
_threading = patcher.original('threading')
def test():
    print repr(threading.currentThread())
t = _threading.Thread(target=test)
t.start()
t.join()
print len(threading._active)
print len(_threading._active)
"""
        self.write_to_tempfile("newmod", new_mod)
        output, lines = self.launch_subprocess('newmod')
        self.assertEqual(len(lines), 4, "\n".join(lines))
        self.assert_(lines[0].startswith('