urllib3-1.7.1/0000755000076500000240000000000012220605014013522 5ustar shazowstaff00000000000000urllib3-1.7.1/CHANGES.rst0000644000076500000240000002160512220604742015337 0ustar shazowstaff00000000000000Changes ======= 1.7.1 (2013-09-25) ++++++++++++++++++ * Added granular timeout support with new `urllib3.util.Timeout` class. (Issue #231) * Fixed Python 3.4 support. (Issue #238) 1.7 (2013-08-14) ++++++++++++++++ * More exceptions are now pickle-able, with tests. (Issue #174) * Fixed redirecting with relative URLs in Location header. (Issue #178) * Support for relative urls in ``Location: ...`` header. (Issue #179) * ``urllib3.response.HTTPResponse`` now inherits from ``io.IOBase`` for bonus file-like functionality. (Issue #187) * Passing ``assert_hostname=False`` when creating a HTTPSConnectionPool will skip hostname verification for SSL connections. (Issue #194) * New method ``urllib3.response.HTTPResponse.stream(...)`` which acts as a generator wrapped around ``.read(...)``. (Issue #198) * IPv6 url parsing enforces brackets around the hostname. (Issue #199) * Fixed thread race condition in ``urllib3.poolmanager.PoolManager.connection_from_host(...)`` (Issue #204) * ``ProxyManager`` requests now include non-default port in ``Host: ...`` header. (Issue #217) * Added HTTPS proxy support in ``ProxyManager``. (Issue #170 #139) * New ``RequestField`` object can be passed to the ``fields=...`` param which can specify headers. (Issue #220) * Raise ``urllib3.exceptions.ProxyError`` when connecting to proxy fails. (Issue #221) * Use international headers when posting file names. (Issue #119) * Improved IPv6 support. (Issue #203) 1.6 (2013-04-25) ++++++++++++++++ * Contrib: Optional SNI support for Py2 using PyOpenSSL. (Issue #156) * ``ProxyManager`` automatically adds ``Host: ...`` header if not given. * Improved SSL-related code. ``cert_req`` now optionally takes a string like "REQUIRED" or "NONE". Same with ``ssl_version`` takes strings like "SSLv23" The string values reflect the suffix of the respective constant variable. (Issue #130) * Vendored ``socksipy`` now based on Anorov's fork which handles unexpectedly closed proxy connections and larger read buffers. (Issue #135) * Ensure the connection is closed if no data is received, fixes connection leak on some platforms. (Issue #133) * Added SNI support for SSL/TLS connections on Py32+. (Issue #89) * Tests fixed to be compatible with Py26 again. (Issue #125) * Added ability to choose SSL version by passing an ``ssl.PROTOCOL_*`` constant to the ``ssl_version`` parameter of ``HTTPSConnectionPool``. (Issue #109) * Allow an explicit content type to be specified when encoding file fields. (Issue #126) * Exceptions are now pickleable, with tests. (Issue #101) * Fixed default headers not getting passed in some cases. (Issue #99) * Treat "content-encoding" header value as case-insensitive, per RFC 2616 Section 3.5. (Issue #110) * "Connection Refused" SocketErrors will get retried rather than raised. (Issue #92) * Updated vendored ``six``, no longer overrides the global ``six`` module namespace. (Issue #113) * ``urllib3.exceptions.MaxRetryError`` contains a ``reason`` property holding the exception that prompted the final retry. If ``reason is None`` then it was due to a redirect. (Issue #92, #114) * Fixed ``PoolManager.urlopen()`` from not redirecting more than once. (Issue #149) * Don't assume ``Content-Type: text/plain`` for multi-part encoding parameters that are not files. (Issue #111) * Pass `strict` param down to ``httplib.HTTPConnection``. 
(Issue #122) * Added mechanism to verify SSL certificates by fingerprint (md5, sha1) or against an arbitrary hostname (when connecting by IP or for misconfigured servers). (Issue #140) * Streaming decompression support. (Issue #159) 1.5 (2012-08-02) ++++++++++++++++ * Added ``urllib3.add_stderr_logger()`` for quickly enabling STDERR debug logging in urllib3. * Native full URL parsing (including auth, path, query, fragment) available in ``urllib3.util.parse_url(url)``. * Built-in redirect will switch method to 'GET' if status code is 303. (Issue #11) * ``urllib3.PoolManager`` strips the scheme and host before sending the request uri. (Issue #8) * New ``urllib3.exceptions.DecodeError`` exception for when automatic decoding, based on the Content-Type header, fails. * Fixed bug with pool depletion and leaking connections (Issue #76). Added explicit connection closing on pool eviction. Added ``urllib3.PoolManager.clear()``. * 99% -> 100% unit test coverage. 1.4 (2012-06-16) ++++++++++++++++ * Minor AppEngine-related fixes. * Switched from ``mimetools.choose_boundary`` to ``uuid.uuid4()``. * Improved url parsing. (Issue #73) * IPv6 url support. (Issue #72) 1.3 (2012-03-25) ++++++++++++++++ * Removed pre-1.0 deprecated API. * Refactored helpers into a ``urllib3.util`` submodule. * Fixed multipart encoding to support list-of-tuples for keys with multiple values. (Issue #48) * Fixed multiple Set-Cookie headers in response not getting merged properly in Python 3. (Issue #53) * AppEngine support with Py27. (Issue #61) * Minor ``encode_multipart_formdata`` fixes related to Python 3 strings vs bytes. 1.2.2 (2012-02-06) ++++++++++++++++++ * Fixed packaging bug of not shipping ``test-requirements.txt``. (Issue #47) 1.2.1 (2012-02-05) ++++++++++++++++++ * Fixed another bug related to when ``ssl`` module is not available. (Issue #41) * Location parsing errors now raise ``urllib3.exceptions.LocationParseError`` which inherits from ``ValueError``. 1.2 (2012-01-29) ++++++++++++++++ * Added Python 3 support (tested on 3.2.2) * Dropped Python 2.5 support (tested on 2.6.7, 2.7.2) * Use ``select.poll`` instead of ``select.select`` for platforms that support it. * Use ``Queue.LifoQueue`` instead of ``Queue.Queue`` for more aggressive connection reusing. Configurable by overriding ``ConnectionPool.QueueCls``. * Fixed ``ImportError`` during install when ``ssl`` module is not available. (Issue #41) * Fixed ``PoolManager`` redirects between schemes (such as HTTP -> HTTPS) not completing properly. (Issue #28, uncovered by Issue #10 in v1.1) * Ported ``dummyserver`` to use ``tornado`` instead of ``webob`` + ``eventlet``. Removed extraneous unsupported dummyserver testing backends. Added socket-level tests. * More tests. Achievement Unlocked: 99% Coverage. 1.1 (2012-01-07) ++++++++++++++++ * Refactored ``dummyserver`` to its own root namespace module (used for testing). * Added hostname verification for ``VerifiedHTTPSConnection`` by vendoring in Py32's ``ssl_match_hostname``. (Issue #25) * Fixed cross-host HTTP redirects when using ``PoolManager``. (Issue #10) * Fixed ``decode_content`` being ignored when set through ``urlopen``. (Issue #27) * Fixed timeout-related bugs. (Issues #17, #23) 1.0.2 (2011-11-04) ++++++++++++++++++ * Fixed typo in ``VerifiedHTTPSConnection`` which would only present as a bug if you're using the object manually. (Thanks pyos) * Made RecentlyUsedContainer (and consequently PoolManager) more thread-safe by wrapping the access log in a mutex. 
(Thanks @christer) * Made RecentlyUsedContainer more dict-like (corrected ``__delitem__`` and ``__getitem__`` behaviour), with tests. Shouldn't affect core urllib3 code. 1.0.1 (2011-10-10) ++++++++++++++++++ * Fixed a bug where the same connection would get returned into the pool twice, causing extraneous "HttpConnectionPool is full" log warnings. 1.0 (2011-10-08) ++++++++++++++++ * Added ``PoolManager`` with LRU expiration of connections (tested and documented). * Added ``ProxyManager`` (needs tests, docs, and confirmation that it works with HTTPS proxies). * Added optional partial-read support for responses when ``preload_content=False``. You can now make requests and just read the headers without loading the content. * Made response decoding optional (default on, same as before). * Added optional explicit boundary string for ``encode_multipart_formdata``. * Convenience request methods are now inherited from ``RequestMethods``. Old helpers like ``get_url`` and ``post_url`` should be abandoned in favour of the new ``request(method, url, ...)``. * Refactored code to be even more decoupled, reusable, and extendable. * License header added to ``.py`` files. * Embiggened the documentation: Lots of Sphinx-friendly docstrings in the code and docs in ``docs/`` and on urllib3.readthedocs.org. * Embettered all the things! * Started writing this file. 0.4.1 (2011-07-17) ++++++++++++++++++ * Minor bug fixes, code cleanup. 0.4 (2011-03-01) ++++++++++++++++ * Better unicode support. * Added ``VerifiedHTTPSConnection``. * Added ``NTLMConnectionPool`` in contrib. * Minor improvements. 0.3.1 (2010-07-13) ++++++++++++++++++ * Added ``assert_host_name`` optional parameter. Now compatible with proxies. 0.3 (2009-12-10) ++++++++++++++++ * Added HTTPS support. * Minor bug fixes. * Refactored, broken backwards compatibility with 0.2. * API to be treated as stable from this version forward. 0.2 (2008-11-17) ++++++++++++++++ * Added unit tests. * Bug fixes. 0.1 (2008-11-16) ++++++++++++++++ * First release. urllib3-1.7.1/CONTRIBUTORS.txt0000644000076500000240000000545012220604305016226 0ustar shazowstaff00000000000000# Contributions to the urllib3 project ## Creator & Maintainer * Andrey Petrov ## Contributors In chronological order: * victor.vde * HTTPS patch (which inspired HTTPSConnectionPool) * erikcederstrand * NTLM-authenticated HTTPSConnectionPool * Basic-authenticated HTTPSConnectionPool (merged into make_headers) * niphlod * Client-verified SSL certificates for HTTPSConnectionPool * Response gzip and deflate encoding support * Better unicode support for filepost using StringIO buffers * btoconnor * Non-multipart encoding for POST requests * p.dobrogost * Code review, PEP8 compliance, benchmark fix * kennethreitz * Bugfixes, suggestions, Requests integration * georgemarshall * Bugfixes, Improvements and Test coverage * Thomas Kluyver * Python 3 support * brandon-rhodes * Design review, bugfixes, test coverage. * studer * IPv6 url support and test coverage * Shivaram Lingamneni * Support for explicitly closing pooled connections * hartator * Corrected multipart behavior for params * Thomas Weißschuh * Support for TLS SNI * API unification of ssl_version/cert_reqs * SSL fingerprint and alternative hostname verification * Bugfixes in testsuite * Sune Kirkeby * Optional SNI-support for Python 2 via PyOpenSSL. * Marc Schlaich * Various bugfixes and test improvements. 
* Bryce Boe * Correct six.moves conflict * Fixed pickle support of some exceptions * Boris Figovsky * Allowed to skip SSL hostname verification * Cory Benfield * Stream method for Response objects. * Return native strings in header values. * Generate 'Host' header when using proxies. * Jason Robinson * Add missing WrappedSocket.fileno method in PyOpenSSL * Audrius Butkevicius * Fixed a race condition * Stanislav Vitkovskiy * Added HTTPS (CONNECT) proxy support * Stephen Holsapple * Added abstraction for granular control of request fields * Martin von Gagern * Support for non-ASCII header parameters * Kevin Burke and Pavel Kirichenko * Support for separate connect and request timeouts * [Your name or handle] <[email or website]> * [Brief summary of your changes] urllib3-1.7.1/dummyserver/0000755000076500000240000000000012220605014016104 5ustar shazowstaff00000000000000urllib3-1.7.1/dummyserver/__init__.py0000644000076500000240000000000011670757706020233 0ustar shazowstaff00000000000000urllib3-1.7.1/dummyserver/handlers.py0000644000076500000240000001571112202774751020276 0ustar shazowstaff00000000000000from __future__ import print_function import gzip import json import logging import sys import time import zlib from io import BytesIO from tornado.wsgi import HTTPRequest try: from urllib.parse import urlsplit except ImportError: from urlparse import urlsplit log = logging.getLogger(__name__) class Response(object): def __init__(self, body='', status='200 OK', headers=None): if not isinstance(body, bytes): body = body.encode('utf8') self.body = body self.status = status self.headers = headers or [("Content-type", "text/plain")] def __call__(self, environ, start_response): start_response(self.status, self.headers) return [self.body] class WSGIHandler(object): pass class TestingApp(WSGIHandler): """ Simple app that performs various operations, useful for testing an HTTP library. Given any path, it will attempt to load a corresponding local method if it exists. Status code 200 indicates success, 400 indicates failure. Each method has its own conditions for success/failure. """ def __call__(self, environ, start_response): req = HTTPRequest(environ) req.params = {} for k, v in req.arguments.items(): req.params[k] = next(iter(v)) path = req.path[:] if not path.startswith('/'): path = urlsplit(path).path target = path[1:].replace('/', '_') method = getattr(self, target, self.index) resp = method(req) if dict(resp.headers).get('Connection') == 'close': # FIXME: Can we kill the connection somehow?
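            # (WSGI gives the application no handle on the raw socket, so
            # there is no clean way to force-close the connection from here;
            # hence the pass below.)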
pass return resp(environ, start_response) def index(self, _request): "Render simple message" return Response("Dummy server!") def set_up(self, request): test_type = request.params.get('test_type') test_id = request.params.get('test_id') if test_id: print('\nNew test %s: %s' % (test_type, test_id)) else: print('\nNew test %s' % test_type) return Response("Dummy server is ready!") def specific_method(self, request): "Confirm that the request matches the desired method type" method = request.params.get('method') if method and not isinstance(method, str): method = method.decode('utf8') if request.method != method: return Response("Wrong method: %s != %s" % (method, request.method), status='400 Bad Request') return Response() def upload(self, request): "Confirm that the uploaded file conforms to specification" # FIXME: This is a huge broken mess param = request.params.get('upload_param', 'myfile').decode('ascii') filename = request.params.get('upload_filename', '').decode('utf-8') size = int(request.params.get('upload_size', '0')) files_ = request.files.get(param) if len(files_) != 1: return Response("Expected 1 file for '%s', not %d" % (param, len(files_)), status='400 Bad Request') file_ = files_[0] data = file_['body'] if int(size) != len(data): return Response("Wrong size: %d != %d" % (size, len(data)), status='400 Bad Request') if filename != file_['filename']: return Response("Wrong filename: %s != %s" % (filename, file_['filename']), status='400 Bad Request') return Response() def redirect(self, request): "Perform a redirect to ``target``" target = request.params.get('target', '/') headers = [('Location', target)] return Response(status='303 See Other', headers=headers) def keepalive(self, request): if request.params.get('close', b'0') == b'1': headers = [('Connection', 'close')] return Response('Closing', headers=headers) headers = [('Connection', 'keep-alive')] return Response('Keeping alive', headers=headers) def sleep(self, request): "Sleep for a specified amount of ``seconds``" seconds = float(request.params.get('seconds', '1')) time.sleep(seconds) return Response() def echo(self, request): "Echo back the params" if request.method == 'GET': return Response(request.query) return Response(request.body) def encodingrequest(self, request): "Check for UA accepting gzip/deflate encoding" data = b"hello, world!" encoding = request.headers.get('Accept-Encoding', '') headers = None if encoding == 'gzip': headers = [('Content-Encoding', 'gzip')] file_ = BytesIO() zipfile = gzip.GzipFile('', mode='w', fileobj=file_) zipfile.write(data) zipfile.close() data = file_.getvalue() elif encoding == 'deflate': headers = [('Content-Encoding', 'deflate')] data = zlib.compress(data) elif encoding == 'garbage-gzip': headers = [('Content-Encoding', 'gzip')] data = 'garbage' elif encoding == 'garbage-deflate': headers = [('Content-Encoding', 'deflate')] data = 'garbage' return Response(data, headers=headers) def headers(self, request): return Response(json.dumps(request.headers)) def shutdown(self, request): sys.exit() # RFC2231-aware replacement of internal tornado function def _parse_header(line): r"""Parse a Content-type like header. Return the main content-type and a dictionary of options.
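    Unlike the stock tornado helper this replaces, parameters encoded per
    RFC 2231 (such as file* in the doctest below) are decoded into unicode
    values.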
>>> d = _parse_header("CD: fd; foo=\"bar\"; file*=utf-8''T%C3%A4st")[1] >>> d['file'] == 'T\u00e4st' True >>> d['foo'] 'bar' """ import tornado.httputil import email.utils from urllib3.packages import six if not six.PY3: line = line.encode('utf-8') parts = tornado.httputil._parseparam(';' + line) key = next(parts) # decode_params treats first argument special, but we already stripped key params = [('Dummy', 'value')] for p in parts: i = p.find('=') if i >= 0: name = p[:i].strip().lower() value = p[i + 1:].strip() params.append((name, value)) params = email.utils.decode_params(params) params.pop(0) # get rid of the dummy again pdict = {} for name, value in params: print(repr(value)) value = email.utils.collapse_rfc2231_value(value) if len(value) >= 2 and value[0] == '"' and value[-1] == '"': value = value[1:-1] pdict[name] = value return key, pdict # TODO: make the following conditional as soon as we know a version # which does not require this fix. # See https://github.com/facebook/tornado/issues/868 if True: import tornado.httputil tornado.httputil._parse_header = _parse_header urllib3-1.7.1/dummyserver/proxy.py0000755000076500000240000001114312202774751017661 0ustar shazowstaff00000000000000#!/usr/bin/env python # # Simple asynchronous HTTP proxy with tunnelling (CONNECT). # # GET/POST proxying based on # http://groups.google.com/group/python-tornado/msg/7bea08e7a049cf26 # # Copyright (C) 2012 Senko Rasic # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
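# Usage, mirroring the __main__ block at the bottom of this file:
#
#     $ python dummyserver/proxy.py [port]    # port defaults to 8888
#
# Point an HTTP client at localhost:<port> as its proxy. GET/POST requests
# are relayed through tornado's AsyncHTTPClient; CONNECT requests are
# tunnelled directly between the client and upstream sockets.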
import sys import socket import tornado.httpserver import tornado.ioloop import tornado.iostream import tornado.web import tornado.httpclient __all__ = ['ProxyHandler', 'run_proxy'] class ProxyHandler(tornado.web.RequestHandler): SUPPORTED_METHODS = ['GET', 'POST', 'CONNECT'] @tornado.web.asynchronous def get(self): def handle_response(response): if response.error and not isinstance(response.error, tornado.httpclient.HTTPError): self.set_status(500) self.write('Internal server error:\n' + str(response.error)) self.finish() else: self.set_status(response.code) for header in ('Date', 'Cache-Control', 'Server', 'Content-Type', 'Location'): v = response.headers.get(header) if v: self.set_header(header, v) if response.body: self.write(response.body) self.finish() req = tornado.httpclient.HTTPRequest(url=self.request.uri, method=self.request.method, body=self.request.body, headers=self.request.headers, follow_redirects=False, allow_nonstandard_methods=True) client = tornado.httpclient.AsyncHTTPClient() try: client.fetch(req, handle_response) except tornado.httpclient.HTTPError as e: if hasattr(e, 'response') and e.response: handle_response(e.response) else: self.set_status(500) self.write('Internal server error:\n' + str(e)) self.finish() @tornado.web.asynchronous def post(self): return self.get() @tornado.web.asynchronous def connect(self): host, port = self.request.uri.split(':') client = self.request.connection.stream def read_from_client(data): upstream.write(data) def read_from_upstream(data): client.write(data) def client_close(data=None): if upstream.closed(): return if data: upstream.write(data) upstream.close() def upstream_close(data=None): if client.closed(): return if data: client.write(data) client.close() def start_tunnel(): client.read_until_close(client_close, read_from_client) upstream.read_until_close(upstream_close, read_from_upstream) client.write(b'HTTP/1.0 200 Connection established\r\n\r\n') s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) upstream = tornado.iostream.IOStream(s) upstream.connect((host, int(port)), start_tunnel) def run_proxy(port, start_ioloop=True): """ Run proxy on the specified port. If start_ioloop is True (default), the tornado IOLoop will be started immediately. """ app = tornado.web.Application([ (r'.*', ProxyHandler), ]) app.listen(port) ioloop = tornado.ioloop.IOLoop.instance() if start_ioloop: ioloop.start() if __name__ == '__main__': port = 8888 if len(sys.argv) > 1: port = int(sys.argv[1]) print ("Starting HTTP proxy on port %d" % port) run_proxy(port) urllib3-1.7.1/dummyserver/server.py0000755000076500000240000000673212202774751020006 0ustar shazowstaff00000000000000#!/usr/bin/env python """ Dummy server used for unit testing. """ from __future__ import print_function import logging import os import sys import threading import socket from tornado import netutil import tornado.wsgi import tornado.httpserver import tornado.ioloop import tornado.web from dummyserver.handlers import TestingApp from dummyserver.proxy import ProxyHandler log = logging.getLogger(__name__) CERTS_PATH = os.path.join(os.path.dirname(__file__), 'certs') DEFAULT_CERTS = { 'certfile': os.path.join(CERTS_PATH, 'server.crt'), 'keyfile': os.path.join(CERTS_PATH, 'server.key'), } DEFAULT_CA = os.path.join(CERTS_PATH, 'cacert.pem') DEFAULT_CA_BAD = os.path.join(CERTS_PATH, 'client_bad.pem') # Different types of servers we have: class SocketServerThread(threading.Thread): """ :param socket_handler: Callable which receives a socket argument for one request.
:param ready_event: Event which gets set when the socket handler is ready to receive requests. """ def __init__(self, socket_handler, host='localhost', port=8081, ready_event=None): threading.Thread.__init__(self) self.socket_handler = socket_handler self.host = host self.ready_event = ready_event def _start_server(self): sock = socket.socket() if sys.platform != 'win32': sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind((self.host, 0)) self.port = sock.getsockname()[1] # Once listen() returns, the server socket is ready sock.listen(1) if self.ready_event: self.ready_event.set() self.socket_handler(sock) sock.close() def run(self): self.server = self._start_server() class TornadoServerThread(threading.Thread): app = tornado.wsgi.WSGIContainer(TestingApp()) def __init__(self, host='localhost', scheme='http', certs=None, ready_event=None): threading.Thread.__init__(self) self.host = host self.scheme = scheme self.certs = certs self.ready_event = ready_event def _start_server(self): if self.scheme == 'https': http_server = tornado.httpserver.HTTPServer(self.app, ssl_options=self.certs) else: http_server = tornado.httpserver.HTTPServer(self.app) family = socket.AF_INET6 if ':' in self.host else socket.AF_INET sock, = netutil.bind_sockets(None, address=self.host, family=family) self.port = sock.getsockname()[1] http_server.add_sockets([sock]) return http_server def run(self): self.ioloop = tornado.ioloop.IOLoop.instance() self.server = self._start_server() if self.ready_event: self.ready_event.set() self.ioloop.start() def stop(self): self.ioloop.add_callback(self.server.stop) self.ioloop.add_callback(self.ioloop.stop) class ProxyServerThread(TornadoServerThread): app = tornado.web.Application([(r'.*', ProxyHandler)]) if __name__ == '__main__': log.setLevel(logging.DEBUG) log.addHandler(logging.StreamHandler(sys.stderr)) from urllib3 import get_host url = "http://localhost:8081" if len(sys.argv) > 1: url = sys.argv[1] print("Starting WSGI server at: %s" % url) scheme, host, port = get_host(url) t = TornadoServerThread(scheme=scheme, host=host, port=port) t.start() urllib3-1.7.1/dummyserver/testcase.py0000644000076500000240000000650212202774751020313 0ustar shazowstaff00000000000000import unittest import socket import threading from nose.plugins.skip import SkipTest from dummyserver.server import ( TornadoServerThread, SocketServerThread, DEFAULT_CERTS, ProxyServerThread, ) has_ipv6 = hasattr(socket, 'has_ipv6') class SocketDummyServerTestCase(unittest.TestCase): """ A simple socket-based server is created for this class that is good for exactly one request. 
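    The handler runs on a SocketServerThread; once the thread's ready_event
    fires, the ephemeral port it bound is published as cls.port.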
""" scheme = 'http' host = 'localhost' @classmethod def _start_server(cls, socket_handler): ready_event = threading.Event() cls.server_thread = SocketServerThread(socket_handler=socket_handler, ready_event=ready_event, host=cls.host) cls.server_thread.start() ready_event.wait() cls.port = cls.server_thread.port @classmethod def tearDownClass(cls): if hasattr(cls, 'server_thread'): cls.server_thread.join() class HTTPDummyServerTestCase(unittest.TestCase): scheme = 'http' host = 'localhost' host_alt = '127.0.0.1' # Some tests need two hosts certs = DEFAULT_CERTS @classmethod def _start_server(cls): ready_event = threading.Event() cls.server_thread = TornadoServerThread(host=cls.host, scheme=cls.scheme, certs=cls.certs, ready_event=ready_event) cls.server_thread.start() ready_event.wait() cls.port = cls.server_thread.port @classmethod def _stop_server(cls): cls.server_thread.stop() cls.server_thread.join() @classmethod def setUpClass(cls): cls._start_server() @classmethod def tearDownClass(cls): cls._stop_server() class HTTPSDummyServerTestCase(HTTPDummyServerTestCase): scheme = 'https' host = 'localhost' certs = DEFAULT_CERTS class HTTPDummyProxyTestCase(unittest.TestCase): http_host = 'localhost' http_host_alt = '127.0.0.1' https_host = 'localhost' https_host_alt = '127.0.0.1' https_certs = DEFAULT_CERTS proxy_host = 'localhost' proxy_host_alt = '127.0.0.1' @classmethod def setUpClass(cls): cls.http_thread = TornadoServerThread(host=cls.http_host, scheme='http') cls.http_thread._start_server() cls.http_port = cls.http_thread.port cls.https_thread = TornadoServerThread( host=cls.https_host, scheme='https', certs=cls.https_certs) cls.https_thread._start_server() cls.https_port = cls.https_thread.port ready_event = threading.Event() cls.proxy_thread = ProxyServerThread( host=cls.proxy_host, ready_event=ready_event) cls.proxy_thread.start() ready_event.wait() cls.proxy_port = cls.proxy_thread.port @classmethod def tearDownClass(cls): cls.proxy_thread.stop() cls.proxy_thread.join() class IPv6HTTPDummyServerTestCase(HTTPDummyServerTestCase): host = '::1' @classmethod def setUpClass(cls): if not has_ipv6: raise SkipTest('IPv6 not available') else: super(IPv6HTTPDummyServerTestCase, cls).setUpClass() urllib3-1.7.1/LICENSE.txt0000644000076500000240000000222712162632565015370 0ustar shazowstaff00000000000000This is the MIT license: http://www.opensource.org/licenses/mit-license.php Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
urllib3-1.7.1/MANIFEST.in0000644000076500000240000000012211713774564015302 0ustar shazowstaff00000000000000include README.rst CHANGES.rst LICENSE.txt CONTRIBUTORS.txt test-requirements.txt urllib3-1.7.1/PKG-INFO0000644000076500000240000004126712220605014014631 0ustar shazowstaff00000000000000Metadata-Version: 1.0 Name: urllib3 Version: 1.7.1 Summary: HTTP library with thread-safe connection pooling, file post, and more. Home-page: http://urllib3.readthedocs.org/ Author: Andrey Petrov Author-email: andrey.petrov@shazow.net License: MIT Description: ======= urllib3 ======= .. image:: https://travis-ci.org/shazow/urllib3.png?branch=master :target: https://travis-ci.org/shazow/urllib3 Highlights ========== - Re-use the same socket connection for multiple requests (``HTTPConnectionPool`` and ``HTTPSConnectionPool``) (with optional client-side certificate verification). - File posting (``encode_multipart_formdata``). - Built-in redirection and retries (optional). - Supports gzip and deflate decoding. - Thread-safe and sanity-safe. - Works with AppEngine, gevent, and eventlib. - Tested on Python 2.6+ and Python 3.2+, 100% unit test coverage. - Small and easy to understand codebase perfect for extending and building upon. For a more comprehensive solution, have a look at `Requests `_ which is also powered by urllib3. What's wrong with urllib and urllib2? ===================================== There are two critical features missing from the Python standard library: Connection re-using/pooling and file posting. It's not terribly hard to implement these yourself, but it's much easier to use a module that already did the work for you. The Python standard libraries ``urllib`` and ``urllib2`` have little to do with each other. They were designed to be independent and standalone, each solving a different scope of problems, and ``urllib3`` follows in a similar vein. Why do I want to reuse connections? =================================== Performance. When you normally do a urllib call, a separate socket connection is created with each request. By reusing existing sockets (supported since HTTP 1.1), the requests will take up less resources on the server's end, and also provide a faster response time at the client's end. With some simple benchmarks (see `test/benchmark.py `_ ), downloading 15 URLs from google.com is about twice as fast when using HTTPConnectionPool (which uses 1 connection) than using plain urllib (which uses 15 connections). This library is perfect for: - Talking to an API - Crawling a website - Any situation where being able to post files, handle redirection, and retrying is useful. It's relatively lightweight, so it can be used for anything! Examples ======== Go to `urllib3.readthedocs.org `_ for more nice syntax-highlighted examples. But, long story short:: import urllib3 http = urllib3.PoolManager() r = http.request('GET', 'http://google.com/') print r.status, r.data The ``PoolManager`` will take care of reusing connections for you whenever you request the same host. For more fine-grained control of your connection pools, you should look at `ConnectionPool `_. Run the tests ============= We use some external dependencies, multiple interpreters and code coverage analysis while running test suite. Easiest way to run the tests is thusly the ``tox`` utility: :: $ tox # [..] py26: commands succeeded py27: commands succeeded py32: commands succeeded py33: commands succeeded Note that code coverage less than 100% is regarded as a failing run. Contributing ============ #. 
`Check for open issues `_ or open a fresh issue to start a discussion around a feature idea or a bug. There is a *Contributor Friendly* tag for issues that should be ideal for people who are not very familiar with the codebase yet. #. Fork the `urllib3 repository on Github `_ to start making your changes. #. Write a test which shows that the bug was fixed or that the feature works as expected. #. Send a pull request and bug the maintainer until it gets merged and published. :) Make sure to add yourself to ``CONTRIBUTORS.txt``. Changes ======= 1.7.1 (2013-09-25) ++++++++++++++++++ * Added granular timeout support with new `urllib3.util.Timeout` class. (Issue #231) * Fixed Python 3.4 support. (Issue #238) 1.7 (2013-08-14) ++++++++++++++++ * More exceptions are now pickle-able, with tests. (Issue #174) * Fixed redirecting with relative URLs in Location header. (Issue #178) * Support for relative urls in ``Location: ...`` header. (Issue #179) * ``urllib3.response.HTTPResponse`` now inherits from ``io.IOBase`` for bonus file-like functionality. (Issue #187) * Passing ``assert_hostname=False`` when creating a HTTPSConnectionPool will skip hostname verification for SSL connections. (Issue #194) * New method ``urllib3.response.HTTPResponse.stream(...)`` which acts as a generator wrapped around ``.read(...)``. (Issue #198) * IPv6 url parsing enforces brackets around the hostname. (Issue #199) * Fixed thread race condition in ``urllib3.poolmanager.PoolManager.connection_from_host(...)`` (Issue #204) * ``ProxyManager`` requests now include non-default port in ``Host: ...`` header. (Issue #217) * Added HTTPS proxy support in ``ProxyManager``. (Issue #170 #139) * New ``RequestField`` object can be passed to the ``fields=...`` param which can specify headers. (Issue #220) * Raise ``urllib3.exceptions.ProxyError`` when connecting to proxy fails. (Issue #221) * Use international headers when posting file names. (Issue #119) * Improved IPv6 support. (Issue #203) 1.6 (2013-04-25) ++++++++++++++++ * Contrib: Optional SNI support for Py2 using PyOpenSSL. (Issue #156) * ``ProxyManager`` automatically adds ``Host: ...`` header if not given. * Improved SSL-related code. ``cert_req`` now optionally takes a string like "REQUIRED" or "NONE". Same with ``ssl_version`` takes strings like "SSLv23" The string values reflect the suffix of the respective constant variable. (Issue #130) * Vendored ``socksipy`` now based on Anorov's fork which handles unexpectedly closed proxy connections and larger read buffers. (Issue #135) * Ensure the connection is closed if no data is received, fixes connection leak on some platforms. (Issue #133) * Added SNI support for SSL/TLS connections on Py32+. (Issue #89) * Tests fixed to be compatible with Py26 again. (Issue #125) * Added ability to choose SSL version by passing an ``ssl.PROTOCOL_*`` constant to the ``ssl_version`` parameter of ``HTTPSConnectionPool``. (Issue #109) * Allow an explicit content type to be specified when encoding file fields. (Issue #126) * Exceptions are now pickleable, with tests. (Issue #101) * Fixed default headers not getting passed in some cases. (Issue #99) * Treat "content-encoding" header value as case-insensitive, per RFC 2616 Section 3.5. (Issue #110) * "Connection Refused" SocketErrors will get retried rather than raised. (Issue #92) * Updated vendored ``six``, no longer overrides the global ``six`` module namespace. (Issue #113) * ``urllib3.exceptions.MaxRetryError`` contains a ``reason`` property holding the exception that prompted the final retry. 
If ``reason is None`` then it was due to a redirect. (Issue #92, #114) * Fixed ``PoolManager.urlopen()`` from not redirecting more than once. (Issue #149) * Don't assume ``Content-Type: text/plain`` for multi-part encoding parameters that are not files. (Issue #111) * Pass `strict` param down to ``httplib.HTTPConnection``. (Issue #122) * Added mechanism to verify SSL certificates by fingerprint (md5, sha1) or against an arbitrary hostname (when connecting by IP or for misconfigured servers). (Issue #140) * Streaming decompression support. (Issue #159) 1.5 (2012-08-02) ++++++++++++++++ * Added ``urllib3.add_stderr_logger()`` for quickly enabling STDERR debug logging in urllib3. * Native full URL parsing (including auth, path, query, fragment) available in ``urllib3.util.parse_url(url)``. * Built-in redirect will switch method to 'GET' if status code is 303. (Issue #11) * ``urllib3.PoolManager`` strips the scheme and host before sending the request uri. (Issue #8) * New ``urllib3.exceptions.DecodeError`` exception for when automatic decoding, based on the Content-Type header, fails. * Fixed bug with pool depletion and leaking connections (Issue #76). Added explicit connection closing on pool eviction. Added ``urllib3.PoolManager.clear()``. * 99% -> 100% unit test coverage. 1.4 (2012-06-16) ++++++++++++++++ * Minor AppEngine-related fixes. * Switched from ``mimetools.choose_boundary`` to ``uuid.uuid4()``. * Improved url parsing. (Issue #73) * IPv6 url support. (Issue #72) 1.3 (2012-03-25) ++++++++++++++++ * Removed pre-1.0 deprecated API. * Refactored helpers into a ``urllib3.util`` submodule. * Fixed multipart encoding to support list-of-tuples for keys with multiple values. (Issue #48) * Fixed multiple Set-Cookie headers in response not getting merged properly in Python 3. (Issue #53) * AppEngine support with Py27. (Issue #61) * Minor ``encode_multipart_formdata`` fixes related to Python 3 strings vs bytes. 1.2.2 (2012-02-06) ++++++++++++++++++ * Fixed packaging bug of not shipping ``test-requirements.txt``. (Issue #47) 1.2.1 (2012-02-05) ++++++++++++++++++ * Fixed another bug related to when ``ssl`` module is not available. (Issue #41) * Location parsing errors now raise ``urllib3.exceptions.LocationParseError`` which inherits from ``ValueError``. 1.2 (2012-01-29) ++++++++++++++++ * Added Python 3 support (tested on 3.2.2) * Dropped Python 2.5 support (tested on 2.6.7, 2.7.2) * Use ``select.poll`` instead of ``select.select`` for platforms that support it. * Use ``Queue.LifoQueue`` instead of ``Queue.Queue`` for more aggressive connection reusing. Configurable by overriding ``ConnectionPool.QueueCls``. * Fixed ``ImportError`` during install when ``ssl`` module is not available. (Issue #41) * Fixed ``PoolManager`` redirects between schemes (such as HTTP -> HTTPS) not completing properly. (Issue #28, uncovered by Issue #10 in v1.1) * Ported ``dummyserver`` to use ``tornado`` instead of ``webob`` + ``eventlet``. Removed extraneous unsupported dummyserver testing backends. Added socket-level tests. * More tests. Achievement Unlocked: 99% Coverage. 1.1 (2012-01-07) ++++++++++++++++ * Refactored ``dummyserver`` to its own root namespace module (used for testing). * Added hostname verification for ``VerifiedHTTPSConnection`` by vendoring in Py32's ``ssl_match_hostname``. (Issue #25) * Fixed cross-host HTTP redirects when using ``PoolManager``. (Issue #10) * Fixed ``decode_content`` being ignored when set through ``urlopen``. (Issue #27) * Fixed timeout-related bugs. 
(Issues #17, #23) 1.0.2 (2011-11-04) ++++++++++++++++++ * Fixed typo in ``VerifiedHTTPSConnection`` which would only present as a bug if you're using the object manually. (Thanks pyos) * Made RecentlyUsedContainer (and consequently PoolManager) more thread-safe by wrapping the access log in a mutex. (Thanks @christer) * Made RecentlyUsedContainer more dict-like (corrected ``__delitem__`` and ``__getitem__`` behaviour), with tests. Shouldn't affect core urllib3 code. 1.0.1 (2011-10-10) ++++++++++++++++++ * Fixed a bug where the same connection would get returned into the pool twice, causing extraneous "HttpConnectionPool is full" log warnings. 1.0 (2011-10-08) ++++++++++++++++ * Added ``PoolManager`` with LRU expiration of connections (tested and documented). * Added ``ProxyManager`` (needs tests, docs, and confirmation that it works with HTTPS proxies). * Added optional partial-read support for responses when ``preload_content=False``. You can now make requests and just read the headers without loading the content. * Made response decoding optional (default on, same as before). * Added optional explicit boundary string for ``encode_multipart_formdata``. * Convenience request methods are now inherited from ``RequestMethods``. Old helpers like ``get_url`` and ``post_url`` should be abandoned in favour of the new ``request(method, url, ...)``. * Refactored code to be even more decoupled, reusable, and extendable. * License header added to ``.py`` files. * Embiggened the documentation: Lots of Sphinx-friendly docstrings in the code and docs in ``docs/`` and on urllib3.readthedocs.org. * Embettered all the things! * Started writing this file. 0.4.1 (2011-07-17) ++++++++++++++++++ * Minor bug fixes, code cleanup. 0.4 (2011-03-01) ++++++++++++++++ * Better unicode support. * Added ``VerifiedHTTPSConnection``. * Added ``NTLMConnectionPool`` in contrib. * Minor improvements. 0.3.1 (2010-07-13) ++++++++++++++++++ * Added ``assert_host_name`` optional parameter. Now compatible with proxies. 0.3 (2009-12-10) ++++++++++++++++ * Added HTTPS support. * Minor bug fixes. * Refactored, broken backwards compatibility with 0.2. * API to be treated as stable from this version forward. 0.2 (2008-11-17) ++++++++++++++++ * Added unit tests. * Bug fixes. 0.1 (2008-11-16) ++++++++++++++++ * First release. Keywords: urllib httplib threadsafe filepost http https ssl pooling Platform: UNKNOWN Classifier: Environment :: Web Environment Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 3 Classifier: Topic :: Internet :: WWW/HTTP Classifier: Topic :: Software Development :: Libraries urllib3-1.7.1/README.rst0000644000076500000240000000740712220604305015223 0ustar shazowstaff00000000000000======= urllib3 ======= .. image:: https://travis-ci.org/shazow/urllib3.png?branch=master :target: https://travis-ci.org/shazow/urllib3 Highlights ========== - Re-use the same socket connection for multiple requests (``HTTPConnectionPool`` and ``HTTPSConnectionPool``) (with optional client-side certificate verification). - File posting (``encode_multipart_formdata``). - Built-in redirection and retries (optional). - Supports gzip and deflate decoding. - Thread-safe and sanity-safe. - Works with AppEngine, gevent, and eventlib. - Tested on Python 2.6+ and Python 3.2+, 100% unit test coverage. 
- Small and easy to understand codebase perfect for extending and building upon. For a more comprehensive solution, have a look at `Requests `_ which is also powered by urllib3. What's wrong with urllib and urllib2? ===================================== There are two critical features missing from the Python standard library: Connection re-using/pooling and file posting. It's not terribly hard to implement these yourself, but it's much easier to use a module that already did the work for you. The Python standard libraries ``urllib`` and ``urllib2`` have little to do with each other. They were designed to be independent and standalone, each solving a different scope of problems, and ``urllib3`` follows in a similar vein. Why do I want to reuse connections? =================================== Performance. When you normally do a urllib call, a separate socket connection is created with each request. By reusing existing sockets (supported since HTTP 1.1), the requests will take up less resources on the server's end, and also provide a faster response time at the client's end. With some simple benchmarks (see `test/benchmark.py `_ ), downloading 15 URLs from google.com is about twice as fast when using HTTPConnectionPool (which uses 1 connection) than using plain urllib (which uses 15 connections). This library is perfect for: - Talking to an API - Crawling a website - Any situation where being able to post files, handle redirection, and retrying is useful. It's relatively lightweight, so it can be used for anything! Examples ======== Go to `urllib3.readthedocs.org `_ for more nice syntax-highlighted examples. But, long story short:: import urllib3 http = urllib3.PoolManager() r = http.request('GET', 'http://google.com/') print r.status, r.data The ``PoolManager`` will take care of reusing connections for you whenever you request the same host. For more fine-grained control of your connection pools, you should look at `ConnectionPool `_. Run the tests ============= We use some external dependencies, multiple interpreters and code coverage analysis while running test suite. Easiest way to run the tests is thusly the ``tox`` utility: :: $ tox # [..] py26: commands succeeded py27: commands succeeded py32: commands succeeded py33: commands succeeded Note that code coverage less than 100% is regarded as a failing run. Contributing ============ #. `Check for open issues `_ or open a fresh issue to start a discussion around a feature idea or a bug. There is a *Contributor Friendly* tag for issues that should be ideal for people who are not very familiar with the codebase yet. #. Fork the `urllib3 repository on Github `_ to start making your changes. #. Write a test which shows that the bug was fixed or that the feature works as expected. #. Send a pull request and bug the maintainer until it gets merged and published. :) Make sure to add yourself to ``CONTRIBUTORS.txt``. urllib3-1.7.1/setup.cfg0000644000076500000240000000030112220605014015335 0ustar shazowstaff00000000000000[nosetests] logging-clear-handlers = true with-coverage = true cover-package = urllib3 cover-min-percentage = 100 cover-erase = true [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 urllib3-1.7.1/setup.py0000644000076500000240000000327012162632565015256 0ustar shazowstaff00000000000000#!/usr/bin/env python from distutils.core import setup import os import re try: import setuptools except ImportError: pass # No 'develop' command, oh well. 
base_path = os.path.dirname(__file__) # Get the version (borrowed from SQLAlchemy) fp = open(os.path.join(base_path, 'urllib3', '__init__.py')) VERSION = re.compile(r".*__version__ = '(.*?)'", re.S).match(fp.read()).group(1) fp.close() version = VERSION requirements = [] tests_requirements = requirements + open('test-requirements.txt').readlines() setup(name='urllib3', version=version, description="HTTP library with thread-safe connection pooling, file post, and more.", long_description=open('README.rst').read() + '\n\n' + open('CHANGES.rst').read(), classifiers=[ 'Environment :: Web Environment', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Operating System :: OS Independent', 'Programming Language :: Python', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 3', 'Topic :: Internet :: WWW/HTTP', 'Topic :: Software Development :: Libraries', ], keywords='urllib httplib threadsafe filepost http https ssl pooling', author='Andrey Petrov', author_email='andrey.petrov@shazow.net', url='http://urllib3.readthedocs.org/', license='MIT', packages=['urllib3', 'dummyserver', 'urllib3.packages', 'urllib3.packages.ssl_match_hostname', 'urllib3.contrib', ], requires=requirements, tests_require=tests_requirements, test_suite='test', ) urllib3-1.7.1/test/0000755000076500000240000000000012220605014014501 5ustar shazowstaff00000000000000urllib3-1.7.1/test/__init__.py0000644000076500000240000000000011635707107016617 0ustar shazowstaff00000000000000urllib3-1.7.1/test/benchmark.py0000644000076500000240000000402411711335041017011 0ustar shazowstaff00000000000000#!/usr/bin/env python """ Really simple rudimentary benchmark to compare ConnectionPool versus standard urllib to demonstrate the usefulness of connection re-using. """ from __future__ import print_function import sys import time import urllib sys.path.append('../') import urllib3 # URLs to download. Doesn't matter as long as they're from the same host, so we # can take advantage of connection re-using. 
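# (Any list of same-host URLs would do; this script is the one behind the
# README's "downloading 15 URLs ... about twice as fast" comparison.)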
TO_DOWNLOAD = [ 'http://code.google.com/apis/apps/', 'http://code.google.com/apis/base/', 'http://code.google.com/apis/blogger/', 'http://code.google.com/apis/calendar/', 'http://code.google.com/apis/codesearch/', 'http://code.google.com/apis/contact/', 'http://code.google.com/apis/books/', 'http://code.google.com/apis/documents/', 'http://code.google.com/apis/finance/', 'http://code.google.com/apis/health/', 'http://code.google.com/apis/notebook/', 'http://code.google.com/apis/picasaweb/', 'http://code.google.com/apis/spreadsheets/', 'http://code.google.com/apis/webmastertools/', 'http://code.google.com/apis/youtube/', ] def urllib_get(url_list): assert url_list for url in url_list: now = time.time() r = urllib.urlopen(url) elapsed = time.time() - now print("Got in %0.3f: %s" % (elapsed, url)) def pool_get(url_list): assert url_list pool = urllib3.connection_from_url(url_list[0]) for url in url_list: now = time.time() r = pool.get_url(url) elapsed = time.time() - now print("Got in %0.3fs: %s" % (elapsed, url)) if __name__ == '__main__': print("Running pool_get ...") now = time.time() pool_get(TO_DOWNLOAD) pool_elapsed = time.time() - now print("Running urllib_get ...") now = time.time() urllib_get(TO_DOWNLOAD) urllib_elapsed = time.time() - now print("Completed pool_get in %0.3fs" % pool_elapsed) print("Completed urllib_get in %0.3fs" % urllib_elapsed) """ Example results: Completed pool_get in 1.163s Completed urllib_get in 2.318s """ urllib3-1.7.1/test/test_collections.py0000644000076500000240000000531112162632565020450 0ustar shazowstaff00000000000000import unittest from urllib3._collections import RecentlyUsedContainer as Container from urllib3.packages import six xrange = six.moves.xrange class TestLRUContainer(unittest.TestCase): def test_maxsize(self): d = Container(5) for i in xrange(5): d[i] = str(i) self.assertEqual(len(d), 5) for i in xrange(5): self.assertEqual(d[i], str(i)) d[i+1] = str(i+1) self.assertEqual(len(d), 5) self.assertFalse(0 in d) self.assertTrue(i+1 in d) def test_expire(self): d = Container(5) for i in xrange(5): d[i] = str(i) for i in xrange(5): d.get(0) # Add one more entry d[5] = '5' # Check state self.assertEqual(list(d.keys()), [2, 3, 4, 0, 5]) def test_same_key(self): d = Container(5) for i in xrange(10): d['foo'] = i self.assertEqual(list(d.keys()), ['foo']) self.assertEqual(len(d), 1) def test_access_ordering(self): d = Container(5) for i in xrange(10): d[i] = True # Keys should be ordered by access time self.assertEqual(list(d.keys()), [5, 6, 7, 8, 9]) new_order = [7,8,6,9,5] for k in new_order: d[k] self.assertEqual(list(d.keys()), new_order) def test_delete(self): d = Container(5) for i in xrange(5): d[i] = True del d[0] self.assertFalse(0 in d) d.pop(1) self.assertFalse(1 in d) d.pop(1, None) def test_get(self): d = Container(5) for i in xrange(5): d[i] = True r = d.get(4) self.assertEqual(r, True) r = d.get(5) self.assertEqual(r, None) r = d.get(5, 42) self.assertEqual(r, 42) self.assertRaises(KeyError, lambda: d[5]) def test_disposal(self): evicted_items = [] def dispose_func(arg): # Save the evicted datum for inspection evicted_items.append(arg) d = Container(5, dispose_func=dispose_func) for i in xrange(5): d[i] = i self.assertEqual(list(d.keys()), list(xrange(5))) self.assertEqual(evicted_items, []) # Nothing disposed d[5] = 5 self.assertEqual(list(d.keys()), list(xrange(1, 6))) self.assertEqual(evicted_items, [0]) del d[1] self.assertEqual(evicted_items, [0, 1]) d.clear() self.assertEqual(evicted_items, [0, 1, 2, 3, 4, 5]) def test_iter(self): 
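        # Iteration over the container is unsupported by design and should
        # raise instead of yielding keys.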
d = Container() self.assertRaises(NotImplementedError, d.__iter__) if __name__ == '__main__': unittest.main() urllib3-1.7.1/test/test_connectionpool.py0000644000076500000240000001465712220604305021162 0ustar shazowstaff00000000000000import unittest from urllib3.connectionpool import ( connection_from_url, HTTPConnection, HTTPConnectionPool, ) from urllib3.util import Timeout from urllib3.packages.ssl_match_hostname import CertificateError from urllib3.exceptions import ( ClosedPoolError, EmptyPoolError, HostChangedError, MaxRetryError, SSLError, ReadTimeoutError, ) from socket import error as SocketError, timeout as SocketTimeout from ssl import SSLError as BaseSSLError try: # Python 3 from queue import Empty from http.client import HTTPException except ImportError: from Queue import Empty from httplib import HTTPException class TestConnectionPool(unittest.TestCase): """ Tests in this suite should exercise the ConnectionPool functionality without actually making any network requests or connections. """ def test_same_host(self): same_host = [ ('http://google.com/', '/'), ('http://google.com/', 'http://google.com/'), ('http://google.com/', 'http://google.com'), ('http://google.com/', 'http://google.com/abra/cadabra'), ('http://google.com:42/', 'http://google.com:42/abracadabra'), ] for a, b in same_host: c = connection_from_url(a) self.assertTrue(c.is_same_host(b), "%s =? %s" % (a, b)) not_same_host = [ ('https://google.com/', 'http://google.com/'), ('http://google.com/', 'https://google.com/'), ('http://yahoo.com/', 'http://google.com/'), ('http://google.com:42', 'https://google.com/abracadabra'), ('http://google.com', 'https://google.net/'), ] for a, b in not_same_host: c = connection_from_url(a) self.assertFalse(c.is_same_host(b), "%s =? %s" % (a, b)) def test_max_connections(self): pool = HTTPConnectionPool(host='localhost', maxsize=1, block=True) pool._get_conn(timeout=0.01) try: pool._get_conn(timeout=0.01) self.fail("Managed to get a connection without EmptyPoolError") except EmptyPoolError: pass try: pool.request('GET', '/', pool_timeout=0.01) self.fail("Managed to get a connection without EmptyPoolError") except EmptyPoolError: pass self.assertEqual(pool.num_connections, 1) def test_pool_edgecases(self): pool = HTTPConnectionPool(host='localhost', maxsize=1, block=False) conn1 = pool._get_conn() conn2 = pool._get_conn() # New because block=False pool._put_conn(conn1) pool._put_conn(conn2) # Should be discarded self.assertEqual(conn1, pool._get_conn()) self.assertNotEqual(conn2, pool._get_conn()) self.assertEqual(pool.num_connections, 3) def test_exception_str(self): self.assertEqual( str(EmptyPoolError(HTTPConnectionPool(host='localhost'), "Test.")), "HTTPConnectionPool(host='localhost', port=None): Test.") def test_retry_exception_str(self): self.assertEqual( str(MaxRetryError( HTTPConnectionPool(host='localhost'), "Test.", None)), "HTTPConnectionPool(host='localhost', port=None): " "Max retries exceeded with url: Test. (Caused by redirect)") err = SocketError("Test") # using err.__class__ here, as socket.error is an alias for OSError # since Py3.3 and gets printed as this self.assertEqual( str(MaxRetryError( HTTPConnectionPool(host='localhost'), "Test.", err)), "HTTPConnectionPool(host='localhost', port=None): " "Max retries exceeded with url: Test. 
" "(Caused by {0}: Test)".format(str(err.__class__))) def test_pool_size(self): POOL_SIZE = 1 pool = HTTPConnectionPool(host='localhost', maxsize=POOL_SIZE, block=True) def _raise(ex): raise ex() def _test(exception, expect): pool._make_request = lambda *args, **kwargs: _raise(exception) self.assertRaises(expect, pool.request, 'GET', '/') self.assertEqual(pool.pool.qsize(), POOL_SIZE) #make sure that all of the exceptions return the connection to the pool _test(Empty, ReadTimeoutError) _test(SocketTimeout, ReadTimeoutError) _test(BaseSSLError, SSLError) _test(CertificateError, SSLError) # The pool should never be empty, and with these two exceptions being raised, # a retry will be triggered, but that retry will fail, eventually raising # MaxRetryError, not EmptyPoolError # See: https://github.com/shazow/urllib3/issues/76 pool._make_request = lambda *args, **kwargs: _raise(HTTPException) self.assertRaises(MaxRetryError, pool.request, 'GET', '/', retries=1, pool_timeout=0.01) self.assertEqual(pool.pool.qsize(), POOL_SIZE) def test_assert_same_host(self): c = connection_from_url('http://google.com:80') self.assertRaises(HostChangedError, c.request, 'GET', 'http://yahoo.com:80', assert_same_host=True) def test_pool_close(self): pool = connection_from_url('http://google.com:80') # Populate with some connections conn1 = pool._get_conn() conn2 = pool._get_conn() conn3 = pool._get_conn() pool._put_conn(conn1) pool._put_conn(conn2) old_pool_queue = pool.pool pool.close() self.assertEqual(pool.pool, None) self.assertRaises(ClosedPoolError, pool._get_conn) pool._put_conn(conn3) self.assertRaises(ClosedPoolError, pool._get_conn) self.assertRaises(Empty, old_pool_queue.get, block=False) def test_pool_timeouts(self): pool = HTTPConnectionPool(host='localhost') conn = pool._new_conn() self.assertEqual(conn.__class__, HTTPConnection) self.assertEqual(pool.timeout.__class__, Timeout) self.assertEqual(pool.timeout._read, Timeout.DEFAULT_TIMEOUT) self.assertEqual(pool.timeout._connect, Timeout.DEFAULT_TIMEOUT) self.assertEqual(pool.timeout.total, None) pool = HTTPConnectionPool(host='localhost', timeout=3) self.assertEqual(pool.timeout._read, 3) self.assertEqual(pool.timeout._connect, 3) self.assertEqual(pool.timeout.total, None) if __name__ == '__main__': unittest.main() urllib3-1.7.1/test/test_exceptions.py0000644000076500000240000000272412220604305020302 0ustar shazowstaff00000000000000import unittest import pickle from urllib3.exceptions import (HTTPError, MaxRetryError, LocationParseError, ClosedPoolError, EmptyPoolError, HostChangedError, ReadTimeoutError, ConnectTimeoutError) from urllib3.connectionpool import HTTPConnectionPool class TestPickle(unittest.TestCase): def cycle(self, item): return pickle.loads(pickle.dumps(item)) def test_exceptions(self): assert self.cycle(HTTPError(None)) assert self.cycle(MaxRetryError(None, None, None)) assert self.cycle(LocationParseError(None)) assert self.cycle(ConnectTimeoutError(None)) def test_exceptions_with_objects(self): assert self.cycle(HTTPError('foo')) assert self.cycle(MaxRetryError(HTTPConnectionPool('localhost'), '/', None)) assert self.cycle(LocationParseError('fake location')) assert self.cycle(ClosedPoolError(HTTPConnectionPool('localhost'), None)) assert self.cycle(EmptyPoolError(HTTPConnectionPool('localhost'), None)) assert self.cycle(HostChangedError(HTTPConnectionPool('localhost'), '/', None)) assert self.cycle(ReadTimeoutError(HTTPConnectionPool('localhost'), '/', None)) 
urllib3-1.7.1/test/test_fields.py
import unittest

from urllib3.fields import guess_content_type, RequestField
from urllib3.packages.six import b, u


class TestRequestField(unittest.TestCase):

    def test_guess_content_type(self):
        self.assertEqual(guess_content_type('image.jpg'), 'image/jpeg')
        self.assertEqual(guess_content_type('notsure'),
                         'application/octet-stream')
        self.assertEqual(guess_content_type(None), 'application/octet-stream')

    def test_create(self):
        simple_field = RequestField('somename', 'data')
        self.assertEqual(simple_field.render_headers(), '\r\n')
        filename_field = RequestField('somename', 'data',
                                      filename='somefile.txt')
        self.assertEqual(filename_field.render_headers(), '\r\n')
        headers_field = RequestField('somename', 'data',
                                     headers={'Content-Length': 4})
        self.assertEqual(headers_field.render_headers(),
                         'Content-Length: 4\r\n'
                         '\r\n')

    def test_make_multipart(self):
        field = RequestField('somename', 'data')
        field.make_multipart(content_type='image/jpg',
                             content_location='/test')
        self.assertEqual(
            field.render_headers(),
            'Content-Disposition: form-data; name="somename"\r\n'
            'Content-Type: image/jpg\r\n'
            'Content-Location: /test\r\n'
            '\r\n')

    def test_render_parts(self):
        field = RequestField('somename', 'data')
        parts = field._render_parts({'name': 'value', 'filename': 'value'})
        self.assertTrue('name="value"' in parts)
        self.assertTrue('filename="value"' in parts)

        parts = field._render_parts([('name', 'value'),
                                     ('filename', 'value')])
        self.assertEqual(parts, 'name="value"; filename="value"')

    def test_render_part(self):
        field = RequestField('somename', 'data')
        param = field._render_part('filename', u('n\u00e4me'))
        self.assertEqual(param, "filename*=utf-8''n%C3%A4me")

urllib3-1.7.1/test/test_filepost.py
import unittest

from urllib3.filepost import encode_multipart_formdata, iter_fields
from urllib3.fields import RequestField
from urllib3.packages.six import b, u


BOUNDARY = '!! test boundary !!'
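
def _iter_fields_sketch():
    # Illustrative sketch (not in the original file): iter_fields normalizes
    # both dicts and sequences of two-tuples into (name, value) pairs, which
    # is the contract the TestIterfields cases below exercise.
    assert sorted(iter_fields({'a': 'b', 'c': 'd'})) == [('a', 'b'),
                                                         ('c', 'd')]
    assert list(iter_fields([('a', 'b'), ('c', 'd')])) == [('a', 'b'),
                                                           ('c', 'd')]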
class TestIterfields(unittest.TestCase):

    def test_dict(self):
        for fieldname, value in iter_fields(dict(a='b')):
            self.assertEqual((fieldname, value), ('a', 'b'))

        self.assertEqual(
            list(sorted(iter_fields(dict(a='b', c='d')))),
            [('a', 'b'), ('c', 'd')])

    def test_tuple_list(self):
        for fieldname, value in iter_fields([('a', 'b')]):
            self.assertEqual((fieldname, value), ('a', 'b'))

        self.assertEqual(
            list(iter_fields([('a', 'b'), ('c', 'd')])),
            [('a', 'b'), ('c', 'd')])


class TestMultipartEncoding(unittest.TestCase):

    def test_input_datastructures(self):
        fieldsets = [
            dict(k='v', k2='v2'),
            [('k', 'v'), ('k2', 'v2')],
        ]

        for fields in fieldsets:
            encoded, _ = encode_multipart_formdata(fields, boundary=BOUNDARY)
            self.assertEqual(encoded.count(b(BOUNDARY)), 3)

    def test_field_encoding(self):
        fieldsets = [
            [('k', 'v'), ('k2', 'v2')],
            [('k', b'v'), (u('k2'), b'v2')],
            [('k', b'v'), (u('k2'), 'v2')],
        ]

        for fields in fieldsets:
            encoded, content_type = encode_multipart_formdata(
                fields, boundary=BOUNDARY)

            self.assertEqual(encoded,
                b'--' + b(BOUNDARY) + b'\r\n'
                b'Content-Disposition: form-data; name="k"\r\n'
                b'\r\n'
                b'v\r\n'
                b'--' + b(BOUNDARY) + b'\r\n'
                b'Content-Disposition: form-data; name="k2"\r\n'
                b'\r\n'
                b'v2\r\n'
                b'--' + b(BOUNDARY) + b'--\r\n',
                fields)  # `fields` doubles as the failure message

            self.assertEqual(content_type,
                             'multipart/form-data; boundary=' + str(BOUNDARY))

    def test_filename(self):
        fields = [('k', ('somename', b'v'))]

        encoded, content_type = encode_multipart_formdata(fields,
                                                          boundary=BOUNDARY)

        self.assertEqual(encoded,
            b'--' + b(BOUNDARY) + b'\r\n'
            b'Content-Disposition: form-data; name="k"; filename="somename"\r\n'
            b'Content-Type: application/octet-stream\r\n'
            b'\r\n'
            b'v\r\n'
            b'--' + b(BOUNDARY) + b'--\r\n'
        )

        self.assertEqual(content_type,
                         'multipart/form-data; boundary=' + str(BOUNDARY))

    def test_textplain(self):
        fields = [('k', ('somefile.txt', b'v'))]

        encoded, content_type = encode_multipart_formdata(fields,
                                                          boundary=BOUNDARY)

        self.assertEqual(encoded,
            b'--' + b(BOUNDARY) + b'\r\n'
            b'Content-Disposition: form-data; name="k"; filename="somefile.txt"\r\n'
            b'Content-Type: text/plain\r\n'
            b'\r\n'
            b'v\r\n'
            b'--' + b(BOUNDARY) + b'--\r\n'
        )

        self.assertEqual(content_type,
                         'multipart/form-data; boundary=' + str(BOUNDARY))

    def test_explicit(self):
        fields = [('k', ('somefile.txt', b'v', 'image/jpeg'))]

        encoded, content_type = encode_multipart_formdata(fields,
                                                          boundary=BOUNDARY)

        self.assertEqual(encoded,
            b'--' + b(BOUNDARY) + b'\r\n'
            b'Content-Disposition: form-data; name="k"; filename="somefile.txt"\r\n'
            b'Content-Type: image/jpeg\r\n'
            b'\r\n'
            b'v\r\n'
            b'--' + b(BOUNDARY) + b'--\r\n'
        )

        self.assertEqual(content_type,
                         'multipart/form-data; boundary=' + str(BOUNDARY))

    def test_request_fields(self):
        fields = [RequestField('k', b'v', filename='somefile.txt',
                               headers={'Content-Type': 'image/jpeg'})]

        encoded, content_type = encode_multipart_formdata(fields,
                                                          boundary=BOUNDARY)

        self.assertEqual(encoded,
            b'--' + b(BOUNDARY) + b'\r\n'
            b'Content-Type: image/jpeg\r\n'
            b'\r\n'
            b'v\r\n'
            b'--' + b(BOUNDARY) + b'--\r\n'
        )
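
# Illustrative sketch (not part of the distribution): the shape of what
# encode_multipart_formdata returns, using only the API exercised above.
from urllib3.filepost import encode_multipart_formdata

body, content_type = encode_multipart_formdata(
    {'k': 'v'}, boundary='!! test boundary !!')
# `body` is a bytes blob framed by the boundary; `content_type` echoes it:
#   'multipart/form-data; boundary=!! test boundary !!'
print(content_type)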
urllib3-1.7.1/test/test_poolmanager.py
import unittest

from urllib3.poolmanager import PoolManager
from urllib3 import connection_from_url
from urllib3.exceptions import ClosedPoolError


class TestPoolManager(unittest.TestCase):
    def test_same_url(self):
        # Convince ourselves that normally we don't get the same object
        conn1 = connection_from_url('http://localhost:8081/foo')
        conn2 = connection_from_url('http://localhost:8081/bar')

        self.assertNotEqual(conn1, conn2)

        # Now try again using the PoolManager
        p = PoolManager(1)

        conn1 = p.connection_from_url('http://localhost:8081/foo')
        conn2 = p.connection_from_url('http://localhost:8081/bar')

        self.assertEqual(conn1, conn2)

    def test_many_urls(self):
        urls = [
            "http://localhost:8081/foo",
            "http://www.google.com/mail",
            "http://localhost:8081/bar",
            "https://www.google.com/",
            "https://www.google.com/mail",
            "http://yahoo.com",
            "http://bing.com",
            "http://yahoo.com/",
        ]

        connections = set()

        p = PoolManager(10)

        for url in urls:
            conn = p.connection_from_url(url)
            connections.add(conn)

        self.assertEqual(len(connections), 5)

    def test_manager_clear(self):
        p = PoolManager(5)

        conn_pool = p.connection_from_url('http://google.com')
        self.assertEqual(len(p.pools), 1)

        conn = conn_pool._get_conn()

        p.clear()
        self.assertEqual(len(p.pools), 0)

        self.assertRaises(ClosedPoolError, conn_pool._get_conn)

        conn_pool._put_conn(conn)

        self.assertRaises(ClosedPoolError, conn_pool._get_conn)

        self.assertEqual(len(p.pools), 0)


if __name__ == '__main__':
    unittest.main()

urllib3-1.7.1/test/test_proxymanager.py
import unittest

from urllib3.poolmanager import ProxyManager


class TestProxyManager(unittest.TestCase):
    def test_proxy_headers(self):
        p = ProxyManager('http://something:1234')
        url = 'http://pypi.python.org/test'

        # Verify default headers
        default_headers = {'Accept': '*/*',
                           'Host': 'pypi.python.org'}
        headers = p._set_proxy_headers(url)

        self.assertEqual(headers, default_headers)

        # Verify default headers don't overwrite provided headers
        provided_headers = {'Accept': 'application/json',
                            'custom': 'header',
                            'Host': 'test.python.org'}
        headers = p._set_proxy_headers(url, provided_headers)

        self.assertEqual(headers, provided_headers)

        # Verify proxy with nonstandard port
        provided_headers = {'Accept': 'application/json'}
        expected_headers = provided_headers.copy()
        expected_headers.update({'Host': 'pypi.python.org:8080'})
        url_with_port = 'http://pypi.python.org:8080/test'
        headers = p._set_proxy_headers(url_with_port, provided_headers)

        self.assertEqual(headers, expected_headers)

    def test_default_port(self):
        p = ProxyManager('http://something')
        self.assertEqual(p.proxy.port, 80)
        p = ProxyManager('https://something')
        self.assertEqual(p.proxy.port, 443)


if __name__ == '__main__':
    unittest.main()
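
# Illustrative sketch (not part of the distribution): PoolManager keys its
# pools by (scheme, host, port), which is why the eight URLs in
# test_many_urls above collapse into five pools. Paths and trailing slashes
# don't matter.
from urllib3 import PoolManager

p = PoolManager(10)
assert p.connection_from_url('http://yahoo.com') is \
    p.connection_from_url('http://yahoo.com/anything')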
urllib3-1.7.1/test/test_response.py
import unittest

from io import BytesIO, BufferedReader

from urllib3.response import HTTPResponse
from urllib3.exceptions import DecodeError


class TestLegacyResponse(unittest.TestCase):
    def test_getheaders(self):
        headers = {'host': 'example.com'}
        r = HTTPResponse(headers=headers)
        self.assertEqual(r.getheaders(), headers)

    def test_getheader(self):
        headers = {'host': 'example.com'}
        r = HTTPResponse(headers=headers)
        self.assertEqual(r.getheader('host'), 'example.com')


class TestResponse(unittest.TestCase):
    def test_cache_content(self):
        r = HTTPResponse('foo')
        self.assertEqual(r.data, 'foo')
        self.assertEqual(r._body, 'foo')

    def test_default(self):
        r = HTTPResponse()
        self.assertEqual(r.data, None)

    def test_none(self):
        r = HTTPResponse(None)
        self.assertEqual(r.data, None)

    def test_preload(self):
        fp = BytesIO(b'foo')

        r = HTTPResponse(fp, preload_content=True)

        self.assertEqual(fp.tell(), len(b'foo'))
        self.assertEqual(r.data, b'foo')

    def test_no_preload(self):
        fp = BytesIO(b'foo')

        r = HTTPResponse(fp, preload_content=False)

        self.assertEqual(fp.tell(), 0)
        self.assertEqual(r.data, b'foo')
        self.assertEqual(fp.tell(), len(b'foo'))

    def test_decode_bad_data(self):
        fp = BytesIO(b'\x00' * 10)
        self.assertRaises(DecodeError, HTTPResponse, fp, headers={
            'content-encoding': 'deflate'
        })

    def test_decode_deflate(self):
        import zlib
        data = zlib.compress(b'foo')

        fp = BytesIO(data)
        r = HTTPResponse(fp, headers={'content-encoding': 'deflate'})

        self.assertEqual(r.data, b'foo')

    def test_decode_deflate_case_insensitive(self):
        import zlib
        data = zlib.compress(b'foo')

        fp = BytesIO(data)
        r = HTTPResponse(fp, headers={'content-encoding': 'DeFlAtE'})

        self.assertEqual(r.data, b'foo')

    def test_chunked_decoding_deflate(self):
        import zlib
        data = zlib.compress(b'foo')

        fp = BytesIO(data)
        r = HTTPResponse(fp, headers={'content-encoding': 'deflate'},
                         preload_content=False)

        self.assertEqual(r.read(3), b'')
        self.assertEqual(r.read(1), b'f')
        self.assertEqual(r.read(2), b'oo')

    def test_chunked_decoding_deflate2(self):
        import zlib
        compress = zlib.compressobj(6, zlib.DEFLATED, -zlib.MAX_WBITS)
        data = compress.compress(b'foo')
        data += compress.flush()

        fp = BytesIO(data)
        r = HTTPResponse(fp, headers={'content-encoding': 'deflate'},
                         preload_content=False)

        self.assertEqual(r.read(1), b'')
        self.assertEqual(r.read(1), b'f')
        self.assertEqual(r.read(2), b'oo')

    def test_chunked_decoding_gzip(self):
        import zlib
        compress = zlib.compressobj(6, zlib.DEFLATED, 16 + zlib.MAX_WBITS)
        data = compress.compress(b'foo')
        data += compress.flush()

        fp = BytesIO(data)
        r = HTTPResponse(fp, headers={'content-encoding': 'gzip'},
                         preload_content=False)

        self.assertEqual(r.read(11), b'')
        self.assertEqual(r.read(1), b'f')
        self.assertEqual(r.read(2), b'oo')

    def test_io(self):
        import socket
        try:
            from http.client import HTTPResponse as OldHTTPResponse
        except ImportError:
            from httplib import HTTPResponse as OldHTTPResponse

        fp = BytesIO(b'foo')
        resp = HTTPResponse(fp, preload_content=False)

        self.assertEqual(resp.closed, False)
        self.assertEqual(resp.readable(), True)
        self.assertEqual(resp.writable(), False)
        self.assertRaises(IOError, resp.fileno)

        resp.close()
        self.assertEqual(resp.closed, True)

        # Try closing with an `httplib.HTTPResponse`, because it has an
        # `isclosed` method.
        hlr = OldHTTPResponse(socket.socket())
        resp2 = HTTPResponse(hlr, preload_content=False)
        self.assertEqual(resp2.closed, False)
        resp2.close()
        self.assertEqual(resp2.closed, True)

        # Also try when only data is present.
        resp3 = HTTPResponse('foodata')
        self.assertRaises(IOError, resp3.fileno)

        resp3._fp = 2
        # A corner case where _fp is present but doesn't have `closed`,
        # `isclosed`, or `fileno`. Unlikely, but possible.
        self.assertEqual(resp3.closed, True)
        self.assertRaises(IOError, resp3.fileno)
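
    # A minimal sketch (illustration only, not in the original suite):
    # because HTTPResponse inherits from io.IOBase, the standard io wrappers
    # accept it directly, which is what the next test exercises.
    def _sketch_file_like(self):
        resp = HTTPResponse(BytesIO(b'payload'), preload_content=False)
        reader = BufferedReader(resp)      # any io wrapper can consume it
        assert reader.read(3) == b'pay'    # ordinary file-like reads work
        reader.close()                     # closing the wrapper closes resp
        assert resp.closed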
    def test_io_bufferedreader(self):
        fp = BytesIO(b'foo')
        resp = HTTPResponse(fp, preload_content=False)
        br = BufferedReader(resp)

        self.assertEqual(br.read(), b'foo')
        br.close()
        self.assertEqual(resp.closed, True)

    def test_streaming(self):
        fp = BytesIO(b'foo')
        resp = HTTPResponse(fp, preload_content=False)
        stream = resp.stream(2, decode_content=False)

        self.assertEqual(next(stream), b'fo')
        self.assertEqual(next(stream), b'o')
        self.assertRaises(StopIteration, next, stream)

    def test_gzipped_streaming(self):
        import zlib
        compress = zlib.compressobj(6, zlib.DEFLATED, 16 + zlib.MAX_WBITS)
        data = compress.compress(b'foo')
        data += compress.flush()

        fp = BytesIO(data)
        resp = HTTPResponse(fp, headers={'content-encoding': 'gzip'},
                            preload_content=False)
        stream = resp.stream(2)

        self.assertEqual(next(stream), b'f')
        self.assertEqual(next(stream), b'oo')
        self.assertRaises(StopIteration, next, stream)

    def test_deflate_streaming(self):
        import zlib
        data = zlib.compress(b'foo')

        fp = BytesIO(data)
        resp = HTTPResponse(fp, headers={'content-encoding': 'deflate'},
                            preload_content=False)
        stream = resp.stream(2)

        self.assertEqual(next(stream), b'f')
        self.assertEqual(next(stream), b'oo')
        self.assertRaises(StopIteration, next, stream)

    def test_deflate2_streaming(self):
        import zlib
        compress = zlib.compressobj(6, zlib.DEFLATED, -zlib.MAX_WBITS)
        data = compress.compress(b'foo')
        data += compress.flush()

        fp = BytesIO(data)
        resp = HTTPResponse(fp, headers={'content-encoding': 'deflate'},
                            preload_content=False)
        stream = resp.stream(2)

        self.assertEqual(next(stream), b'f')
        self.assertEqual(next(stream), b'oo')
        self.assertRaises(StopIteration, next, stream)

    def test_empty_stream(self):
        fp = BytesIO(b'')
        resp = HTTPResponse(fp, preload_content=False)
        stream = resp.stream(2, decode_content=False)

        self.assertRaises(StopIteration, next, stream)

    def test_mock_httpresponse_stream(self):
        # Mock out an HTTP request that does enough to make it through
        # urllib3's read() and close() calls, and also exhausts the
        # underlying file object.
        class MockHTTPRequest(object):
            fp = None  # was `self.fp = None` at class scope, a NameError bug

            def read(self, amt):
                data = self.fp.read(amt)
                if not data:
                    self.fp = None
                return data

            def close(self):
                self.fp = None

        bio = BytesIO(b'foo')
        fp = MockHTTPRequest()
        fp.fp = bio
        resp = HTTPResponse(fp, preload_content=False)
        stream = resp.stream(2)

        self.assertEqual(next(stream), b'fo')
        self.assertEqual(next(stream), b'o')
        self.assertRaises(StopIteration, next, stream)


if __name__ == '__main__':
    unittest.main()

urllib3-1.7.1/test/test_util.py
import logging
import unittest

from mock import patch

from urllib3 import add_stderr_logger
from urllib3.util import (
    get_host,
    make_headers,
    split_first,
    parse_url,
    Timeout,
    Url,
)
from urllib3.exceptions import LocationParseError, TimeoutStateError

# This number represents a time in seconds, it doesn't mean anything in
# isolation.
Setting to a high-ish value to avoid conflicts with the smaller # numbers used for timeouts TIMEOUT_EPOCH = 1000 class TestUtil(unittest.TestCase): def test_get_host(self): url_host_map = { # Hosts 'http://google.com/mail': ('http', 'google.com', None), 'http://google.com/mail/': ('http', 'google.com', None), 'google.com/mail': ('http', 'google.com', None), 'http://google.com/': ('http', 'google.com', None), 'http://google.com': ('http', 'google.com', None), 'http://www.google.com': ('http', 'www.google.com', None), 'http://mail.google.com': ('http', 'mail.google.com', None), 'http://google.com:8000/mail/': ('http', 'google.com', 8000), 'http://google.com:8000': ('http', 'google.com', 8000), 'https://google.com': ('https', 'google.com', None), 'https://google.com:8000': ('https', 'google.com', 8000), 'http://user:password@127.0.0.1:1234': ('http', '127.0.0.1', 1234), 'http://google.com/foo=http://bar:42/baz': ('http', 'google.com', None), 'http://google.com?foo=http://bar:42/baz': ('http', 'google.com', None), 'http://google.com#foo=http://bar:42/baz': ('http', 'google.com', None), # IPv4 '173.194.35.7': ('http', '173.194.35.7', None), 'http://173.194.35.7': ('http', '173.194.35.7', None), 'http://173.194.35.7/test': ('http', '173.194.35.7', None), 'http://173.194.35.7:80': ('http', '173.194.35.7', 80), 'http://173.194.35.7:80/test': ('http', '173.194.35.7', 80), # IPv6 '[2a00:1450:4001:c01::67]': ('http', '[2a00:1450:4001:c01::67]', None), 'http://[2a00:1450:4001:c01::67]': ('http', '[2a00:1450:4001:c01::67]', None), 'http://[2a00:1450:4001:c01::67]/test': ('http', '[2a00:1450:4001:c01::67]', None), 'http://[2a00:1450:4001:c01::67]:80': ('http', '[2a00:1450:4001:c01::67]', 80), 'http://[2a00:1450:4001:c01::67]:80/test': ('http', '[2a00:1450:4001:c01::67]', 80), # More IPv6 from http://www.ietf.org/rfc/rfc2732.txt 'http://[FEDC:BA98:7654:3210:FEDC:BA98:7654:3210]:8000/index.html': ('http', '[FEDC:BA98:7654:3210:FEDC:BA98:7654:3210]', 8000), 'http://[1080:0:0:0:8:800:200C:417A]/index.html': ('http', '[1080:0:0:0:8:800:200C:417A]', None), 'http://[3ffe:2a00:100:7031::1]': ('http', '[3ffe:2a00:100:7031::1]', None), 'http://[1080::8:800:200C:417A]/foo': ('http', '[1080::8:800:200C:417A]', None), 'http://[::192.9.5.5]/ipng': ('http', '[::192.9.5.5]', None), 'http://[::FFFF:129.144.52.38]:42/index.html': ('http', '[::FFFF:129.144.52.38]', 42), 'http://[2010:836B:4179::836B:4179]': ('http', '[2010:836B:4179::836B:4179]', None), } for url, expected_host in url_host_map.items(): returned_host = get_host(url) self.assertEquals(returned_host, expected_host) def test_invalid_host(self): # TODO: Add more tests invalid_host = [ 'http://google.com:foo', 'http://::1/', 'http://::1:80/', ] for location in invalid_host: self.assertRaises(LocationParseError, get_host, location) def test_parse_url(self): url_host_map = { 'http://google.com/mail': Url('http', host='google.com', path='/mail'), 'http://google.com/mail/': Url('http', host='google.com', path='/mail/'), 'google.com/mail': Url(host='google.com', path='/mail'), 'http://google.com/': Url('http', host='google.com', path='/'), 'http://google.com': Url('http', host='google.com'), 'http://google.com?foo': Url('http', host='google.com', path='', query='foo'), '': Url(), '/': Url(path='/'), '?': Url(path='', query=''), '#': Url(path='', fragment=''), '#?/!google.com/?foo#bar': Url(path='', fragment='?/!google.com/?foo#bar'), '/foo': Url(path='/foo'), '/foo?bar=baz': Url(path='/foo', query='bar=baz'), '/foo?bar=baz#banana?apple/orange': Url(path='/foo', 
query='bar=baz', fragment='banana?apple/orange'), } for url, expected_url in url_host_map.items(): returned_url = parse_url(url) self.assertEquals(returned_url, expected_url) def test_parse_url_invalid_IPv6(self): self.assertRaises(ValueError, parse_url, '[::1') def test_request_uri(self): url_host_map = { 'http://google.com/mail': '/mail', 'http://google.com/mail/': '/mail/', 'http://google.com/': '/', 'http://google.com': '/', '': '/', '/': '/', '?': '/?', '#': '/', '/foo?bar=baz': '/foo?bar=baz', } for url, expected_request_uri in url_host_map.items(): returned_url = parse_url(url) self.assertEquals(returned_url.request_uri, expected_request_uri) def test_netloc(self): url_netloc_map = { 'http://google.com/mail': 'google.com', 'http://google.com:80/mail': 'google.com:80', 'google.com/foobar': 'google.com', 'google.com:12345': 'google.com:12345', } for url, expected_netloc in url_netloc_map.items(): self.assertEquals(parse_url(url).netloc, expected_netloc) def test_make_headers(self): self.assertEqual( make_headers(accept_encoding=True), {'accept-encoding': 'gzip,deflate'}) self.assertEqual( make_headers(accept_encoding='foo,bar'), {'accept-encoding': 'foo,bar'}) self.assertEqual( make_headers(accept_encoding=['foo', 'bar']), {'accept-encoding': 'foo,bar'}) self.assertEqual( make_headers(accept_encoding=True, user_agent='banana'), {'accept-encoding': 'gzip,deflate', 'user-agent': 'banana'}) self.assertEqual( make_headers(user_agent='banana'), {'user-agent': 'banana'}) self.assertEqual( make_headers(keep_alive=True), {'connection': 'keep-alive'}) self.assertEqual( make_headers(basic_auth='foo:bar'), {'authorization': 'Basic Zm9vOmJhcg=='}) def test_split_first(self): test_cases = { ('abcd', 'b'): ('a', 'cd', 'b'), ('abcd', 'cb'): ('a', 'cd', 'b'), ('abcd', ''): ('abcd', '', None), ('abcd', 'a'): ('', 'bcd', 'a'), ('abcd', 'ab'): ('', 'bcd', 'a'), } for input, expected in test_cases.items(): output = split_first(*input) self.assertEqual(output, expected) def test_add_stderr_logger(self): handler = add_stderr_logger(level=logging.INFO) # Don't actually print debug logger = logging.getLogger('urllib3') self.assertTrue(handler in logger.handlers) logger.debug('Testing add_stderr_logger') logger.removeHandler(handler) def _make_time_pass(self, seconds, timeout, time_mock): """ Make some time pass for the timeout object """ time_mock.return_value = TIMEOUT_EPOCH timeout.start_connect() time_mock.return_value = TIMEOUT_EPOCH + seconds return timeout def test_invalid_timeouts(self): try: Timeout(total=-1) self.fail("negative value should throw exception") except ValueError as e: self.assertTrue('less than' in str(e)) try: Timeout(connect=2, total=-1) self.fail("negative value should throw exception") except ValueError as e: self.assertTrue('less than' in str(e)) try: Timeout(read=-1) self.fail("negative value should throw exception") except ValueError as e: self.assertTrue('less than' in str(e)) # Booleans are allowed also by socket.settimeout and converted to the # equivalent float (1.0 for True, 0.0 for False) Timeout(connect=False, read=True) try: Timeout(read="foo") self.fail("string value should not be allowed") except ValueError as e: self.assertTrue('int or float' in str(e)) @patch('urllib3.util.current_time') def test_timeout(self, current_time): timeout = Timeout(total=3) # make 'no time' elapse timeout = self._make_time_pass(seconds=0, timeout=timeout, time_mock=current_time) self.assertEqual(timeout.read_timeout, 3) self.assertEqual(timeout.connect_timeout, 3) timeout = 
Timeout(total=3, connect=2) self.assertEqual(timeout.connect_timeout, 2) timeout = Timeout() self.assertEqual(timeout.connect_timeout, Timeout.DEFAULT_TIMEOUT) # Connect takes 5 seconds, leaving 5 seconds for read timeout = Timeout(total=10, read=7) timeout = self._make_time_pass(seconds=5, timeout=timeout, time_mock=current_time) self.assertEqual(timeout.read_timeout, 5) # Connect takes 2 seconds, read timeout still 7 seconds timeout = Timeout(total=10, read=7) timeout = self._make_time_pass(seconds=2, timeout=timeout, time_mock=current_time) self.assertEqual(timeout.read_timeout, 7) timeout = Timeout(total=10, read=7) self.assertEqual(timeout.read_timeout, 7) timeout = Timeout(total=None, read=None, connect=None) self.assertEqual(timeout.connect_timeout, None) self.assertEqual(timeout.read_timeout, None) self.assertEqual(timeout.total, None) def test_timeout_str(self): timeout = Timeout(connect=1, read=2, total=3) self.assertEqual(str(timeout), "Timeout(connect=1, read=2, total=3)") timeout = Timeout(connect=1, read=None, total=3) self.assertEqual(str(timeout), "Timeout(connect=1, read=None, total=3)") @patch('urllib3.util.current_time') def test_timeout_elapsed(self, current_time): current_time.return_value = TIMEOUT_EPOCH timeout = Timeout(total=3) self.assertRaises(TimeoutStateError, timeout.get_connect_duration) timeout.start_connect() self.assertRaises(TimeoutStateError, timeout.start_connect) current_time.return_value = TIMEOUT_EPOCH + 2 self.assertEqual(timeout.get_connect_duration(), 2) current_time.return_value = TIMEOUT_EPOCH + 37 self.assertEqual(timeout.get_connect_duration(), 37) urllib3-1.7.1/test-requirements.txt0000644000076500000240000000006312220604305017764 0ustar shazowstaff00000000000000nose==1.3 mock==1.0.1 tornado==2.4.1 coverage==3.6 urllib3-1.7.1/urllib3/0000755000076500000240000000000012220605014015076 5ustar shazowstaff00000000000000urllib3-1.7.1/urllib3/__init__.py0000644000076500000240000000324712220604742017224 0ustar shazowstaff00000000000000# urllib3/__init__.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php """ urllib3 - Thread-safe connection pooling and re-using. """ __author__ = 'Andrey Petrov (andrey.petrov@shazow.net)' __license__ = 'MIT' __version__ = '1.7.1' from .connectionpool import ( HTTPConnectionPool, HTTPSConnectionPool, connection_from_url ) from . import exceptions from .filepost import encode_multipart_formdata from .poolmanager import PoolManager, ProxyManager, proxy_from_url from .response import HTTPResponse from .util import make_headers, get_host, Timeout # Set default logging handler to avoid "No handler found" warnings. import logging try: # Python 2.7+ from logging import NullHandler except ImportError: class NullHandler(logging.Handler): def emit(self, record): pass logging.getLogger(__name__).addHandler(NullHandler()) def add_stderr_logger(level=logging.DEBUG): """ Helper for quickly adding a StreamHandler to the logger. Useful for debugging. Returns the handler after adding it. """ # This method needs to be in this __init__.py to get the __name__ correct # even if urllib3 is vendored within another package. 
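# Illustrative sketch (comment-only; not part of the original file): the
# granular Timeout object exercised by test_util.py above. Connect and read
# budgets are independent, and `total` caps them jointly:
#
#   from urllib3.util import Timeout
#   from urllib3 import HTTPConnectionPool
#
#   t = Timeout(connect=2.0, read=7.0, total=10.0)
#   pool = HTTPConnectionPool('localhost', timeout=t)  # applies per request
#
# A plain number is still accepted and is promoted via Timeout.from_float(),
# applying the same value to both the connect and the read phase.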
logger = logging.getLogger(__name__) handler = logging.StreamHandler() handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s')) logger.addHandler(handler) logger.setLevel(level) logger.debug('Added an stderr logging handler to logger: %s' % __name__) return handler # ... Clean up. del NullHandler urllib3-1.7.1/urllib3/_collections.py0000644000076500000240000000552212202774751020150 0ustar shazowstaff00000000000000# urllib3/_collections.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php from collections import MutableMapping from threading import RLock try: # Python 2.7+ from collections import OrderedDict except ImportError: from .packages.ordered_dict import OrderedDict __all__ = ['RecentlyUsedContainer'] _Null = object() class RecentlyUsedContainer(MutableMapping): """ Provides a thread-safe dict-like container which maintains up to ``maxsize`` keys while throwing away the least-recently-used keys beyond ``maxsize``. :param maxsize: Maximum number of recent elements to retain. :param dispose_func: Every time an item is evicted from the container, ``dispose_func(value)`` is called. Callback which will get called """ ContainerCls = OrderedDict def __init__(self, maxsize=10, dispose_func=None): self._maxsize = maxsize self.dispose_func = dispose_func self._container = self.ContainerCls() self.lock = RLock() def __getitem__(self, key): # Re-insert the item, moving it to the end of the eviction line. with self.lock: item = self._container.pop(key) self._container[key] = item return item def __setitem__(self, key, value): evicted_value = _Null with self.lock: # Possibly evict the existing value of 'key' evicted_value = self._container.get(key, _Null) self._container[key] = value # If we didn't evict an existing value, we might have to evict the # least recently used item from the beginning of the container. 
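# Illustrative sketch (comment-only; not part of the original file): the LRU
# behaviour RecentlyUsedContainer implements above. Once maxsize is exceeded,
# the least-recently-used entry is evicted and handed to dispose_func:
#
#   evicted = []
#   rc = RecentlyUsedContainer(maxsize=2, dispose_func=evicted.append)
#   rc['a'] = 1
#   rc['b'] = 2
#   rc['a']            # touch 'a' so 'b' becomes least recently used
#   rc['c'] = 3        # evicts 'b'
#   assert evicted == [2]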
if len(self._container) > self._maxsize: _key, evicted_value = self._container.popitem(last=False) if self.dispose_func and evicted_value is not _Null: self.dispose_func(evicted_value) def __delitem__(self, key): with self.lock: value = self._container.pop(key) if self.dispose_func: self.dispose_func(value) def __len__(self): with self.lock: return len(self._container) def __iter__(self): raise NotImplementedError('Iteration over this class is unlikely to be threadsafe.') def clear(self): with self.lock: # Copy pointers to all values, then wipe the mapping # under Python 2, this copies the list of values twice :-| values = list(self._container.values()) self._container.clear() if self.dispose_func: for value in values: self.dispose_func(value) def keys(self): with self.lock: return self._container.keys() urllib3-1.7.1/urllib3/connectionpool.py0000644000076500000240000006607412220604305020520 0ustar shazowstaff00000000000000# urllib3/connectionpool.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php import errno import logging from socket import error as SocketError, timeout as SocketTimeout import socket try: # Python 3 from http.client import HTTPConnection, HTTPException from http.client import HTTP_PORT, HTTPS_PORT except ImportError: from httplib import HTTPConnection, HTTPException from httplib import HTTP_PORT, HTTPS_PORT try: # Python 3 from queue import LifoQueue, Empty, Full except ImportError: from Queue import LifoQueue, Empty, Full import Queue as _ # Platform-specific: Windows try: # Compiled with SSL? HTTPSConnection = object class BaseSSLError(BaseException): pass ssl = None try: # Python 3 from http.client import HTTPSConnection except ImportError: from httplib import HTTPSConnection import ssl BaseSSLError = ssl.SSLError except (ImportError, AttributeError): # Platform-specific: No SSL. pass from .exceptions import ( ClosedPoolError, ConnectTimeoutError, EmptyPoolError, HostChangedError, MaxRetryError, SSLError, ReadTimeoutError, ProxyError, ) from .packages.ssl_match_hostname import CertificateError, match_hostname from .packages import six from .request import RequestMethods from .response import HTTPResponse from .util import ( assert_fingerprint, get_host, is_connection_dropped, resolve_cert_reqs, resolve_ssl_version, ssl_wrap_socket, Timeout, ) xrange = six.moves.xrange log = logging.getLogger(__name__) _Default = object() port_by_scheme = { 'http': HTTP_PORT, 'https': HTTPS_PORT, } ## Connection objects (extension of httplib) class VerifiedHTTPSConnection(HTTPSConnection): """ Based on httplib.HTTPSConnection but wraps the socket with SSL certification. """ cert_reqs = None ca_certs = None ssl_version = None def set_cert(self, key_file=None, cert_file=None, cert_reqs=None, ca_certs=None, assert_hostname=None, assert_fingerprint=None): self.key_file = key_file self.cert_file = cert_file self.cert_reqs = cert_reqs self.ca_certs = ca_certs self.assert_hostname = assert_hostname self.assert_fingerprint = assert_fingerprint def connect(self): # Add certificate verification try: sock = socket.create_connection( address=(self.host, self.port), timeout=self.timeout) except SocketTimeout: raise ConnectTimeoutError( self, "Connection to %s timed out. 
(connect timeout=%s)" % (self.host, self.timeout)) resolved_cert_reqs = resolve_cert_reqs(self.cert_reqs) resolved_ssl_version = resolve_ssl_version(self.ssl_version) if self._tunnel_host: self.sock = sock # Calls self._set_hostport(), so self.host is # self._tunnel_host below. self._tunnel() # Wrap socket using verification with the root certs in # trusted_root_certs self.sock = ssl_wrap_socket(sock, self.key_file, self.cert_file, cert_reqs=resolved_cert_reqs, ca_certs=self.ca_certs, server_hostname=self.host, ssl_version=resolved_ssl_version) if resolved_cert_reqs != ssl.CERT_NONE: if self.assert_fingerprint: assert_fingerprint(self.sock.getpeercert(binary_form=True), self.assert_fingerprint) elif self.assert_hostname is not False: match_hostname(self.sock.getpeercert(), self.assert_hostname or self.host) ## Pool objects class ConnectionPool(object): """ Base class for all connection pools, such as :class:`.HTTPConnectionPool` and :class:`.HTTPSConnectionPool`. """ scheme = None QueueCls = LifoQueue def __init__(self, host, port=None): # httplib doesn't like it when we include brackets in ipv6 addresses host = host.strip('[]') self.host = host self.port = port def __str__(self): return '%s(host=%r, port=%r)' % (type(self).__name__, self.host, self.port) # This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252 _blocking_errnos = set([errno.EAGAIN, errno.EWOULDBLOCK]) class HTTPConnectionPool(ConnectionPool, RequestMethods): """ Thread-safe connection pool for one host. :param host: Host used for this HTTP Connection (e.g. "localhost"), passed into :class:`httplib.HTTPConnection`. :param port: Port used for this HTTP Connection (None is equivalent to 80), passed into :class:`httplib.HTTPConnection`. :param strict: Causes BadStatusLine to be raised if the status line can't be parsed as a valid HTTP/1.0 or 1.1 status line, passed into :class:`httplib.HTTPConnection`. .. note:: Only works in Python 2. This parameter is ignored in Python 3. :param timeout: Socket timeout in seconds for each individual connection. This can be a float or integer, which sets the timeout for the HTTP request, or an instance of :class:`urllib3.util.Timeout` which gives you more fine-grained control over request timeouts. After the constructor has been parsed, this is always a `urllib3.util.Timeout` object. :param maxsize: Number of connections to save that can be reused. More than 1 is useful in multithreaded situations. If ``block`` is set to false, more connections will be created but they will not be saved once they've been used. :param block: If set to True, no more than ``maxsize`` connections will be used at a time. When no free connections are available, the call will block until a connection has been released. This is a useful side effect for particular multithreaded situations where one does not want to use more than maxsize connections per host to prevent flooding. :param headers: Headers to include with all requests, unless other headers are given explicitly. 
:param _proxy: Parsed proxy URL, should not be used directly, instead, see :class:`urllib3.connectionpool.ProxyManager`" :param _proxy_headers: A dictionary with proxy headers, should not be used directly, instead, see :class:`urllib3.connectionpool.ProxyManager`" """ scheme = 'http' def __init__(self, host, port=None, strict=False, timeout=Timeout.DEFAULT_TIMEOUT, maxsize=1, block=False, headers=None, _proxy=None, _proxy_headers=None): ConnectionPool.__init__(self, host, port) RequestMethods.__init__(self, headers) self.strict = strict # This is for backwards compatibility and can be removed once a timeout # can only be set to a Timeout object if not isinstance(timeout, Timeout): timeout = Timeout.from_float(timeout) self.timeout = timeout self.pool = self.QueueCls(maxsize) self.block = block self.proxy = _proxy self.proxy_headers = _proxy_headers or {} # Fill the queue up so that doing get() on it will block properly for _ in xrange(maxsize): self.pool.put(None) # These are mostly for testing and debugging purposes. self.num_connections = 0 self.num_requests = 0 def _new_conn(self): """ Return a fresh :class:`httplib.HTTPConnection`. """ self.num_connections += 1 log.info("Starting new HTTP connection (%d): %s" % (self.num_connections, self.host)) extra_params = {} if not six.PY3: # Python 2 extra_params['strict'] = self.strict return HTTPConnection(host=self.host, port=self.port, timeout=self.timeout.connect_timeout, **extra_params) def _get_conn(self, timeout=None): """ Get a connection. Will return a pooled connection if one is available. If no connections are available and :prop:`.block` is ``False``, then a fresh connection is returned. :param timeout: Seconds to wait before giving up and raising :class:`urllib3.exceptions.EmptyPoolError` if the pool is empty and :prop:`.block` is ``True``. """ conn = None try: conn = self.pool.get(block=self.block, timeout=timeout) except AttributeError: # self.pool is None raise ClosedPoolError(self, "Pool is closed.") except Empty: if self.block: raise EmptyPoolError(self, "Pool reached maximum size and no more " "connections are allowed.") pass # Oh well, we'll create a new connection then # If this is a persistent connection, check if it got disconnected if conn and is_connection_dropped(conn): log.info("Resetting dropped connection: %s" % self.host) conn.close() return conn or self._new_conn() def _put_conn(self, conn): """ Put a connection back into the pool. :param conn: Connection object for the current host and port as returned by :meth:`._new_conn` or :meth:`._get_conn`. If the pool is already full, the connection is closed and discarded because we exceeded maxsize. If connections are discarded frequently, then maxsize should be increased. If the pool is closed, then the connection will be closed and discarded. """ try: self.pool.put(conn, block=False) return # Everything is dandy, done. except AttributeError: # self.pool is None. pass except Full: # This should never happen if self.block == True log.warning("HttpConnectionPool is full, discarding connection: %s" % self.host) # Connection never got put back into the pool, close it. if conn: conn.close() def _get_timeout(self, timeout): """ Helper that always returns a :class:`urllib3.util.Timeout` """ if timeout is _Default: return self.timeout.clone() if isinstance(timeout, Timeout): return timeout.clone() else: # User passed us an int/float. 
This is for backwards compatibility, # can be removed later return Timeout.from_float(timeout) def _make_request(self, conn, method, url, timeout=_Default, **httplib_request_kw): """ Perform a request on a given httplib connection object taken from our pool. :param conn: a connection from one of our connection pools :param timeout: Socket timeout in seconds for the request. This can be a float or integer, which will set the same timeout value for the socket connect and the socket read, or an instance of :class:`urllib3.util.Timeout`, which gives you more fine-grained control over your timeouts. """ self.num_requests += 1 timeout_obj = self._get_timeout(timeout) try: timeout_obj.start_connect() conn.timeout = timeout_obj.connect_timeout # conn.request() calls httplib.*.request, not the method in # request.py. It also calls makefile (recv) on the socket conn.request(method, url, **httplib_request_kw) except SocketTimeout: raise ConnectTimeoutError( self, "Connection to %s timed out. (connect timeout=%s)" % (self.host, timeout_obj.connect_timeout)) # Reset the timeout for the recv() on the socket read_timeout = timeout_obj.read_timeout log.debug("Setting read timeout to %s" % read_timeout) # App Engine doesn't have a sock attr if hasattr(conn, 'sock') and \ read_timeout is not None and \ read_timeout is not Timeout.DEFAULT_TIMEOUT: # In Python 3 socket.py will catch EAGAIN and return None when you # try and read into the file pointer created by http.client, which # instead raises a BadStatusLine exception. Instead of catching # the exception and assuming all BadStatusLine exceptions are read # timeouts, check for a zero timeout before making the request. if read_timeout == 0: raise ReadTimeoutError( self, url, "Read timed out. (read timeout=%s)" % read_timeout) conn.sock.settimeout(read_timeout) # Receive the response from the server try: try: # Python 2.7+, use buffering of HTTP responses httplib_response = conn.getresponse(buffering=True) except TypeError: # Python 2.6 and older httplib_response = conn.getresponse() except SocketTimeout: raise ReadTimeoutError( self, url, "Read timed out. (read timeout=%s)" % read_timeout) except SocketError as e: # Platform-specific: Python 2 # See the above comment about EAGAIN in Python 3. In Python 2 we # have to specifically catch it and throw the timeout error if e.errno in _blocking_errnos: raise ReadTimeoutError( self, url, "Read timed out. (read timeout=%s)" % read_timeout) raise # AppEngine doesn't have a version attr. http_version = getattr(conn, '_http_vsn_str', 'HTTP/?') log.debug("\"%s %s %s\" %s %s" % (method, url, http_version, httplib_response.status, httplib_response.length)) return httplib_response def close(self): """ Close all pooled connections and disable the pool. """ # Disable access to the pool old_pool, self.pool = self.pool, None try: while True: conn = old_pool.get(block=False) if conn: conn.close() except Empty: pass # Done. def is_same_host(self, url): """ Check if the given ``url`` is a member of the same host as this connection pool. """ if url.startswith('/'): return True # TODO: Add optional support for socket.gethostbyname checking. scheme, host, port = get_host(url) if self.port and not port: # Use explicit default port for comparison when none is given. 
port = port_by_scheme.get(scheme) return (scheme, host, port) == (self.scheme, self.host, self.port) def urlopen(self, method, url, body=None, headers=None, retries=3, redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, **response_kw): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method provided by :class:`.RequestMethods`, such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param body: Data to send in the request body (useful for creating POST requests, see HTTPConnectionPool.post_url for more convenience). :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Number of retries to allow before raising a MaxRetryError exception. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When False, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``response_kw.get('preload_content', True)``. :param \**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ if headers is None: headers = self.headers if retries < 0: raise MaxRetryError(self, url) if release_conn is None: release_conn = response_kw.get('preload_content', True) # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries - 1) conn = None try: # Request a connection from the queue conn = self._get_conn(timeout=pool_timeout) # Make the request on the httplib connection object httplib_response = self._make_request(conn, method, url, timeout=timeout, body=body, headers=headers) # If we're going to release the connection in ``finally:``, then # the request doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. 
response_conn = not release_conn and conn # Import httplib's response into our own wrapper object response = HTTPResponse.from_httplib(httplib_response, pool=self, connection=response_conn, **response_kw) # else: # The connection will be put back into the pool when # ``response.release_conn()`` is called (implicitly by # ``response.read()``) except Empty: # Timed out by queue raise ReadTimeoutError( self, url, "Read timed out, no pool connections are available.") except SocketTimeout: # Timed out by socket raise ReadTimeoutError(self, url, "Read timed out.") except BaseSSLError as e: # SSL certificate error if 'timed out' in str(e) or \ 'did not complete (read)' in str(e): # Platform-specific: Python 2.6 raise ReadTimeoutError(self, url, "Read timed out.") raise SSLError(e) except CertificateError as e: # Name mismatch raise SSLError(e) except (HTTPException, SocketError) as e: if isinstance(e, SocketError) and self.proxy is not None: raise ProxyError('Cannot connect to proxy. ' 'Socket error: %s.' % e) # Connection broken, discard. It will be replaced next _get_conn(). conn = None # This is necessary so we can access e below err = e if retries == 0: raise MaxRetryError(self, url, e) finally: if release_conn: # Put the connection back to be reused. If the connection is # expired then it will be None, which will get replaced with a # fresh connection during _get_conn. self._put_conn(conn) if not conn: # Try again log.warn("Retrying (%d attempts remain) after connection " "broken by '%r': %s" % (retries, err, url)) return self.urlopen(method, url, body, headers, retries - 1, redirect, assert_same_host, timeout=timeout, pool_timeout=pool_timeout, release_conn=release_conn, **response_kw) # Handle redirect? redirect_location = redirect and response.get_redirect_location() if redirect_location: if response.status == 303: method = 'GET' log.info("Redirecting %s -> %s" % (url, redirect_location)) return self.urlopen(method, redirect_location, body, headers, retries - 1, redirect, assert_same_host, timeout=timeout, pool_timeout=pool_timeout, release_conn=release_conn, **response_kw) return response class HTTPSConnectionPool(HTTPConnectionPool): """ Same as :class:`.HTTPConnectionPool`, but HTTPS. When Python is compiled with the :mod:`ssl` module, then :class:`.VerifiedHTTPSConnection` is used, which *can* verify certificates, instead of :class:`httplib.HTTPSConnection`. :class:`.VerifiedHTTPSConnection` uses one of ``assert_fingerprint``, ``assert_hostname`` and ``host`` in this order to verify connections. If ``assert_hostname`` is False, no verification is done. The ``key_file``, ``cert_file``, ``cert_reqs``, ``ca_certs`` and ``ssl_version`` are only used if :mod:`ssl` is available and are fed into :meth:`urllib3.util.ssl_wrap_socket` to upgrade the connection socket into an SSL socket. 
""" scheme = 'https' def __init__(self, host, port=None, strict=False, timeout=None, maxsize=1, block=False, headers=None, _proxy=None, _proxy_headers=None, key_file=None, cert_file=None, cert_reqs=None, ca_certs=None, ssl_version=None, assert_hostname=None, assert_fingerprint=None): HTTPConnectionPool.__init__(self, host, port, strict, timeout, maxsize, block, headers, _proxy, _proxy_headers) self.key_file = key_file self.cert_file = cert_file self.cert_reqs = cert_reqs self.ca_certs = ca_certs self.ssl_version = ssl_version self.assert_hostname = assert_hostname self.assert_fingerprint = assert_fingerprint def _prepare_conn(self, connection): """ Prepare the ``connection`` for :meth:`urllib3.util.ssl_wrap_socket` and establish the tunnel if proxy is used. """ if isinstance(connection, VerifiedHTTPSConnection): connection.set_cert(key_file=self.key_file, cert_file=self.cert_file, cert_reqs=self.cert_reqs, ca_certs=self.ca_certs, assert_hostname=self.assert_hostname, assert_fingerprint=self.assert_fingerprint) connection.ssl_version = self.ssl_version if self.proxy is not None: # Python 2.7+ try: set_tunnel = connection.set_tunnel except AttributeError: # Platform-specific: Python 2.6 set_tunnel = connection._set_tunnel set_tunnel(self.host, self.port, self.proxy_headers) # Establish tunnel connection early, because otherwise httplib # would improperly set Host: header to proxy's IP:port. connection.connect() return connection def _new_conn(self): """ Return a fresh :class:`httplib.HTTPSConnection`. """ self.num_connections += 1 log.info("Starting new HTTPS connection (%d): %s" % (self.num_connections, self.host)) actual_host = self.host actual_port = self.port if self.proxy is not None: actual_host = self.proxy.host actual_port = self.proxy.port if not ssl: # Platform-specific: Python compiled without +ssl if not HTTPSConnection or HTTPSConnection is object: raise SSLError("Can't connect to HTTPS URL because the SSL " "module is not available.") connection_class = HTTPSConnection else: connection_class = VerifiedHTTPSConnection extra_params = {} if not six.PY3: # Python 2 extra_params['strict'] = self.strict connection = connection_class(host=actual_host, port=actual_port, timeout=self.timeout.connect_timeout, **extra_params) return self._prepare_conn(connection) def connection_from_url(url, **kw): """ Given a url, return an :class:`.ConnectionPool` instance of its host. This is a shortcut for not having to parse out the scheme, host, and port of the url before creating an :class:`.ConnectionPool` instance. :param url: Absolute URL string that must include the scheme. Port is optional. :param \**kw: Passes additional parameters to the constructor of the appropriate :class:`.ConnectionPool`. Useful for specifying things like timeout, maxsize, headers, etc. 
Example: :: >>> conn = connection_from_url('http://google.com/') >>> r = conn.request('GET', '/') """ scheme, host, port = get_host(url) if scheme == 'https': return HTTPSConnectionPool(host, port=port, **kw) else: return HTTPConnectionPool(host, port=port, **kw) urllib3-1.7.1/urllib3/contrib/0000755000076500000240000000000012220605014016536 5ustar shazowstaff00000000000000urllib3-1.7.1/urllib3/contrib/__init__.py0000644000076500000240000000000011635707107020654 0ustar shazowstaff00000000000000urllib3-1.7.1/urllib3/contrib/ntlmpool.py0000644000076500000240000001120512202774751020772 0ustar shazowstaff00000000000000# urllib3/contrib/ntlmpool.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php """ NTLM authenticating pool, contributed by erikcederstran Issue #10, see: http://code.google.com/p/urllib3/issues/detail?id=10 """ try: from http.client import HTTPSConnection except ImportError: from httplib import HTTPSConnection from logging import getLogger from ntlm import ntlm from urllib3 import HTTPSConnectionPool log = getLogger(__name__) class NTLMConnectionPool(HTTPSConnectionPool): """ Implements an NTLM authentication version of an urllib3 connection pool """ scheme = 'https' def __init__(self, user, pw, authurl, *args, **kwargs): """ authurl is a random URL on the server that is protected by NTLM. user is the Windows user, probably in the DOMAIN\\username format. pw is the password for the user. """ super(NTLMConnectionPool, self).__init__(*args, **kwargs) self.authurl = authurl self.rawuser = user user_parts = user.split('\\', 1) self.domain = user_parts[0].upper() self.user = user_parts[1] self.pw = pw def _new_conn(self): # Performs the NTLM handshake that secures the connection. The socket # must be kept open while requests are performed. self.num_connections += 1 log.debug('Starting NTLM HTTPS connection no. 
%d: https://%s%s' % (self.num_connections, self.host, self.authurl)) headers = {} headers['Connection'] = 'Keep-Alive' req_header = 'Authorization' resp_header = 'www-authenticate' conn = HTTPSConnection(host=self.host, port=self.port) # Send negotiation message headers[req_header] = ( 'NTLM %s' % ntlm.create_NTLM_NEGOTIATE_MESSAGE(self.rawuser)) log.debug('Request headers: %s' % headers) conn.request('GET', self.authurl, None, headers) res = conn.getresponse() reshdr = dict(res.getheaders()) log.debug('Response status: %s %s' % (res.status, res.reason)) log.debug('Response headers: %s' % reshdr) log.debug('Response data: %s [...]' % res.read(100)) # Remove the reference to the socket, so that it can not be closed by # the response object (we want to keep the socket open) res.fp = None # Server should respond with a challenge message auth_header_values = reshdr[resp_header].split(', ') auth_header_value = None for s in auth_header_values: if s[:5] == 'NTLM ': auth_header_value = s[5:] if auth_header_value is None: raise Exception('Unexpected %s response header: %s' % (resp_header, reshdr[resp_header])) # Send authentication message ServerChallenge, NegotiateFlags = \ ntlm.parse_NTLM_CHALLENGE_MESSAGE(auth_header_value) auth_msg = ntlm.create_NTLM_AUTHENTICATE_MESSAGE(ServerChallenge, self.user, self.domain, self.pw, NegotiateFlags) headers[req_header] = 'NTLM %s' % auth_msg log.debug('Request headers: %s' % headers) conn.request('GET', self.authurl, None, headers) res = conn.getresponse() log.debug('Response status: %s %s' % (res.status, res.reason)) log.debug('Response headers: %s' % dict(res.getheaders())) log.debug('Response data: %s [...]' % res.read()[:100]) if res.status != 200: if res.status == 401: raise Exception('Server rejected request: wrong ' 'username or password') raise Exception('Wrong server response: %s %s' % (res.status, res.reason)) res.fp = None log.debug('Connection established') return conn def urlopen(self, method, url, body=None, headers=None, retries=3, redirect=True, assert_same_host=True): if headers is None: headers = {} headers['Connection'] = 'Keep-Alive' return super(NTLMConnectionPool, self).urlopen(method, url, body, headers, retries, redirect, assert_same_host) urllib3-1.7.1/urllib3/contrib/pyopenssl.py0000644000076500000240000002734412220604305021160 0ustar shazowstaff00000000000000'''SSL with SNI-support for Python 2. This needs the following packages installed: * pyOpenSSL (tested with 0.13) * ndg-httpsclient (tested with 0.3.2) * pyasn1 (tested with 0.1.6) To activate it call :func:`~urllib3.contrib.pyopenssl.inject_into_urllib3`. This can be done in a ``sitecustomize`` module, or at any other time before your application begins using ``urllib3``, like this:: try: import urllib3.contrib.pyopenssl urllib3.contrib.pyopenssl.inject_into_urllib3() except ImportError: pass Now you can use :mod:`urllib3` as you normally would, and it will support SNI when the required modules are installed. ''' from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT from ndg.httpsclient.subj_alt_name import SubjectAltName import OpenSSL.SSL from pyasn1.codec.der import decoder as der_decoder from socket import _fileobject import ssl from cStringIO import StringIO from .. import connectionpool from .. import util __all__ = ['inject_into_urllib3', 'extract_from_urllib3'] # SNI only *really* works if we can read the subjectAltName of certificates. HAS_SNI = SUBJ_ALT_NAME_SUPPORT # Map from urllib3 to PyOpenSSL compatible parameter-values. 
_openssl_versions = { ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD, ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD, ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD, } _openssl_verify = { ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE, ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER, ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT, } orig_util_HAS_SNI = util.HAS_SNI orig_connectionpool_ssl_wrap_socket = connectionpool.ssl_wrap_socket def inject_into_urllib3(): 'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.' connectionpool.ssl_wrap_socket = ssl_wrap_socket util.HAS_SNI = HAS_SNI def extract_from_urllib3(): 'Undo monkey-patching by :func:`inject_into_urllib3`.' connectionpool.ssl_wrap_socket = orig_connectionpool_ssl_wrap_socket util.HAS_SNI = orig_util_HAS_SNI ### Note: This is a slightly bug-fixed version of same from ndg-httpsclient. def get_subj_alt_name(peer_cert): # Search through extensions dns_name = [] if not SUBJ_ALT_NAME_SUPPORT: return dns_name general_names = SubjectAltName() for i in range(peer_cert.get_extension_count()): ext = peer_cert.get_extension(i) ext_name = ext.get_short_name() if ext_name != 'subjectAltName': continue # PyOpenSSL returns extension data in ASN.1 encoded form ext_dat = ext.get_data() decoded_dat = der_decoder.decode(ext_dat, asn1Spec=general_names) for name in decoded_dat: if not isinstance(name, SubjectAltName): continue for entry in range(len(name)): component = name.getComponentByPosition(entry) if component.getName() != 'dNSName': continue dns_name.append(str(component.getComponent())) return dns_name class fileobject(_fileobject): def read(self, size=-1): # Use max, disallow tiny reads in a loop as they are very inefficient. # We never leave read() with any leftover data from a new recv() call # in our internal buffer. rbufsize = max(self._rbufsize, self.default_bufsize) # Our use of StringIO rather than lists of string objects returned by # recv() minimizes memory usage and fragmentation that occurs when # rbufsize is large compared to the typical return value of recv(). buf = self._rbuf buf.seek(0, 2) # seek end if size < 0: # Read until EOF self._rbuf = StringIO() # reset _rbuf. we consume it via buf. while True: try: data = self._sock.recv(rbufsize) except OpenSSL.SSL.WantReadError: continue if not data: break buf.write(data) return buf.getvalue() else: # Read until size bytes or EOF seen, whichever comes first buf_len = buf.tell() if buf_len >= size: # Already have size bytes in our buffer? Extract and return. buf.seek(0) rv = buf.read(size) self._rbuf = StringIO() self._rbuf.write(buf.read()) return rv self._rbuf = StringIO() # reset _rbuf. we consume it via buf. while True: left = size - buf_len # recv() will malloc the amount of memory given as its # parameter even though it often returns much less data # than that. The returned data string is short lived # as we copy it into a StringIO and free it. This avoids # fragmentation issues on many platforms. try: data = self._sock.recv(left) except OpenSSL.SSL.WantReadError: continue if not data: break n = len(data) if n == size and not buf_len: # Shortcut. Avoid buffer data copies when: # - We have no data in our buffer. # AND # - Our call to recv returned exactly the # number of bytes we were asked to read. 
return data if n == left: buf.write(data) del data # explicit free break assert n <= left, "recv(%d) returned %d bytes" % (left, n) buf.write(data) buf_len += n del data # explicit free #assert buf_len == buf.tell() return buf.getvalue() def readline(self, size=-1): buf = self._rbuf buf.seek(0, 2) # seek end if buf.tell() > 0: # check if we already have it in our buffer buf.seek(0) bline = buf.readline(size) if bline.endswith('\n') or len(bline) == size: self._rbuf = StringIO() self._rbuf.write(buf.read()) return bline del bline if size < 0: # Read until \n or EOF, whichever comes first if self._rbufsize <= 1: # Speed up unbuffered case buf.seek(0) buffers = [buf.read()] self._rbuf = StringIO() # reset _rbuf. we consume it via buf. data = None recv = self._sock.recv while True: try: while data != "\n": data = recv(1) if not data: break buffers.append(data) except OpenSSL.SSL.WantReadError: continue break return "".join(buffers) buf.seek(0, 2) # seek end self._rbuf = StringIO() # reset _rbuf. we consume it via buf. while True: try: data = self._sock.recv(self._rbufsize) except OpenSSL.SSL.WantReadError: continue if not data: break nl = data.find('\n') if nl >= 0: nl += 1 buf.write(data[:nl]) self._rbuf.write(data[nl:]) del data break buf.write(data) return buf.getvalue() else: # Read until size bytes or \n or EOF seen, whichever comes first buf.seek(0, 2) # seek end buf_len = buf.tell() if buf_len >= size: buf.seek(0) rv = buf.read(size) self._rbuf = StringIO() self._rbuf.write(buf.read()) return rv self._rbuf = StringIO() # reset _rbuf. we consume it via buf. while True: try: data = self._sock.recv(self._rbufsize) except OpenSSL.SSL.WantReadError: continue if not data: break left = size - buf_len # did we just receive a newline? nl = data.find('\n', 0, left) if nl >= 0: nl += 1 # save the excess data to _rbuf self._rbuf.write(data[nl:]) if buf_len: buf.write(data[:nl]) break else: # Shortcut. Avoid data copy through buf when returning # a substring of our first recv(). return data[:nl] n = len(data) if n == size and not buf_len: # Shortcut. Avoid data copy through buf when # returning exactly all of our first recv(). 
return data if n >= left: buf.write(data[:left]) self._rbuf.write(data[left:]) break buf.write(data) buf_len += n #assert buf_len == buf.tell() return buf.getvalue() class WrappedSocket(object): '''API-compatibility wrapper for Python OpenSSL's Connection-class.''' def __init__(self, connection, socket): self.connection = connection self.socket = socket def fileno(self): return self.socket.fileno() def makefile(self, mode, bufsize=-1): return fileobject(self.connection, mode, bufsize) def settimeout(self, timeout): return self.socket.settimeout(timeout) def sendall(self, data): return self.connection.sendall(data) def close(self): return self.connection.shutdown() def getpeercert(self, binary_form=False): x509 = self.connection.get_peer_certificate() if not x509: return x509 if binary_form: return OpenSSL.crypto.dump_certificate( OpenSSL.crypto.FILETYPE_ASN1, x509) return { 'subject': ( (('commonName', x509.get_subject().CN),), ), 'subjectAltName': [ ('DNS', value) for value in get_subj_alt_name(x509) ] } def _verify_callback(cnx, x509, err_no, err_depth, return_code): return err_no == 0 def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None, ca_certs=None, server_hostname=None, ssl_version=None): ctx = OpenSSL.SSL.Context(_openssl_versions[ssl_version]) if certfile: ctx.use_certificate_file(certfile) if keyfile: ctx.use_privatekey_file(keyfile) if cert_reqs != ssl.CERT_NONE: ctx.set_verify(_openssl_verify[cert_reqs], _verify_callback) if ca_certs: try: ctx.load_verify_locations(ca_certs, None) except OpenSSL.SSL.Error as e: raise ssl.SSLError('bad ca_certs: %r' % ca_certs, e) cnx = OpenSSL.SSL.Connection(ctx, sock) cnx.set_tlsext_host_name(server_hostname) cnx.set_connect_state() while True: try: cnx.do_handshake() except OpenSSL.SSL.WantReadError: continue except OpenSSL.SSL.Error as e: raise ssl.SSLError('bad handshake', e) break return WrappedSocket(cnx, sock) urllib3-1.7.1/urllib3/exceptions.py0000644000076500000240000000631212220604305017635 0ustar shazowstaff00000000000000# urllib3/exceptions.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php ## Base Exceptions class HTTPError(Exception): "Base exception used by this module." pass class PoolError(HTTPError): "Base exception for errors caused within a pool." def __init__(self, pool, message): self.pool = pool HTTPError.__init__(self, "%s: %s" % (pool, message)) def __reduce__(self): # For pickling purposes. return self.__class__, (None, None) class RequestError(PoolError): "Base exception for PoolErrors that have associated URLs." def __init__(self, pool, url, message): self.url = url PoolError.__init__(self, pool, message) def __reduce__(self): # For pickling purposes. return self.__class__, (None, self.url, None) class SSLError(HTTPError): "Raised when SSL certificate fails in an HTTPS connection." pass class ProxyError(HTTPError): "Raised when the connection to a proxy fails." pass class DecodeError(HTTPError): "Raised when automatic decoding based on Content-Type fails." pass ## Leaf Exceptions class MaxRetryError(RequestError): "Raised when the maximum number of retries is exceeded." 
def __init__(self, pool, url, reason=None): self.reason = reason message = "Max retries exceeded with url: %s" % url if reason: message += " (Caused by %s: %s)" % (type(reason), reason) else: message += " (Caused by redirect)" RequestError.__init__(self, pool, url, message) class HostChangedError(RequestError): "Raised when an existing pool gets a request for a foreign host." def __init__(self, pool, url, retries=3): message = "Tried to open a foreign host with url: %s" % url RequestError.__init__(self, pool, url, message) self.retries = retries class TimeoutStateError(HTTPError): """ Raised when passing an invalid state to a timeout """ pass class TimeoutError(HTTPError): """ Raised when a socket timeout error occurs. Catching this error will catch both :exc:`ReadTimeoutErrors ` and :exc:`ConnectTimeoutErrors `. """ pass class ReadTimeoutError(TimeoutError, RequestError): "Raised when a socket timeout occurs while receiving data from a server" pass # This timeout error does not have a URL attached and needs to inherit from the # base HTTPError class ConnectTimeoutError(TimeoutError): "Raised when a socket timeout occurs while connecting to a server" pass class EmptyPoolError(PoolError): "Raised when a pool runs out of connections and no more are allowed." pass class ClosedPoolError(PoolError): "Raised when a request enters a pool after the pool has been closed." pass class LocationParseError(ValueError, HTTPError): "Raised when get_host or similar fails to parse the URL input." def __init__(self, location): message = "Failed to parse: %s" % location HTTPError.__init__(self, message) self.location = location urllib3-1.7.1/urllib3/fields.py0000644000076500000240000001353012202774751016737 0ustar shazowstaff00000000000000# urllib3/fields.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php import email.utils import mimetypes from .packages import six def guess_content_type(filename, default='application/octet-stream'): """ Guess the "Content-Type" of a file. :param filename: The filename to guess the "Content-Type" of using :mod:`mimetypes`. :param default: If no "Content-Type" can be guessed, default to `default`. """ if filename: return mimetypes.guess_type(filename)[0] or default return default def format_header_param(name, value): """ Helper function to format and quote a single header parameter. Particularly useful for header parameters which might contain non-ASCII values, like file names. This follows RFC 2231, as suggested by RFC 2388 Section 4.4. :param name: The name of the parameter, a string expected to be ASCII only. :param value: The value of the parameter, provided as a unicode string. """ if not any(ch in value for ch in '"\\\r\n'): result = '%s="%s"' % (name, value) try: result.encode('ascii') except UnicodeEncodeError: pass else: return result if not six.PY3: # Python 2: value = value.encode('utf-8') value = email.utils.encode_rfc2231(value, 'utf-8') value = '%s*=%s' % (name, value) return value class RequestField(object): """ A data container for request body parameters. :param name: The name of this request field. :param data: The data/value body. :param filename: An optional filename of the request field. :param headers: An optional dict-like object of headers to initially use for the field.
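Example (a sketch; the field name and contents here are hypothetical): ::

    field = RequestField(name='somefield', data='contents of the field',
                         filename='somefile.txt')
    field.make_multipart(content_type='text/plain')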
""" def __init__(self, name, data, filename=None, headers=None): self._name = name self._filename = filename self.data = data self.headers = {} if headers: self.headers = dict(headers) @classmethod def from_tuples(cls, fieldname, value): """ A :class:`~urllib3.fields.RequestField` factory from old-style tuple parameters. Supports constructing :class:`~urllib3.fields.RequestField` from parameter of key/value strings AND key/filetuple. A filetuple is a (filename, data, MIME type) tuple where the MIME type is optional. For example: :: 'foo': 'bar', 'fakefile': ('foofile.txt', 'contents of foofile'), 'realfile': ('barfile.txt', open('realfile').read()), 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'), 'nonamefile': 'contents of nonamefile field', Field names and filenames must be unicode. """ if isinstance(value, tuple): if len(value) == 3: filename, data, content_type = value else: filename, data = value content_type = guess_content_type(filename) else: filename = None content_type = None data = value request_param = cls(fieldname, data, filename=filename) request_param.make_multipart(content_type=content_type) return request_param def _render_part(self, name, value): """ Overridable helper function to format a single header parameter. :param name: The name of the parameter, a string expected to be ASCII only. :param value: The value of the parameter, provided as a unicode string. """ return format_header_param(name, value) def _render_parts(self, header_parts): """ Helper function to format and quote a single header. Useful for single headers that are composed of multiple items. E.g., 'Content-Disposition' fields. :param header_parts: A sequence of (k, v) typles or a :class:`dict` of (k, v) to format as `k1="v1"; k2="v2"; ...`. """ parts = [] iterable = header_parts if isinstance(header_parts, dict): iterable = header_parts.items() for name, value in iterable: if value: parts.append(self._render_part(name, value)) return '; '.join(parts) def render_headers(self): """ Renders the headers for this request field. """ lines = [] sort_keys = ['Content-Disposition', 'Content-Type', 'Content-Location'] for sort_key in sort_keys: if self.headers.get(sort_key, False): lines.append('%s: %s' % (sort_key, self.headers[sort_key])) for header_name, header_value in self.headers.items(): if header_name not in sort_keys: if header_value: lines.append('%s: %s' % (header_name, header_value)) lines.append('\r\n') return '\r\n'.join(lines) def make_multipart(self, content_disposition=None, content_type=None, content_location=None): """ Makes this request field into a multipart request field. This method overrides "Content-Disposition", "Content-Type" and "Content-Location" headers to the request parameter. :param content_type: The 'Content-Type' of the request body. :param content_location: The 'Content-Location' of the request body. 
""" self.headers['Content-Disposition'] = content_disposition or 'form-data' self.headers['Content-Disposition'] += '; '.join(['', self._render_parts((('name', self._name), ('filename', self._filename)))]) self.headers['Content-Type'] = content_type self.headers['Content-Location'] = content_location urllib3-1.7.1/urllib3/filepost.py0000644000076500000240000000471712202774751017325 0ustar shazowstaff00000000000000# urllib3/filepost.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php import codecs import mimetypes from uuid import uuid4 from io import BytesIO from .packages import six from .packages.six import b from .fields import RequestField writer = codecs.lookup('utf-8')[3] def choose_boundary(): """ Our embarassingly-simple replacement for mimetools.choose_boundary. """ return uuid4().hex def iter_field_objects(fields): """ Iterate over fields. Supports list of (k, v) tuples and dicts, and lists of :class:`~urllib3.fields.RequestField`. """ if isinstance(fields, dict): i = six.iteritems(fields) else: i = iter(fields) for field in i: if isinstance(field, RequestField): yield field else: yield RequestField.from_tuples(*field) def iter_fields(fields): """ Iterate over fields. .. deprecated :: The addition of `~urllib3.fields.RequestField` makes this function obsolete. Instead, use :func:`iter_field_objects`, which returns `~urllib3.fields.RequestField` objects, instead. Supports list of (k, v) tuples and dicts. """ if isinstance(fields, dict): return ((k, v) for k, v in six.iteritems(fields)) return ((k, v) for k, v in fields) def encode_multipart_formdata(fields, boundary=None): """ Encode a dictionary of ``fields`` using the multipart/form-data MIME format. :param fields: Dictionary of fields or list of (key, :class:`~urllib3.fields.RequestField`). :param boundary: If not specified, then a random boundary will be generated using :func:`mimetools.choose_boundary`. """ body = BytesIO() if boundary is None: boundary = choose_boundary() for field in iter_field_objects(fields): body.write(b('--%s\r\n' % (boundary))) writer(body).write(field.render_headers()) data = field.data if isinstance(data, int): data = str(data) # Backwards compatibility if isinstance(data, six.text_type): writer(body).write(data) else: body.write(data) body.write(b'\r\n') body.write(b('--%s--\r\n' % (boundary))) content_type = str('multipart/form-data; boundary=%s' % boundary) return body.getvalue(), content_type urllib3-1.7.1/urllib3/packages/0000755000076500000240000000000012220605014016654 5ustar shazowstaff00000000000000urllib3-1.7.1/urllib3/packages/__init__.py0000644000076500000240000000011211702133666020774 0ustar shazowstaff00000000000000from __future__ import absolute_import from . import ssl_match_hostname urllib3-1.7.1/urllib3/packages/ordered_dict.py0000644000076500000240000002135012041045271021662 0ustar shazowstaff00000000000000# Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy. # Passes Python2.7's test suite and incorporates all the latest updates. # Copyright 2009 Raymond Hettinger, released under the MIT License. 
# http://code.activestate.com/recipes/576693/ try: from thread import get_ident as _get_ident except ImportError: from dummy_thread import get_ident as _get_ident try: from _abcoll import KeysView, ValuesView, ItemsView except ImportError: pass class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. # Big-O running times for all methods are the same as for regular dictionaries. # The internal self.__map dictionary maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): '''Initialize an ordered dictionary. Signature is the same as for regular dictionaries, but keyword arguments are not recommended because their insertion order is arbitrary. ''' if len(args) > 1: raise TypeError('expected at most 1 arguments, got %d' % len(args)) try: self.__root except AttributeError: self.__root = root = [] # sentinel node root[:] = [root, root, None] self.__map = {} self.__update(*args, **kwds) def __setitem__(self, key, value, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' # Setting a new item creates a new link which goes at the end of the linked # list, and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[0] last[1] = root[0] = self.__map[key] = [last, root, key] dict_setitem(self, key, value) def __delitem__(self, key, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' # Deleting an existing item uses self.__map to find the link which is # then removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) link_prev, link_next, key = self.__map.pop(key) link_prev[1] = link_next link_next[0] = link_prev def __iter__(self): 'od.__iter__() <==> iter(od)' root = self.__root curr = root[1] while curr is not root: yield curr[2] curr = curr[1] def __reversed__(self): 'od.__reversed__() <==> reversed(od)' root = self.__root curr = root[0] while curr is not root: yield curr[2] curr = curr[0] def clear(self): 'od.clear() -> None. Remove all items from od.' try: for node in self.__map.itervalues(): del node[:] root = self.__root root[:] = [root, root, None] self.__map.clear() except AttributeError: pass dict.clear(self) def popitem(self, last=True): '''od.popitem() -> (k, v), return and remove a (key, value) pair. Pairs are returned in LIFO order if last is true or FIFO order if false. 
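For example: ::

    >>> od = OrderedDict([('a', 1), ('b', 2)])
    >>> od.popitem()
    ('b', 2)
    >>> od.popitem(last=False)
    ('a', 1)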
''' if not self: raise KeyError('dictionary is empty') root = self.__root if last: link = root[0] link_prev = link[0] link_prev[1] = root root[0] = link_prev else: link = root[1] link_next = link[1] root[1] = link_next link_next[0] = root key = link[2] del self.__map[key] value = dict.pop(self, key) return key, value # -- the following methods do not depend on the internal structure -- def keys(self): 'od.keys() -> list of keys in od' return list(self) def values(self): 'od.values() -> list of values in od' return [self[key] for key in self] def items(self): 'od.items() -> list of (key, value) pairs in od' return [(key, self[key]) for key in self] def iterkeys(self): 'od.iterkeys() -> an iterator over the keys in od' return iter(self) def itervalues(self): 'od.itervalues -> an iterator over the values in od' for k in self: yield self[k] def iteritems(self): 'od.iteritems -> an iterator over the (key, value) items in od' for k in self: yield (k, self[k]) def update(*args, **kwds): '''od.update(E, **F) -> None. Update od from dict/iterable E and F. If E is a dict instance, does: for k in E: od[k] = E[k] If E has a .keys() method, does: for k in E.keys(): od[k] = E[k] Or if E is an iterable of items, does: for k, v in E: od[k] = v In either case, this is followed by: for k, v in F.items(): od[k] = v ''' if len(args) > 2: raise TypeError('update() takes at most 2 positional ' 'arguments (%d given)' % (len(args),)) elif not args: raise TypeError('update() takes at least 1 argument (0 given)') self = args[0] # Make progressively weaker assumptions about "other" other = () if len(args) == 2: other = args[1] if isinstance(other, dict): for key in other: self[key] = other[key] elif hasattr(other, 'keys'): for key in other.keys(): self[key] = other[key] else: for key, value in other: self[key] = value for key, value in kwds.items(): self[key] = value __update = update # let subclasses override update without breaking __init__ __marker = object() def pop(self, key, default=__marker): '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value. If key is not found, d is returned if given, otherwise KeyError is raised. ''' if key in self: result = self[key] del self[key] return result if default is self.__marker: raise KeyError(key) return default def setdefault(self, key, default=None): 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' if key in self: return self[key] self[key] = default return default def __repr__(self, _repr_running={}): 'od.__repr__() <==> repr(od)' call_key = id(self), _get_ident() if call_key in _repr_running: return '...' _repr_running[call_key] = 1 try: if not self: return '%s()' % (self.__class__.__name__,) return '%s(%r)' % (self.__class__.__name__, self.items()) finally: del _repr_running[call_key] def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] inst_dict = vars(self).copy() for k in vars(OrderedDict()): inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) def copy(self): 'od.copy() -> a shallow copy of od' return self.__class__(self) @classmethod def fromkeys(cls, iterable, value=None): '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S and values equal to v (which defaults to None). ''' d = cls() for key in iterable: d[key] = value return d def __eq__(self, other): '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive while comparison to a regular mapping is order-insensitive. 
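For example: ::

    >>> OrderedDict([('a', 1), ('b', 2)]) == OrderedDict([('b', 2), ('a', 1)])
    False
    >>> OrderedDict([('a', 1), ('b', 2)]) == {'b': 2, 'a': 1}
    True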
''' if isinstance(other, OrderedDict): return len(self)==len(other) and self.items() == other.items() return dict.__eq__(self, other) def __ne__(self, other): return not self == other # -- the following methods are only used in Python 2.7 -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" return KeysView(self) def viewvalues(self): "od.viewvalues() -> an object providing a view on od's values" return ValuesView(self) def viewitems(self): "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) urllib3-1.7.1/urllib3/packages/six.py0000644000076500000240000002655412162632565020065 0ustar shazowstaff00000000000000"""Utilities for writing code that runs on Python 2 and 3""" #Copyright (c) 2010-2011 Benjamin Peterson #Permission is hereby granted, free of charge, to any person obtaining a copy of #this software and associated documentation files (the "Software"), to deal in #the Software without restriction, including without limitation the rights to #use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of #the Software, and to permit persons to whom the Software is furnished to do so, #subject to the following conditions: #The above copyright notice and this permission notice shall be included in all #copies or substantial portions of the Software. #THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR #IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS #FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR #COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER #IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN #CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. import operator import sys import types __author__ = "Benjamin Peterson " __version__ = "1.2.0" # Revision 41c74fef2ded # True if we are running on Python 3. PY3 = sys.version_info[0] == 3 if PY3: string_types = str, integer_types = int, class_types = type, text_type = str binary_type = bytes MAXSIZE = sys.maxsize else: string_types = basestring, integer_types = (int, long) class_types = (type, types.ClassType) text_type = unicode binary_type = str if sys.platform.startswith("java"): # Jython always uses 32 bits. MAXSIZE = int((1 << 31) - 1) else: # It's possible to have sizeof(long) != sizeof(Py_ssize_t). class X(object): def __len__(self): return 1 << 31 try: len(X()) except OverflowError: # 32-bit MAXSIZE = int((1 << 31) - 1) else: # 64-bit MAXSIZE = int((1 << 63) - 1) del X def _add_doc(func, doc): """Add documentation to a function.""" func.__doc__ = doc def _import_module(name): """Import module, returning the module after the last dot.""" __import__(name) return sys.modules[name] class _LazyDescr(object): def __init__(self, name): self.name = name def __get__(self, obj, tp): result = self._resolve() setattr(obj, self.name, result) # This is a bit ugly, but it avoids running this again. 
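# (setattr() above bound the resolved object to the instance; deleting the
# descriptor from the class means later lookups hit that attribute directly.)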
delattr(tp, self.name) return result class MovedModule(_LazyDescr): def __init__(self, name, old, new=None): super(MovedModule, self).__init__(name) if PY3: if new is None: new = name self.mod = new else: self.mod = old def _resolve(self): return _import_module(self.mod) class MovedAttribute(_LazyDescr): def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): super(MovedAttribute, self).__init__(name) if PY3: if new_mod is None: new_mod = name self.mod = new_mod if new_attr is None: if old_attr is None: new_attr = name else: new_attr = old_attr self.attr = new_attr else: self.mod = old_mod if old_attr is None: old_attr = name self.attr = old_attr def _resolve(self): module = _import_module(self.mod) return getattr(module, self.attr) class _MovedItems(types.ModuleType): """Lazy loading of moved objects""" _moved_attributes = [ MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), MovedAttribute("map", "itertools", "builtins", "imap", "map"), MovedAttribute("reload_module", "__builtin__", "imp", "reload"), MovedAttribute("reduce", "__builtin__", "functools"), MovedAttribute("StringIO", "StringIO", "io"), MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), MovedModule("builtins", "__builtin__"), MovedModule("configparser", "ConfigParser"), MovedModule("copyreg", "copy_reg"), MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), MovedModule("http_cookies", "Cookie", "http.cookies"), MovedModule("html_entities", "htmlentitydefs", "html.entities"), MovedModule("html_parser", "HTMLParser", "html.parser"), MovedModule("http_client", "httplib", "http.client"), MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), MovedModule("cPickle", "cPickle", "pickle"), MovedModule("queue", "Queue"), MovedModule("reprlib", "repr"), MovedModule("socketserver", "SocketServer"), MovedModule("tkinter", "Tkinter"), MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), MovedModule("tkinter_tix", "Tix", "tkinter.tix"), MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), MovedModule("tkinter_font", "tkFont", "tkinter.font"), MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), MovedModule("winreg", "_winreg"), ] for attr in _moved_attributes: setattr(_MovedItems, attr.name, attr) del attr moves = sys.modules[__name__ + ".moves"] = _MovedItems("moves") def add_move(move): """Add an item to six.moves.""" setattr(_MovedItems, move.name, move) def remove_move(name): """Remove item from six.moves.""" try: delattr(_MovedItems, name) except 
AttributeError: try: del moves.__dict__[name] except KeyError: raise AttributeError("no such move, %r" % (name,)) if PY3: _meth_func = "__func__" _meth_self = "__self__" _func_code = "__code__" _func_defaults = "__defaults__" _iterkeys = "keys" _itervalues = "values" _iteritems = "items" else: _meth_func = "im_func" _meth_self = "im_self" _func_code = "func_code" _func_defaults = "func_defaults" _iterkeys = "iterkeys" _itervalues = "itervalues" _iteritems = "iteritems" try: advance_iterator = next except NameError: def advance_iterator(it): return it.next() next = advance_iterator if PY3: def get_unbound_function(unbound): return unbound Iterator = object def callable(obj): return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) else: def get_unbound_function(unbound): return unbound.im_func class Iterator(object): def next(self): return type(self).__next__(self) callable = callable _add_doc(get_unbound_function, """Get the function out of a possibly unbound function""") get_method_function = operator.attrgetter(_meth_func) get_method_self = operator.attrgetter(_meth_self) get_function_code = operator.attrgetter(_func_code) get_function_defaults = operator.attrgetter(_func_defaults) def iterkeys(d): """Return an iterator over the keys of a dictionary.""" return iter(getattr(d, _iterkeys)()) def itervalues(d): """Return an iterator over the values of a dictionary.""" return iter(getattr(d, _itervalues)()) def iteritems(d): """Return an iterator over the (key, value) pairs of a dictionary.""" return iter(getattr(d, _iteritems)()) if PY3: def b(s): return s.encode("latin-1") def u(s): return s if sys.version_info[1] <= 1: def int2byte(i): return bytes((i,)) else: # This is about 2x faster than the implementation above on 3.2+ int2byte = operator.methodcaller("to_bytes", 1, "big") import io StringIO = io.StringIO BytesIO = io.BytesIO else: def b(s): return s def u(s): return unicode(s, "unicode_escape") int2byte = chr import StringIO StringIO = BytesIO = StringIO.StringIO _add_doc(b, """Byte literal""") _add_doc(u, """Text literal""") if PY3: import builtins exec_ = getattr(builtins, "exec") def reraise(tp, value, tb=None): if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value print_ = getattr(builtins, "print") del builtins else: def exec_(code, globs=None, locs=None): """Execute code in a namespace.""" if globs is None: frame = sys._getframe(1) globs = frame.f_globals if locs is None: locs = frame.f_locals del frame elif locs is None: locs = globs exec("""exec code in globs, locs""") exec_("""def reraise(tp, value, tb=None): raise tp, value, tb """) def print_(*args, **kwargs): """The new-style print function.""" fp = kwargs.pop("file", sys.stdout) if fp is None: return def write(data): if not isinstance(data, basestring): data = str(data) fp.write(data) want_unicode = False sep = kwargs.pop("sep", None) if sep is not None: if isinstance(sep, unicode): want_unicode = True elif not isinstance(sep, str): raise TypeError("sep must be None or a string") end = kwargs.pop("end", None) if end is not None: if isinstance(end, unicode): want_unicode = True elif not isinstance(end, str): raise TypeError("end must be None or a string") if kwargs: raise TypeError("invalid keyword arguments to print()") if not want_unicode: for arg in args: if isinstance(arg, unicode): want_unicode = True break if want_unicode: newline = unicode("\n") space = unicode(" ") else: newline = "\n" space = " " if sep is None: sep = space if end is None: end = newline for i, arg in 
enumerate(args): if i: write(sep) write(arg) write(end) _add_doc(reraise, """Reraise an exception.""") def with_metaclass(meta, base=object): """Create a base class with a metaclass.""" return meta("NewBase", (base,), {}) urllib3-1.7.1/urllib3/packages/ssl_match_hostname/0000755000076500000240000000000012220605014022527 5ustar shazowstaff00000000000000urllib3-1.7.1/urllib3/packages/ssl_match_hostname/__init__.py0000644000076500000240000000672612202774751024662 0ustar shazowstaff00000000000000"""The match_hostname() function from Python 3.2, essential when using SSL.""" import re __version__ = '3.2.2' class CertificateError(ValueError): pass def _dnsname_match(dn, hostname, max_wildcards=1): """Matching according to RFC 6125, section 6.4.3 http://tools.ietf.org/html/rfc6125#section-6.4.3 """ pats = [] if not dn: return False parts = dn.split(r'.') leftmost = parts[0] wildcards = leftmost.count('*') if wildcards > max_wildcards: # Issue #17980: avoid denials of service by refusing more # than one wildcard per fragment. A survey of established # policy among SSL implementations showed it to be a # reasonable choice. raise CertificateError( "too many wildcards in certificate DNS name: " + repr(dn)) # speed up common case w/o wildcards if not wildcards: return dn.lower() == hostname.lower() # RFC 6125, section 6.4.3, subitem 1. # The client SHOULD NOT attempt to match a presented identifier in which # the wildcard character comprises a label other than the left-most label. if leftmost == '*': # When '*' is a fragment by itself, it matches a non-empty dotless # fragment. pats.append('[^.]+') elif leftmost.startswith('xn--') or hostname.startswith('xn--'): # RFC 6125, section 6.4.3, subitem 3. # The client SHOULD NOT attempt to match a presented identifier # where the wildcard character is embedded within an A-label or # U-label of an internationalized domain name. pats.append(re.escape(leftmost)) else: # Otherwise, '*' matches any dotless string, e.g. www* pats.append(re.escape(leftmost).replace(r'\*', '[^.]*')) # add the remaining fragments, ignore any wildcards for frag in parts[1:]: pats.append(re.escape(frag)) pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE) return pat.match(hostname) def match_hostname(cert, hostname): """Verify that *cert* (in decoded format as returned by SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125 rules are followed, but IP addresses are not accepted for *hostname*. CertificateError is raised on failure. On success, the function returns nothing. """ if not cert: raise ValueError("empty or no certificate") dnsnames = [] san = cert.get('subjectAltName', ()) for key, value in san: if key == 'DNS': if _dnsname_match(value, hostname): return dnsnames.append(value) if not dnsnames: # The subject is only checked when there is no dNSName entry # in subjectAltName for sub in cert.get('subject', ()): for key, value in sub: # XXX according to RFC 2818, the most specific Common Name # must be used.
if key == 'commonName': if _dnsname_match(value, hostname): return dnsnames.append(value) if len(dnsnames) > 1: raise CertificateError("hostname %r " "doesn't match either of %s" % (hostname, ', '.join(map(repr, dnsnames)))) elif len(dnsnames) == 1: raise CertificateError("hostname %r " "doesn't match %r" % (hostname, dnsnames[0])) else: raise CertificateError("no appropriate commonName or " "subjectAltName fields were found") urllib3-1.7.1/urllib3/poolmanager.py0000644000076500000240000002146112202774751017777 0ustar shazowstaff00000000000000# urllib3/poolmanager.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php import logging try: # Python 3 from urllib.parse import urljoin except ImportError: from urlparse import urljoin from ._collections import RecentlyUsedContainer from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool from .connectionpool import port_by_scheme from .request import RequestMethods from .util import parse_url __all__ = ['PoolManager', 'ProxyManager', 'proxy_from_url'] pool_classes_by_scheme = { 'http': HTTPConnectionPool, 'https': HTTPSConnectionPool, } log = logging.getLogger(__name__) SSL_KEYWORDS = ('key_file', 'cert_file', 'cert_reqs', 'ca_certs', 'ssl_version') class PoolManager(RequestMethods): """ Allows for arbitrary requests while transparently keeping track of necessary connection pools for you. :param num_pools: Number of connection pools to cache before discarding the least recently used pool. :param headers: Headers to include with all requests, unless other headers are given explicitly. :param \**connection_pool_kw: Additional parameters are used to create fresh :class:`urllib3.connectionpool.ConnectionPool` instances. Example: :: >>> manager = PoolManager(num_pools=2) >>> r = manager.request('GET', 'http://google.com/') >>> r = manager.request('GET', 'http://google.com/mail') >>> r = manager.request('GET', 'http://yahoo.com/') >>> len(manager.pools) 2 """ proxy = None def __init__(self, num_pools=10, headers=None, **connection_pool_kw): RequestMethods.__init__(self, headers) self.connection_pool_kw = connection_pool_kw self.pools = RecentlyUsedContainer(num_pools, dispose_func=lambda p: p.close()) def _new_pool(self, scheme, host, port): """ Create a new :class:`ConnectionPool` based on host, port and scheme. This method is used to actually create the connection pools handed out by :meth:`connection_from_url` and companion methods. It is intended to be overridden for customization. """ pool_cls = pool_classes_by_scheme[scheme] kwargs = self.connection_pool_kw if scheme == 'http': kwargs = self.connection_pool_kw.copy() for kw in SSL_KEYWORDS: kwargs.pop(kw, None) return pool_cls(host, port, **kwargs) def clear(self): """ Empty our store of pools and direct them all to close. This will not affect in-flight connections, but they will not be re-used after completion. """ self.pools.clear() def connection_from_host(self, host, port=None, scheme='http'): """ Get a :class:`ConnectionPool` based on the host, port, and scheme. If ``port`` isn't given, it will be derived from the ``scheme`` using ``urllib3.connectionpool.port_by_scheme``. """ scheme = scheme or 'http' port = port or port_by_scheme.get(scheme, 80) pool_key = (scheme, host, port) with self.pools.lock: # If the scheme, host, or port doesn't match existing open # connections, open a new ConnectionPool. 
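# (The pool key is the (scheme, host, port) tuple, so for example
# ('http', 'example.com', 80) and ('https', 'example.com', 443) map to
# two distinct pools.)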
pool = self.pools.get(pool_key) if pool: return pool # Make a fresh ConnectionPool of the desired type pool = self._new_pool(scheme, host, port) self.pools[pool_key] = pool return pool def connection_from_url(self, url): """ Similar to :func:`urllib3.connectionpool.connection_from_url` but doesn't pass any additional parameters to the :class:`urllib3.connectionpool.ConnectionPool` constructor. Additional parameters are taken from the :class:`.PoolManager` constructor. """ u = parse_url(url) return self.connection_from_host(u.host, port=u.port, scheme=u.scheme) def urlopen(self, method, url, redirect=True, **kw): """ Same as :meth:`urllib3.connectionpool.HTTPConnectionPool.urlopen` with custom cross-host redirect logic and only sends the request-uri portion of the ``url``. The given ``url`` parameter must be absolute, such that an appropriate :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. """ u = parse_url(url) conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) kw['assert_same_host'] = False kw['redirect'] = False if 'headers' not in kw: kw['headers'] = self.headers if self.proxy is not None and u.scheme == "http": response = conn.urlopen(method, url, **kw) else: response = conn.urlopen(method, u.request_uri, **kw) redirect_location = redirect and response.get_redirect_location() if not redirect_location: return response # Support relative URLs for redirecting. redirect_location = urljoin(url, redirect_location) # RFC 2616, Section 10.3.4 if response.status == 303: method = 'GET' log.info("Redirecting %s -> %s" % (url, redirect_location)) kw['retries'] = kw.get('retries', 3) - 1 # Persist retries countdown kw['redirect'] = redirect return self.urlopen(method, redirect_location, **kw) class ProxyManager(PoolManager): """ Behaves just like :class:`PoolManager`, but sends all requests through the defined proxy, using the CONNECT method for HTTPS URLs. :param proxy_url: The URL of the proxy to be used. :param proxy_headers: A dictionary containing headers that will be sent to the proxy. In the case of HTTP they are sent with each request, while in the HTTPS/CONNECT case they are sent only once. Could be used for proxy authentication.
Example: >>> proxy = urllib3.ProxyManager('http://localhost:3128/') >>> r1 = proxy.request('GET', 'http://google.com/') >>> r2 = proxy.request('GET', 'http://httpbin.org/') >>> len(proxy.pools) 1 >>> r3 = proxy.request('GET', 'https://httpbin.org/') >>> r4 = proxy.request('GET', 'https://twitter.com/') >>> len(proxy.pools) 3 """ def __init__(self, proxy_url, num_pools=10, headers=None, proxy_headers=None, **connection_pool_kw): if isinstance(proxy_url, HTTPConnectionPool): proxy_url = '%s://%s:%i' % (proxy_url.scheme, proxy_url.host, proxy_url.port) proxy = parse_url(proxy_url) if not proxy.port: port = port_by_scheme.get(proxy.scheme, 80) proxy = proxy._replace(port=port) self.proxy = proxy self.proxy_headers = proxy_headers or {} assert self.proxy.scheme in ("http", "https"), \ 'Not supported proxy scheme %s' % self.proxy.scheme connection_pool_kw['_proxy'] = self.proxy connection_pool_kw['_proxy_headers'] = self.proxy_headers super(ProxyManager, self).__init__( num_pools, headers, **connection_pool_kw) def connection_from_host(self, host, port=None, scheme='http'): if scheme == "https": return super(ProxyManager, self).connection_from_host( host, port, scheme) return super(ProxyManager, self).connection_from_host( self.proxy.host, self.proxy.port, self.proxy.scheme) def _set_proxy_headers(self, url, headers=None): """ Sets headers needed by proxies: specifically, the Accept and Host headers. Only sets headers not provided by the user. """ headers_ = {'Accept': '*/*'} netloc = parse_url(url).netloc if netloc: headers_['Host'] = netloc if headers: headers_.update(headers) return headers_ def urlopen(self, method, url, redirect=True, **kw): "Same as HTTP(S)ConnectionPool.urlopen, ``url`` must be absolute." u = parse_url(url) if u.scheme == "http": # It's too late to set proxy headers on per-request basis for # tunnelled HTTPS connections, should use # constructor's proxy_headers instead. kw['headers'] = self._set_proxy_headers(url, kw.get('headers', self.headers)) kw['headers'].update(self.proxy_headers) return super(ProxyManager, self).urlopen(method, url, redirect, **kw) def proxy_from_url(url, **kw): return ProxyManager(proxy_url=url, **kw) urllib3-1.7.1/urllib3/request.py0000644000076500000240000001336212202774751017164 0ustar shazowstaff00000000000000# urllib3/request.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php try: from urllib.parse import urlencode except ImportError: from urllib import urlencode from .filepost import encode_multipart_formdata __all__ = ['RequestMethods'] class RequestMethods(object): """ Convenience mixin for classes who implement a :meth:`urlopen` method, such as :class:`~urllib3.connectionpool.HTTPConnectionPool` and :class:`~urllib3.poolmanager.PoolManager`. Provides behavior for making common types of HTTP request methods and decides which type of request field encoding to use. Specifically, :meth:`.request_encode_url` is for sending requests whose fields are encoded in the URL (such as GET, HEAD, DELETE). :meth:`.request_encode_body` is for sending requests whose fields are encoded in the *body* of the request using multipart or www-form-urlencoded (such as for POST, PUT, PATCH). :meth:`.request` is for making any kind of request, it will look up the appropriate encoding format and use one of the above two methods to make the request. 
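For example (a sketch; URLs are hypothetical): ::

    >>> http = PoolManager()
    >>> r1 = http.request('GET', 'http://example.com/', fields={'q': 'urllib3'})
    >>> r2 = http.request('POST', 'http://example.com/submit', fields={'q': 'urllib3'})

Here the GET is routed through :meth:`.request_encode_url` and the POST
through :meth:`.request_encode_body`.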
Initializer parameters: :param headers: Headers to include with all requests, unless other headers are given explicitly. """ _encode_url_methods = set(['DELETE', 'GET', 'HEAD', 'OPTIONS']) _encode_body_methods = set(['PATCH', 'POST', 'PUT', 'TRACE']) def __init__(self, headers=None): self.headers = headers or {} def urlopen(self, method, url, body=None, headers=None, encode_multipart=True, multipart_boundary=None, **kw): # Abstract raise NotImplementedError("Classes extending RequestMethods must implement " "their own ``urlopen`` method.") def request(self, method, url, fields=None, headers=None, **urlopen_kw): """ Make a request using :meth:`urlopen` with the appropriate encoding of ``fields`` based on the ``method`` used. This is a convenience method that requires the least amount of manual effort. It can be used in most situations, while still having the option to drop down to more specific methods when necessary, such as :meth:`request_encode_url`, :meth:`request_encode_body`, or even the lowest level :meth:`urlopen`. """ method = method.upper() if method in self._encode_url_methods: return self.request_encode_url(method, url, fields=fields, headers=headers, **urlopen_kw) else: return self.request_encode_body(method, url, fields=fields, headers=headers, **urlopen_kw) def request_encode_url(self, method, url, fields=None, **urlopen_kw): """ Make a request using :meth:`urlopen` with the ``fields`` encoded in the url. This is useful for request methods like GET, HEAD, DELETE, etc. """ if fields: url += '?' + urlencode(fields) return self.urlopen(method, url, **urlopen_kw) def request_encode_body(self, method, url, fields=None, headers=None, encode_multipart=True, multipart_boundary=None, **urlopen_kw): """ Make a request using :meth:`urlopen` with the ``fields`` encoded in the body. This is useful for request methods like POST, PUT, PATCH, etc. When ``encode_multipart=True`` (default), then :meth:`urllib3.filepost.encode_multipart_formdata` is used to encode the payload with the appropriate content type. Otherwise :meth:`urllib.urlencode` is used with the 'application/x-www-form-urlencoded' content type. Multipart encoding must be used when posting files, and it's reasonably safe to use it at other times too. However, it may break request signing, such as with OAuth. Supports an optional ``fields`` parameter of key/value strings AND key/filetuple. A filetuple is a (filename, data, MIME type) tuple where the MIME type is optional. For example: :: fields = { 'foo': 'bar', 'fakefile': ('foofile.txt', 'contents of foofile'), 'realfile': ('barfile.txt', open('realfile').read()), 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'), 'nonamefile': 'contents of nonamefile field', } When uploading a file, providing a filename (the first parameter of the tuple) is optional but recommended to best mimic behavior of browsers. Note that if ``headers`` are supplied, the 'Content-Type' header will be overwritten because it depends on the dynamic random boundary string which is used to compose the body of the request. The random boundary string can be explicitly set with the ``multipart_boundary`` parameter.
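A sketch of a multipart POST (the URL and boundary are hypothetical): ::

    r = http.request_encode_body(
        'POST', 'http://example.com/upload',
        fields={'attachment': ('notes.txt', 'contents of notes.txt', 'text/plain')},
        multipart_boundary='hypothetical-boundary-1234')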
""" if encode_multipart: body, content_type = encode_multipart_formdata(fields or {}, boundary=multipart_boundary) else: body, content_type = (urlencode(fields or {}), 'application/x-www-form-urlencoded') if headers is None: headers = self.headers headers_ = {'Content-Type': content_type} headers_.update(headers) return self.urlopen(method, url, body=body, headers=headers_, **urlopen_kw) urllib3-1.7.1/urllib3/response.py0000644000076500000240000002367112220604305017321 0ustar shazowstaff00000000000000# urllib3/response.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php import logging import zlib import io from .exceptions import DecodeError from .packages.six import string_types as basestring, binary_type from .util import is_fp_closed log = logging.getLogger(__name__) class DeflateDecoder(object): def __init__(self): self._first_try = True self._data = binary_type() self._obj = zlib.decompressobj() def __getattr__(self, name): return getattr(self._obj, name) def decompress(self, data): if not self._first_try: return self._obj.decompress(data) self._data += data try: return self._obj.decompress(data) except zlib.error: self._first_try = False self._obj = zlib.decompressobj(-zlib.MAX_WBITS) try: return self.decompress(self._data) finally: self._data = None def _get_decoder(mode): if mode == 'gzip': return zlib.decompressobj(16 + zlib.MAX_WBITS) return DeflateDecoder() class HTTPResponse(io.IOBase): """ HTTP Response container. Backwards-compatible to httplib's HTTPResponse but the response ``body`` is loaded and decoded on-demand when the ``data`` property is accessed. Extra parameters for behaviour not present in httplib.HTTPResponse: :param preload_content: If True, the response's body will be preloaded during construction. :param decode_content: If True, attempts to decode specific content-encoding's based on headers (like 'gzip' and 'deflate') will be skipped and raw data will be used instead. :param original_response: When this HTTPResponse wrapper is generated from an httplib.HTTPResponse object, it's convenient to include the original for debug purposes. It's otherwise unused. """ CONTENT_DECODERS = ['gzip', 'deflate'] REDIRECT_STATUSES = [301, 302, 303, 307, 308] def __init__(self, body='', headers=None, status=0, version=0, reason=None, strict=0, preload_content=True, decode_content=True, original_response=None, pool=None, connection=None): self.headers = headers or {} self.status = status self.version = version self.reason = reason self.strict = strict self.decode_content = decode_content self._decoder = None self._body = body if body and isinstance(body, basestring) else None self._fp = None self._original_response = original_response self._pool = pool self._connection = connection if hasattr(body, 'read'): self._fp = body if preload_content and not self._body: self._body = self.read(decode_content=decode_content) def get_redirect_location(self): """ Should we redirect and where to? :returns: Truthy redirect location string if we got a redirect status code and valid location. ``None`` if redirect status and no location. ``False`` if not a redirect status code. 
""" if self.status in self.REDIRECT_STATUSES: return self.headers.get('location') return False def release_conn(self): if not self._pool or not self._connection: return self._pool._put_conn(self._connection) self._connection = None @property def data(self): # For backwords-compat with earlier urllib3 0.4 and earlier. if self._body: return self._body if self._fp: return self.read(cache_content=True) def read(self, amt=None, decode_content=None, cache_content=False): """ Similar to :meth:`httplib.HTTPResponse.read`, but with two additional parameters: ``decode_content`` and ``cache_content``. :param amt: How much of the content to read. If specified, caching is skipped because it doesn't make sense to cache partial content as the full response. :param decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param cache_content: If True, will save the returned data such that the same result is returned despite of the state of the underlying file object. This is useful if you want the ``.data`` property to continue working after having ``.read()`` the file object. (Overridden if ``amt`` is set.) """ # Note: content-encoding value should be case-insensitive, per RFC 2616 # Section 3.5 content_encoding = self.headers.get('content-encoding', '').lower() if self._decoder is None: if content_encoding in self.CONTENT_DECODERS: self._decoder = _get_decoder(content_encoding) if decode_content is None: decode_content = self.decode_content if self._fp is None: return flush_decoder = False try: if amt is None: # cStringIO doesn't like amt=None data = self._fp.read() flush_decoder = True else: cache_content = False data = self._fp.read(amt) if amt != 0 and not data: # Platform-specific: Buggy versions of Python. # Close the connection when no data is returned # # This is redundant to what httplib/http.client _should_ # already do. However, versions of python released before # December 15, 2012 (http://bugs.python.org/issue16298) do not # properly close the connection in all cases. There is no harm # in redundantly calling close. self._fp.close() flush_decoder = True try: if decode_content and self._decoder: data = self._decoder.decompress(data) except (IOError, zlib.error) as e: raise DecodeError( "Received response with content-encoding: %s, but " "failed to decode it." % content_encoding, e) if flush_decoder and decode_content and self._decoder: buf = self._decoder.decompress(binary_type()) data += buf + self._decoder.flush() if cache_content: self._body = data return data finally: if self._original_response and self._original_response.isclosed(): self.release_conn() def stream(self, amt=2**16, decode_content=None): """ A generator wrapper for the read() method. A call will block until ``amt`` bytes have been read from the connection or until the connection is closed. :param amt: How much of the content to read. The generator will return up to much data per iteration, but may return less. This is particularly likely when using compressed data. However, the empty string will never be returned. :param decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. """ while not is_fp_closed(self._fp): data = self.read(amt=amt, decode_content=decode_content) if data: yield data @classmethod def from_httplib(ResponseCls, r, **response_kw): """ Given an :class:`httplib.HTTPResponse` instance ``r``, return a corresponding :class:`urllib3.response.HTTPResponse` object. 
Remaining parameters are passed to the HTTPResponse constructor, along with ``original_response=r``. """ # Normalize headers between different versions of Python headers = {} for k, v in r.getheaders(): # Python 3: Header keys are returned capitalised k = k.lower() has_value = headers.get(k) if has_value: # Python 3: Repeating header keys are unmerged. v = ', '.join([has_value, v]) headers[k] = v # HTTPResponse objects in Python 3 don't have a .strict attribute strict = getattr(r, 'strict', 0) return ResponseCls(body=r, headers=headers, status=r.status, version=r.version, reason=r.reason, strict=strict, original_response=r, **response_kw) # Backwards-compatibility methods for httplib.HTTPResponse def getheaders(self): return self.headers def getheader(self, name, default=None): return self.headers.get(name, default) # Overrides from io.IOBase def close(self): if not self.closed: self._fp.close() @property def closed(self): if self._fp is None: return True elif hasattr(self._fp, 'closed'): return self._fp.closed elif hasattr(self._fp, 'isclosed'): # Python 2 return self._fp.isclosed() else: return True def fileno(self): if self._fp is None: raise IOError("HTTPResponse has no file to get a fileno from") elif hasattr(self._fp, "fileno"): return self._fp.fileno() else: raise IOError("The file-like object this HTTPResponse is wrapped " "around has no file descriptor") def flush(self): if self._fp is not None and hasattr(self._fp, 'flush'): return self._fp.flush() def readable(self): return True urllib3-1.7.1/urllib3/util.py0000644000076500000240000005001512220604305016430 0ustar shazowstaff00000000000000# urllib3/util.py # Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt) # # This module is part of urllib3 and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php from base64 import b64encode from binascii import hexlify, unhexlify from collections import namedtuple from hashlib import md5, sha1 from socket import error as SocketError, _GLOBAL_DEFAULT_TIMEOUT import time try: from select import poll, POLLIN except ImportError: # `poll` doesn't exist on OSX and other platforms poll = False try: from select import select except ImportError: # `select` doesn't exist on AppEngine. select = False try: # Test for SSL features SSLContext = None HAS_SNI = False import ssl from ssl import wrap_socket, CERT_NONE, PROTOCOL_SSLv23 from ssl import SSLContext # Modern SSL? from ssl import HAS_SNI # Has SNI? except ImportError: pass from .packages import six from .exceptions import LocationParseError, SSLError, TimeoutStateError _Default = object() # The default timeout to use for socket connections. This is the attribute used # by httplib to define the default timeout def current_time(): """ Retrieve the current time, this function is mocked out in unit testing. """ return time.time() class Timeout(object): """ Utility object for storing timeout values. Example usage: .. code-block:: python timeout = urllib3.util.Timeout(connect=2.0, read=7.0) pool = HTTPConnectionPool('www.google.com', 80, timeout=timeout) pool.request(...) # Etc, etc :param connect: The maximum amount of time to wait for a connection attempt to a server to succeed. Omitting the parameter will default the connect timeout to the system default, probably `the global default timeout in socket.py `_. None will set an infinite timeout for connection attempts. 
    """

    #: A sentinel object representing the default timeout value
    DEFAULT_TIMEOUT = _GLOBAL_DEFAULT_TIMEOUT

    def __init__(self, connect=_Default, read=_Default, total=None):
        self._connect = self._validate_timeout(connect, 'connect')
        self._read = self._validate_timeout(read, 'read')
        self.total = self._validate_timeout(total, 'total')
        self._start_connect = None

    def __str__(self):
        return '%s(connect=%r, read=%r, total=%r)' % (
            type(self).__name__, self._connect, self._read, self.total)

    @classmethod
    def _validate_timeout(cls, value, name):
        """ Check that a timeout attribute is valid

        :param value: The timeout value to validate
        :param name: The name of the timeout attribute to validate. This is
            used for clear error messages
        :return: the value
        :raises ValueError: if the type is not an integer or a float, or if it
            is a numeric value less than zero
        """
        if value is _Default:
            return cls.DEFAULT_TIMEOUT

        if value is None or value is cls.DEFAULT_TIMEOUT:
            return value

        try:
            float(value)
        except (TypeError, ValueError):
            raise ValueError("Timeout value %s was %s, but it must be an "
                             "int or float." % (name, value))

        try:
            if value < 0:
                raise ValueError("Attempted to set %s timeout to %s, but the "
                                 "timeout cannot be set to a value less "
                                 "than 0." % (name, value))
        except TypeError:  # Python 3
            raise ValueError("Timeout value %s was %s, but it must be an "
                             "int or float." % (name, value))

        return value

    @classmethod
    def from_float(cls, timeout):
        """ Create a new Timeout from a legacy timeout value.

        The timeout value used by httplib.py sets the same timeout on the
        connect(), and recv() socket requests. This creates a :class:`Timeout`
        object that sets the individual timeouts to the ``timeout`` value
        passed to this function.

        :param timeout: The legacy timeout value
        :type timeout: integer, float, sentinel default object, or None
        :return: a Timeout object
        :rtype: :class:`Timeout`
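
        Example (a sketch)::

            >>> t = Timeout.from_float(7.0)
            >>> (t.connect_timeout, t.read_timeout)
            (7.0, 7.0)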
        """
        return Timeout(read=timeout, connect=timeout)

    def clone(self):
        """ Create a copy of the timeout object

        Timeout properties are stored per-pool but each request needs a fresh
        Timeout object to ensure each one has its own start/stop configured.

        :return: a copy of the timeout object
        :rtype: :class:`Timeout`
        """
        # We can't use copy.deepcopy because that will also create a new object
        # for _GLOBAL_DEFAULT_TIMEOUT, which socket.py uses as a sentinel to
        # detect the user default.
        return Timeout(connect=self._connect, read=self._read,
                       total=self.total)

    def start_connect(self):
        """ Start the timeout clock, used during a connect() attempt

        :raises urllib3.exceptions.TimeoutStateError: if you attempt
            to start a timer that has been started already.
        """
        if self._start_connect is not None:
            raise TimeoutStateError("Timeout timer has already been started.")
        self._start_connect = current_time()
        return self._start_connect

    def get_connect_duration(self):
        """ Gets the time elapsed since the call to :meth:`start_connect`.

        :return: the elapsed time
        :rtype: float

        :raises urllib3.exceptions.TimeoutStateError: if you attempt
            to get duration for a timer that hasn't been started.
        """
        if self._start_connect is None:
            raise TimeoutStateError("Can't get connect duration for timer "
                                    "that has not started.")
        return current_time() - self._start_connect

    @property
    def connect_timeout(self):
        """ Get the value to use when setting a connection timeout.

        This will be a positive float or integer, the value None (never
        timeout), or the default system timeout.

        :return: the connect timeout
        :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None
        """
        if self.total is None:
            return self._connect

        if self._connect is None or self._connect is self.DEFAULT_TIMEOUT:
            return self.total

        return min(self._connect, self.total)

    @property
    def read_timeout(self):
        """ Get the value for the read timeout.

        This assumes some time has elapsed in the connection timeout and
        computes the read timeout appropriately.

        If self.total is set, the read timeout is dependent on the amount of
        time taken by the connect timeout. If the connection time has not been
        established, a :exc:`~urllib3.exceptions.TimeoutStateError` will be
        raised.

        :return: the value to use for the read timeout
        :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None
        :raises urllib3.exceptions.TimeoutStateError: If :meth:`start_connect`
            has not yet been called on this object.
        """
        if (self.total is not None and
            self.total is not self.DEFAULT_TIMEOUT and
            self._read is not None and
            self._read is not self.DEFAULT_TIMEOUT):
            # in case the connect timeout has not yet been established.
            if self._start_connect is None:
                return self._read
            return max(0, min(self.total - self.get_connect_duration(),
                              self._read))
        elif self.total is not None and self.total is not self.DEFAULT_TIMEOUT:
            return max(0, self.total - self.get_connect_duration())
        else:
            return self._read


class Url(namedtuple('Url', ['scheme', 'auth', 'host', 'port', 'path',
                             'query', 'fragment'])):
    """
    Datastructure for representing an HTTP URL. Used as a return value for
    :func:`parse_url`.
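
    Example (a sketch)::

        >>> u = Url(scheme='http', host='example.com', path='/a', query='b=1')
        >>> u.request_uri
        '/a?b=1'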
    """
    __slots__ = ()

    def __new__(cls, scheme=None, auth=None, host=None, port=None, path=None,
                query=None, fragment=None):
        return super(Url, cls).__new__(cls, scheme, auth, host, port, path,
                                       query, fragment)

    @property
    def hostname(self):
        """For backwards-compatibility with urlparse. We're nice like that."""
        return self.host

    @property
    def request_uri(self):
        """Absolute path including the query string."""
        uri = self.path or '/'

        if self.query is not None:
            uri += '?' + self.query

        return uri

    @property
    def netloc(self):
        """Network location including host and port"""
        if self.port:
            return '%s:%d' % (self.host, self.port)
        return self.host


def split_first(s, delims):
    """
    Given a string and an iterable of delimiters, split on the first found
    delimiter. Return two split parts and the matched delimiter.

    If not found, then the first part is the full input string.

    Example: ::

        >>> split_first('foo/bar?baz', '?/=')
        ('foo', 'bar?baz', '/')
        >>> split_first('foo/bar?baz', '123')
        ('foo/bar?baz', '', None)

    Scales linearly with number of delims. Not ideal for a large number of
    delims.
    """
    min_idx = None
    min_delim = None
    for d in delims:
        idx = s.find(d)
        if idx < 0:
            continue

        if min_idx is None or idx < min_idx:
            min_idx = idx
            min_delim = d

    if min_idx is None or min_idx < 0:
        return s, '', None

    return s[:min_idx], s[min_idx+1:], min_delim


def parse_url(url):
    """
    Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is
    performed to parse incomplete urls. Fields not provided will be None.

    Partly backwards-compatible with :mod:`urlparse`.

    Example: ::

        >>> parse_url('http://google.com/mail/')
        Url(scheme='http', host='google.com', port=None, path='/mail/', ...)
        >>> parse_url('google.com:80')
        Url(scheme=None, host='google.com', port=80, path=None, ...)
        >>> parse_url('/foo?bar')
        Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...)
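
    A bracketed IPv6 host parses too (an added illustration of the IPv6
    branch below)::

        >>> parse_url('http://[::1]:8080/api')
        Url(scheme='http', host='[::1]', port=8080, path='/api', ...)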
    """

    # While this code has overlap with stdlib's urlparse, it is much
    # simplified for our needs and less annoying.
    # Additionally, this implementation does silly things to be optimal
    # on CPython.

    scheme = None
    auth = None
    host = None
    port = None
    path = None
    fragment = None
    query = None

    # Scheme
    if '://' in url:
        scheme, url = url.split('://', 1)

    # Find the earliest Authority Terminator
    # (http://tools.ietf.org/html/rfc3986#section-3.2)
    url, path_, delim = split_first(url, ['/', '?', '#'])

    if delim:
        # Reassemble the path
        path = delim + path_

    # Auth
    if '@' in url:
        auth, url = url.split('@', 1)

    # IPv6
    if url and url[0] == '[':
        host, url = url.split(']', 1)
        host += ']'

    # Port
    if ':' in url:
        _host, port = url.split(':', 1)

        if not host:
            host = _host

        if not port.isdigit():
            raise LocationParseError("Failed to parse: %s" % url)

        port = int(port)

    elif not host and url:
        host = url

    if not path:
        return Url(scheme, auth, host, port, path, query, fragment)

    # Fragment
    if '#' in path:
        path, fragment = path.split('#', 1)

    # Query
    if '?' in path:
        path, query = path.split('?', 1)

    return Url(scheme, auth, host, port, path, query, fragment)


def get_host(url):
    """
    Deprecated. Use :func:`.parse_url` instead.
    """
    p = parse_url(url)
    return p.scheme or 'http', p.hostname, p.port


def make_headers(keep_alive=None, accept_encoding=None, user_agent=None,
                 basic_auth=None):
    """
    Shortcuts for generating request headers.

    :param keep_alive:
        If ``True``, adds 'connection: keep-alive' header.

    :param accept_encoding:
        Can be a boolean, list, or string.
        ``True`` translates to 'gzip,deflate'.
        List will get joined by comma.
        String will be used as provided.

    :param user_agent:
        String representing the user-agent you want, such as
        "python-urllib3/0.6"

    :param basic_auth:
        Colon-separated username:password string for
        'authorization: basic ...' auth header.

    Example: ::

        >>> make_headers(keep_alive=True, user_agent="Batman/1.0")
        {'connection': 'keep-alive', 'user-agent': 'Batman/1.0'}
        >>> make_headers(accept_encoding=True)
        {'accept-encoding': 'gzip,deflate'}
    """
    headers = {}
    if accept_encoding:
        if isinstance(accept_encoding, str):
            pass
        elif isinstance(accept_encoding, list):
            accept_encoding = ','.join(accept_encoding)
        else:
            accept_encoding = 'gzip,deflate'
        headers['accept-encoding'] = accept_encoding

    if user_agent:
        headers['user-agent'] = user_agent

    if keep_alive:
        headers['connection'] = 'keep-alive'

    if basic_auth:
        headers['authorization'] = 'Basic ' + \
            b64encode(six.b(basic_auth)).decode('utf-8')
    return headers


def is_connection_dropped(conn):  # Platform-specific
    """
    Returns True if the connection is dropped and should be closed.

    :param conn:
        :class:`httplib.HTTPConnection` object.

    Note: For platforms like AppEngine, this will always return ``False`` to
    let the platform handle connection recycling transparently for us.
    """
    sock = getattr(conn, 'sock', False)
    if not sock:  # Platform-specific: AppEngine
        return False

    if not poll:
        if not select:  # Platform-specific: AppEngine
            return False

        try:
            return select([sock], [], [], 0.0)[0]
        except SocketError:
            return True

    # This version is better on platforms that support it.
    p = poll()
    p.register(sock, POLLIN)
    for (fno, ev) in p.poll(0.0):
        if fno == sock.fileno():
            # Either data is buffered (bad), or the connection is dropped.
            return True
    return False


def resolve_cert_reqs(candidate):
    """
    Resolves the argument to a numeric constant, which can be passed to
    the wrap_socket function/method from the ssl module.
    Defaults to :data:`ssl.CERT_NONE`.
    If given a string it is assumed to be the name of the constant in the
    :mod:`ssl` module or its abbreviation.
    (So you can specify `REQUIRED` instead of `CERT_REQUIRED`.)
    If it's neither `None` nor a string we assume it is already the numeric
    constant which can directly be passed to wrap_socket.
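
    Example (a sketch)::

        >>> resolve_cert_reqs('REQUIRED') == ssl.CERT_REQUIRED
        True
        >>> resolve_cert_reqs(None) == ssl.CERT_NONE
        True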
    """
    if candidate is None:
        return CERT_NONE

    if isinstance(candidate, str):
        res = getattr(ssl, candidate, None)
        if res is None:
            res = getattr(ssl, 'CERT_' + candidate)
        return res

    return candidate


def resolve_ssl_version(candidate):
    """
    like resolve_cert_reqs
    """
    if candidate is None:
        return PROTOCOL_SSLv23

    if isinstance(candidate, str):
        res = getattr(ssl, candidate, None)
        if res is None:
            res = getattr(ssl, 'PROTOCOL_' + candidate)
        return res

    return candidate


def assert_fingerprint(cert, fingerprint):
    """
    Checks if given fingerprint matches the supplied certificate.

    :param cert:
        Certificate as bytes object.
    :param fingerprint:
        Fingerprint as string of hexdigits, can be interspersed by colons.
    """

    # Maps the length of a digest to a possible hash function producing
    # this digest.
    hashfunc_map = {
        16: md5,
        20: sha1
    }

    fingerprint = fingerprint.replace(':', '').lower()

    digest_length, rest = divmod(len(fingerprint), 2)

    if rest or digest_length not in hashfunc_map:
        raise SSLError('Fingerprint is of invalid length.')

    # We need encode() here for py32; works on py2 and py33.
    fingerprint_bytes = unhexlify(fingerprint.encode())

    hashfunc = hashfunc_map[digest_length]

    cert_digest = hashfunc(cert).digest()

    if not cert_digest == fingerprint_bytes:
        raise SSLError('Fingerprints did not match. Expected "{0}", got "{1}".'
                       .format(hexlify(fingerprint_bytes),
                               hexlify(cert_digest)))


def is_fp_closed(obj):
    """
    Checks whether a given file-like object is closed.

    :param obj:
        The file-like object to check.
    """
    if hasattr(obj, 'fp'):
        # Object is a container for another file-like object that gets
        # released on exhaustion (e.g. HTTPResponse)
        return obj.fp is None

    return obj.closed


if SSLContext is not None:  # Python 3.2+
    def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
                        ca_certs=None, server_hostname=None,
                        ssl_version=None):
        """
        All arguments except `server_hostname` have the same meaning as for
        :func:`ssl.wrap_socket`

        :param server_hostname:
            Hostname of the expected certificate
        """
        context = SSLContext(ssl_version)
        context.verify_mode = cert_reqs
        if ca_certs:
            try:
                context.load_verify_locations(ca_certs)
            # Py32 raises IOError
            # Py33 raises FileNotFoundError
            except Exception as e:  # Reraise as SSLError
                raise SSLError(e)
        if certfile:
            # FIXME: This block needs a test.
            context.load_cert_chain(certfile, keyfile)
        if HAS_SNI:  # Platform-specific: OpenSSL with enabled SNI
            return context.wrap_socket(sock, server_hostname=server_hostname)
        return context.wrap_socket(sock)

else:  # Python 3.1 and earlier
    def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
                        ca_certs=None, server_hostname=None,
                        ssl_version=None):
        return wrap_socket(sock, keyfile=keyfile, certfile=certfile,
                           ca_certs=ca_certs, cert_reqs=cert_reqs,
                           ssl_version=ssl_version)
urllib3-1.7.1/urllib3.egg-info/0000755000076500000240000000000012220605014016570 5ustar shazowstaff00000000000000urllib3-1.7.1/urllib3.egg-info/dependency_links.txt0000644000076500000240000000000112220605014022636 0ustar shazowstaff00000000000000
urllib3-1.7.1/urllib3.egg-info/PKG-INFO0000644000076500000240000004126712220605014017677 0ustar shazowstaff00000000000000Metadata-Version: 1.0
Name: urllib3
Version: 1.7.1
Summary: HTTP library with thread-safe connection pooling, file post, and more.
Home-page: http://urllib3.readthedocs.org/
Author: Andrey Petrov
Author-email: andrey.petrov@shazow.net
License: MIT
Description:
=======
urllib3
=======

.. image:: https://travis-ci.org/shazow/urllib3.png?branch=master
   :target: https://travis-ci.org/shazow/urllib3

Highlights
==========

- Re-use the same socket connection for multiple requests
  (``HTTPConnectionPool`` and ``HTTPSConnectionPool``)
  (with optional client-side certificate verification).
- File posting (``encode_multipart_formdata``).
- Built-in redirection and retries (optional).
- Supports gzip and deflate decoding.
- Thread-safe and sanity-safe.
- Works with AppEngine, gevent, and eventlet.
- Tested on Python 2.6+ and Python 3.2+, 100% unit test coverage.
- Small and easy to understand codebase perfect for extending and building
  upon. For a more comprehensive solution, have a look at `Requests `_ which
  is also powered by urllib3.

What's wrong with urllib and urllib2?
=====================================

There are two critical features missing from the Python standard library:
Connection re-using/pooling and file posting. It's not terribly hard to
implement these yourself, but it's much easier to use a module that already
did the work for you.

The Python standard libraries ``urllib`` and ``urllib2`` have little to do
with each other. They were designed to be independent and standalone, each
solving a different scope of problems, and ``urllib3`` follows in a similar
vein.

Why do I want to reuse connections?
===================================

Performance. When you normally do a urllib call, a separate socket connection
is created with each request. By reusing existing sockets (supported since
HTTP 1.1), the requests will take up fewer resources on the server's end, and
also provide a faster response time at the client's end.

With some simple benchmarks (see `test/benchmark.py `_), downloading 15 URLs
from google.com is about twice as fast when using HTTPConnectionPool (which
uses 1 connection) as when using plain urllib (which uses 15 connections).

This library is perfect for:

- Talking to an API
- Crawling a website
- Any situation where being able to post files, handle redirection, and
  retrying is useful. It's relatively lightweight, so it can be used for
  anything!

Examples
========

Go to `urllib3.readthedocs.org `_ for more nice syntax-highlighted examples.

But, long story short::

    import urllib3

    http = urllib3.PoolManager()

    r = http.request('GET', 'http://google.com/')

    print r.status, r.data

The ``PoolManager`` will take care of reusing connections for you whenever
you request the same host. For more fine-grained control of your connection
pools, you should look at `ConnectionPool `_.
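
For example, a minimal sketch with a dedicated per-host pool
(``HTTPConnectionPool`` and its ``request()`` helper are part of urllib3; the
host and ``maxsize`` value are illustrative)::

    import urllib3

    # One pool for one host; the requests below reuse the same sockets.
    pool = urllib3.HTTPConnectionPool('example.com', maxsize=1)

    r = pool.request('GET', '/')
    print r.status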

Run the tests
=============

We use some external dependencies, multiple interpreters and code coverage
analysis while running the test suite. The easiest way to run the tests is
with the ``tox`` utility: ::

    $ tox
    # [..]
    py26: commands succeeded
    py27: commands succeeded
    py32: commands succeeded
    py33: commands succeeded

Note that code coverage less than 100% is regarded as a failing run.

Contributing
============

#. `Check for open issues `_ or open a fresh issue to start a discussion
   around a feature idea or a bug. There is a *Contributor Friendly* tag for
   issues that should be ideal for people who are not very familiar with the
   codebase yet.
#. Fork the `urllib3 repository on Github `_ to start making your changes.
#. Write a test which shows that the bug was fixed or that the feature works
   as expected.
#. Send a pull request and bug the maintainer until it gets merged and
   published. :) Make sure to add yourself to ``CONTRIBUTORS.txt``.

Changes
=======

1.7.1 (2013-09-25)
++++++++++++++++++

* Added granular timeout support with new `urllib3.util.Timeout` class.
  (Issue #231)
* Fixed Python 3.4 support. (Issue #238)

1.7 (2013-08-14)
++++++++++++++++

* More exceptions are now pickle-able, with tests. (Issue #174)
* Fixed redirecting with relative URLs in Location header. (Issue #178)
* Support for relative urls in ``Location: ...`` header. (Issue #179)
* ``urllib3.response.HTTPResponse`` now inherits from ``io.IOBase`` for bonus
  file-like functionality. (Issue #187)
* Passing ``assert_hostname=False`` when creating a HTTPSConnectionPool will
  skip hostname verification for SSL connections. (Issue #194)
* New method ``urllib3.response.HTTPResponse.stream(...)`` which acts as a
  generator wrapped around ``.read(...)``. (Issue #198)
* IPv6 url parsing enforces brackets around the hostname. (Issue #199)
* Fixed thread race condition in
  ``urllib3.poolmanager.PoolManager.connection_from_host(...)`` (Issue #204)
* ``ProxyManager`` requests now include non-default port in ``Host: ...``
  header. (Issue #217)
* Added HTTPS proxy support in ``ProxyManager``. (Issue #170 #139)
* New ``RequestField`` object can be passed to the ``fields=...`` param which
  can specify headers. (Issue #220)
* Raise ``urllib3.exceptions.ProxyError`` when connecting to proxy fails.
  (Issue #221)
* Use international headers when posting file names. (Issue #119)
* Improved IPv6 support. (Issue #203)

1.6 (2013-04-25)
++++++++++++++++

* Contrib: Optional SNI support for Py2 using PyOpenSSL. (Issue #156)
* ``ProxyManager`` automatically adds ``Host: ...`` header if not given.
* Improved SSL-related code. ``cert_req`` now optionally takes a string like
  "REQUIRED" or "NONE".
  Same with ``ssl_version``, which takes strings like "SSLv23". The string
  values reflect the suffix of the respective constant variable. (Issue #130)
* Vendored ``socksipy`` now based on Anorov's fork which handles unexpectedly
  closed proxy connections and larger read buffers. (Issue #135)
* Ensure the connection is closed if no data is received, fixes connection
  leak on some platforms. (Issue #133)
* Added SNI support for SSL/TLS connections on Py32+. (Issue #89)
* Tests fixed to be compatible with Py26 again. (Issue #125)
* Added ability to choose SSL version by passing an ``ssl.PROTOCOL_*``
  constant to the ``ssl_version`` parameter of ``HTTPSConnectionPool``.
  (Issue #109)
* Allow an explicit content type to be specified when encoding file fields.
  (Issue #126)
* Exceptions are now pickleable, with tests. (Issue #101)
* Fixed default headers not getting passed in some cases. (Issue #99)
* Treat "content-encoding" header value as case-insensitive, per RFC 2616
  Section 3.5. (Issue #110)
* "Connection Refused" SocketErrors will get retried rather than raised.
  (Issue #92)
* Updated vendored ``six``, no longer overrides the global ``six`` module
  namespace. (Issue #113)
* ``urllib3.exceptions.MaxRetryError`` contains a ``reason`` property holding
  the exception that prompted the final retry. If ``reason is None`` then it
  was due to a redirect. (Issue #92, #114)
* Fixed ``PoolManager.urlopen()`` not redirecting more than once.
  (Issue #149)
* Don't assume ``Content-Type: text/plain`` for multi-part encoding
  parameters that are not files. (Issue #111)
* Pass `strict` param down to ``httplib.HTTPConnection``. (Issue #122)
* Added mechanism to verify SSL certificates by fingerprint (md5, sha1) or
  against an arbitrary hostname (when connecting by IP or for misconfigured
  servers). (Issue #140)
* Streaming decompression support. (Issue #159)

1.5 (2012-08-02)
++++++++++++++++

* Added ``urllib3.add_stderr_logger()`` for quickly enabling STDERR debug
  logging in urllib3.
* Native full URL parsing (including auth, path, query, fragment) available
  in ``urllib3.util.parse_url(url)``.
* Built-in redirect will switch method to 'GET' if status code is 303.
  (Issue #11)
* ``urllib3.PoolManager`` strips the scheme and host before sending the
  request uri. (Issue #8)
* New ``urllib3.exceptions.DecodeError`` exception for when automatic
  decoding, based on the Content-Type header, fails.
* Fixed bug with pool depletion and leaking connections (Issue #76). Added
  explicit connection closing on pool eviction. Added
  ``urllib3.PoolManager.clear()``.
* 99% -> 100% unit test coverage.

1.4 (2012-06-16)
++++++++++++++++

* Minor AppEngine-related fixes.
* Switched from ``mimetools.choose_boundary`` to ``uuid.uuid4()``.
* Improved url parsing. (Issue #73)
* IPv6 url support. (Issue #72)

1.3 (2012-03-25)
++++++++++++++++

* Removed pre-1.0 deprecated API.
* Refactored helpers into a ``urllib3.util`` submodule.
* Fixed multipart encoding to support list-of-tuples for keys with multiple
  values. (Issue #48)
* Fixed multiple Set-Cookie headers in response not getting merged properly
  in Python 3. (Issue #53)
* AppEngine support with Py27. (Issue #61)
* Minor ``encode_multipart_formdata`` fixes related to Python 3 strings vs
  bytes.

1.2.2 (2012-02-06)
++++++++++++++++++

* Fixed packaging bug of not shipping ``test-requirements.txt``. (Issue #47)

1.2.1 (2012-02-05)
++++++++++++++++++

* Fixed another bug related to when ``ssl`` module is not available.
  (Issue #41)
* Location parsing errors now raise
  ``urllib3.exceptions.LocationParseError`` which inherits from
  ``ValueError``.

1.2 (2012-01-29)
++++++++++++++++

* Added Python 3 support (tested on 3.2.2)
* Dropped Python 2.5 support (tested on 2.6.7, 2.7.2)
* Use ``select.poll`` instead of ``select.select`` for platforms that support
  it.
* Use ``Queue.LifoQueue`` instead of ``Queue.Queue`` for more aggressive
  connection reusing. Configurable by overriding
  ``ConnectionPool.QueueCls``.
* Fixed ``ImportError`` during install when ``ssl`` module is not available.
  (Issue #41)
* Fixed ``PoolManager`` redirects between schemes (such as HTTP -> HTTPS) not
  completing properly. (Issue #28, uncovered by Issue #10 in v1.1)
* Ported ``dummyserver`` to use ``tornado`` instead of ``webob`` +
  ``eventlet``. Removed extraneous unsupported dummyserver testing backends.
  Added socket-level tests.
* More tests. Achievement Unlocked: 99% Coverage.

1.1 (2012-01-07)
++++++++++++++++

* Refactored ``dummyserver`` to its own root namespace module (used for
  testing).
* Added hostname verification for ``VerifiedHTTPSConnection`` by vendoring in
  Py32's ``ssl_match_hostname``. (Issue #25)
* Fixed cross-host HTTP redirects when using ``PoolManager``. (Issue #10)
* Fixed ``decode_content`` being ignored when set through ``urlopen``.
  (Issue #27)
* Fixed timeout-related bugs. (Issues #17, #23)

1.0.2 (2011-11-04)
++++++++++++++++++

* Fixed typo in ``VerifiedHTTPSConnection`` which would only present as a bug
  if you're using the object manually. (Thanks pyos)
* Made RecentlyUsedContainer (and consequently PoolManager) more thread-safe
  by wrapping the access log in a mutex. (Thanks @christer)
* Made RecentlyUsedContainer more dict-like (corrected ``__delitem__`` and
  ``__getitem__`` behaviour), with tests. Shouldn't affect core urllib3 code.

1.0.1 (2011-10-10)
++++++++++++++++++

* Fixed a bug where the same connection would get returned into the pool
  twice, causing extraneous "HttpConnectionPool is full" log warnings.

1.0 (2011-10-08)
++++++++++++++++

* Added ``PoolManager`` with LRU expiration of connections (tested and
  documented).
* Added ``ProxyManager`` (needs tests, docs, and confirmation that it works
  with HTTPS proxies).
* Added optional partial-read support for responses when
  ``preload_content=False``. You can now make requests and just read the
  headers without loading the content.
* Made response decoding optional (default on, same as before).
* Added optional explicit boundary string for ``encode_multipart_formdata``.
* Convenience request methods are now inherited from ``RequestMethods``. Old
  helpers like ``get_url`` and ``post_url`` should be abandoned in favour of
  the new ``request(method, url, ...)``.
* Refactored code to be even more decoupled, reusable, and extendable.
* License header added to ``.py`` files.
* Embiggened the documentation: Lots of Sphinx-friendly docstrings in the
  code and docs in ``docs/`` and on urllib3.readthedocs.org.
* Embettered all the things!
* Started writing this file.

0.4.1 (2011-07-17)
++++++++++++++++++

* Minor bug fixes, code cleanup.

0.4 (2011-03-01)
++++++++++++++++

* Better unicode support.
* Added ``VerifiedHTTPSConnection``.
* Added ``NTLMConnectionPool`` in contrib.
* Minor improvements.

0.3.1 (2010-07-13)
++++++++++++++++++

* Added ``assert_host_name`` optional parameter. Now compatible with proxies.

0.3 (2009-12-10)
++++++++++++++++

* Added HTTPS support.
* Minor bug fixes.
* Refactored, breaking backwards compatibility with 0.2.
* API to be treated as stable from this version forward.

0.2 (2008-11-17)
++++++++++++++++

* Added unit tests.
* Bug fixes.

0.1 (2008-11-16)
++++++++++++++++

* First release.

Keywords: urllib httplib threadsafe filepost http https ssl pooling
Platform: UNKNOWN
Classifier: Environment :: Web Environment
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Internet :: WWW/HTTP
Classifier: Topic :: Software Development :: Libraries
urllib3-1.7.1/urllib3.egg-info/SOURCES.txt0000644000076500000240000000176712220605014020467 0ustar shazowstaff00000000000000CHANGES.rst
CONTRIBUTORS.txt
LICENSE.txt
MANIFEST.in
README.rst
setup.cfg
setup.py
test-requirements.txt
dummyserver/__init__.py
dummyserver/handlers.py
dummyserver/proxy.py
dummyserver/server.py
dummyserver/testcase.py
test/__init__.py
test/benchmark.py
test/test_collections.py
test/test_connectionpool.py
test/test_exceptions.py
test/test_fields.py
test/test_filepost.py
test/test_poolmanager.py
test/test_proxymanager.py
test/test_response.py
test/test_util.py
urllib3/__init__.py
urllib3/_collections.py
urllib3/connectionpool.py
urllib3/exceptions.py
urllib3/fields.py
urllib3/filepost.py
urllib3/poolmanager.py
urllib3/request.py
urllib3/response.py
urllib3/util.py
urllib3.egg-info/PKG-INFO
urllib3.egg-info/SOURCES.txt
urllib3.egg-info/dependency_links.txt
urllib3.egg-info/top_level.txt
urllib3/contrib/__init__.py
urllib3/contrib/ntlmpool.py
urllib3/contrib/pyopenssl.py
urllib3/packages/__init__.py
urllib3/packages/ordered_dict.py
urllib3/packages/six.py
urllib3/packages/ssl_match_hostname/__init__.py
urllib3-1.7.1/urllib3.egg-info/top_level.txt0000644000076500000240000000002412220605014021316 0ustar shazowstaff00000000000000urllib3
dummyserver