treq-15.1.0/0000755000076600000240000000000012634652643013033 5ustar glyphstaff00000000000000treq-15.1.0/docs/0000755000076600000240000000000012634652643013763 5ustar glyphstaff00000000000000treq-15.1.0/docs/_static/0000755000076600000240000000000012634652643015411 5ustar glyphstaff00000000000000treq-15.1.0/docs/_static/.keepme0000644000076600000240000000000012545372057016645 0ustar glyphstaff00000000000000treq-15.1.0/docs/api.rst0000644000076600000240000000412212545372057015264 0ustar glyphstaff00000000000000Making Requests =============== .. module:: treq .. autofunction:: request .. autofunction:: get .. autofunction:: head .. autofunction:: post .. autofunction:: put .. autofunction:: patch .. autofunction:: delete Accessing Content ================= .. autofunction:: collect .. autofunction:: content .. autofunction:: text_content .. autofunction:: json_content Responses ========= .. module:: treq.response .. class:: Response .. method:: collect(collector) Incrementally collect the body of the response. :param collector: A single-argument callable that will be called with chunks of body data as they are received. :returns: A `Deferred` that fires when the entire body has been received. .. method:: content() Read the entire body all at once. :returns: A `Deferred` that fires with a `bytes` object when the entire body has been received. .. method:: text(encoding='ISO-8859-1') Read the entire body all at once as text. :param encoding: An encoding for the body; if none is given, the encoding will be guessed, defaulting to this argument. :returns: A `Deferred` that fires with a `unicode` object when the entire body has been received. .. method:: json() Read the entire body all at once and decode it as JSON. :returns: A `Deferred` that fires with the result of `json.loads` on the body after it has been received. .. method:: history() Get a list of all responses (such as intermediate redirects) that ultimately ended in the current response. 
:returns: A `list` of :class:`treq.response.Response` objects. .. method:: cookies() :returns: A `CookieJar`. Inherited from twisted.web.iweb.IResponse. .. attribute:: version .. attribute:: code .. attribute:: phrase .. attribute:: headers .. attribute:: length .. attribute:: request .. attribute:: previousResponse .. method:: deliverBody(protocol) .. method:: setPreviousResponse(response) treq-15.1.0/docs/conf.py0000644000076600000240000001715512545372057015272 0ustar glyphstaff00000000000000# -*- coding: utf-8 -*- # # treq documentation build configuration file, created by # sphinx-quickstart on Mon Dec 10 22:32:11 2012. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('..')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.viewcode', 'sphinx.ext.autodoc'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. 
project = u'treq' copyright = u'2014, David Reid' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The full version, including alpha/beta/rc tags. release = open('../treq/_version').readline().strip() version = '.'.join(release.split('.')[:2]) # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. 
#html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. 
The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'treqdoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'treq.tex', u'treq Documentation', u'David Reid', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'treq', u'treq Documentation', [u'David Reid'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. 
List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'treq', u'treq Documentation', u'David Reid', 'treq', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' RTD_NEW_THEME = True treq-15.1.0/docs/examples/0000755000076600000240000000000012634652643015601 5ustar glyphstaff00000000000000treq-15.1.0/docs/examples/_utils.py0000644000076600000240000000032412545372057017450 0ustar glyphstaff00000000000000from __future__ import print_function import treq def print_response(response): print(response.code, response.phrase) print(response.headers) return treq.text_content(response).addCallback(print) treq-15.1.0/docs/examples/basic_auth.py0000644000076600000240000000043412545372057020255 0ustar glyphstaff00000000000000from twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get( 'http://httpbin.org/basic-auth/treq/treq', auth=('treq', 'treq') ) d.addCallback(print_response) return d react(main, []) treq-15.1.0/docs/examples/basic_get.py0000644000076600000240000000033612545372057020074 0ustar glyphstaff00000000000000from twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get('http://httpbin.org/get') d.addCallback(print_response) return d react(main, []) treq-15.1.0/docs/examples/basic_post.py0000644000076600000240000000054012545372057020277 0ustar glyphstaff00000000000000import json from twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.post('http://httpbin.org/post', json.dumps({"msg": "Hello!"}), headers={'Content-Type': ['application/json']}) 
d.addCallback(print_response) return d react(main, []) treq-15.1.0/docs/examples/disable_redirects.py0000644000076600000240000000037412545372057021625 0ustar glyphstaff00000000000000from twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get('http://httpbin.org/redirect/1', allow_redirects=False) d.addCallback(print_response) return d react(main, []) treq-15.1.0/docs/examples/download_file.py0000644000076600000240000000054512545372057020764 0ustar glyphstaff00000000000000from twisted.internet.task import react import treq def download_file(reactor, url, destination_filename): destination = file(destination_filename, 'w') d = treq.get(url) d.addCallback(treq.collect, destination.write) d.addBoth(lambda _: destination.close()) return d react(download_file, ['http://httpbin.org/get', 'download.txt']) treq-15.1.0/docs/examples/query_params.py0000644000076600000240000000226512545372057020667 0ustar glyphstaff00000000000000from twisted.internet.task import react from twisted.internet.defer import inlineCallbacks import treq @inlineCallbacks def main(reactor): print 'List of tuples' resp = yield treq.get('http://httpbin.org/get', params=[('foo', 'bar'), ('baz', 'bax')]) content = yield treq.content(resp) print content print 'Single value dictionary' resp = yield treq.get('http://httpbin.org/get', params={'foo': 'bar', 'baz': 'bax'}) content = yield treq.content(resp) print content print 'Multi value dictionary' resp = yield treq.get('http://httpbin.org/get', params={'foo': ['bar', 'baz', 'bax']}) content = yield treq.content(resp) print content print 'Mixed value dictionary' resp = yield treq.get('http://httpbin.org/get', params={'foo': ['bar', 'baz'], 'bax': 'quux'}) content = yield treq.content(resp) print content print 'Preserved query parameters' resp = yield treq.get('http://httpbin.org/get?foo=bar', params={'baz': 'bax'}) content = yield treq.content(resp) print content react(main, []) 
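The shapes accepted by ``params`` in ``query_params.py`` above mirror ordinary form urlencoding. As an illustration only (treq does its own encoding internally; this sketch just uses Python 3's ``urllib.parse.urlencode`` to show the query strings those shapes produce):

```python
from urllib.parse import urlencode

# List of (key, value) tuples: order is preserved exactly as given.
print(urlencode([('foo', 'bar'), ('baz', 'bax')]))  # foo=bar&baz=bax

# Multi-value dictionary: doseq=True expands each list into one
# key=value pair per element.
print(urlencode({'foo': ['bar', 'baz', 'bax']}, doseq=True))  # foo=bar&foo=baz&foo=bax
```

This is not the code path treq takes; it is only a stdlib stand-in for seeing what each ``params`` shape turns into on the wire.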
treq-15.1.0/docs/examples/redirects.py0000644000076600000240000000034512545372057020140 0ustar glyphstaff00000000000000from twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get('http://httpbin.org/redirect/1') d.addCallback(print_response) return d react(main, []) treq-15.1.0/docs/examples/response_history.py0000644000076600000240000000051712545372057021574 0ustar glyphstaff00000000000000from twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get('http://httpbin.org/redirect/1') def cb(response): print 'Response history:', response.history() return print_response(response) d.addCallback(cb) return d react(main, []) treq-15.1.0/docs/examples/using_cookies.py0000644000076600000240000000073012545372057021013 0ustar glyphstaff00000000000000from twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get('http://httpbin.org/cookies/set?hello=world') def _get_jar(resp): jar = resp.cookies() print 'The server set our hello cookie to: {0}'.format(jar['hello']) return treq.get('http://httpbin.org/cookies', cookies=jar) d.addCallback(_get_jar) d.addCallback(print_response) return d react(main, []) treq-15.1.0/docs/howto.rst0000644000076600000240000000713712606051431015650 0ustar glyphstaff00000000000000Handling Streaming Responses ---------------------------- In addition to `receiving responses `_ with ``IResponse.deliverBody``, treq provides a helper function :py:func:`treq.collect` which takes a ``response`` and a single-argument function which will be called with all new data available from the response. Much like ``IProtocol.dataReceived``, :py:func:`treq.collect` knows nothing about the framing of your data and will simply call your collector function with any data that is currently available. 
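Because :py:func:`treq.collect` delivers arbitrarily sized chunks, a collector that cares about framing must do its own buffering. Below is a minimal sketch (plain Python, no Twisted required; the ``LineCollector`` name is ours, not part of treq) of a collector callable that reassembles newline-delimited records from whatever chunks arrive:

```python
class LineCollector(object):
    """A collector callable for treq.collect that reassembles
    newline-delimited records from arbitrarily sized chunks."""

    def __init__(self):
        self._buffer = b""
        self.lines = []

    def __call__(self, data):
        # collect() may hand us any slice of the body, so buffer
        # until at least one complete b"\n"-terminated line exists.
        self._buffer += data
        parts = self._buffer.split(b"\n")
        self._buffer = parts.pop()  # trailing partial line, if any
        self.lines.extend(parts)


# Feeding it chunk by chunk, as collect() would:
collector = LineCollector()
for chunk in (b"he", b"llo\nwor", b"ld\n"):
    collector(chunk)
print(collector.lines)  # [b'hello', b'world']
```

With treq this would be wired up as ``treq.get(url).addCallback(treq.collect, collector)``, after which ``collector.lines`` holds every complete line received.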
Here is an example which simply passes a ``file`` object's write method to :py:func:`treq.collect` to save the response body to a file. .. literalinclude:: examples/download_file.py :linenos: :lines: 6-11 Full example: :download:`download_file.py ` Query Parameters ---------------- :py:func:`treq.request` supports a ``params`` keyword argument which will be urlencoded and added to the ``url`` argument in addition to any query parameters that may already exist. The ``params`` argument may be either a ``dict`` or a ``list`` of ``(key, value)`` tuples. If it is a ``dict`` then the values in the dict may either be a ``str`` value or a ``list`` of ``str`` values. .. literalinclude:: examples/query_params.py :linenos: :lines: 7-37 Full example: :download:`query_params.py ` Auth ---- HTTP Basic authentication as specified in `RFC 2617`_ is easily supported by passing an ``auth`` keyword argument to any of the request functions. The ``auth`` argument should be a tuple of the form ``('username', 'password')``. .. literalinclude:: examples/basic_auth.py :linenos: :lines: 7-13 Full example: :download:`basic_auth.py ` .. _RFC 2617: http://www.ietf.org/rfc/rfc2617.txt Redirects --------- treq handles redirects by default. The following will print a 200 OK response. .. literalinclude:: examples/redirects.py :linenos: :lines: 7-13 Full example: :download:`redirects.py ` You can disable redirects by passing `allow_redirects=False` to any of the request methods. .. literalinclude:: examples/disable_redirects.py :linenos: :lines: 7-13 Full example: :download:`disable_redirects.py ` You can even access the complete history of treq response objects by calling the `history()` method on the response. .. literalinclude:: examples/response_history.py :linenos: :lines: 7-15 Full example: :download:`response_history.py ` Cookies ------- Cookies can be set by passing a ``dict`` or ``cookielib.CookieJar`` instance via the ``cookies`` keyword argument. 
Later cookies set by the server can be retrieved using the :py:func:`treq.cookies` function. The object returned by :py:func:`treq.cookies` supports the same key/value access as `requests cookies `_. .. literalinclude:: examples/using_cookies.py :linenos: :lines: 7-20 Full example: :download:`using_cookies.py ` Agent Customization ------------------- treq creates its own `twisted.web.client.Agent `_ with reasonable defaults, but you may want to provide your own custom agent. A custom agent can be passed to the various treq request methods using the ``agent`` keyword argument. .. code-block:: python custom_agent = Agent(reactor, connectTimeout=42) treq.get(url, agent=custom_agent) treq-15.1.0/docs/index.rst0000644000076600000240000000774212634616707015627 0ustar glyphstaff00000000000000treq: High-level Twisted HTTP Client API ======================================== treq depends on Twisted 12.3.0 or later and optionally pyOpenSSL. Python 3 support requires at least Twisted 15.5. Why? ---- `requests`_ by `Kenneth Reitz`_ is a wonderful library. I want the same ease of use when writing Twisted applications. treq is, of course, not a perfect clone of `requests`. I have tried to stay true to the do-what-I-mean spirit of the `requests` API, and also kept the API familiar to users of `Twisted`_ and ``twisted.web.client.Agent``, on which treq is based. .. _requests: http://python-requests.org/ .. _Kenneth Reitz: https://www.gittip.com/kennethreitz/ .. _Twisted: http://twistedmatrix.com/ Quick Start ----------- Installation:: pip install treq GET +++ .. literalinclude:: examples/basic_get.py :linenos: :lines: 7-11 Full example: :download:`basic_get.py ` POST ++++ .. literalinclude:: examples/basic_post.py :linenos: :lines: 9-14 Full example: :download:`basic_post.py ` Why not 100% requests-alike? 
---------------------------- Initially, when I started working on treq, I thought the API should look exactly like `requests`_, except that anything involving the network would return a ``Deferred``. Over time, while attempting to mimic the `requests`_ API, it became clear that not enough code could be shared between `requests`_ and treq for it to be worth the effort to translate many of the usage patterns from `requests`_. With the current version of treq I have tried to keep the API simple, yet remain familiar to users of Twisted and its lower-level HTTP libraries. Feature Parity w/ Requests -------------------------- Even though mimicking the `requests`_ API is not a goal, supporting most of its features is. Here is a list of `requests`_ features and their status in treq. +----------------------------------+----------+----------+ |                                  | requests | treq     | +----------------------------------+----------+----------+ | International Domains and URLs   | yes      | no       | +----------------------------------+----------+----------+ | Keep-Alive & Connection Pooling  | yes      | yes      | +----------------------------------+----------+----------+ | Sessions with Cookie Persistence | yes      | yes      | +----------------------------------+----------+----------+ | Browser-style SSL Verification   | yes      | yes      | +----------------------------------+----------+----------+ | Basic Authentication             | yes      | yes      | +----------------------------------+----------+----------+ | Digest Authentication            | yes      | no       | +----------------------------------+----------+----------+ | Elegant Key/Value Cookies        | yes      | yes      | +----------------------------------+----------+----------+ | Automatic Decompression          | yes      | yes      | +----------------------------------+----------+----------+ | Unicode Response Bodies          | yes      | yes      | +----------------------------------+----------+----------+ | Multipart File Uploads           | yes      | yes      | +----------------------------------+----------+----------+ | Connection Timeouts              | yes      | yes      | 
+----------------------------------+----------+----------+ | .netrc support | yes | no | +----------------------------------+----------+----------+ | Python 2.6 | yes | no | +----------------------------------+----------+----------+ | Python 2.7 | yes | yes | +----------------------------------+----------+----------+ | Python 3.x | yes | yes | +----------------------------------+----------+----------+ Howto ----- .. toctree:: :maxdepth: 3 howto API Documentation ----------------- .. toctree:: :maxdepth: 2 api Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` treq-15.1.0/docs/make.bat0000644000076600000240000001174412545372057015376 0ustar glyphstaff00000000000000@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . set I18NSPHINXOPTS=%SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. 
doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\treq.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\treq.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. 
echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) :end treq-15.1.0/docs/Makefile0000644000076600000240000001266412545372057015433 0ustar glyphstaff00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. 
PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 
pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/treq.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/treq.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/treq" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/treq" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 
texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." treq-15.1.0/LICENSE0000644000076600000240000000207512545372057014043 0ustar glyphstaff00000000000000This is the MIT license. Copyright (c) 2012-2014 David Reid Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. treq-15.1.0/MANIFEST.in0000644000076600000240000000026312606051431014555 0ustar glyphstaff00000000000000include treq/_version README.rst LICENSE tox.ini tox2travis.py requirements-dev.txt recursive-include docs * prune docs/_build exclude .travis.yml global-exclude .DS_Store *.pyc treq-15.1.0/PKG-INFO0000644000076600000240000000571312634652643014136 0ustar glyphstaff00000000000000Metadata-Version: 1.1 Name: treq Version: 15.1.0 Summary: A requests-like API built on top of twisted.web's Agent Home-page: http://github.com/twisted/treq Author: Amber Brown Author-email: hawkowl@twistedmatrix.com License: MIT/X Description: treq ==== |pypi|_ |build|_ |coverage|_ ``treq`` is an HTTP library inspired by `requests `_ but written on top of `Twisted `_'s `Agents `_. It provides a simple, higher level API for making HTTP requests when using Twisted. .. code-block:: python >>> from treq import get >>> def done(response): ... print response.code ... reactor.stop() >>> get("http://www.github.com").addCallback(done) >>> from twisted.internet import reactor >>> reactor.run() 200 For more info `read the docs `_. Contribute ========== ``treq`` is hosted on `GitHub `_. Feel free to fork and send contributions over. Developing ========== Install dependencies: :: pip install -r requirements-dev.txt Optionally install PyOpenSSL: :: pip install PyOpenSSL Run Tests (unit & integration): :: trial treq Lint: :: pep8 treq pyflakes treq Build docs: :: cd docs; make html .. 
|build| image:: https://secure.travis-ci.org/twisted/treq.svg?branch=master .. _build: http://travis-ci.org/twisted/treq .. |coverage| image:: https://codecov.io/github/twisted/treq/coverage.svg?branch=master .. _coverage: https://codecov.io/github/twisted/treq .. |pypi| image:: http://img.shields.io/pypi/v/treq.svg .. _pypi: https://pypi.python.org/pypi/treq Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Framework :: Twisted Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy treq-15.1.0/README.rst0000644000076600000240000000273512606051431014514 0ustar glyphstaff00000000000000treq ==== |pypi|_ |build|_ |coverage|_ ``treq`` is an HTTP library inspired by `requests `_ but written on top of `Twisted `_'s `Agents `_. It provides a simple, higher level API for making HTTP requests when using Twisted. .. code-block:: python >>> from treq import get >>> def done(response): ... print response.code ... reactor.stop() >>> get("http://www.github.com").addCallback(done) >>> from twisted.internet import reactor >>> reactor.run() 200 For more info `read the docs `_. Contribute ========== ``treq`` is hosted on `GitHub `_. Feel free to fork and send contributions over. Developing ========== Install dependencies: :: pip install -r requirements-dev.txt Optionally install PyOpenSSL: :: pip install PyOpenSSL Run Tests (unit & integration): :: trial treq Lint: :: pep8 treq pyflakes treq Build docs: :: cd docs; make html .. 
|build| image:: https://secure.travis-ci.org/twisted/treq.svg?branch=master .. _build: http://travis-ci.org/twisted/treq .. |coverage| image:: https://codecov.io/github/twisted/treq/coverage.svg?branch=master .. _coverage: https://codecov.io/github/twisted/treq .. |pypi| image:: http://img.shields.io/pypi/v/treq.svg .. _pypi: https://pypi.python.org/pypi/treq treq-15.1.0/requirements-dev.txt0000644000076600000240000000012412606051431017053 0ustar glyphstaff00000000000000-e . pyflakes pep8 sphinx mock==1.0.1 # Can be removed once Python 2.6 is dropped. treq-15.1.0/setup.cfg0000644000076600000240000000052112634652643014652 0ustar glyphstaff00000000000000[bdist_wheel] universal = 1 [metadata] requires-dist = pyOpenSSL >= 0.15.1; python_version > '3.0' pyOpenSSL >= 0.13; python_version < '3.0' requests >= 2.1.0 service_identity >=14.0.0 Twisted >= 15.5.0; python_version > '3.0' Twisted >= 14.0.2; python_version < '3.0' [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 treq-15.1.0/setup.py0000644000076600000240000000317112634616707014550 0ustar glyphstaff00000000000000from setuptools import find_packages, setup import os.path import sys with open(os.path.join(os.path.dirname(__file__), "treq", "_version")) as ver: __version__ = ver.readline().strip() classifiers = [ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Framework :: Twisted", "Programming Language :: Python", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3.3", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", ] with open('README.rst') as f: readme = f.read() PY3 = (sys.version_info[0] >= 3) install_requires = [ "requests >= 2.1.0", "service_identity >= 14.0.0", "six" ] if PY3: install_requires.append("Twisted 
>= 15.5.0") install_requires.append("pyOpenSSL >= 0.15.1") else: install_requires.append("Twisted >= 14.0.2") install_requires.append("pyOpenSSL >= 0.13") setup( name="treq", version=__version__, packages=find_packages(), install_requires=install_requires, package_data={"treq": ["_version"]}, author="David Reid", author_email="dreid@dreid.org", maintainer="Amber Brown", maintainer_email="hawkowl@twistedmatrix.com", classifiers=classifiers, description="A requests-like API built on top of twisted.web's Agent", license="MIT/X", url="http://github.com/twisted/treq", long_description=readme ) treq-15.1.0/tox.ini0000644000076600000240000000220412634652457014347 0ustar glyphstaff00000000000000[tox] envlist = {pypy,py27}-twisted_{14.0,15.1,15.3,15.5}-pyopenssl_{0.13.1,0.14,0.15.1}, {py33,py34,py35}-twisted_{15.5}-pyopenssl_{0.15.1}, {pypy,py27,py33,py34,py35}-twisted_trunk-pyopenssl_trunk, pypi-readme, check-manifest, flake8 [testenv] deps = coverage mock ; Can't use Cryptography 1.0 on older PyPys pypy: cryptography<=0.9 twisted_14.0: twisted==14.0 twisted_15.1: twisted==15.1 twisted_15.3: twisted==15.3 twisted_15.5: twisted==15.5 twisted_trunk: https://github.com/twisted/twisted/archive/trunk.zip pyopenssl_0.13.1: pyopenssl==0.13.1 pyopenssl_0.14: pyopenssl==0.14 pyopenssl_0.15.1: pyopenssl==0.15.1 pyopenssl_trunk: https://github.com/pyca/pyopenssl/archive/master.zip passenv = TERM # ensure colors commands = coverage run --branch --source=treq {envbindir}/trial {posargs:treq} coverage report -m [testenv:flake8] skip_install = True deps = flake8 commands = flake8 treq/ [testenv:pypi-readme] deps = readme commands = python setup.py check -r -s [testenv:check-manifest] deps = check-manifest commands = check-manifest treq-15.1.0/tox2travis.py0000755000076600000240000000211012634616707015530 0ustar glyphstaff00000000000000#!/usr/bin/env python from __future__ import absolute_import, print_function import sys travis_template = """\ # AUTO-GENERATED BY tox2travis.py -- DO NOT 
EDIT THIS FILE BY HAND! sudo: false language: python python: 2.7 cache: false env: {envs} install: - pip install tox codecov script: - tox -e $TOX_ENV after_success: - codecov notifications: email: false # Don't fail on trunk versions. matrix: allow_failures: - env: TOX_ENV=pypy-twisted_trunk-pyopenssl_trunk - env: TOX_ENV=py27-twisted_trunk-pyopenssl_trunk - env: TOX_ENV=py33-twisted_trunk-pyopenssl_trunk - env: TOX_ENV=py34-twisted_trunk-pyopenssl_trunk - env: TOX_ENV=py35-twisted_trunk-pyopenssl_trunk branches: only: - master # AUTO-GENERATED BY tox2travis.py -- DO NOT EDIT THIS FILE BY HAND!""" if __name__ == "__main__": line = sys.stdin.readline() tox_envs = [] while line: tox_envs.append(line) line = sys.stdin.readline() print(travis_template.format( envs=' '.join( '- TOX_ENV={0}'.format(env) for env in tox_envs))) treq-15.1.0/treq/0000755000076600000240000000000012634652643014006 5ustar glyphstaff00000000000000treq-15.1.0/treq/__init__.py0000644000076600000240000000067312634616707016126 0ustar glyphstaff00000000000000from __future__ import absolute_import, division, print_function from pkg_resources import resource_string from treq.api import head, get, post, put, patch, delete, request from treq.content import collect, content, text_content, json_content __all__ = ['head', 'get', 'post', 'put', 'patch', 'delete', 'request', 'collect', 'content', 'text_content', 'json_content'] __version__ = resource_string(__name__, "_version").strip() treq-15.1.0/treq/_utils.py0000644000076600000240000000165412634616707015666 0ustar glyphstaff00000000000000""" Strictly internal utilities. """ from __future__ import absolute_import, division, print_function from twisted.web.client import HTTPConnectionPool def default_reactor(reactor): """ Return the specified reactor or the default. 
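Editor's note: the reactor/pool defaulting implemented in ``treq/_utils.py`` below follows a "use the argument, else a lazily created module-level singleton" pattern. A minimal standalone sketch of that pattern (not part of the archive; ``FakePool`` is an illustrative stand-in for ``twisted.web.client.HTTPConnectionPool``):

```python
class FakePool(object):
    """Illustrative stand-in for twisted.web.client.HTTPConnectionPool."""

_global_pool = [None]  # one-element list used as a mutable module-level slot

def default_pool(pool=None, persistent=True):
    # Prefer an explicitly supplied pool; non-persistent callers get a
    # fresh throwaway pool; everyone else shares one lazily created pool.
    if pool is not None:
        return pool
    if not persistent:
        return FakePool()
    if _global_pool[0] is None:
        _global_pool[0] = FakePool()
    return _global_pool[0]
```

Storing the singleton in a one-element list (rather than rebinding a bare module global) lets the helper mutate the slot without a ``global`` statement.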
""" if reactor is None: from twisted.internet import reactor return reactor _global_pool = [None] def get_global_pool(): return _global_pool[0] def set_global_pool(pool): _global_pool[0] = pool def default_pool(reactor, pool, persistent): """ Return the specified pool or a a pool with the specified reactor and persistence. """ reactor = default_reactor(reactor) if pool is not None: return pool if persistent is False: return HTTPConnectionPool(reactor, persistent=persistent) if get_global_pool() is None: set_global_pool(HTTPConnectionPool(reactor, persistent=True)) return get_global_pool() treq-15.1.0/treq/_version0000644000076600000240000000000712634652457015555 0ustar glyphstaff0000000000000015.1.0 treq-15.1.0/treq/api.py0000644000076600000240000000577312634616707015146 0ustar glyphstaff00000000000000from __future__ import absolute_import, division, print_function from twisted.web.client import Agent from treq.client import HTTPClient from treq._utils import default_pool, default_reactor def head(url, **kwargs): """ Make a ``HEAD`` request. See :py:func:`treq.request` """ return _client(**kwargs).head(url, **kwargs) def get(url, headers=None, **kwargs): """ Make a ``GET`` request. See :py:func:`treq.request` """ return _client(**kwargs).get(url, headers=headers, **kwargs) def post(url, data=None, **kwargs): """ Make a ``POST`` request. See :py:func:`treq.request` """ return _client(**kwargs).post(url, data=data, **kwargs) def put(url, data=None, **kwargs): """ Make a ``PUT`` request. See :py:func:`treq.request` """ return _client(**kwargs).put(url, data=data, **kwargs) def patch(url, data=None, **kwargs): """ Make a ``PATCH`` request. See :py:func:`treq.request` """ return _client(**kwargs).patch(url, data=data, **kwargs) def delete(url, **kwargs): """ Make a ``DELETE`` request. See :py:func:`treq.request` """ return _client(**kwargs).delete(url, **kwargs) def request(method, url, **kwargs): """ Make an HTTP request. :param str method: HTTP method. 
Example: ``'GET'``, ``'HEAD'``, ``'PUT'``, ``'POST'``. :param str url: http or https URL, which may include query arguments. :param headers: Optional HTTP Headers to send with this request. :type headers: Headers or None :param params: Optional parameters to be appended as the query string to the URL; any query string parameters already in the URL will be preserved. :type params: dict w/ str or list/tuple of str values, list of 2-tuples, or None. :param data: Optional request body. :type data: str, file-like, IBodyProducer, or None :param reactor: Optional twisted reactor. :param bool persistent: Use persistent HTTP connections. Default: ``True`` :param bool allow_redirects: Follow HTTP redirects. Default: ``True`` :param auth: HTTP Basic Authentication information. :type auth: tuple of ``('username', 'password')``. :param cookies: Cookies to send with this request. The HTTP kind, not the tasty kind. :type cookies: ``dict`` or ``cookielib.CookieJar`` :param int timeout: Request timeout in seconds. If a response is not received within this timeframe, the connection is aborted with ``CancelledError``. :rtype: Deferred that fires with an IResponse provider.
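Editor's note: the ``params`` behaviour documented above (new parameters are appended while any query string already in the URL is preserved) can be sketched with the standard library alone. This is a standalone illustration, not treq code; the helper name ``merge_params`` is hypothetical:

```python
from urllib.parse import urlencode, urlparse, urlunparse

def merge_params(url, params):
    # Append the encoded params to any query string already present in
    # the URL, mirroring the "existing parameters are preserved"
    # behaviour documented for treq.request (simplified sketch).
    parsed = urlparse(url)
    qs = [parsed.query] if parsed.query else []
    qs.append(urlencode(params, doseq=True))
    return urlunparse(parsed._replace(query='&'.join(qs)))

print(merge_params('http://example.com/?a=1', {'b': ['2', '3']}))
# -> http://example.com/?a=1&b=2&b=3
```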
""" return _client(**kwargs).request(method, url, **kwargs) # # Private API # def _client(*args, **kwargs): agent = kwargs.get('agent') if agent is None: reactor = default_reactor(kwargs.get('reactor')) pool = default_pool(reactor, kwargs.get('pool'), kwargs.get('persistent')) agent = Agent(reactor, pool=pool) return HTTPClient(agent) treq-15.1.0/treq/auth.py0000644000076600000240000000241412634616707015323 0ustar glyphstaff00000000000000from __future__ import absolute_import, division, print_function from twisted.web.http_headers import Headers import base64 class UnknownAuthConfig(Exception): def __init__(self, config): super(Exception, self).__init__( '{0!r} not of a known type.'.format(config)) class _RequestHeaderSettingAgent(object): def __init__(self, agent, request_headers): self._agent = agent self._request_headers = request_headers def request(self, method, uri, headers=None, bodyProducer=None): if headers is None: headers = self._request_headers else: for header, values in self._request_headers.getAllRawHeaders(): headers.setRawHeaders(header, values) return self._agent.request( method, uri, headers=headers, bodyProducer=bodyProducer) def add_basic_auth(agent, username, password): creds = base64.b64encode( '{0}:{1}'.format(username, password).encode('ascii')) return _RequestHeaderSettingAgent( agent, Headers({b'Authorization': [b'Basic ' + creds]})) def add_auth(agent, auth_config): if isinstance(auth_config, tuple): return add_basic_auth(agent, auth_config[0], auth_config[1]) raise UnknownAuthConfig(auth_config) treq-15.1.0/treq/client.py0000644000076600000240000002377312634616707015653 0ustar glyphstaff00000000000000from __future__ import absolute_import, division, print_function import mimetypes import uuid from io import BytesIO from twisted.internet.interfaces import IProtocol from twisted.internet.defer import Deferred from twisted.python.components import proxyForInterface from twisted.python.compat import _PY3, unicode from 
twisted.python.filepath import FilePath from twisted.web.http import urlparse if _PY3: from urllib.parse import urlunparse, urlencode as _urlencode def urlencode(query, doseq): return _urlencode(query, doseq).encode('ascii') else: from urlparse import urlunparse from urllib import urlencode from twisted.web.http_headers import Headers from twisted.web.iweb import IBodyProducer, IResponse from twisted.web.client import ( FileBodyProducer, RedirectAgent, ContentDecoderAgent, GzipDecoder, CookieAgent ) from twisted.python.components import registerAdapter from treq._utils import default_reactor from treq.auth import add_auth from treq import multipart from treq.response import _Response if _PY3: from http.cookiejar import CookieJar else: from cookielib import CookieJar from requests.cookies import cookiejar_from_dict, merge_cookies class _BodyBufferingProtocol(proxyForInterface(IProtocol)): def __init__(self, original, buffer, finished): self.original = original self.buffer = buffer self.finished = finished def dataReceived(self, data): self.buffer.append(data) self.original.dataReceived(data) def connectionLost(self, reason): self.original.connectionLost(reason) self.finished.errback(reason) class _BufferedResponse(proxyForInterface(IResponse)): def __init__(self, original): self.original = original self._buffer = [] self._waiters = [] self._waiting = None self._finished = False self._reason = None def _deliverWaiting(self, reason): self._reason = reason self._finished = True for waiter in self._waiters: for segment in self._buffer: waiter.dataReceived(segment) waiter.connectionLost(reason) def deliverBody(self, protocol): if self._waiting is None and not self._finished: self._waiting = Deferred() self._waiting.addBoth(self._deliverWaiting) self.original.deliverBody( _BodyBufferingProtocol( protocol, self._buffer, self._waiting ) ) elif self._finished: for segment in self._buffer: protocol.dataReceived(segment) protocol.connectionLost(self._reason) else: 
self._waiters.append(protocol) class HTTPClient(object): def __init__(self, agent, cookiejar=None, data_to_body_producer=IBodyProducer): self._agent = agent self._cookiejar = cookiejar or cookiejar_from_dict({}) self._data_to_body_producer = data_to_body_producer def get(self, url, **kwargs): return self.request('GET', url, **kwargs) def put(self, url, data=None, **kwargs): return self.request('PUT', url, data=data, **kwargs) def patch(self, url, data=None, **kwargs): return self.request('PATCH', url, data=data, **kwargs) def post(self, url, data=None, **kwargs): return self.request('POST', url, data=data, **kwargs) def head(self, url, **kwargs): return self.request('HEAD', url, **kwargs) def delete(self, url, **kwargs): return self.request('DELETE', url, **kwargs) def request(self, method, url, **kwargs): method = method.encode('ascii').upper() # Join parameters provided in the URL # and the ones passed as argument. params = kwargs.get('params') if params: url = _combine_query_params(url, params) if isinstance(url, unicode): url = url.encode('ascii') # Convert headers dictionary to # twisted raw headers format. headers = kwargs.get('headers') if headers: if isinstance(headers, dict): h = Headers({}) for k, v in headers.items(): if isinstance(k, unicode): k = k.encode('ascii') if isinstance(v, bytes): h.addRawHeader(k, v) elif isinstance(v, unicode): h.addRawHeader(k, v.encode('ascii')) elif isinstance(v, list): cleanHeaders = [] for item in v: if isinstance(item, unicode): cleanHeaders.append(item.encode('ascii')) else: cleanHeaders.append(item) h.setRawHeaders(k, cleanHeaders) else: h.setRawHeaders(k, v) headers = h else: headers = Headers({}) # Here we choose a right producer # based on the parameters passed in. bodyProducer = None data = kwargs.get('data') files = kwargs.get('files') if files: # If the files keyword is present we will issue a # multipart/form-data request as it suits better for cases # with files and/or large objects. 
files = list(_convert_files(files)) boundary = str(uuid.uuid4()).encode('ascii') headers.setRawHeaders( b'content-type', [ b'multipart/form-data; boundary=' + boundary]) if data: data = _convert_params(data) else: data = [] bodyProducer = multipart.MultiPartProducer( data + files, boundary=boundary) elif data: # Otherwise stick to x-www-form-urlencoded format # as it's generally faster for smaller requests. if isinstance(data, (dict, list, tuple)): headers.setRawHeaders( b'content-type', [b'application/x-www-form-urlencoded']) data = urlencode(data, doseq=True) bodyProducer = self._data_to_body_producer(data) cookies = kwargs.get('cookies', {}) if not isinstance(cookies, CookieJar): cookies = cookiejar_from_dict(cookies) cookies = merge_cookies(self._cookiejar, cookies) wrapped_agent = CookieAgent(self._agent, cookies) if kwargs.get('allow_redirects', True): wrapped_agent = RedirectAgent(wrapped_agent) wrapped_agent = ContentDecoderAgent(wrapped_agent, [(b'gzip', GzipDecoder)]) auth = kwargs.get('auth') if auth: wrapped_agent = add_auth(wrapped_agent, auth) d = wrapped_agent.request( method, url, headers=headers, bodyProducer=bodyProducer) timeout = kwargs.get('timeout') if timeout: delayedCall = default_reactor(kwargs.get('reactor')).callLater( timeout, d.cancel) def gotResult(result): if delayedCall.active(): delayedCall.cancel() return result d.addBoth(gotResult) if not kwargs.get('unbuffered', False): d.addCallback(_BufferedResponse) return d.addCallback(_Response, cookies) def _convert_params(params): if hasattr(params, "iteritems"): return list(sorted(params.iteritems())) elif hasattr(params, "items"): return list(sorted(params.items())) elif isinstance(params, (tuple, list)): return list(params) else: raise ValueError("Unsupported format") def _convert_files(files): """Files can be passed in a variety of formats: * {'file': open("bla.f")} * {'file': (name, open("bla.f"))} * {'file': (name, content-type, open("bla.f"))} * Anything that has iteritems method, 
e.g. MultiDict: MultiDict([(name, open()), (name, open())] Our goal is to standardize it to unified form of: * [(param, (file name, content type, producer))] """ if hasattr(files, "iteritems"): files = files.iteritems() elif hasattr(files, "items"): files = files.items() for param, val in files: file_name, content_type, fobj = (None, None, None) if isinstance(val, tuple): if len(val) == 2: file_name, fobj = val elif len(val) == 3: file_name, content_type, fobj = val else: fobj = val if hasattr(fobj, "name"): file_name = FilePath(fobj.name).basename() if not content_type: content_type = _guess_content_type(file_name) yield (param, (file_name, content_type, IBodyProducer(fobj))) def _combine_query_params(url, params): parsed_url = urlparse(url.encode('ascii')) qs = [] if parsed_url.query: qs.extend([parsed_url.query, b'&']) qs.append(urlencode(params, doseq=True)) return urlunparse((parsed_url[0], parsed_url[1], parsed_url[2], parsed_url[3], b''.join(qs), parsed_url[5])) def _from_bytes(orig_bytes): return FileBodyProducer(BytesIO(orig_bytes)) def _from_file(orig_file): return FileBodyProducer(orig_file) def _guess_content_type(filename): if filename: guessed = mimetypes.guess_type(filename)[0] else: guessed = None return guessed or 'application/octet-stream' registerAdapter(_from_bytes, bytes, IBodyProducer) registerAdapter(_from_file, BytesIO, IBodyProducer) if not _PY3: from StringIO import StringIO registerAdapter(_from_file, StringIO, IBodyProducer) registerAdapter(_from_file, file, IBodyProducer) else: import io # file()/open() equiv on Py3 registerAdapter(_from_file, io.BufferedReader, IBodyProducer) treq-15.1.0/treq/content.py0000644000076600000240000000731312634616707016037 0ustar glyphstaff00000000000000from __future__ import absolute_import, division, print_function import cgi import json from twisted.python.compat import _PY3 from twisted.internet.defer import Deferred, succeed from twisted.internet.protocol import Protocol from twisted.web.client import 
ResponseDone from twisted.web.http import PotentialDataLoss from twisted.web.http_headers import Headers def _encoding_from_headers(headers): content_types = headers.getRawHeaders('content-type') if content_types is None: return None # This seems to be the choice browsers make when encountering multiple # content-type headers. content_type, params = cgi.parse_header(content_types[-1]) if 'charset' in params: return params.get('charset').strip("'\"") class _BodyCollector(Protocol): def __init__(self, finished, collector): self.finished = finished self.collector = collector def dataReceived(self, data): self.collector(data) def connectionLost(self, reason): if reason.check(ResponseDone): self.finished.callback(None) elif reason.check(PotentialDataLoss): # http://twistedmatrix.com/trac/ticket/4840 self.finished.callback(None) else: self.finished.errback(reason) def collect(response, collector): """ Incrementally collect the body of the response. This function may only be called **once** for a given response. :param IResponse response: The HTTP response to collect the body from. :param collector: A callable to be called each time data is available from the response body. :type collector: single argument callable :rtype: Deferred that fires with None when the entire body has been read. """ if response.length == 0: return succeed(None) d = Deferred() response.deliverBody(_BodyCollector(d, collector)) return d def content(response): """ Read the contents of an HTTP response. This function may be called multiple times for a response, it uses a ``WeakKeyDictionary`` to cache the contents of the response. :param IResponse response: The HTTP Response to get the contents of. :rtype: Deferred that fires with the content as a str. """ _content = [] d = collect(response, _content.append) d.addCallback(lambda _: b''.join(_content)) return d def json_content(response): """ Read the contents of an HTTP response and attempt to decode it as JSON. 
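Editor's note: the charset detection used by ``text_content`` above amounts to pulling the ``charset`` parameter out of the last ``Content-Type`` header. A standalone sketch of that parsing (hypothetical helper name; treq itself delegates to ``cgi.parse_header`` and ignores some quoting edge cases this sketch also skips):

```python
def encoding_from_content_type(value):
    # Extract the charset parameter from a Content-Type header value,
    # stripping surrounding quotes, and return None when absent
    # (mirrors treq's _encoding_from_headers in simplified form).
    for part in value.split(';')[1:]:
        if '=' in part:
            key, _, val = part.partition('=')
            if key.strip().lower() == 'charset':
                return val.strip().strip('\'"')
    return None

print(encoding_from_content_type('text/html; charset="UTF-8"'))  # -> UTF-8
```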
This function relies on :py:func:`content` and so may be called more than once for a given response. :param IResponse response: The HTTP Response to get the contents of. :rtype: Deferred that fires with the decoded JSON. """ if _PY3: d = text_content(response) else: d = content(response) d.addCallback(json.loads) return d def text_content(response, encoding='ISO-8859-1'): """ Read the contents of an HTTP response and decode it with an appropriate charset, which may be guessed from the ``Content-Type`` header. :param IResponse response: The HTTP Response to get the contents of. :param str encoding: A valid charset, such as ``UTF-8`` or ``ISO-8859-1``. :rtype: Deferred that fires with a unicode string. """ def _decode_content(c): if _PY3: headers = Headers({ key.decode('ascii'): [y.decode('ascii') for y in val] for key, val in response.headers.getAllRawHeaders()}) else: headers = response.headers e = _encoding_from_headers(headers) if e is not None: return c.decode(e) return c.decode(encoding) d = content(response) d.addCallback(_decode_content) return d treq-15.1.0/treq/multipart.py0000644000076600000240000003053412634616707016407 0ustar glyphstaff00000000000000# Copyright (c) Twisted Matrix Laboratories. # See LICENSE for details. from __future__ import absolute_import, division, print_function from uuid import uuid4 from io import BytesIO from contextlib import closing from twisted.internet import defer, task from twisted.python.compat import unicode, _PY3 from twisted.web.iweb import UNKNOWN_LENGTH, IBodyProducer from zope.interface import implementer if _PY3: long = int CRLF = b"\r\n" @implementer(IBodyProducer) class MultiPartProducer(object): """ L{MultiPartProducer} takes parameters for an HTTP request and produces bytes in the multipart/form-data format defined by U{Multipart} and U{Mime format}. The encoded request is produced incrementally and the bytes are written to a consumer. Fields should have the form: [(parameter name, value), ...]
Accepted values: * Unicode strings (in this case the parameter will be encoded with utf-8) * Tuples with (file name, content-type, L{IBodyProducer} objects) Since MultiPartProducer can accept L{IBodyProducer}-like objects, and these objects sometimes cannot be read from in an event-driven manner (e.g. when a L{FileBodyProducer} is passed in), L{MultiPartProducer} uses a L{Cooperator} instance to schedule reads from the underlying producers. This process is also paused and resumed based on notifications from the L{IConsumer} provider being written to. @ivar _fields: Sorted parameters, where all strings are enforced to be unicode and file objects are stacked on the bottom (to produce a human readable form-data request) @ivar _cooperate: A method like L{Cooperator.cooperate} which is used to schedule all reads. @ivar boundary: The generated boundary used in form-data encoding @type boundary: L{bytes} """ def __init__(self, fields, boundary=None, cooperator=task): self._fields = list(_sorted_by_type(_converted(fields))) self._currentProducer = None self._cooperate = cooperator.cooperate self.boundary = boundary or uuid4().hex if isinstance(self.boundary, unicode): self.boundary = self.boundary.encode('ascii') self.length = self._calculateLength() def startProducing(self, consumer): """ Start a cooperative task which will read bytes from the input file and write them to C{consumer}. Return a L{Deferred} which fires after all bytes have been written. @param consumer: Any L{IConsumer} provider """ self._task = self._cooperate(self._writeLoop(consumer)) d = self._task.whenDone() def maybeStopped(reason): reason.trap(task.TaskStopped) return defer.Deferred() d.addCallbacks(lambda ignored: None, maybeStopped) return d def stopProducing(self): """ Permanently stop writing bytes from the file to the consumer by stopping the underlying L{CooperativeTask}.
""" if self._currentProducer: self._currentProducer.stopProducing() self._task.stop() def pauseProducing(self): """ Temporarily suspend copying bytes from the input file to the consumer by pausing the L{CooperativeTask} which drives that activity. """ if self._currentProducer: # Having a current producer means that we are in # the paused state because we've returned # the deferred of the current producer to the # the cooperator. So this request # for pausing us is actually a request to pause # our underlying current producer. self._currentProducer.pauseProducing() else: self._task.pause() def resumeProducing(self): """ Undo the effects of a previous C{pauseProducing} and resume copying bytes to the consumer by resuming the L{CooperativeTask} which drives the write activity. """ if self._currentProducer: self._currentProducer.resumeProducing() else: self._task.resume() def _calculateLength(self): """ Determine how many bytes the overall form post would consume. The easiest way is to calculate is to generate of C{fObj} (assuming it is not modified from this point on). If the determination cannot be made, return C{UNKNOWN_LENGTH}. """ consumer = _LengthConsumer() for i in list(self._writeLoop(consumer)): pass return consumer.length def _getBoundary(self, final=False): """ Returns a boundary line, either final (the one that ends the form data request or a regular, the one that separates the boundaries) --this-is-my-boundary """ f = b"--" if final else b"" return b"--" + self.boundary + f def _writeLoop(self, consumer): """ Return an iterator which generates the multipart/form-data request including the encoded objects and writes them to the consumer for each time it is iterated. 
""" for index, (name, value) in enumerate(self._fields): # We don't write the CRLF of the first boundary: # HTTP request headers are already separated with CRLF # from the request body, another newline is possible # and should be considered as an empty preamble per rfc2046, # but is generally confusing, so we omit it when generating # the request. We don't write Content-Type: multipart/form-data # header here as well as it's defined in the context of the HTTP # request headers, not the producer, so we gust generate # the body. # It's also important to note that the boundary in the message # is defined not only by "--boundary-value" but # but with CRLF characers before it and after the line. # This is very important. # proper boundary is "CRLF--boundary-valueCRLF" consumer.write( (CRLF if index != 0 else b"") + self._getBoundary() + CRLF) yield self._writeField(name, value, consumer) consumer.write(CRLF + self._getBoundary(final=True) + CRLF) def _writeField(self, name, value, consumer): if isinstance(value, unicode): self._writeString(name, value, consumer) elif isinstance(value, tuple): filename, content_type, producer = value return self._writeFile( name, filename, content_type, producer, consumer) def _writeString(self, name, value, consumer): cdisp = _Header(b"Content-Disposition", b"form-data") cdisp.add_param(b"name", name) consumer.write(bytes(cdisp) + CRLF + CRLF) encoded = value.encode("utf-8") consumer.write(encoded) self._currentProducer = None def _writeFile(self, name, filename, content_type, producer, consumer): cdisp = _Header(b"Content-Disposition", b"form-data") cdisp.add_param(b"name", name) if filename: cdisp.add_param(b"filename", filename) consumer.write(bytes(cdisp) + CRLF) consumer.write(bytes(_Header(b"Content-Type", content_type)) + CRLF) if producer.length != UNKNOWN_LENGTH: consumer.write( bytes(_Header(b"Content-Length", producer.length)) + CRLF) consumer.write(CRLF) if isinstance(consumer, _LengthConsumer): 
consumer.write(producer.length) else: self._currentProducer = producer def unset(val): self._currentProducer = None return val d = producer.startProducing(consumer) d.addCallback(unset) return d def _escape(value): """ Prevent header values from corrupting the request: a newline in the filename parameter, for example, makes a form-data request unreadable to the majority of parsers. """ if not isinstance(value, (bytes, unicode)): value = unicode(value) if isinstance(value, bytes): value = value.decode('utf-8') return value.replace(u"\r", u"").replace(u"\n", u"").replace(u'"', u'\\"') def _enforce_unicode(value): """ Enforce that the strings passed in are unicode, so we won't need to guess the encoding of binary strings. If someone needs to pass a binary string, they should use BytesIO and wrap it with L{FileBodyProducer}. """ if isinstance(value, unicode): return value elif isinstance(value, bytes): # We got a byte string and we have no idea what its encoding is; # the best we can do is assume it is UTF-8 (of which ASCII is a # subset). try: return unicode(value, "utf-8") except UnicodeDecodeError: raise ValueError( "Supplied raw bytes that are not ascii/utf-8."
" When supplying raw string make sure it's ascii or utf-8" ", or work with unicode if you are not sure") else: raise ValueError( "Unsupported field type: %s" % (value.__class__.__name__,)) def _converted(fields): if hasattr(fields, "iteritems"): fields = fields.iteritems() elif hasattr(fields, "items"): fields = fields.items() for name, value in fields: name = _enforce_unicode(name) if isinstance(value, (tuple, list)): if len(value) != 3: raise ValueError( "Expected tuple: (filename, content type, producer)") filename, content_type, producer = value filename = _enforce_unicode(filename) if filename else None yield name, (filename, content_type, producer) elif isinstance(value, (bytes, unicode)): yield name, _enforce_unicode(value) else: raise ValueError( "Unsupported value, expected string, unicode " "or tuple (filename, content type, IBodyProducer)") class _LengthConsumer(object): """ L{_LengthConsumer} is used to calculate the length of the multi-part request. The easiest way to do that is to consume all the fields, but instead writing them to the string just accumulate the request length. @ivar length: The length of the request. Can be UNKNOWN_LENGTH if consumer finds the field that has length that can not be calculated """ def __init__(self): self.length = 0 def write(self, value): # this means that we have encountered # unknown length producer # so we need to stop attempts calculating if self.length is UNKNOWN_LENGTH: return if value is UNKNOWN_LENGTH: self.length = value elif isinstance(value, (int, long)): self.length += value else: self.length += len(value) class _Header(object): """ L{_Header} This class is a tiny wrapper that produces request headers. We can't use standard python header class because it encodes unicode fields using =? bla bla ?= encoding, which is correct, but no one in HTTP world expects that, everyone wants utf-8 raw bytes. 
""" def __init__(self, name, value, params=None): self.name = name self.value = value self.params = params or [] def add_param(self, name, value): self.params.append((name, value)) def __bytes__(self): with closing(BytesIO()) as h: h.write(self.name + b": " + _escape(self.value).encode("us-ascii")) if self.params: for (name, val) in self.params: h.write(b"; ") h.write(_escape(name).encode("us-ascii")) h.write(b"=") h.write(b'"' + _escape(val).encode('utf-8') + b'"') h.seek(0) return h.read() def __str__(self): return self.__bytes__() def _sorted_by_type(fields): """Sorts params so that strings are placed before files. That makes a request more readable, as generally files are bigger. It also provides deterministic order of fields what is easier for testing. """ def key(p): key, val = p if isinstance(val, (bytes, unicode)): return (0, key) else: return (1, key) return sorted(fields, key=key) treq-15.1.0/treq/response.py0000644000076600000240000000257112634616707016224 0ustar glyphstaff00000000000000from __future__ import absolute_import, division, print_function from twisted.python.components import proxyForInterface from twisted.web.iweb import IResponse from requests.cookies import cookiejar_from_dict from treq.content import content, json_content, text_content class _Response(proxyForInterface(IResponse)): def __init__(self, original, cookiejar): self.original = original self._cookiejar = cookiejar def content(self): return content(self.original) def json(self, *args, **kwargs): return json_content(self.original, *args, **kwargs) def text(self, *args, **kwargs): return text_content(self.original, *args, **kwargs) def history(self): if not hasattr(self, "previousResponse"): raise NotImplementedError( "Twisted < 13.1.0 does not support response history.") response = self history = [] while response.previousResponse is not None: history.append(_Response(response.previousResponse, self._cookiejar)) response = response.previousResponse history.reverse() return history 
def cookies(self): jar = cookiejar_from_dict({}) if self._cookiejar is not None: for cookie in self._cookiejar: jar.set_cookie(cookie) return jar treq-15.1.0/treq/test/0000755000076600000240000000000012634652643014765 5ustar glyphstaff00000000000000treq-15.1.0/treq/test/__init__.py0000644000076600000240000000000012545372057017063 0ustar glyphstaff00000000000000treq-15.1.0/treq/test/test_api.py0000644000076600000240000000303212634616707017146 0ustar glyphstaff00000000000000from __future__ import absolute_import, division import mock from treq.test.util import TestCase import treq from treq._utils import set_global_pool class TreqAPITests(TestCase): def setUp(self): set_global_pool(None) agent_patcher = mock.patch('treq.api.Agent') self.Agent = agent_patcher.start() self.addCleanup(agent_patcher.stop) client_patcher = mock.patch('treq.api.HTTPClient') self.HTTPClient = client_patcher.start() self.addCleanup(client_patcher.stop) pool_patcher = mock.patch('treq._utils.HTTPConnectionPool') self.HTTPConnectionPool = pool_patcher.start() self.addCleanup(pool_patcher.stop) self.client = self.HTTPClient.return_value def test_default_pool(self): resp = treq.get('http://test.com') self.Agent.assert_called_once_with( mock.ANY, pool=self.HTTPConnectionPool.return_value ) self.assertEqual(self.client.get.return_value, resp) def test_cached_pool(self): pool = self.HTTPConnectionPool.return_value treq.get('http://test.com') self.HTTPConnectionPool.return_value = mock.Mock() treq.get('http://test.com') self.Agent.assert_called_with(mock.ANY, pool=pool) def test_custom_agent(self): """ A custom Agent is used if specified. 
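The behaviour this test pins down can be sketched with stdlib-only stand-ins (the ``resolve_agent`` name and ``_default_agent`` cache are hypothetical, not treq's actual internals): an explicitly supplied agent is used as-is, while the default agent is built once and cached.

```python
_default_agent = None


def resolve_agent(agent=None):
    # An explicitly supplied agent always wins; otherwise fall back to
    # a lazily built, module-level cached default.
    global _default_agent
    if agent is not None:
        return agent
    if _default_agent is None:
        _default_agent = object()  # stands in for Agent(reactor, pool=...)
    return _default_agent


custom = object()
assert resolve_agent(custom) is custom     # custom agent respected
assert resolve_agent() is resolve_agent()  # default is built once and reused
```

The cached default mirrors ``test_cached_pool`` above, while the pass-through branch mirrors ``test_custom_agent``.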
""" custom_agent = mock.Mock() treq.get('https://www.example.org/', agent=custom_agent) self.HTTPClient.assert_called_once_with(custom_agent) treq-15.1.0/treq/test/test_auth.py0000644000076600000240000000456212634616707017347 0ustar glyphstaff00000000000000import mock from twisted.web.client import Agent from twisted.web.http_headers import Headers from treq.test.util import TestCase from treq.auth import _RequestHeaderSettingAgent, add_auth, UnknownAuthConfig class RequestHeaderSettingAgentTests(TestCase): def setUp(self): self.agent = mock.Mock(Agent) def test_sets_headers(self): agent = _RequestHeaderSettingAgent( self.agent, Headers({b'X-Test-Header': [b'Test-Header-Value']})) agent.request('method', 'uri') self.agent.request.assert_called_once_with( 'method', 'uri', headers=Headers({b'X-Test-Header': [b'Test-Header-Value']}), bodyProducer=None ) def test_overrides_per_request_headers(self): agent = _RequestHeaderSettingAgent( self.agent, Headers({b'X-Test-Header': [b'Test-Header-Value']}) ) agent.request( 'method', 'uri', Headers({b'X-Test-Header': [b'Unwanted-Value']}) ) self.agent.request.assert_called_once_with( 'method', 'uri', headers=Headers({b'X-Test-Header': [b'Test-Header-Value']}), bodyProducer=None ) class AddAuthTests(TestCase): def setUp(self): self.rhsa_patcher = mock.patch('treq.auth._RequestHeaderSettingAgent') self._RequestHeaderSettingAgent = self.rhsa_patcher.start() self.addCleanup(self.rhsa_patcher.stop) def test_add_basic_auth(self): agent = mock.Mock() add_auth(agent, ('username', 'password')) self._RequestHeaderSettingAgent.assert_called_once_with( agent, Headers({b'authorization': [b'Basic dXNlcm5hbWU6cGFzc3dvcmQ=']}) ) def test_add_basic_auth_huge(self): agent = mock.Mock() pwd = ('verylongpasswordthatextendsbeyondthepointwheremultiplel' 'inesaregenerated') auth = (b'Basic dXNlcm5hbWU6dmVyeWxvbmdwYXNzd29yZHRoYXRleHRlbmRzY' b'mV5b25kdGhlcG9pbnR3aGVyZW11bHRpcGxlbGluZXNhcmVnZW5lcmF0ZWQ=') add_auth(agent, ('username', pwd)) 
self._RequestHeaderSettingAgent.assert_called_once_with( agent, Headers({b'authorization': [auth]})) def test_add_unknown_auth(self): agent = mock.Mock() self.assertRaises(UnknownAuthConfig, add_auth, agent, mock.Mock()) treq-15.1.0/treq/test/test_client.py0000644000076600000240000003663312634616707017670 0ustar glyphstaff00000000000000from io import BytesIO import mock from twisted.internet.defer import Deferred, succeed, CancelledError from twisted.internet.protocol import Protocol from twisted.python.failure import Failure from twisted.web.client import Agent from twisted.web.http_headers import Headers from treq.test.util import TestCase, with_clock from treq.client import ( HTTPClient, _BodyBufferingProtocol, _BufferedResponse ) class HTTPClientTests(TestCase): def setUp(self): self.agent = mock.Mock(Agent) self.client = HTTPClient(self.agent) self.fbp_patcher = mock.patch('treq.client.FileBodyProducer') self.FileBodyProducer = self.fbp_patcher.start() self.addCleanup(self.fbp_patcher.stop) self.mbp_patcher = mock.patch('treq.multipart.MultiPartProducer') self.MultiPartProducer = self.mbp_patcher.start() self.addCleanup(self.mbp_patcher.stop) def assertBody(self, expected): body = self.FileBodyProducer.mock_calls[0][1][0] self.assertEqual(body.read(), expected) def test_request_case_insensitive_methods(self): self.client.request('gEt', 'http://example.com/') self.agent.request.assert_called_once_with( b'GET', b'http://example.com/', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_query_params(self): self.client.request('GET', 'http://example.com/', params={'foo': ['bar']}) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?foo=bar', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_tuple_query_values(self): self.client.request('GET', 'http://example.com/', params={'foo': ('bar',)}) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?foo=bar', Headers({b'accept-encoding': [b'gzip']}), 
None) def test_request_merge_query_params(self): self.client.request('GET', 'http://example.com/?baz=bax', params={'foo': ['bar', 'baz']}) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?baz=bax&foo=bar&foo=baz', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_merge_tuple_query_params(self): self.client.request('GET', 'http://example.com/?baz=bax', params=[('foo', 'bar')]) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?baz=bax&foo=bar', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_dict_single_value_query_params(self): self.client.request('GET', 'http://example.com/', params={'foo': 'bar'}) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?foo=bar', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_data_dict(self): self.client.request('POST', 'http://example.com/', data={'foo': ['bar', 'baz']}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/x-www-form-urlencoded'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'foo=bar&foo=baz') def test_request_data_single_dict(self): self.client.request('POST', 'http://example.com/', data={'foo': 'bar'}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/x-www-form-urlencoded'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'foo=bar') def test_request_data_tuple(self): self.client.request('POST', 'http://example.com/', data=[('foo', 'bar')]) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/x-www-form-urlencoded'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'foo=bar') def test_request_data_file(self): temp_fn = self.mktemp() with open(temp_fn, "wb") as temp_file: 
temp_file.write(b'hello') self.client.request('POST', 'http://example.com/', data=open(temp_fn, 'rb')) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'hello') @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid")) def test_request_no_name_attachment(self): self.client.request( 'POST', 'http://example.com/', files={"name": BytesIO(b"hello")}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({ b'accept-encoding': [b'gzip'], b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}), self.MultiPartProducer.return_value) FP = self.FileBodyProducer.return_value self.assertEqual( mock.call( [('name', (None, 'application/octet-stream', FP))], boundary=b'heyDavid'), self.MultiPartProducer.call_args) @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid")) def test_request_named_attachment(self): self.client.request( 'POST', 'http://example.com/', files={ "name": ('image.jpg', BytesIO(b"hello"))}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({ b'accept-encoding': [b'gzip'], b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}), self.MultiPartProducer.return_value) FP = self.FileBodyProducer.return_value self.assertEqual( mock.call( [('name', ('image.jpg', 'image/jpeg', FP))], boundary=b'heyDavid'), self.MultiPartProducer.call_args) @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid")) def test_request_named_attachment_and_ctype(self): self.client.request( 'POST', 'http://example.com/', files={ "name": ('image.jpg', 'text/plain', BytesIO(b"hello"))}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({ b'accept-encoding': [b'gzip'], b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}), self.MultiPartProducer.return_value) FP = self.FileBodyProducer.return_value 
self.assertEqual( mock.call( [('name', ('image.jpg', 'text/plain', FP))], boundary=b'heyDavid'), self.MultiPartProducer.call_args) @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid")) def test_request_mixed_params(self): class NamedFile(BytesIO): def __init__(self, val): BytesIO.__init__(self, val) self.name = "image.png" self.client.request( 'POST', 'http://example.com/', data=[("a", "b"), ("key", "val")], files=[ ("file1", ('image.jpg', BytesIO(b"hello"))), ("file2", NamedFile(b"yo"))]) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({ b'accept-encoding': [b'gzip'], b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}), self.MultiPartProducer.return_value) FP = self.FileBodyProducer.return_value self.assertEqual( mock.call([ ('a', 'b'), ('key', 'val'), ('file1', ('image.jpg', 'image/jpeg', FP)), ('file2', ('image.png', 'image/png', FP))], boundary=b'heyDavid'), self.MultiPartProducer.call_args) @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid")) def test_request_mixed_params_dict(self): self.client.request( 'POST', 'http://example.com/', data={"key": "a", "key2": "b"}, files={"file1": BytesIO(b"hey")}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({ b'accept-encoding': [b'gzip'], b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}), self.MultiPartProducer.return_value) FP = self.FileBodyProducer.return_value self.assertEqual( mock.call([ ('key', 'a'), ('key2', 'b'), ('file1', (None, 'application/octet-stream', FP))], boundary=b'heyDavid'), self.MultiPartProducer.call_args) def test_request_unsupported_params_combination(self): self.assertRaises(ValueError, self.client.request, 'POST', 'http://example.com/', data=BytesIO(b"yo"), files={"file1": BytesIO(b"hey")}) def test_request_dict_headers(self): self.client.request('GET', 'http://example.com/', headers={ 'User-Agent': 'treq/0.1dev', 'Accept': ['application/json', 
'text/plain'] }) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/', Headers({b'User-Agent': [b'treq/0.1dev'], b'accept-encoding': [b'gzip'], b'Accept': [b'application/json', b'text/plain']}), None) @with_clock def test_request_timeout_fired(self, clock): """ Verify the request is cancelled if a response is not received within specified timeout period. """ self.agent.request.return_value = d = Deferred() self.client.request('GET', 'http://example.com', timeout=2) # simulate we haven't gotten a response within timeout seconds clock.advance(3) # a deferred should have been cancelled self.failureResultOf(d, CancelledError) @with_clock def test_request_timeout_cancelled(self, clock): """ Verify timeout is cancelled if a response is received before timeout period elapses. """ self.agent.request.return_value = d = Deferred() self.client.request('GET', 'http://example.com', timeout=2) # simulate a response d.callback(mock.Mock(code=200, headers=Headers({}))) # now advance the clock but since we already got a result, # a cancellation timer should have been cancelled clock.advance(3) self.successResultOf(d) def test_response_is_buffered(self): response = mock.Mock(deliverBody=mock.Mock(), headers=Headers({})) self.agent.request.return_value = succeed(response) d = self.client.get('http://www.example.com') result = self.successResultOf(d) protocol = mock.Mock(Protocol) result.deliverBody(protocol) self.assertEqual(response.deliverBody.call_count, 1) result.deliverBody(protocol) self.assertEqual(response.deliverBody.call_count, 1) def test_response_buffering_is_disabled_with_unbufferred_arg(self): response = mock.Mock(headers=Headers({})) self.agent.request.return_value = succeed(response) d = self.client.get('http://www.example.com', unbuffered=True) # YOLO public attribute. 
self.assertEqual(self.successResultOf(d).original, response) class BodyBufferingProtocolTests(TestCase): def test_buffers_data(self): buffer = [] protocol = _BodyBufferingProtocol( mock.Mock(Protocol), buffer, None ) protocol.dataReceived("foo") self.assertEqual(buffer, ["foo"]) protocol.dataReceived("bar") self.assertEqual(buffer, ["foo", "bar"]) def test_propagates_data_to_destination(self): destination = mock.Mock(Protocol) protocol = _BodyBufferingProtocol( destination, [], None ) protocol.dataReceived(b"foo") destination.dataReceived.assert_called_once_with(b"foo") protocol.dataReceived(b"bar") destination.dataReceived.assert_called_with(b"bar") def test_fires_finished_deferred(self): finished = Deferred() protocol = _BodyBufferingProtocol( mock.Mock(Protocol), [], finished ) class TestResponseDone(Exception): pass protocol.connectionLost(TestResponseDone()) self.failureResultOf(finished, TestResponseDone) def test_propogates_connectionLost_reason(self): destination = mock.Mock(Protocol) protocol = _BodyBufferingProtocol( destination, [], Deferred().addErrback(lambda ign: None) ) class TestResponseDone(Exception): pass reason = TestResponseDone() protocol.connectionLost(reason) destination.connectionLost.assert_called_once_with(reason) class BufferedResponseTests(TestCase): def test_wraps_protocol(self): wrappers = [] wrapped = mock.Mock(Protocol) response = mock.Mock(deliverBody=mock.Mock(wraps=wrappers.append)) br = _BufferedResponse(response) br.deliverBody(wrapped) response.deliverBody.assert_called_once_with(wrappers[0]) self.assertNotEqual(wrapped, wrappers[0]) def test_concurrent_receivers(self): wrappers = [] wrapped = mock.Mock(Protocol) unwrapped = mock.Mock(Protocol) response = mock.Mock(deliverBody=mock.Mock(wraps=wrappers.append)) br = _BufferedResponse(response) br.deliverBody(wrapped) br.deliverBody(unwrapped) response.deliverBody.assert_called_once_with(wrappers[0]) wrappers[0].dataReceived(b"foo") 
wrapped.dataReceived.assert_called_once_with(b"foo") self.assertEqual(unwrapped.dataReceived.call_count, 0) class TestResponseDone(Exception): pass done = Failure(TestResponseDone()) wrappers[0].connectionLost(done) wrapped.connectionLost.assert_called_once_with(done) unwrapped.dataReceived.assert_called_once_with(b"foo") unwrapped.connectionLost.assert_called_once_with(done) def test_receiver_after_finished(self): wrappers = [] finished = mock.Mock(Protocol) response = mock.Mock(deliverBody=mock.Mock(wraps=wrappers.append)) br = _BufferedResponse(response) br.deliverBody(mock.Mock(Protocol)) wrappers[0].dataReceived(b"foo") class TestResponseDone(Exception): pass done = Failure(TestResponseDone()) wrappers[0].connectionLost(done) br.deliverBody(finished) finished.dataReceived.assert_called_once_with(b"foo") finished.connectionLost.assert_called_once_with(done) treq-15.1.0/treq/test/test_content.py0000644000076600000240000001106012634616707020047 0ustar glyphstaff00000000000000import mock from twisted.python.failure import Failure from twisted.web.http_headers import Headers from twisted.web.client import ResponseDone, ResponseFailed from twisted.web.http import PotentialDataLoss from treq.test.util import TestCase from treq import collect, content, json_content, text_content from treq.client import _BufferedResponse class ContentTests(TestCase): def setUp(self): self.response = mock.Mock() self.protocol = None def deliverBody(protocol): self.protocol = protocol self.response.deliverBody.side_effect = deliverBody self.response = _BufferedResponse(self.response) def test_collect(self): data = [] d = collect(self.response, data.append) self.protocol.dataReceived(b'{') self.protocol.dataReceived(b'"msg": "hell') self.protocol.dataReceived(b'o"}') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d), None) self.assertEqual(data, [b'{', b'"msg": "hell', b'o"}']) def test_collect_failure(self): data = [] d = 
collect(self.response, data.append) self.protocol.dataReceived(b'foo') self.protocol.connectionLost(Failure(ResponseFailed("test failure"))) self.failureResultOf(d, ResponseFailed) self.assertEqual(data, [b'foo']) def test_collect_failure_potential_data_loss(self): """ PotentialDataLoss failures are treated as success. """ data = [] d = collect(self.response, data.append) self.protocol.dataReceived(b'foo') self.protocol.connectionLost(Failure(PotentialDataLoss())) self.assertEqual(self.successResultOf(d), None) self.assertEqual(data, [b'foo']) def test_collect_0_length(self): self.response.length = 0 d = collect( self.response, lambda d: self.fail("Unexpectedly called with: {0}".format(d))) self.assertEqual(self.successResultOf(d), None) def test_content(self): d = content(self.response) self.protocol.dataReceived(b'foo') self.protocol.dataReceived(b'bar') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d), b'foobar') def test_content_cached(self): d1 = content(self.response) self.protocol.dataReceived(b'foo') self.protocol.dataReceived(b'bar') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d1), b'foobar') def _fail_deliverBody(protocol): self.fail("deliverBody unexpectedly called.") self.response.original.deliverBody.side_effect = _fail_deliverBody d3 = content(self.response) self.assertEqual(self.successResultOf(d3), b'foobar') self.assertNotIdentical(d1, d3) def test_content_multiple_waiters(self): d1 = content(self.response) d2 = content(self.response) self.protocol.dataReceived(b'foo') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d1), b'foo') self.assertEqual(self.successResultOf(d2), b'foo') self.assertNotIdentical(d1, d2) def test_json_content(self): self.response.headers = Headers() d = json_content(self.response) self.protocol.dataReceived(b'{"msg":"hello!"}') self.protocol.connectionLost(Failure(ResponseDone())) 
self.assertEqual(self.successResultOf(d), {"msg": "hello!"}) def test_text_content(self): self.response.headers = Headers( {b'Content-Type': [b'text/plain; charset=utf-8']}) d = text_content(self.response) self.protocol.dataReceived(b'\xe2\x98\x83') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d), u'\u2603') def test_text_content_default_encoding_no_param(self): self.response.headers = Headers( {b'Content-Type': [b'text/plain']}) d = text_content(self.response) self.protocol.dataReceived(b'\xa1') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d), u'\xa1') def test_text_content_default_encoding_no_header(self): self.response.headers = Headers() d = text_content(self.response) self.protocol.dataReceived(b'\xa1') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d), u'\xa1') treq-15.1.0/treq/test/test_multipart.py0000644000076600000240000005403212634616707020424 0ustar glyphstaff00000000000000# coding: utf-8 # Copyright (c) Twisted Matrix Laboratories. # See LICENSE for details. import cgi from io import BytesIO from twisted.trial import unittest from zope.interface.verify import verifyObject from twisted.python import failure, compat from twisted.internet import task from twisted.web.client import FileBodyProducer from twisted.web.iweb import UNKNOWN_LENGTH, IBodyProducer from treq.multipart import MultiPartProducer, _LengthConsumer if compat._PY3: long = int unicode = compat.unicode class MultiPartProducerTestCase(unittest.TestCase): """ Tests for the L{MultiPartProducer} which gets a dictionary-like object with post parameters, converts them to multipart/form-data format and feeds them to an L{IConsumer}. """ def _termination(self): """ This method can be used as the C{terminationPredicateFactory} for a L{Cooperator}. It returns a predicate which immediately returns C{False}, indicating that no more work should be done this iteration.
This has the result of only allowing one iteration of a cooperative task to be run per L{Cooperator} iteration. """ return lambda: True def setUp(self): """ Create a L{Cooperator} hooked up to an easily controlled, deterministic scheduler to use with L{MultiPartProducer}. """ self._scheduled = [] self.cooperator = task.Cooperator( self._termination, self._scheduled.append) def successResultOf(self, deferred): """ Backport from 13.0 for compatibility with older Twisted versions """ result = [] deferred.addBoth(result.append) if not result: self.fail( "Success result expected on %r, found no result instead" % ( deferred,)) elif isinstance(result[0], failure.Failure): self.fail( "Success result expected on %r, " "found failure result (%r) instead" % (deferred, result[0])) else: return result[0] def assertNoResult(self, deferred): """ Backport from 13.0 for compatibility with older Twisted versions """ result = [] deferred.addBoth(result.append) if result: self.fail( "No result expected on %r, found %r instead" % ( deferred, result[0])) def failureResultOf(self, deferred): """ Backport from 13.0 for compatibility with older Twisted versions """ result = [] deferred.addBoth(result.append) if not result: self.fail( "Failure result expected on %r, found no result instead" % ( deferred,)) elif not isinstance(result[0], failure.Failure): self.fail( "Failure result expected on %r, " "found success result (%r) instead" % (deferred, result[0])) else: return result[0] def getOutput(self, producer, with_producer=False): """ A convenience function to consume and return output.
""" consumer = output = BytesIO() producer.startProducing(consumer) while self._scheduled: self._scheduled.pop(0)() if with_producer: return (output.getvalue(), producer) else: return output.getvalue() def newLines(self, value): if isinstance(value, unicode): return value.replace(u"\n", u"\r\n") else: return value.replace(b"\n", b"\r\n") def test_interface(self): """ L{MultiPartProducer} instances provide L{IBodyProducer}. """ self.assertTrue( verifyObject( IBodyProducer, MultiPartProducer({}))) def test_unknownLength(self): """ If the L{MultiPartProducer} is constructed with a file-like object passed as a parameter without either a C{seek} or C{tell} method, its C{length} attribute is set to C{UNKNOWN_LENGTH}. """ class HasSeek(object): def seek(self, offset, whence): pass class HasTell(object): def tell(self): pass producer = MultiPartProducer( {"f": ("name", None, FileBodyProducer(HasSeek()))}) self.assertEqual(UNKNOWN_LENGTH, producer.length) producer = MultiPartProducer( {"f": ("name", None, FileBodyProducer(HasTell()))}) self.assertEqual(UNKNOWN_LENGTH, producer.length) def test_knownLengthOnFile(self): """ If the L{MultiPartProducer} is constructed with a file-like object with both C{seek} and C{tell} methods, its C{length} attribute is set to the size of the file as determined by those methods. 
""" inputBytes = b"here are some bytes" inputFile = BytesIO(inputBytes) inputFile.seek(5) producer = MultiPartProducer({ "field": ('file name', None, FileBodyProducer( inputFile, cooperator=self.cooperator))}) # Make sure we are generous enough not to alter seek position: self.assertEqual(inputFile.tell(), 5) # Total length is hard to calculate manually # as it contains a lot of headers parameters, newlines and boundaries # let's assert for now that it's no less than the input parameter self.assertTrue(producer.length > len(inputBytes)) # Calculating length should not touch producers self.assertTrue(producer._currentProducer is None) def test_defaultCooperator(self): """ If no L{Cooperator} instance is passed to L{MultiPartProducer}, the global cooperator is used. """ producer = MultiPartProducer({ "field": ('file name', None, FileBodyProducer( BytesIO(b"yo"), cooperator=self.cooperator)) }) self.assertEqual(task.cooperate, producer._cooperate) def test_startProducing(self): """ L{MultiPartProducer.startProducing} starts writing bytes from the input file to the given L{IConsumer} and returns a L{Deferred} which fires when they have all been written. """ consumer = output = BytesIO() producer = MultiPartProducer({ b"field": ('file name', "text/hello-world", FileBodyProducer( BytesIO(b"Hello, World"), cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") complete = producer.startProducing(consumer) iterations = 0 while self._scheduled: iterations += 1 self._scheduled.pop(0)() self.assertTrue(iterations > 1) self.assertEqual(self.newLines(b"""--heyDavid Content-Disposition: form-data; name="field"; filename="file name" Content-Type: text/hello-world Content-Length: 12 Hello, World --heyDavid-- """), output.getvalue()) self.assertEqual(None, self.successResultOf(complete)) def test_inputClosedAtEOF(self): """ When L{MultiPartProducer} reaches end-of-file on the input file given to it, the input file is closed. 
""" inputFile = BytesIO(b"hello, world!") consumer = BytesIO() producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( inputFile, cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") producer.startProducing(consumer) while self._scheduled: self._scheduled.pop(0)() self.assertTrue(inputFile.closed) def test_failedReadWhileProducing(self): """ If a read from the input file fails while producing bytes to the consumer, the L{Deferred} returned by L{MultiPartProducer.startProducing} fires with a L{Failure} wrapping that exception. """ class BrokenFile(object): def read(self, count): raise IOError("Simulated bad thing") producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( BrokenFile(), cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") complete = producer.startProducing(BytesIO()) while self._scheduled: self._scheduled.pop(0)() self.failureResultOf(complete).trap(IOError) def test_stopProducing(self): """ L{MultiPartProducer.stopProducing} stops the underlying L{IPullProducer} and the cooperative task responsible for calling C{resumeProducing} and closes the input file but does not cause the L{Deferred} returned by C{startProducing} to fire. """ inputFile = BytesIO(b"hello, world!") consumer = BytesIO() producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( inputFile, cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") complete = producer.startProducing(consumer) self._scheduled.pop(0)() producer.stopProducing() self.assertTrue(inputFile.closed) self._scheduled.pop(0)() self.assertNoResult(complete) def test_pauseProducing(self): """ L{MultiPartProducer.pauseProducing} temporarily suspends writing bytes from the input file to the given L{IConsumer}. 
""" inputFile = BytesIO(b"hello, world!") consumer = output = BytesIO() producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( inputFile, cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") complete = producer.startProducing(consumer) self._scheduled.pop(0)() currentValue = output.getvalue() self.assertTrue(currentValue) producer.pauseProducing() # Sort of depends on an implementation detail of Cooperator: even # though the only task is paused, there's still a scheduled call. If # this were to go away because Cooperator became smart enough to cancel # this call in this case, that would be fine. self._scheduled.pop(0)() # Since the producer is paused, no new data should be here. self.assertEqual(output.getvalue(), currentValue) self.assertNoResult(complete) def test_resumeProducing(self): """ L{MultoPartProducer.resumeProducing} re-commences writing bytes from the input file to the given L{IConsumer} after it was previously paused with L{MultiPartProducer.pauseProducing}. 
""" inputFile = BytesIO(b"hello, world!") consumer = output = BytesIO() producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( inputFile, cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") producer.startProducing(consumer) self._scheduled.pop(0)() currentValue = output.getvalue() self.assertTrue(currentValue) producer.pauseProducing() producer.resumeProducing() self._scheduled.pop(0)() # make sure we started producing new data after resume self.assertTrue(len(currentValue) < len(output.getvalue())) def test_unicodeString(self): """ Make sure unicode string is passed properly """ output, producer = self.getOutput( MultiPartProducer({ "afield": u"Это моя строчечка\r\n", }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(u"""--heyDavid Content-Disposition: form-data; name="afield" Это моя строчечка --heyDavid-- """.encode("utf-8")) self.assertEqual(producer.length, len(expected)) self.assertEqual(expected, output) def test_failOnByteStrings(self): """ If byte string is passed as a param and we don't know the encoding, fail early to prevent corrupted form posts """ self.assertRaises( ValueError, MultiPartProducer, { "afield": u"это моя строчечка".encode("utf-32"), }, cooperator=self.cooperator, boundary=b"heyDavid") def test_failOnUnknownParams(self): """ If byte string is passed as a param and we don't know the encoding, fail early to prevent corrupted form posts """ # unknown key self.assertRaises( ValueError, MultiPartProducer, { (1, 2): BytesIO(b"yo"), }, cooperator=self.cooperator, boundary=b"heyDavid") # tuple length self.assertRaises( ValueError, MultiPartProducer, { "a": (1,), }, cooperator=self.cooperator, boundary=b"heyDavid") # unknown value type self.assertRaises( ValueError, MultiPartProducer, { "a": {"a": "b"}, }, cooperator=self.cooperator, boundary=b"heyDavid") def test_twoFields(self): """ Make sure multiple fields are rendered properly. 
""" output = self.getOutput( MultiPartProducer({ "afield": "just a string\r\n", "bfield": "another string" }, cooperator=self.cooperator, boundary=b"heyDavid")) self.assertEqual(self.newLines(b"""--heyDavid Content-Disposition: form-data; name="afield" just a string --heyDavid Content-Disposition: form-data; name="bfield" another string --heyDavid-- """), output) def test_fieldsAndAttachment(self): """ Make sure multiple fields are rendered properly. """ output, producer = self.getOutput( MultiPartProducer({ "bfield": "just a string\r\n", "cfield": "another string", "afield": ( "file name", "text/hello-world", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes"), cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(b"""--heyDavid Content-Disposition: form-data; name="bfield" just a string --heyDavid Content-Disposition: form-data; name="cfield" another string --heyDavid Content-Disposition: form-data; name="afield"; filename="file name" Content-Type: text/hello-world Content-Length: 15 my lovely bytes --heyDavid-- """) self.assertEqual(producer.length, len(expected)) self.assertEqual(output, expected) def test_multipleFieldsAndAttachments(self): """ Make sure multiple fields, attachments etc are rendered properly. 
""" output, producer = self.getOutput( MultiPartProducer({ "cfield": "just a string\r\n", "bfield": "another string", "efield": ( "ef", "text/html", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes2"), cooperator=self.cooperator)), "xfield": ( "xf", "text/json", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes219"), cooperator=self.cooperator)), "afield": ( "af", "text/xml", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes22"), cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(b"""--heyDavid Content-Disposition: form-data; name="bfield" another string --heyDavid Content-Disposition: form-data; name="cfield" just a string --heyDavid Content-Disposition: form-data; name="afield"; filename="af" Content-Type: text/xml Content-Length: 17 my lovely bytes22 --heyDavid Content-Disposition: form-data; name="efield"; filename="ef" Content-Type: text/html Content-Length: 16 my lovely bytes2 --heyDavid Content-Disposition: form-data; name="xfield"; filename="xf" Content-Type: text/json Content-Length: 18 my lovely bytes219 --heyDavid-- """) self.assertEqual(producer.length, len(expected)) self.assertEqual(output, expected) def test_unicodeAttachmentName(self): """ Make sure unicode attachment names are supported. 
""" output, producer = self.getOutput( MultiPartProducer({ "field": ( u'Так себе имя.jpg', "image/jpeg", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes"), cooperator=self.cooperator ) ) }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(u"""--heyDavid Content-Disposition: form-data; name="field"; filename="Так себе имя.jpg" Content-Type: image/jpeg Content-Length: 15 my lovely bytes --heyDavid-- """.encode("utf-8")) self.assertEqual(len(expected), producer.length) self.assertEqual(expected, output) def test_missingAttachmentName(self): """ Make sure attachments without names are supported """ output, producer = self.getOutput( MultiPartProducer({ "field": ( None, "image/jpeg", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes"), cooperator=self.cooperator, ) ) }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(b"""--heyDavid Content-Disposition: form-data; name="field" Content-Type: image/jpeg Content-Length: 15 my lovely bytes --heyDavid-- """) self.assertEqual(len(expected), producer.length) self.assertEqual(expected, output) def test_newLinesInParams(self): """ Make sure we generate proper format even with newlines in attachments """ output = self.getOutput( MultiPartProducer({ "field": ( u'\r\noops.j\npg', "image/jp\reg\n", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes"), cooperator=self.cooperator ) ) }, cooperator=self.cooperator, boundary=b"heyDavid" ) ) self.assertEqual(self.newLines(u"""--heyDavid Content-Disposition: form-data; name="field"; filename="oops.jpg" Content-Type: image/jpeg Content-Length: 15 my lovely bytes --heyDavid-- """.encode("utf-8")), output) def test_worksWithCgi(self): """ Make sure the stuff we generated actually parsed by python cgi """ output = self.getOutput( MultiPartProducer([ ("cfield", "just a string\r\n"), ("cfield", "another string"), ("efield", ('ef', "text/html", FileBodyProducer( inputFile=BytesIO(b"my 
lovely bytes2"), cooperator=self.cooperator, ))), ("xfield", ('xf', "text/json", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes219"), cooperator=self.cooperator, ))), ("afield", ('af', "text/xml", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes22"), cooperator=self.cooperator, ))) ], cooperator=self.cooperator, boundary=b"heyDavid" ) ) form = cgi.parse_multipart(BytesIO(output), {"boundary": b"heyDavid"}) self.assertEqual(set([b'just a string\r\n', b'another string']), set(form['cfield'])) self.assertEqual(set([b'my lovely bytes2']), set(form['efield'])) self.assertEqual(set([b'my lovely bytes219']), set(form['xfield'])) self.assertEqual(set([b'my lovely bytes22']), set(form['afield'])) class LengthConsumerTestCase(unittest.TestCase): """ Tests for the _LengthConsumer, an L{IConsumer} which is used to compute the length of a produced content. """ def test_scalarsUpdateCounter(self): """ When a long or an int are written, _LengthConsumer updates its internal counter. """ consumer = _LengthConsumer() self.assertEqual(consumer.length, 0) consumer.write(long(1)) self.assertEqual(consumer.length, 1) consumer.write(2147483647) self.assertEqual(consumer.length, long(2147483648)) def test_stringUpdatesCounter(self): """ Use the written string length to update the internal counter """ a = (b"Cantami, o Diva, del Pelide Achille\n l'ira funesta che " b"infiniti addusse\n lutti agli Achei") consumer = _LengthConsumer() self.assertEqual(consumer.length, 0) consumer.write(a) self.assertEqual(consumer.length, 89) treq-15.1.0/treq/test/test_response.py0000644000076600000240000000335012545372057020234 0ustar glyphstaff00000000000000from twisted.trial.unittest import TestCase from twisted import version from twisted.python.versions import Version from twisted.web.http_headers import Headers from treq.response import _Response skip_history = None if version < Version("twisted", 13, 1, 0): skip_history = "Response history not supported on Twisted < 13.1.0." 
class FakeResponse(object): def __init__(self, code, headers): self.code = code self.headers = headers self.previousResponse = None def setPreviousResponse(self, response): self.previousResponse = response class ResponseTests(TestCase): def test_history(self): redirect1 = FakeResponse( 301, Headers({'location': ['http://example.com/']}) ) redirect2 = FakeResponse( 302, Headers({'location': ['https://example.com/']}) ) redirect2.setPreviousResponse(redirect1) final = FakeResponse(200, Headers({})) final.setPreviousResponse(redirect2) wrapper = _Response(final, None) history = wrapper.history() self.assertEqual(wrapper.code, 200) self.assertEqual(history[0].code, 301) self.assertEqual(history[1].code, 302) def test_no_history(self): wrapper = _Response(FakeResponse(200, Headers({})), None) self.assertEqual(wrapper.history(), []) if skip_history: test_history.skip = skip_history test_no_history.skip = skip_history def test_history_notimplemented(self): wrapper = _Response(FakeResponse(200, Headers({})), None) self.assertRaises(NotImplementedError, wrapper.history) if not skip_history: test_history_notimplemented.skip = "History supported." treq-15.1.0/treq/test/test_testing.py0000644000076600000240000004106112634616707020056 0ustar glyphstaff00000000000000""" In-memory treq returns stubbed responses. 
""" from functools import partial from inspect import getmembers, isfunction from mock import ANY from six import text_type, binary_type from twisted.web.client import ResponseFailed from twisted.web.error import SchemeNotSupported from twisted.web.resource import Resource from twisted.web.server import NOT_DONE_YET from twisted.python.compat import _PY3 import treq from treq.test.util import TestCase from treq.testing import ( HasHeaders, RequestSequence, StringStubbingResource, StubTreq ) class _StaticTestResource(Resource): """Resource that always returns 418 "I'm a teapot""" isLeaf = True def render(self, request): request.setResponseCode(418) request.setHeader(b"x-teapot", b"teapot!") return b"I'm a teapot" class _NonResponsiveTestResource(Resource): """Resource that returns NOT_DONE_YET and never finishes the request""" isLeaf = True def render(self, request): return NOT_DONE_YET class _EventuallyResponsiveTestResource(Resource): """ Resource that returns NOT_DONE_YET and stores the request so that something else can finish the response later. """ isLeaf = True def render(self, request): self.stored_request = request return NOT_DONE_YET class StubbingTests(TestCase): """ Tests for :class:`StubTreq`. """ def test_stubtreq_provides_all_functions_in_treq_all(self): """ Every single function and attribute exposed by :obj:`treq.__all__` is provided by :obj:`StubTreq`. """ treq_things = [(name, obj) for name, obj in getmembers(treq) if name in treq.__all__] stub = StubTreq(_StaticTestResource()) api_things = [(name, obj) for name, obj in treq_things if obj.__module__ == "treq.api"] content_things = [(name, obj) for name, obj in treq_things if obj.__module__ == "treq.content"] # sanity checks - this test should fail if treq exposes a new API # without changes being made to StubTreq and this test. msg = ("At the time this test was written, StubTreq only knew about " "treq exposing functions from treq.api and treq.content. 
If " "this has changed, StubTreq will need to be updated, as will " "this test.") self.assertTrue(all(isfunction(obj) for name, obj in treq_things), msg) self.assertEqual(set(treq_things), set(api_things + content_things), msg) for name, obj in api_things: self.assertTrue( isfunction(getattr(stub, name, None)), "StubTreq.{0} should be a function.".format(name)) for name, obj in content_things: self.assertIs( getattr(stub, name, None), obj, "StubTreq.{0} should just expose treq.{0}".format(name)) def test_providing_resource_to_stub_treq(self): """ The resource provided to StubTreq responds to every request no matter what the URI or parameters or data. """ verbs = ('GET', 'PUT', 'HEAD', 'PATCH', 'DELETE', 'POST') urls = ( 'http://supports-http.com', 'https://supports-https.com', 'http://this/has/a/path/and/invalid/domain/name', 'https://supports-https.com:8080', 'http://supports-http.com:8080', ) params = (None, {}, {b'page': [1]}) headers = (None, {}, {b'x-random-header': [b'value', b'value2']}) data = (None, b"", b'some data', b'{"some": "json"}') stub = StubTreq(_StaticTestResource()) combos = ( (verb, {"url": url, "params": p, "headers": h, "data": d}) for verb in verbs for url in urls for p in params for h in headers for d in data ) for combo in combos: verb, kwargs = combo deferreds = (stub.request(verb, **kwargs), getattr(stub, verb.lower())(**kwargs)) for d in deferreds: resp = self.successResultOf(d) self.assertEqual(418, resp.code) self.assertEqual([b'teapot!'], resp.headers.getRawHeaders(b'x-teapot')) self.assertEqual(b"" if verb == "HEAD" else b"I'm a teapot", self.successResultOf(stub.content(resp))) def test_handles_invalid_schemes(self): """ Invalid URLs errback with a :obj:`SchemeNotSupported` failure, and does so even after a successful request. 
""" stub = StubTreq(_StaticTestResource()) self.failureResultOf(stub.get(""), SchemeNotSupported) self.successResultOf(stub.get("http://url.com")) self.failureResultOf(stub.get(""), SchemeNotSupported) def test_files_are_rejected(self): """ StubTreq does not handle files yet - it should reject requests which attempt to pass files. """ stub = StubTreq(_StaticTestResource()) self.assertRaises( AssertionError, stub.request, 'method', 'http://url', files=b'some file') def test_passing_in_strange_data_is_rejected(self): """ StubTreq rejects data that isn't list/dictionary/tuple/bytes/unicode. """ stub = StubTreq(_StaticTestResource()) self.assertRaises( AssertionError, stub.request, 'method', 'http://url', data=object()) self.successResultOf(stub.request('method', 'http://url', data={})) self.successResultOf(stub.request('method', 'http://url', data=[])) self.successResultOf(stub.request('method', 'http://url', data=())) self.successResultOf( stub.request('method', 'http://url', data=binary_type(b""))) self.successResultOf( stub.request('method', 'http://url', data=text_type(""))) def test_handles_failing_asynchronous_requests(self): """ Handle a resource returning NOT_DONE_YET and then canceling the request. """ stub = StubTreq(_NonResponsiveTestResource()) d = stub.request('method', 'http://url', data=b"1234") self.assertNoResult(d) d.cancel() self.failureResultOf(d, ResponseFailed) def test_handles_successful_asynchronous_requests(self): """ Handle a resource returning NOT_DONE_YET and then later finishing the response. """ rsrc = _EventuallyResponsiveTestResource() stub = StubTreq(rsrc) d = stub.request('method', 'http://example.com/', data=b"1234") self.assertNoResult(d) rsrc.stored_request.finish() stub.flush() resp = self.successResultOf(d) self.assertEqual(resp.code, 200) def test_handles_successful_asynchronous_requests_with_response_data(self): """ Handle a resource returning NOT_DONE_YET and then sending some data in the response. 
""" rsrc = _EventuallyResponsiveTestResource() stub = StubTreq(rsrc) d = stub.request('method', 'http://example.com/', data=b"1234") self.assertNoResult(d) chunks = [] rsrc.stored_request.write(b'spam ') rsrc.stored_request.write(b'eggs') stub.flush() resp = self.successResultOf(d) d = stub.collect(resp, chunks.append) self.assertNoResult(d) self.assertEqual(b''.join(chunks), b'spam eggs') rsrc.stored_request.finish() stub.flush() self.successResultOf(d) def test_handles_successful_asynchronous_requests_with_streaming(self): """ Handle a resource returning NOT_DONE_YET and then streaming data back gradually over time. """ rsrc = _EventuallyResponsiveTestResource() stub = StubTreq(rsrc) d = stub.request('method', 'http://example.com/', data="1234") self.assertNoResult(d) chunks = [] rsrc.stored_request.write(b'spam ') rsrc.stored_request.write(b'eggs') stub.flush() resp = self.successResultOf(d) d = stub.collect(resp, chunks.append) self.assertNoResult(d) self.assertEqual(b''.join(chunks), b'spam eggs') del chunks[:] rsrc.stored_request.write(b'eggs\r\nspam\r\n') stub.flush() self.assertNoResult(d) self.assertEqual(b''.join(chunks), b'eggs\r\nspam\r\n') rsrc.stored_request.finish() stub.flush() self.successResultOf(d) class HasHeadersTests(TestCase): """ Tests for :obj:`HasHeaders`. """ def test_equality_and_strict_subsets_succeed(self): """ The :obj:`HasHeaders` returns True if both sets of headers are equivalent, or the first is a strict subset of the second. """ self.assertEqual(HasHeaders({'one': ['two', 'three']}), {'one': ['two', 'three']}, "Equivalent headers do not match.") self.assertEqual(HasHeaders({'one': ['two', 'three']}), {'one': ['two', 'three', 'four'], 'ten': ['six']}, "Strict subset headers do not match") def test_partial_or_zero_intersection_subsets_fail(self): """ The :obj:`HasHeaders` returns False if both sets of headers overlap but the first is not a strict subset of the second. It also returns False if there is no overlap. 
""" self.assertNotEqual(HasHeaders({'one': ['two', 'three']}), {'one': ['three', 'four']}, "Partial value overlap matches") self.assertNotEqual(HasHeaders({'one': ['two', 'three']}), {'one': ['two']}, "Missing value matches") self.assertNotEqual(HasHeaders({'one': ['two', 'three']}), {'ten': ['six']}, "Complete inequality matches") def test_case_insensitive_keys(self): """ The :obj:`HasHeaders` equality function ignores the case of the header keys. """ self.assertEqual(HasHeaders({b'A': [b'1'], b'b': [b'2']}), {b'a': [b'1'], b'B': [b'2']}) def test_case_sensitive_values(self): """ The :obj:`HasHeaders` equality function does care about the case of the header value. """ self.assertNotEqual(HasHeaders({b'a': [b'a']}), {b'a': [b'A']}) def test_repr(self): """ :obj:`HasHeaders` returns a nice string repr. """ if _PY3: reprOutput = "HasHeaders({b'a': [b'b']})" else: reprOutput = "HasHeaders({'a': ['b']})" self.assertEqual(reprOutput, repr(HasHeaders({b'A': [b'b']}))) class StringStubbingTests(TestCase): """ Tests for :obj:`StringStubbingResource`. """ def _get_response_for(self, expected_args, response): """ Make a :obj:`IStringResponseStubs` that checks the expected args and returns the given response. """ method, url, params, headers, data = expected_args def get_response_for(_method, _url, _params, _headers, _data): self.assertEqual((method, url, params, data), (_method, _url, _params, _data)) self.assertEqual(HasHeaders(headers), _headers) return response return get_response_for def test_interacts_successfully_with_istub(self): """ The :obj:`IStringResponseStubs` is passed the correct parameters with which to evaluate the response, and the response is returned. 
""" resource = StringStubbingResource(self._get_response_for( (b'DELETE', 'http://what/a/thing', {b'page': [b'1']}, {b'x-header': [b'eh']}, b'datastr'), (418, {b'x-response': b'responseheader'}, b'response body'))) stub = StubTreq(resource) d = stub.delete('http://what/a/thing', headers={b'x-header': b'eh'}, params={b'page': b'1'}, data=b'datastr') resp = self.successResultOf(d) self.assertEqual(418, resp.code) self.assertEqual([b'responseheader'], resp.headers.getRawHeaders(b'x-response')) self.assertEqual(b'response body', self.successResultOf(stub.content(resp))) class RequestSequenceTests(TestCase): """ Tests for :obj:`RequestSequence`. """ def setUp(self): """ Set up a way to report failures asynchronously. """ self.async_failures = [] def test_mismatched_request_causes_failure(self): """ If a request is made that is not expected as the next request, causes a failure. """ sequence = RequestSequence( [((b'get', 'https://anything/', {b'1': [b'2']}, HasHeaders({b'1': [b'1']}), b'what'), (418, {}, b'body')), ((b'get', 'http://anything', {}, HasHeaders({b'2': [b'1']}), b'what'), (202, {}, b'deleted'))], async_failure_reporter=self.async_failures.append) stub = StubTreq(StringStubbingResource(sequence)) get = partial(stub.get, 'https://anything?1=2', data=b'what', headers={b'1': b'1'}) resp = self.successResultOf(get()) self.assertEqual(418, resp.code) self.assertEqual(b'body', self.successResultOf(stub.content(resp))) self.assertEqual([], self.async_failures) resp = self.successResultOf(get()) self.assertEqual(500, resp.code) self.assertEqual(1, len(self.async_failures)) self.assertIn("Expected the next request to be", self.async_failures[0]) self.assertFalse(sequence.consumed()) def test_unexpected_number_of_request_causes_failure(self): """ If there are no more expected requests, making a request causes a failure. 
""" sequence = RequestSequence( [], async_failure_reporter=self.async_failures.append) stub = StubTreq(StringStubbingResource(sequence)) d = stub.get('https://anything', data=b'what', headers={b'1': b'1'}) resp = self.successResultOf(d) self.assertEqual(500, resp.code) self.assertEqual(1, len(self.async_failures)) self.assertIn("No more requests expected, but request", self.async_failures[0]) # the expected requests have all been made self.assertTrue(sequence.consumed()) def test_works_with_mock_any(self): """ :obj:`mock.ANY` can be used with the request parameters. """ sequence = RequestSequence( [((ANY, ANY, ANY, ANY, ANY), (418, {}, b'body'))], async_failure_reporter=self.async_failures.append) stub = StubTreq(StringStubbingResource(sequence)) with sequence.consume(sync_failure_reporter=self.fail): d = stub.get('https://anything', data=b'what', headers={b'1': b'1'}) resp = self.successResultOf(d) self.assertEqual(418, resp.code) self.assertEqual(b'body', self.successResultOf(stub.content(resp))) self.assertEqual([], self.async_failures) # the expected requests have all been made self.assertTrue(sequence.consumed()) def test_consume_context_manager_fails_on_remaining_requests(self): """ If the `consume` context manager is used, if there are any remaining expecting requests, the test case will be failed. """ sequence = RequestSequence( [((ANY, ANY, ANY, ANY, ANY), (418, {}, b'body'))] * 2, async_failure_reporter=self.async_failures.append) stub = StubTreq(StringStubbingResource(sequence)) consume_failures = [] with sequence.consume(sync_failure_reporter=consume_failures.append): self.successResultOf(stub.get('https://anything', data=b'what', headers={b'1': b'1'})) self.assertEqual(1, len(consume_failures)) self.assertIn( "Not all expected requests were made. Still expecting:", consume_failures[0]) self.assertIn( "{0}(url={0}, params={0}, headers={0}, data={0})".format( repr(ANY)), consume_failures[0]) # no asynchronous failures (mismatches, etc.) 
self.assertEqual([], self.async_failures) treq-15.1.0/treq/test/test_treq_integration.py0000644000076600000240000002134212634616707021757 0ustar glyphstaff00000000000000from io import BytesIO from twisted.trial.unittest import TestCase from twisted.internet.defer import CancelledError, inlineCallbacks from twisted.internet.task import deferLater from twisted.internet import reactor from twisted.internet.tcp import Client from twisted import version as current_version from twisted.python.versions import Version from twisted.web.client import HTTPConnectionPool, ResponseFailed from treq.test.util import DEBUG import treq HTTPBIN_URL = "http://httpbin.org" HTTPSBIN_URL = "https://httpbin.org" def todo_relative_redirect(test_method): expected_version = Version('twisted', 13, 1, 0) if current_version < expected_version: test_method.todo = ( "Relative Redirects are not supported in Twisted versions " "prior to: {0}").format(expected_version.short()) return test_method @inlineCallbacks def print_response(response): if DEBUG: print() print('---') print(response.code) print(response.headers) text = yield treq.text_content(response) print(text) print('---') def with_baseurl(method): def _request(self, url, *args, **kwargs): return method(self.baseurl + url, *args, pool=self.pool, **kwargs) return _request class TreqIntegrationTests(TestCase): baseurl = HTTPBIN_URL get = with_baseurl(treq.get) head = with_baseurl(treq.head) post = with_baseurl(treq.post) put = with_baseurl(treq.put) patch = with_baseurl(treq.patch) delete = with_baseurl(treq.delete) def setUp(self): self.pool = HTTPConnectionPool(reactor, False) def tearDown(self): def _check_fds(_): # This appears to only be necessary for HTTPS tests. # For the normal HTTP tests then closeCachedConnections is # sufficient. 
fds = set(reactor.getReaders() + reactor.getReaders()) if not [fd for fd in fds if isinstance(fd, Client)]: return return deferLater(reactor, 0, _check_fds, None) return self.pool.closeCachedConnections().addBoth(_check_fds) @inlineCallbacks def assert_data(self, response, expected_data): body = yield treq.json_content(response) self.assertIn('data', body) self.assertEqual(body['data'], expected_data) @inlineCallbacks def assert_sent_header(self, response, header, expected_value): body = yield treq.json_content(response) self.assertIn(header, body['headers']) self.assertEqual(body['headers'][header], expected_value) @inlineCallbacks def test_get(self): response = yield self.get('/get') self.assertEqual(response.code, 200) yield print_response(response) @inlineCallbacks def test_get_headers(self): response = yield self.get('/get', {b'X-Blah': [b'Foo', b'Bar']}) self.assertEqual(response.code, 200) yield self.assert_sent_header(response, 'X-Blah', 'Foo,Bar') yield print_response(response) @inlineCallbacks def test_get_headers_unicode(self): response = yield self.get('/get', {u'X-Blah': [u'Foo', b'Bar']}) self.assertEqual(response.code, 200) yield self.assert_sent_header(response, 'X-Blah', 'Foo,Bar') yield print_response(response) @inlineCallbacks def test_get_302_absolute_redirect(self): response = yield self.get( '/redirect-to?url={0}/get'.format(self.baseurl)) self.assertEqual(response.code, 200) yield print_response(response) @todo_relative_redirect @inlineCallbacks def test_get_302_relative_redirect(self): response = yield self.get('/relative-redirect/1') self.assertEqual(response.code, 200) yield print_response(response) @inlineCallbacks def test_get_302_redirect_disallowed(self): response = yield self.get('/redirect/1', allow_redirects=False) self.assertEqual(response.code, 302) yield print_response(response) @inlineCallbacks def test_head(self): response = yield self.head('/get') body = yield treq.content(response) self.assertEqual(b'', body) yield 
print_response(response) @inlineCallbacks def test_head_302_absolute_redirect(self): response = yield self.head( '/redirect-to?url={0}/get'.format(self.baseurl)) self.assertEqual(response.code, 200) yield print_response(response) @todo_relative_redirect @inlineCallbacks def test_head_302_relative_redirect(self): response = yield self.head('/relative-redirect/1') self.assertEqual(response.code, 200) yield print_response(response) @inlineCallbacks def test_head_302_redirect_disallowed(self): response = yield self.head('/redirect/1', allow_redirects=False) self.assertEqual(response.code, 302) yield print_response(response) @inlineCallbacks def test_post(self): response = yield self.post('/post', b'Hello!') self.assertEqual(response.code, 200) yield self.assert_data(response, 'Hello!') yield print_response(response) @inlineCallbacks def test_multipart_post(self): class FileLikeObject(BytesIO): def __init__(self, val): BytesIO.__init__(self, val) self.name = "david.png" def read(*args, **kwargs): return BytesIO.read(*args, **kwargs) response = yield self.post( '/post', data={"a": "b"}, files={"file1": FileLikeObject(b"file")}) self.assertEqual(response.code, 200) body = yield treq.json_content(response) self.assertEqual('b', body['form']['a']) self.assertEqual('file', body['files']['file1']) yield print_response(response) @inlineCallbacks def test_post_headers(self): response = yield self.post( '/post', b'{msg: "Hello!"}', headers={'Content-Type': ['application/json']} ) self.assertEqual(response.code, 200) yield self.assert_sent_header( response, 'Content-Type', 'application/json') yield self.assert_data(response, '{msg: "Hello!"}') yield print_response(response) @inlineCallbacks def test_put(self): response = yield self.put('/put', data=b'Hello!') yield print_response(response) @inlineCallbacks def test_patch(self): response = yield self.patch('/patch', data=b'Hello!') self.assertEqual(response.code, 200) yield self.assert_data(response, 'Hello!') yield 
print_response(response) @inlineCallbacks def test_delete(self): response = yield self.delete('/delete') self.assertEqual(response.code, 200) yield print_response(response) @inlineCallbacks def test_gzip(self): response = yield self.get('/gzip') self.assertEqual(response.code, 200) yield print_response(response) json = yield treq.json_content(response) self.assertTrue(json['gzipped']) @inlineCallbacks def test_basic_auth(self): response = yield self.get('/basic-auth/treq/treq', auth=('treq', 'treq')) self.assertEqual(response.code, 200) yield print_response(response) json = yield treq.json_content(response) self.assertTrue(json['authenticated']) self.assertEqual(json['user'], 'treq') @inlineCallbacks def test_failed_basic_auth(self): response = yield self.get('/basic-auth/treq/treq', auth=('not-treq', 'not-treq')) self.assertEqual(response.code, 401) yield print_response(response) @inlineCallbacks def test_timeout(self): """ Verify a timeout fires if a request takes too long. """ yield self.assertFailure(self.get('/delay/2', timeout=1), CancelledError, ResponseFailed) @inlineCallbacks def test_cookie(self): response = yield self.get('/cookies', cookies={'hello': 'there'}) self.assertEqual(response.code, 200) yield print_response(response) json = yield treq.json_content(response) self.assertEqual(json['cookies']['hello'], 'there') @inlineCallbacks def test_set_cookie(self): response = yield self.get('/cookies/set', allow_redirects=False, params={'hello': 'there'}) # self.assertEqual(response.code, 200) yield print_response(response) self.assertEqual(response.cookies()['hello'], 'there') class HTTPSTreqIntegrationTests(TreqIntegrationTests): baseurl = HTTPSBIN_URL treq-15.1.0/treq/test/test_utils.py0000644000076600000240000000404412545372057017537 0ustar glyphstaff00000000000000import mock from treq.test.util import TestCase from treq._utils import default_reactor, default_pool, set_global_pool class DefaultReactorTests(TestCase): def test_passes_reactor(self): 
mock_reactor = mock.Mock() self.assertEqual(default_reactor(mock_reactor), mock_reactor) def test_uses_default_reactor(self): from twisted.internet import reactor self.assertEqual(default_reactor(None), reactor) class DefaultPoolTests(TestCase): def setUp(self): set_global_pool(None) pool_patcher = mock.patch('treq._utils.HTTPConnectionPool') self.HTTPConnectionPool = pool_patcher.start() self.addCleanup(pool_patcher.stop) self.reactor = mock.Mock() def test_persistent_false(self): self.assertEqual( default_pool(self.reactor, None, False), self.HTTPConnectionPool.return_value ) self.HTTPConnectionPool.assert_called_once_with( self.reactor, persistent=False ) def test_pool_none_persistent_none(self): self.assertEqual( default_pool(self.reactor, None, None), self.HTTPConnectionPool.return_value ) self.HTTPConnectionPool.assert_called_once_with( self.reactor, persistent=True ) def test_pool_none_persistent_true(self): self.assertEqual( default_pool(self.reactor, None, True), self.HTTPConnectionPool.return_value ) self.HTTPConnectionPool.assert_called_once_with( self.reactor, persistent=True ) def test_cached_global_pool(self): pool1 = default_pool(self.reactor, None, None) self.HTTPConnectionPool.return_value = mock.Mock() pool2 = default_pool(self.reactor, None, True) self.assertEqual(pool1, pool2) def test_specified_pool(self): pool = mock.Mock() self.assertEqual( default_pool(self.reactor, pool, None), pool ) self.HTTPConnectionPool.assert_not_called() treq-15.1.0/treq/test/util.py0000644000076600000240000000236212545372057016316 0ustar glyphstaff00000000000000import os import platform import mock import twisted from twisted.internet import reactor from twisted.internet.task import Clock from twisted.trial.unittest import TestCase from twisted.python.failure import Failure from twisted.python.versions import Version DEBUG = os.getenv("TREQ_DEBUG", False) == "true" is_pypy = platform.python_implementation() == 'PyPy' if twisted.version < Version('twisted', 13, 1, 
0): class TestCase(TestCase): def successResultOf(self, d): results = [] d.addBoth(results.append) if isinstance(results[0], Failure): results[0].raiseException() return results[0] def failureResultOf(self, d, *errorTypes): results = [] d.addBoth(results.append) if not isinstance(results[0], Failure): self.fail("Expected one of {0} got {1}.".format( errorTypes, results[0])) self.assertTrue(results[0].check(*errorTypes)) return results[0] def with_clock(fn): def wrapper(*args, **kwargs): clock = Clock() with mock.patch.object(reactor, 'callLater', clock.callLater): return fn(*(args + (clock,)), **kwargs) return wrapper treq-15.1.0/treq/testing.py0000644000076600000240000004045012634616707016041 0ustar glyphstaff00000000000000""" In-memory version of treq for testing. """ from __future__ import absolute_import, division, print_function from six import text_type, PY3 from contextlib import contextmanager from functools import wraps from twisted.test.proto_helpers import MemoryReactor from twisted.test import iosim from twisted.internet.address import IPv4Address from twisted.internet.defer import succeed from twisted.internet.interfaces import ISSLTransport from twisted.python.urlpath import URLPath from twisted.web.client import Agent from twisted.web.resource import Resource from twisted.web.server import Site from twisted.web.iweb import IAgent, IBodyProducer from zope.interface import directlyProvides, implementer import treq from treq.client import HTTPClient @implementer(IAgent) class RequestTraversalAgent(object): """ :obj:`IAgent` implementation that issues an in-memory request rather than going out to a real network socket. """ def __init__(self, rootResource): """ :param rootResource: The twisted IResource at the root of the resource tree. 
""" self._memoryReactor = MemoryReactor() self._realAgent = Agent(reactor=self._memoryReactor) self._rootResource = rootResource self._pumps = set() def request(self, method, uri, headers=None, bodyProducer=None): """ Implement IAgent.request. """ # We want to use Agent to parse the HTTP response, so let's ask it to # make a request against our in-memory reactor. response = self._realAgent.request(method, uri, headers, bodyProducer) # If the request has already finished, just propagate the result. In # reality this would only happen in failure, but if the agent ever adds # a local cache this might be a success. already_called = [] def check_already_called(r): already_called.append(r) return r response.addBoth(check_already_called) if already_called: return response # That will try to establish an HTTP connection with the reactor's # connectTCP method, and MemoryReactor will place Agent's factory into # the tcpClients list. Alternately, it will try to establish an HTTPS # connection with the reactor's connectSSL method, and MemoryReactor # will place it into the sslClients list. We'll extract that. if PY3: scheme = URLPath.fromBytes(uri).scheme else: scheme = URLPath.fromString(uri).scheme if scheme == b"https": host, port, factory, context_factory, timeout, bindAddress = ( self._memoryReactor.sslClients[-1]) else: host, port, factory, timeout, bindAddress = ( self._memoryReactor.tcpClients[-1]) serverAddress = IPv4Address('TCP', '127.0.0.1', port) clientAddress = IPv4Address('TCP', '127.0.0.1', 31337) # Create the protocol and fake transport for the client and server, # using the factory that was passed to the MemoryReactor for the # client, and a Site around our rootResource for the server. 
serverProtocol = Site(self._rootResource).buildProtocol(None) serverTransport = iosim.FakeTransport( serverProtocol, isServer=True, hostAddress=serverAddress, peerAddress=clientAddress) clientProtocol = factory.buildProtocol(None) clientTransport = iosim.FakeTransport( clientProtocol, isServer=False, hostAddress=clientAddress, peerAddress=serverAddress) # Twisted 13.2 compatibility. serverTransport.abortConnection = serverTransport.loseConnection clientTransport.abortConnection = clientTransport.loseConnection if scheme == b"https": # Provide ISSLTransport on both transports, so everyone knows that # this is HTTPS. directlyProvides(serverTransport, ISSLTransport) directlyProvides(clientTransport, ISSLTransport) # Make a pump for wiring the client and server together. pump = iosim.connect( serverProtocol, serverTransport, clientProtocol, clientTransport) self._pumps.add(pump) return response def flush(self): """ Flush all data between pending client/server pairs. This is only necessary if a :obj:`Resource` under test returns :obj:`NOT_DONE_YET` from its ``render`` method, making a response asynchronous. In that case, after each write from the server, :meth:`flush` must be called so the client can see it. """ old_pumps = self._pumps new_pumps = self._pumps = set() for p in old_pumps: p.flush() if p.clientIO.disconnected and p.serverIO.disconnected: continue new_pumps.add(p) @implementer(IBodyProducer) class _SynchronousProducer(object): """ A partial implementation of an :obj:`IBodyProducer` which produces its entire payload immediately. There is no way to access an instance of this object from :obj:`RequestTraversalAgent` or :obj:`StubTreq`, or even a :obj:`Resource` passed to :obj:`StubTreq`. This does not implement the :func:`IBodyProducer.stopProducing` method, because that is very difficult to trigger.
(The request from RequestTraversalAgent would have to be canceled while it is still in the transmitting state), and the intent is to use RequestTraversalAgent to make synchronous requests. """ def __init__(self, body): """ Create a synchronous producer with some bytes. """ self.body = body msg = ("StubTreq currently only supports url-encodable types, bytes, " "or unicode as data.") assert isinstance(body, (bytes, text_type)), msg if isinstance(body, text_type): self.body = body.encode('utf-8') self.length = len(self.body) def startProducing(self, consumer): """ Immediately produce all data. """ consumer.write(self.body) return succeed(None) def _reject_files(f): """ Decorator that rejects the 'files' keyword argument to the request functions, because that is not handled by this yet. """ @wraps(f) def wrapper(*args, **kwargs): if 'files' in kwargs: raise AssertionError("StubTreq cannot handle files.") return f(*args, **kwargs) return wrapper class StubTreq(object): """ A fake version of the treq module that can be used for testing, providing all the function calls exposed in treq.__all__. :ivar resource: A :obj:`Resource` object that provides the fake responses. """ def __init__(self, resource): """ Construct a client, and pass through client methods and/or treq.content functions. """ _agent = RequestTraversalAgent(resource) _client = HTTPClient(agent=_agent, data_to_body_producer=_SynchronousProducer) for function_name in treq.__all__: function = getattr(_client, function_name, None) if function is None: function = getattr(treq, function_name) else: function = _reject_files(function) setattr(self, function_name, function) self.flush = _agent.flush class StringStubbingResource(Resource): """ A resource that takes a callable with 5 parameters ``(method, url, params, headers, data)`` and returns ``(code, headers, body)``. The resource uses the callable to return a real response as a result of a request.
The parameters for the callable are:: :param bytes method: An HTTP method :param bytes url: The full URL of the request :param dict params: A dictionary of query parameters mapping query keys to lists of values (sorted alphabetically) :param dict headers: A dictionary of headers mapping header keys to a list of header values (sorted alphabetically) :param str data: The request body. :return: a ``tuple`` of (code, headers, body) where the code is the HTTP status code, the headers is a dictionary of bytes (unlike the `headers` parameter, which is a dictionary of lists), and body is a string that will be returned as the response body. If there is a stubbing error, the return value is undefined (if an exception is raised, :obj:`Resource` will just eat it and return 500 in its place). The callable, or whoever creates the callable, should have a way to handle error reporting. """ isLeaf = True def __init__(self, get_response_for): """ See ``StringStubbingResource``. """ Resource.__init__(self) self._get_response_for = get_response_for def render(self, request): """ Produce a response according to the stubs provided. """ params = request.args headers = {} for k, v in request.requestHeaders.getAllRawHeaders(): headers[k] = v for dictionary in (params, headers): for k in dictionary: dictionary[k] = sorted(dictionary[k]) # The incoming request does not have the absoluteURI property, because # an incoming request is an IRequest, not an IClientRequest, so # the absolute URI needs to be synthesized. # But request.URLPath() only returns the scheme and hostname, because # that is the URL for this resource (because this resource handles # everything from the root on down).
# So we need to add the request.path (not request.uri, which includes # the query parameters) absoluteURI = str(request.URLPath().click(request.path)) status_code, headers, body = self._get_response_for( request.method, absoluteURI, params, headers, request.content.read()) request.setResponseCode(status_code) for k, v in headers.items(): request.setHeader(k, v) return body def _maybeEncode(someStr): """ Encode `someStr` to ASCII if required. """ if isinstance(someStr, text_type): return someStr.encode('ascii') return someStr class HasHeaders(object): """ Since Twisted adds headers to a request, such as the host and the content length, it's necessary to test whether request headers CONTAIN the expected headers (the ones that are not automatically added by Twisted). This wraps a set of headers, and can be used in an equality test against a superset of the provided headers. The header keys are lowercased, and keys and values are compared in their bytes-encoded forms. """ def __init__(self, headers): self._headers = dict([(_maybeEncode(k).lower(), _maybeEncode(v)) for k, v in headers.items()]) def __repr__(self): return "HasHeaders({0})".format(repr(self._headers)) def __eq__(self, other_headers): compare_to = dict([(_maybeEncode(k).lower(), _maybeEncode(v)) for k, v in other_headers.items()]) return (set(self._headers.keys()).issubset(set(compare_to.keys())) and all([set(v).issubset(set(compare_to[k])) for k, v in self._headers.items()])) def __ne__(self, other_headers): return not self.__eq__(other_headers) class RequestSequence(object): """ For an example usage, see :meth:`RequestSequence.consume`. Takes a sequence of:: [((method, url, params, headers, data), (code, headers, body)), ...] Expects the requests to arrive in sequence order. If there are no more responses, or the request's parameters do not match the next item's expected request parameters, raises :obj:`AssertionError`.
For the expected request arguments:: - ``method`` should be `bytes` normalized to lowercase. - ``url`` should be normalized as per the transformations in https://en.wikipedia.org/wiki/URL_normalization that (usually) preserve semantics. A url to `http://something-that-looks-like-a-directory` would be normalized to `http://something-that-looks-like-a-directory/` and a url to `http://something-that-looks-like-a-page/page.html` remains unchanged. - ``params`` is a dictionary mapping `bytes` to `lists` of `bytes` - ``headers`` is a dictionary mapping `bytes` to `lists` of `bytes` - note that :obj:`twisted.web.client.Agent` may add its own headers though, which are not guaranteed (for instance, `user-agent` or `content-length`), so it's better to use some kind of matcher like :obj:`HasHeaders`. - ``data`` is a `bytes` For the response:: - ``code`` is an integer representing the HTTP status code to return - ``headers`` is a dictionary mapping `bytes` to `bytes` or `lists` of `bytes` - ``body`` is a `bytes` :ivar list sequence: The sequence of expected request arguments mapped to stubbed responses :ivar async_failure_reporter: A callable that takes a single message reporting failures - it's asynchronous because it cannot just raise an exception - if it does, :obj:`Resource.render` will just convert that into a 500 response, and there will be no other failure reporting mechanism. """ def __init__(self, sequence, async_failure_reporter): self._sequence = sequence self._async_reporter = async_failure_reporter def consumed(self): """ :return: `bool` representing whether the entire sequence has been consumed. This is useful in tests to assert that the expected requests have all been made. 
""" return len(self._sequence) == 0 @contextmanager def consume(self, sync_failure_reporter): """ Usage:: sequence_stubs = RequestSequence([...]) stub_treq = StubTreq(StringStubbingResource(sequence_stubs)) with sequence_stubs.consume(self.fail): # self = unittest.TestCase stub_treq.get('http://fakeurl.com') stub_treq.get('http://another-fake-url.com') If there are still remaining expected requests to be made in the sequence, fails the provided test case. :param sync_failure_reporter: A callable that takes a single message reporting failures. This can just raise an exception - it does not need to be asynchronous, since the exception would not get raised within a Resource. :return: a context manager that can be used to ensure all expected requests have been made. """ yield if not self.consumed(): sync_failure_reporter("\n".join( ["Not all expected requests were made. Still expecting:"] + ["- {0}(url={1}, params={2}, headers={3}, data={4})".format( *expected) for expected, _ in self._sequence])) def __call__(self, method, url, params, headers, data): """ :return: the next response in the sequence, provided that the parameters match the next in the sequence. 
""" if len(self._sequence) == 0: self._async_reporter( "No more requests expected, but request {0!r} made.".format( (method, url, params, headers, data))) return (500, {}, "StubbingError") expected, response = self._sequence[0] e_method, e_url, e_params, e_headers, e_data = expected checks = [ (e_method == method.lower(), "method"), (e_url == url, "url"), (e_params == params, 'parameters'), (e_headers == headers, "headers"), (e_data == data, "data") ] mismatches = [param for success, param in checks if not success] if mismatches: self._async_reporter( "\nExpected the next request to be: {0!r}" "\nGot request : {1!r}\n" "\nMismatches: {2!r}" .format(expected, (method, url, params, headers, data), mismatches)) return (500, {}, "StubbingError") self._sequence = self._sequence[1:] return response treq-15.1.0/treq.egg-info/0000755000076600000240000000000012634652643015500 5ustar glyphstaff00000000000000treq-15.1.0/treq.egg-info/dependency_links.txt0000644000076600000240000000000112634652643021546 0ustar glyphstaff00000000000000 treq-15.1.0/treq.egg-info/pbr.json0000644000076600000240000000005712550177120017145 0ustar glyphstaff00000000000000{"is_release": false, "git_version": "ca78637"}treq-15.1.0/treq.egg-info/PKG-INFO0000644000076600000240000000571312634652643016603 0ustar glyphstaff00000000000000Metadata-Version: 1.1 Name: treq Version: 15.1.0 Summary: A requests-like API built on top of twisted.web's Agent Home-page: http://github.com/twisted/treq Author: Amber Brown Author-email: hawkowl@twistedmatrix.com License: MIT/X Description: treq ==== |pypi|_ |build|_ |coverage|_ ``treq`` is an HTTP library inspired by `requests `_ but written on top of `Twisted `_'s `Agents `_. It provides a simple, higher level API for making HTTP requests when using Twisted. .. code-block:: python >>> from treq import get >>> def done(response): ... print response.code ... 
reactor.stop() >>> get("http://www.github.com").addCallback(done) >>> from twisted.internet import reactor >>> reactor.run() 200 For more info `read the docs `_. Contribute ========== ``treq`` is hosted on `GitHub `_. Feel free to fork and send contributions over. Developing ========== Install dependencies: :: pip install -r requirements-dev.txt Optionally install PyOpenSSL: :: pip install PyOpenSSL Run Tests (unit & integration): :: trial treq Lint: :: pep8 treq pyflakes treq Build docs: :: cd docs; make html .. |build| image:: https://secure.travis-ci.org/twisted/treq.svg?branch=master .. _build: http://travis-ci.org/twisted/treq .. |coverage| image:: https://codecov.io/github/twisted/treq/coverage.svg?branch=master .. _coverage: https://codecov.io/github/twisted/treq .. |pypi| image:: http://img.shields.io/pypi/v/treq.svg .. _pypi: https://pypi.python.org/pypi/treq Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Framework :: Twisted Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy treq-15.1.0/treq.egg-info/requires.txt0000644000076600000240000000012512634652643020076 0ustar glyphstaff00000000000000requests >= 2.1.0 service_identity >= 14.0.0 six Twisted >= 14.0.2 pyOpenSSL >= 0.13 treq-15.1.0/treq.egg-info/SOURCES.txt0000644000076600000240000000207312634652643017366 0ustar glyphstaff00000000000000LICENSE MANIFEST.in README.rst requirements-dev.txt setup.cfg setup.py tox.ini tox2travis.py docs/Makefile docs/api.rst docs/conf.py docs/howto.rst docs/index.rst 
docs/make.bat docs/_static/.keepme docs/examples/_utils.py docs/examples/basic_auth.py docs/examples/basic_get.py docs/examples/basic_post.py docs/examples/disable_redirects.py docs/examples/download_file.py docs/examples/query_params.py docs/examples/redirects.py docs/examples/response_history.py docs/examples/using_cookies.py treq/__init__.py treq/_utils.py treq/_version treq/api.py treq/auth.py treq/client.py treq/content.py treq/multipart.py treq/response.py treq/testing.py treq.egg-info/PKG-INFO treq.egg-info/SOURCES.txt treq.egg-info/dependency_links.txt treq.egg-info/pbr.json treq.egg-info/requires.txt treq.egg-info/top_level.txt treq/test/__init__.py treq/test/test_api.py treq/test/test_auth.py treq/test/test_client.py treq/test/test_content.py treq/test/test_multipart.py treq/test/test_response.py treq/test/test_testing.py treq/test/test_treq_integration.py treq/test/test_utils.py treq/test/util.pytreq-15.1.0/treq.egg-info/top_level.txt0000644000076600000240000000000512634652643020225 0ustar glyphstaff00000000000000treq
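The header-matching rule that `HasHeaders` in `treq/testing.py` implements — lowercase the keys, byte-encode keys and values, then check that the expected headers are a subset of the actual ones (since Twisted adds headers like `Host` and `Content-Length` on its own) — can be sketched with the standard library alone. The helper name `headers_contain` and the sample dictionaries below are illustrative, not part of treq:

```python
def _encode(s):
    # Byte-encode str values so str and bytes headers compare equal.
    return s.encode('ascii') if isinstance(s, str) else s


def headers_contain(expected, actual):
    """Return True if every expected header (case-insensitive key,
    bytes-compared values) appears among the actual headers."""
    expected = {_encode(k).lower(): set(map(_encode, v))
                for k, v in expected.items()}
    actual = {_encode(k).lower(): set(map(_encode, v))
              for k, v in actual.items()}
    # Subset comparison, never exact equality, so automatically added
    # headers on the actual side do not cause a mismatch.
    return all(k in actual and v.issubset(actual[k])
               for k, v in expected.items())


# Extra headers on the actual side are ignored; a missing expected
# header is a mismatch.
assert headers_contain({'X-Token': ['abc']},
                       {b'x-token': [b'abc'], b'host': [b'example.com']})
assert not headers_contain({'X-Token': ['abc']}, {b'host': [b'example.com']})
```

This mirrors why the docstrings above recommend wrapping expected headers in `HasHeaders` rather than comparing header dictionaries directly.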