././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3231888 treq-24.9.1/0000755000175100001660000000000014673126570012200 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/.coveragerc0000644000175100001660000000020614673126560014316 0ustar00runnerdocker[run] source = treq branch = True [paths] source = src/ .tox/*/lib/python*/site-packages/ .tox/pypy*/site-packages/ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/CHANGELOG.rst0000644000175100001660000002653614673126560014234 0ustar00runnerdocker========= Changelog ========= .. currentmodule:: treq .. default-role:: any .. towncrier release notes start 24.9.1 (2024-09-19) =================== Bugfixes -------- - treq has vendored its dependency on the ``multipart`` library to avoid import conflicts with ``python-multipart``; it should now be installable alongside that library. (`#399 `__) 24.9.0 (2024-09-17) =================== Features -------- - treq now ships type annotations. (`#366 `__) - The new :mod:`treq.cookies` module provides helper functions for working with `http.cookiejar.Cookie` and `CookieJar` objects. (`#384 `__) - Python 3.13 is now supported. (`#391 `__) Bugfixes -------- - :mod:`treq.content.text_content()` no longer generates deprecation warnings due to use of the ``cgi`` module. (`#355 `__) Deprecations and Removals ------------------------- - Mixing the *json* argument with *files* or *data* now raises `TypeError`. (`#297 `__) - Passing non-string (`str` or `bytes`) values as part of a dict to the *headers* argument now results in a `TypeError`, as does passing any collection other than a `dict` or `Headers` instance. (`#302 `__) - Support for Python 3.7 and PyPy 3.8, which have reached end of support, has been dropped. 
(`#378 `__) Misc ---- - `#336 `__, `#382 `__, `#395 `__ 23.11.0 (2023-11-03) ==================== Features -------- - When the collector passed to ``treq.collect(response, collector)`` throws an exception, that error will now be returned to the caller of ``collect()`` via the result ``Deferred``, and the underlying HTTP transport will be closed. (`#347 `__) - Python 3.11 is now supported. (`#364 `__) - Python 3.12 is now supported. (`#375 `__) - PyPy 3.9 is now supported. (`#365 `__) - PyPy 3.10 is now supported. (`#374 `__) Deprecations and Removals ------------------------- - The minimum supported Twisted version has increased to 22.10.0. Older versions are no longer tested in CI. (`#374 `__) - Support for Python 3.6, which has reached end of support, has been dropped. (`#363 `__) - Support for Python 3.7, which reaches end of support 2023-06-27, is deprecated. This is the last release with support for Python 3.7. (`#361 `__) - Support for PyPy 3.7, which has reached end of support, has been removed. (`#365 `__) - Support for PyPy 3.8, which has reached end of support, is deprecated. This is the last release with support for PyPy 3.8. (`#374 `__) Misc ---- - `#349 `__, `#350 `__, `#352 `__ 22.2.0 (2022-02-08) =================== Features -------- - Python 3.10 and PyPy 3.8 are now supported. (`#338 `__) Bugfixes -------- - Address a regression introduced in Treq 22.1.0 that prevented transmission of cookies with requests to ports other than 80, including HTTPS (443). (`#343 `__) Deprecations and Removals ------------------------- - Support for Python 3.6, which has reached end of support, is deprecated. This is the last release with support for Python 3.6. (`#338 `__) 22.1.0 (2022-01-29) =================== Bugfixes -------- - Cookies specified as a dict were sent to every domain, not just the domain of the request, potentially exposing them on redirect. See `GHSA-fhpf-pp6p-55qc `_. 
(`#339 `__, CVE-2022-23607) 21.5.0 (2021-05-24) =================== Features -------- - PEP 517/518 ``build-system`` metadata is now provided in ``pyproject.toml``. (`#329 `__) Bugfixes -------- - ``treq.testing.StubTreq`` now persists ``twisted.web.server.Session`` instances between requests. (`#327 `__) Improved Documentation ---------------------- - The dependency on Sphinx required to build the documentation has been moved from the ``dev`` extra to the new ``docs`` extra. (`#296 `__) Deprecations and Removals ------------------------- - Support for Python 2.7 and 3.5 has been dropped. treq no longer depends on ``six`` or ``mock``. (`#318 `__) 21.1.0 (2021-01-14) =================== Features -------- - Support for Python 3.9: treq is now tested with CPython 3.9. (`#305 `__) - The *auth* parameter now accepts arbitrary text and `bytes` for usernames and passwords. Text is encoded as UTF-8, per :rfc:`7617`. Previously only ASCII was allowed. (`#268 `__) - treq produces a more helpful exception when passed a tuple of the wrong size in the *files* parameter. (`#299 `__) Bugfixes -------- - The *params* argument once more accepts non-ASCII ``bytes``, fixing a regression first introduced in treq 20.4.1. (`#303 `__) - treq request APIs no longer mutates a :class:`http_headers.Headers ` passed as the *headers* parameter when the *auth* parameter is also passed. (`#314 `__) - The agent returned by :func:`treq.auth.add_auth()` and :func:`treq.auth.add_basic_auth()` is now marked to provide :class:`twisted.web.iweb.IAgent`. (`#312 `__) - treq's package metadata has been updated to require ``six >= 1.13``, noting a dependency introduced in treq 20.9.0. (`#295 `__) Improved Documentation ---------------------- - The documentation of the *params* argument has been updated to more accurately describe its type-coercion behavior. (`#281 `__) - The :mod:`treq.auth` module has been documented. 
(`#313 `__) Deprecations and Removals ------------------------- - Support for Python 2.7, which has reached end of support, is deprecated. This is the last release with support for Python 2.7. (`#309 `__) - Support for Python 3.5, which has reached end of support, is deprecated. This is the last release with support for Python 3.5. (`#306 `__) - Deprecate tolerance of non-string values when passing headers as a dict. They have historically been silently dropped, but will raise TypeError in the next treq release. Also deprecate passing headers other than :class:`dict`, :class:`~twisted.web.http_headers.Headers`, or ``None``. Historically falsy values like ``[]`` or ``()`` were accepted. (`#294 `__) - treq request functions and methods like :func:`treq.get()` and :meth:`HTTPClient.post()` now issue a ``DeprecationWarning`` when passed unknown keyword arguments, rather than ignoring them. Mixing the *json* argument with *files* or *data* is also deprecated. These warnings will change to a ``TypeError`` in the next treq release. (`#297 `__) - The minimum supported Twisted version has increased to 18.7.0. Older versions are no longer tested in CI. (`#307 `__) 20.9.0 (2020-09-27) =================== Features -------- - The *url* parameter of :meth:`HTTPClient.request()` (and shortcuts like :meth:`~HTTPClient.get()`) now accept :class:`hyperlink.DecodedURL` and :class:`hyperlink.URL` in addition to :class:`str` and :class:`bytes`. (`#212 `__) - Compatibility with the upcoming Twisted 20.9.0 release (`#290 `__). Improved Documentation ---------------------- - An example of sending and receiving JSON has been added. (`#278 `__) 20.4.1 (2020-04-16) =================== Bugfixes -------- - Correct a typo in the treq 20.4.0 package metadata that prevented upload to PyPI (`pypa/twine#589 `__) 20.4.0 (2020-04-16) =================== Features -------- - Support for Python 3.8 and PyPy3: treq is now tested with these interpreters. 
(`#271 `__) Bugfixes -------- - `treq.client.HTTPClient.request()` and its aliases no longer raise `UnicodeEncodeError` when passed a Unicode *url* and non-empty *params*. Now the URL and query parameters are concatenated as documented. (`#264 `__) - In treq 20.3.0 the *params* argument didn't accept parameter names or values that contain the characters ``&`` or ``#``. Now these characters are properly escaped. (`#282 `__) Improved Documentation ---------------------- - The treq documentation has been revised to emphasize use of `treq.client.HTTPClient` over the module-level convenience functions in the `treq` module. (`#276 `__) 20.3.0 (2020-03-15) =================== Features -------- - Python 3.7 support. (`#228 `__) Bugfixes -------- - `treq.testing.RequestTraversalAgent` now passes its memory reactor to the `twisted.web.server.Site` it creates, preventing the ``Site`` from polluting the global reactor. (`#225 `__) - `treq.testing` no longer generates deprecation warnings about ``twisted.test.proto_helpers.MemoryReactor``. (`#253 `__) Improved Documentation ---------------------- - The ``download_file.py`` example has been updated to do a streaming download with *unbuffered=True*. (`#233 `__) - The *agent* parameter to `treq.request()` has been documented. (`#235 `__) - The type of the *headers* element of a response tuple passed to `treq.testing.RequestSequence` is now correctly documented as `str`. (`#237 `__) Deprecations and Removals ------------------------- - Drop support for Python 3.4. (`#240 `__) Misc ---- - `#247 `__, `#248 `__, `#249 `__ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/CONTRIBUTING.rst0000644000175100001660000000160114673126560014636 0ustar00runnerdockerDeveloping ========== This project uses `Tox 3 `_ to manage virtual environments. 
To run the tests:: tox -e py38-twisted_latest Lint:: tox -e flake8 Build docs:: tox -e docs firefox docs/html/index.html To do it all:: tox -p Release notes ------------- We use `towncrier`_ to manage our release notes. Basically, every pull request that has a user-visible effect should add a short file to the `changelog.d/ <./changelog.d>`_ directory describing the change, with a name like ``<issue number>.<category>.rst``. See `changelog.d/README.rst `_ for details. This way we can keep a good list of changes as we go, which makes the release manager happy, which means we get more frequent releases, which means your change gets into users’ hands faster. .. _towncrier: https://pypi.org/project/towncrier/ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/LICENSE0000644000175100001660000000207514673126560013210 0ustar00runnerdockerThis is the MIT license. Copyright (c) 2012-2014 David Reid Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
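Picking up the contributor workflow from CONTRIBUTING.rst above: a sketch of adding a towncrier news fragment for a hypothetical change. The issue number ``1234`` and the ``misc`` category are made up for illustration; check ``changelog.d/README.rst`` for the category names the project actually accepts.

```shell
# News fragments live in changelog.d/ and are named
# <issue number>.<category>.rst (both values below are hypothetical).
mkdir -p changelog.d
printf '%s\n' 'Clarified the request-cancellation docs.' > changelog.d/1234.misc.rst

# Each fragment is a short reStructuredText description of the change;
# towncrier collates the fragments into CHANGELOG.rst at release time.
cat changelog.d/1234.misc.rst
```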
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/MANIFEST.in0000644000175100001660000000053214673126560013735 0ustar00runnerdockerinclude pyproject.toml include *.rst include *.md include LICENSE include .coveragerc include src/treq/py.typed recursive-include docs * prune docs/_build prune docs/html exclude tox.ini exclude .github exclude .readthedocs.yml # This directory will be empty at release time. prune changelog.d global-exclude .DS_Store *.pyc *.pyo __pycache__ ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3231888 treq-24.9.1/PKG-INFO0000644000175100001660000000666614673126570013313 0ustar00runnerdockerMetadata-Version: 2.1 Name: treq Version: 24.9.1 Summary: High-level Twisted HTTP Client API Home-page: https://github.com/twisted/treq Author: David Reid Author-email: dreid@dreid.org Maintainer: Tom Most Maintainer-email: twm@freecog.net License: MIT/X Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Framework :: Twisted Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Programming Language :: Python :: 3.13 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Requires-Python: >=3.7 Description-Content-Type: text/x-rst License-File: LICENSE Requires-Dist: incremental Requires-Dist: requests>=2.1.0 Requires-Dist: hyperlink>=21.0.0 Requires-Dist: Twisted[tls]>=22.10.0 Requires-Dist: attrs Requires-Dist: typing_extensions>=3.10.0 Provides-Extra: dev 
Requires-Dist: pep8; extra == "dev" Requires-Dist: pyflakes; extra == "dev" Requires-Dist: httpbin==0.7.0; extra == "dev" Requires-Dist: werkzeug==2.0.3; extra == "dev" Provides-Extra: docs Requires-Dist: sphinx<7.0.0; extra == "docs" treq: High-level Twisted HTTP Client API ======================================== .. |pypi| image:: https://img.shields.io/pypi/v/treq.svg :alt: PyPI :target: https://pypi.org/project/treq/ .. |calver| image:: https://img.shields.io/badge/calver-YY.MM.MICRO-22bfda.svg :alt: calver: YY.MM.MICRO :target: https://calver.org/ .. |coverage| image:: https://coveralls.io/repos/github/twisted/treq/badge.svg :alt: Coverage :target: https://coveralls.io/github/twisted/treq .. |documentation| image:: https://readthedocs.org/projects/treq/badge/ :alt: Documentation :target: https://treq.readthedocs.org |pypi| |calver| |coverage| |documentation| ``treq`` is an HTTP library inspired by `requests `_ but written on top of `Twisted `_'s `Agents `_. It provides a simple, higher level API for making HTTP requests when using Twisted. .. code-block:: python >>> import treq >>> async def main(reactor): ... response = await treq.get("https://github.com") ... print(response.code) ... body = await response.text() ... print("<!DOCTYPE html>" in body) >>> from twisted.internet.task import react >>> react(main) 200 True For more info `read the docs `_. Contributing ------------ ``treq`` development is hosted on `GitHub `_. We welcome contributions: feel free to fork and send contributions over. See `CONTRIBUTING.rst `_ for more info. Code of Conduct --------------- Refer to the `Twisted code of conduct `_. Copyright and License --------------------- ``treq`` is made available under the MIT license. See `LICENSE <./LICENSE>`_ for legal details and copyright notices. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/README.rst0000644000175100001660000000373014673126560013671 0ustar00runnerdockertreq: High-level Twisted HTTP Client API ======================================== .. |pypi| image:: https://img.shields.io/pypi/v/treq.svg :alt: PyPI :target: https://pypi.org/project/treq/ .. |calver| image:: https://img.shields.io/badge/calver-YY.MM.MICRO-22bfda.svg :alt: calver: YY.MM.MICRO :target: https://calver.org/ .. |coverage| image:: https://coveralls.io/repos/github/twisted/treq/badge.svg :alt: Coverage :target: https://coveralls.io/github/twisted/treq .. |documentation| image:: https://readthedocs.org/projects/treq/badge/ :alt: Documentation :target: https://treq.readthedocs.org |pypi| |calver| |coverage| |documentation| ``treq`` is an HTTP library inspired by `requests `_ but written on top of `Twisted `_'s `Agents `_. It provides a simple, higher level API for making HTTP requests when using Twisted. .. code-block:: python >>> import treq >>> async def main(reactor): ... response = await treq.get("https://github.com") ... print(response.code) ... body = await response.text() ... print("<!DOCTYPE html>" in body) >>> from twisted.internet.task import react >>> react(main) 200 True For more info `read the docs `_. Contributing ------------ ``treq`` development is hosted on `GitHub `_. We welcome contributions: feel free to fork and send contributions over. See `CONTRIBUTING.rst `_ for more info. Code of Conduct --------------- Refer to the `Twisted code of conduct `_. Copyright and License --------------------- ``treq`` is made available under the MIT license. See `LICENSE <./LICENSE>`_ for legal details and copyright notices. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/SECURITY.md0000644000175100001660000000043014673126560013765 0ustar00runnerdocker# Security Policy treq is covered by the [Twisted project security policy](https://github.com/twisted/twisted/security/policy). You can [privately report via GitHub](https://github.com/twisted/treq/security/advisories/new), or via email as described in the policy linked above. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3141885 treq-24.9.1/docs/0000755000175100001660000000000014673126570013130 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/Makefile0000644000175100001660000001266414673126560014600 0ustar00runnerdocker# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." 
htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/treq.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/treq.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/treq" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/treq" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." 
info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3141885 treq-24.9.1/docs/_static/0000755000175100001660000000000014673126570014556 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/_static/.keepme0000644000175100001660000000000014673126560016012 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/api.rst0000644000175100001660000001116514673126560014436 0ustar00runnerdockerAPI Reference ============= This page lists all of the interfaces exposed by the `treq` package. Making Requests --------------- The :py:mod:`treq` module provides several convenience functions for making requests. These functions all create a default :py:class:`treq.client.HTTPClient` instance and pass their arguments to the appropriate :py:class:`~treq.client.HTTPClient` method. .. module:: treq .. autofunction:: request .. autofunction:: get .. autofunction:: head .. autofunction:: post .. autofunction:: put .. 
autofunction:: patch .. autofunction:: delete Accessing Content ----------------- .. autofunction:: collect .. autofunction:: content .. autofunction:: text_content .. autofunction:: json_content The HTTP Client =============== .. module:: treq.client :class:`treq.client.HTTPClient` has methods that match the signatures of the convenience request functions in the :mod:`treq` module. .. autoclass:: HTTPClient(agent, cookiejar=None, data_to_body_producer=IBodyProducer) .. automethod:: request .. automethod:: get .. automethod:: head .. automethod:: post .. automethod:: put .. automethod:: patch .. automethod:: delete Augmented Response Objects -------------------------- :func:`treq.request`, :func:`treq.get`, etc. return an object which provides :class:`twisted.web.iweb.IResponse`, plus a few additional convenience methods: .. module:: treq.response .. class:: _Response .. automethod:: collect .. automethod:: content .. automethod:: json .. automethod:: text .. automethod:: history .. automethod:: cookies Inherited from :class:`twisted.web.iweb.IResponse`: :ivar version: See :attr:`IResponse.version ` :ivar code: See :attr:`IResponse.code ` :ivar phrase: See :attr:`IResponse.phrase ` :ivar headers: See :attr:`IResponse.headers ` :ivar length: See :attr:`IResponse.length ` :ivar request: See :attr:`IResponse.request ` :ivar previousResponse: See :attr:`IResponse.previousResponse ` .. method:: deliverBody(protocol) See :meth:`IResponse.deliverBody() ` .. method:: setPreviousResponse(response) See :meth:`IResponse.setPreviousResponse() ` Authentication -------------- .. module:: treq.auth .. autofunction:: add_auth .. autofunction:: add_basic_auth .. autoexception:: UnknownAuthConfig Cookies ------- .. module:: treq.cookies .. autofunction:: scoped_cookie .. autofunction:: search Test Helpers ------------ .. module:: treq.testing The :mod:`treq.testing` module contains tools for in-memory testing of HTTP clients and servers. StubTreq Objects ~~~~~~~~~~~~~~~~ .. 
class:: treq.testing.StubTreq(resource) :class:`StubTreq` implements the same interface as the :mod:`treq` module or the :class:`~treq.client.HTTPClient` class, with the limitation that it does not support the ``files`` argument. .. method:: flush() Flush all data between pending client/server pairs. This is only necessary if a :obj:`Resource` under test returns :obj:`NOT_DONE_YET` from its ``render`` method, making a response asynchronous. In that case, after each write from the server, :meth:`flush()` must be called so the client can see it. As the methods on :class:`treq.client.HTTPClient`: .. method:: request See :func:`treq.request()`. .. method:: get See :func:`treq.get()`. .. method:: head See :func:`treq.head()`. .. method:: post See :func:`treq.post()`. .. method:: put See :func:`treq.put()`. .. method:: patch See :func:`treq.patch()`. .. method:: delete See :func:`treq.delete()`. RequestTraversalAgent Objects ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: treq.testing.RequestTraversalAgent :members: RequestSequence Objects ~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: treq.testing.RequestSequence :members: StringStubbingResource Objects ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: treq.testing.StringStubbingResource :members: HasHeaders Objects ~~~~~~~~~~~~~~~~~~ .. autoclass:: treq.testing.HasHeaders :members: MultiPartProducer Objects ------------------------- :class:`treq.multipart.MultiPartProducer` is used internally when making requests which involve files. .. automodule:: treq.multipart :members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/changelog.rst0000644000175100001660000000003614673126560015607 0ustar00runnerdocker.. 
include:: ../CHANGELOG.rst ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/conf.py0000644000175100001660000001770214673126560014435 0ustar00runnerdocker# -*- coding: utf-8 -*- # # treq documentation build configuration file, created by # sphinx-quickstart on Mon Dec 10 22:32:11 2012. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath("..")) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ["sphinx.ext.viewcode", "sphinx.ext.autodoc", "sphinx.ext.intersphinx"] # Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"] # The suffix of source filenames. source_suffix = ".rst" # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = "index" # General information about the project. project = "treq" copyright = "2014–2020 David Reid" # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The full version, including alpha/beta/rc tags. 
from treq import __version__ as release version = ".".join(release.split(".")[:2]) # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ["_build"] # The reST default role (used for this markup: `text`) to use for all documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = "sphinx" # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = "default" # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # "<project> v<release> documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. 
# html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ["_static"] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a <link> tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. 
htmlhelp_basename = "treqdoc" # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ("index", "treq.tex", "treq Documentation", "David Reid", "manual"), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [("index", "treq", "treq Documentation", ["David Reid"], 1)] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ( "index", "treq", "treq Documentation", "David Reid", "treq", "One line description of project.", "Miscellaneous", ), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. 
# texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' RTD_NEW_THEME = True intersphinx_mapping = { "python": ("https://docs.python.org/3/", None), "twisted": ("https://docs.twisted.org/en/stable/api/", None), "hyperlink": ("https://hyperlink.readthedocs.io/en/latest/", None), } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3161886 treq-24.9.1/docs/examples/0000755000175100001660000000000014673126570014746 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/_utils.py0000644000175100001660000000032414673126560016615 0ustar00runnerdockerfrom __future__ import print_function import treq def print_response(response): print(response.code, response.phrase) print(response.headers) return treq.text_content(response).addCallback(print) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/basic_auth.py0000644000175100001660000000043514673126560017423 0ustar00runnerdockerfrom twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get( 'https://httpbin.org/basic-auth/treq/treq', auth=('treq', 'treq') ) d.addCallback(print_response) return d react(main, []) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/basic_get.py0000644000175100001660000000033714673126560017242 0ustar00runnerdockerfrom twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get('https://httpbin.org/get') d.addCallback(print_response) return d react(main, []) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/basic_post.py0000644000175100001660000000040314673126560017442 
0ustar00runnerdockerfrom twisted.internet.task import react from _utils import print_response import treq def main(reactor): d = treq.post("https://httpbin.org/post", data={"form": "data"}) d.addCallback(print_response) return d react(main, []) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/basic_url.py0000644000175100001660000000065114673126560017264 0ustar00runnerdocker# -*- encoding: utf-8 -*- from hyperlink import DecodedURL from twisted.internet.task import react from _utils import print_response import treq def main(reactor): url = ( DecodedURL.from_text(u"https://httpbin.org") .child(u"get") # add path /get .add(u"foo", u"&") # add query ?foo=%26 ) print(url.to_text()) return treq.get(url).addCallback(print_response) react(main, []) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/custom_agent.py0000644000175100001660000000077414673126560020017 0ustar00runnerdockerfrom treq.client import HTTPClient from _utils import print_response from twisted.internet.task import react from twisted.web.client import Agent def make_custom_agent(reactor): return Agent(reactor, connectTimeout=42) def main(reactor, *args): agent = make_custom_agent(reactor) http_client = HTTPClient(agent) d = http_client.get( 'https://secure.example.net/area51', auth=('admin', "you'll never guess!")) d.addCallback(print_response) return d react(main, []) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/disable_redirects.py0000644000175100001660000000037514673126560020773 0ustar00runnerdockerfrom twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get('https://httpbin.org/redirect/1', allow_redirects=False) d.addCallback(print_response) return d react(main, []) 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/download_file.py0000644000175100001660000000057014673126560020127 0ustar00runnerdockerfrom twisted.internet.task import react import treq def download_file(reactor, url, destination_filename): destination = open(destination_filename, 'wb') d = treq.get(url, unbuffered=True) d.addCallback(treq.collect, destination.write) d.addBoth(lambda _: destination.close()) return d react(download_file, ['https://httpbin.org/get', 'download.txt']) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/iresource.py0000644000175100001660000000065014673126560017320 0ustar00runnerdockerimport json from zope.interface import implementer from twisted.web.resource import IResource @implementer(IResource) class JsonResource(object): isLeaf = True # NB: means getChildWithDefault will not be called def __init__(self, data): self.data = data def render(self, request): request.setHeader(b'Content-Type', b'application/json') return json.dumps(self.data).encode('utf-8') ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/json_post.py0000644000175100001660000000051414673126560017335 0ustar00runnerdockerfrom pprint import pprint from twisted.internet import defer from twisted.internet.task import react import treq @defer.inlineCallbacks def main(reactor): response = yield treq.post( 'https://httpbin.org/post', json={"msg": "Hello!"}, ) data = yield response.json() pprint(data) react(main, []) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/query_params.py0000644000175100001660000000226014673126560020027 0ustar00runnerdockerfrom twisted.internet.task import react from twisted.internet.defer import inlineCallbacks import treq @inlineCallbacks def main(reactor): print('List of 
tuples') resp = yield treq.get('https://httpbin.org/get', params=[('foo', 'bar'), ('baz', 'bax')]) content = yield resp.text() print(content) print('Single value dictionary') resp = yield treq.get('https://httpbin.org/get', params={'foo': 'bar', 'baz': 'bax'}) content = yield resp.text() print(content) print('Multi value dictionary') resp = yield treq.get('https://httpbin.org/get', params={b'foo': [b'bar', b'baz', b'bax']}) content = yield resp.text() print(content) print('Mixed value dictionary') resp = yield treq.get('https://httpbin.org/get', params={'foo': [1, 2, 3], 'bax': b'quux', b'bar': 'foo'}) content = yield resp.text() print(content) print('Preserved query parameters') resp = yield treq.get('https://httpbin.org/get?foo=bar', params={'baz': 'bax'}) content = yield resp.text() print(content) react(main, []) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/redirects.py0000644000175100001660000000034614673126560017306 0ustar00runnerdockerfrom twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get('https://httpbin.org/redirect/1') d.addCallback(print_response) return d react(main, []) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/response_history.py0000644000175100001660000000053714673126560020743 0ustar00runnerdockerfrom twisted.internet.task import react from _utils import print_response import treq def main(reactor, *args): d = treq.get('https://httpbin.org/redirect/1') def cb(response): print('Response history:') print(response.history()) return print_response(response) d.addCallback(cb) return d react(main, []) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/testing_seq.py0000644000175100001660000000377414673126560017657 0ustar00runnerdockerfrom twisted.internet import defer 
from twisted.trial.unittest import SynchronousTestCase from twisted.web import http from treq.testing import StubTreq, HasHeaders from treq.testing import RequestSequence, StringStubbingResource @defer.inlineCallbacks def make_a_request(treq): """ Make a request using treq. """ response = yield treq.get('http://an.example/foo', params={'a': 'b'}, headers={b'Accept': b'application/json'}) if response.code == http.OK: result = yield response.json() else: message = yield response.text() raise Exception("Got an error from the server: {}".format(message)) defer.returnValue(result) class MakeARequestTests(SynchronousTestCase): """ Test :func:`make_a_request()` using :mod:`treq.testing.RequestSequence`. """ def test_200_ok(self): """On a 200 response, return the response's JSON.""" req_seq = RequestSequence([ ((b'get', 'http://an.example/foo', {b'a': [b'b']}, HasHeaders({'Accept': ['application/json']}), b''), (http.OK, {b'Content-Type': b'application/json'}, b'{"status": "ok"}')) ]) treq = StubTreq(StringStubbingResource(req_seq)) with req_seq.consume(self.fail): result = self.successResultOf(make_a_request(treq)) self.assertEqual({"status": "ok"}, result) def test_418_teapot(self): """On an unexpected response code, raise an exception""" req_seq = RequestSequence([ ((b'get', 'http://an.example/foo', {b'a': [b'b']}, HasHeaders({'Accept': ['application/json']}), b''), (418, {b'Content-Type': b'text/plain'}, b"I'm a teapot!")) ]) treq = StubTreq(StringStubbingResource(req_seq)) with req_seq.consume(self.fail): failure = self.failureResultOf(make_a_request(treq)) self.assertEqual(u"Got an error from the server: I'm a teapot!", failure.getErrorMessage()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/examples/using_cookies.py0000644000175100001660000000074214673126560020163 0ustar00runnerdockerfrom twisted.internet.task import react from _utils import print_response import treq async def main(reactor): resp = 
await treq.get("https://httpbin.org/cookies/set?hello=world")
    jar = resp.cookies()
    [cookie] = treq.cookies.search(jar, domain="httpbin.org", name="hello")
    print("The server set our hello cookie to: {}".format(cookie.value))
    await treq.get("https://httpbin.org/cookies", cookies=jar).addCallback(
        print_response
    )


react(main)

treq-24.9.1/docs/howto.rst

Use Cases
=========

Handling Streaming Responses
----------------------------

In addition to `receiving responses `_ with :meth:`IResponse.deliverBody`, treq provides a helper function :py:func:`treq.collect` which takes a ``response`` and a single-argument function which will be called with all new data available from the response.

Much like :meth:`IProtocol.dataReceived`, :py:func:`treq.collect` knows nothing about the framing of your data and will simply call your collector function with any data that is currently available.

Here is an example which simply passes a file object's write method to :py:func:`treq.collect` to save the response body to a file.

.. literalinclude:: examples/download_file.py
    :linenos:
    :lines: 6-11

Full example: :download:`download_file.py <examples/download_file.py>`

URLs, URIs, and Hyperlinks
--------------------------

The *url* argument to :py:meth:`HTTPClient.request` accepts three URL representations:

- High-level: :class:`hyperlink.DecodedURL`
- Mid-level: :class:`str`
- Low-level: ASCII :class:`bytes` or :class:`hyperlink.URL`

The high-level :class:`~hyperlink.DecodedURL` form is useful when programmatically generating URLs. Here is an example that builds a URL that contains a ``&`` character, which is automatically escaped properly.

..
literalinclude:: examples/basic_url.py
    :linenos:
    :pyobject: main

Full example: :download:`basic_url.py <examples/basic_url.py>`

Query Parameters
----------------

:py:meth:`treq.HTTPClient.request` supports a ``params`` keyword argument which will be URL-encoded and added to the ``url`` argument in addition to any query parameters that may already exist.

The ``params`` argument may be either a ``dict`` or a ``list`` of ``(key, value)`` tuples.

If it is a ``dict`` then the values in the dict may either be scalar values or a ``list`` or ``tuple`` thereof. Scalar values mean ``str``, ``bytes``, or anything else — even ``None`` — which will be coerced to ``str``. Strings are UTF-8 encoded.

.. literalinclude:: examples/query_params.py
    :linenos:
    :lines: 7-37

Full example: :download:`query_params.py <examples/query_params.py>`

If you prefer a strictly-typed API, try :class:`hyperlink.DecodedURL`. Use its :meth:`~hyperlink.URL.add` and :meth:`~hyperlink.URL.set` methods to add query parameters without risk of accidental type coercion.

JSON
----

:meth:`HTTPClient.request() <treq.client.HTTPClient.request>` supports a *json* keyword argument that gives a data structure to serialize as JSON (using :func:`json.dumps()`). This also implies a ``Content-Type: application/json`` request header. The *json* parameter is mutually exclusive with *data*.

The :meth:`_Response.json()` method decodes a JSON response body. It buffers the whole response and decodes it with :func:`json.loads()`.

.. literalinclude:: examples/json_post.py
    :linenos:
    :pyobject: main

Full example: :download:`json_post.py <examples/json_post.py>`

Auth
----

HTTP Basic authentication as specified in :rfc:`2617` is easily supported by passing an ``auth`` keyword argument to any of the request functions. The ``auth`` argument should be a tuple of the form ``('username', 'password')``.

.. literalinclude:: examples/basic_auth.py
    :linenos:
    :lines: 7-15

Full example: :download:`basic_auth.py <examples/basic_auth.py>`

Redirects
---------

treq handles redirects by default. The following will print a 200 OK response.

..
literalinclude:: examples/redirects.py
    :linenos:
    :lines: 7-12

Full example: :download:`redirects.py <examples/redirects.py>`

You can easily disable redirects by passing ``allow_redirects=False`` to any of the request methods.

.. literalinclude:: examples/disable_redirects.py
    :linenos:
    :lines: 7-12

Full example: :download:`disable_redirects.py <examples/disable_redirects.py>`

You can even access the complete history of treq response objects by calling the :meth:`~treq.response._Response.history()` method on the response.

.. literalinclude:: examples/response_history.py
    :linenos:
    :lines: 7-15

Full example: :download:`response_history.py <examples/response_history.py>`

Cookies
-------

Cookies can be set by passing a ``dict`` or :class:`http.cookiejar.CookieJar` instance via the ``cookies`` keyword argument. Later cookies set by the server can be retrieved using the :py:meth:`~treq.response._Response.cookies()` method of the response.

The object returned by :py:meth:`~treq.response._Response.cookies()` supports the same key/value access as `requests cookies `_.

.. literalinclude:: examples/using_cookies.py
    :linenos:
    :lines: 7-20

Full example: :download:`using_cookies.py <examples/using_cookies.py>`

Customizing the Twisted Agent
-----------------------------

The main :py:mod:`treq` module has helper functions that automatically instantiate an instance of :py:class:`treq.client.HTTPClient`. You can create an instance of :py:class:`~treq.client.HTTPClient` directly in order to customize the parameters used to initialize it.

Internally, :py:class:`~treq.client.HTTPClient` wraps an instance of :py:class:`twisted.web.client.Agent`. When you create an instance of :py:class:`~treq.client.HTTPClient`, you must initialize it with an instance of :py:class:`~twisted.web.client.Agent`. This allows you to customize its behavior.

..
literalinclude:: examples/custom_agent.py
    :linenos:
    :lines: 6-19

Full example: :download:`custom_agent.py <examples/custom_agent.py>`

treq-24.9.1/docs/index.rst

treq: High-level Twisted HTTP Client API
========================================

Release v\ |release| (:doc:`What's new? <changelog>`).

`treq <https://github.com/twisted/treq>`_ depends on a recent Twisted and functions on Python 3.8+ (including PyPy).

Why?
----

`requests`_ by Kenneth Reitz is a wonderful library. I want the same ease of use when writing Twisted applications.

treq is, of course, not a perfect clone of `requests`_. I have tried to stay true to the do-what-I-mean spirit of the `requests`_ API and also kept the API familiar to users of `Twisted`_ and :class:`twisted.web.client.Agent`, on which treq is based.

.. _requests: https://requests.readthedocs.io/en/master/
.. _Twisted: https://twistedmatrix.com/

Quick Start
-----------

Installation

.. code-block:: console

    $ pip install treq

GET
+++

.. literalinclude:: examples/basic_get.py
    :pyobject: main

Full example: :download:`basic_get.py <examples/basic_get.py>`

POST
++++

.. literalinclude:: examples/basic_post.py
    :pyobject: main

Full example: :download:`basic_post.py <examples/basic_post.py>`

Why not 100% requests-alike?
----------------------------

When I started working on treq, I thought the API should look exactly like `requests`_, except that anything involving the network would return a :class:`~twisted.internet.defer.Deferred`.

Over time, while attempting to mimic the `requests`_ API, it became clear that not enough code could be shared between `requests`_ and treq for it to be worth the effort to translate many of the usage patterns from `requests`_.

With the current version of treq I have tried to keep the API simple, yet remain familiar to users of Twisted and its lower-level HTTP libraries.
Feature Parity with Requests ---------------------------- Even though mimicking the `requests`_ API is not a goal, supporting most of its features is. Here is a list of `requests`_ features and their status in treq. +----------------------------------+----------+----------+ | | requests | treq | +----------------------------------+----------+----------+ | International Domains and URLs | yes | yes | +----------------------------------+----------+----------+ | Keep-Alive & Connection Pooling | yes | yes | +----------------------------------+----------+----------+ | Sessions with Cookie Persistence | yes | yes | +----------------------------------+----------+----------+ | Browser-style SSL Verification | yes | yes | +----------------------------------+----------+----------+ | Basic Authentication | yes | yes | +----------------------------------+----------+----------+ | Digest Authentication | yes | no | +----------------------------------+----------+----------+ | Elegant Key/Value Cookies | yes | yes | +----------------------------------+----------+----------+ | Automatic Decompression | yes | yes | +----------------------------------+----------+----------+ | Unicode Response Bodies | yes | yes | +----------------------------------+----------+----------+ | Multipart File Uploads | yes | yes | +----------------------------------+----------+----------+ | Connection Timeouts | yes | yes | +----------------------------------+----------+----------+ | HTTP(S) Proxy Support | yes | no | +----------------------------------+----------+----------+ | .netrc support | yes | no | +----------------------------------+----------+----------+ | Python 3.x | yes | yes | +----------------------------------+----------+----------+ Table of Contents ----------------- .. 
toctree:: :maxdepth: 3 howto testing api changelog Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/make.bat0000644000175100001660000001174414673126560014543 0ustar00runnerdocker@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . set I18NSPHINXOPTS=%SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. 
goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\treq.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\treq.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. 
echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) :end ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/docs/testing.rst0000644000175100001660000000641414673126560015343 0ustar00runnerdockerTesting Helpers =============== The :mod:`treq.testing` module provides some tools for testing both HTTP clients which use the treq API and implementations of the `Twisted Web resource model `_. Writing tests for HTTP clients ------------------------------ The :class:`~treq.testing.StubTreq` class implements the :mod:`treq` module interface (:func:`treq.get()`, :func:`treq.post()`, etc.) but runs all I/O via a :class:`~twisted.internet.testing.MemoryReactor`. 
It wraps a :class:`twisted.web.resource.IResource` provider which handles each request.

You can wrap a pre-existing `IResource` provider, or write your own. For example, the :class:`twisted.web.resource.ErrorPage` resource can produce an arbitrary HTTP status code, and :class:`twisted.web.static.File` can serve files or directories. You can also achieve custom responses by writing trivial resources yourself:

.. literalinclude:: examples/iresource.py
    :linenos:
    :pyobject: JsonResource

However, those resources don't assert anything about the request. The :class:`~treq.testing.RequestSequence` and :class:`~treq.testing.StringStubbingResource` classes make it easy to construct a resource which encodes the expected request and response pairs. Do note that most parameters to these functions must be bytes, so it's safest to use the ``b''`` string syntax. For example:

.. literalinclude:: examples/testing_seq.py
    :linenos:

This may be run with ``trial testing_seq.py``. Download: :download:`testing_seq.py <examples/testing_seq.py>`.

Loosely matching the request
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you don't care about certain parts of the request, you can pass :data:`unittest.mock.ANY`, which compares equal to anything. This sequence matches a single GET request with an empty body and any URL, parameters, or headers:

.. code-block:: python

    from unittest.mock import ANY

    RequestSequence([
        ((b'get', ANY, ANY, ANY, b''), (200, {}, b'ok'))
    ])

If you care about headers, use :class:`~treq.testing.HasHeaders` to make assertions about the headers present in the request. It compares equal to a superset of the headers specified, which helps make your test robust to changes in treq or Agent. Right now treq adds the ``Accept-Encoding: gzip`` header, but as support for additional compression methods is added, this may change.

Writing tests for Twisted Web resources
---------------------------------------

Since :class:`~treq.testing.StubTreq` wraps any resource, you can use it to test your server-side code as well.
This is superior to calling your resource's methods directly or passing mock objects, since it uses a real :class:`~twisted.web.client.Agent` to generate the request and a real :class:`~twisted.web.server.Site` to process the response. Thus, the ``request`` object your code interacts with is a *real* :class:`twisted.web.server.Request` and behaves the same as it would in production. Note that if your resource returns :data:`~twisted.web.server.NOT_DONE_YET` you must keep a reference to the :class:`~treq.testing.RequestTraversalAgent` and call its :meth:`~treq.testing.RequestTraversalAgent.flush()` method to spin the memory reactor once the server writes additional data before the client will receive it. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/pyproject.toml0000644000175100001660000000407414673126560015120 0ustar00runnerdocker[build-system] requires = [ "setuptools >= 35.0.2", "wheel >= 0.29.0", "incremental >= 21.3.0", ] build-backend = "setuptools.build_meta" [tool.towncrier] package = "treq" package_dir = "src" filename = "CHANGELOG.rst" directory = "changelog.d" title_format = "{version} ({project_date})" issue_format = "`#{issue} `__" [tool.ruff] line-length = 88 [tool.mypy] namespace_packages = true plugins = "mypy_zope:plugin" check_untyped_defs = true disallow_incomplete_defs = true disallow_untyped_defs = true no_implicit_optional = true show_column_numbers = true show_error_codes = true strict_optional = true warn_no_return = true warn_redundant_casts = true warn_return_any = true warn_unreachable = true warn_unused_ignores = true disallow_any_decorated = false disallow_any_explicit = false disallow_any_expr = false disallow_any_generics = false disallow_any_unimported = false disallow_subclassing_any = false disallow_untyped_calls = false disallow_untyped_decorators = false strict_equality = false [[tool.mypy.overrides]] module = [ "treq.content", ] disallow_untyped_defs = true 
[[tool.mypy.overrides]] module = [ "treq.api", "treq.auth", "treq.client", "treq.multipart", "treq.response", "treq.testing", "treq.test.test_api", "treq.test.test_auth", "treq.test.test_client", "treq.test.test_content", "treq.test.test_multipart", "treq.test.test_response", "treq.test.test_testing", "treq.test.test_treq_integration", "treq.test.util", ] disallow_untyped_defs = false check_untyped_defs = false [[tool.mypy.overrides]] module = [ "treq.test.local_httpbin.child", "treq.test.local_httpbin.parent", "treq.test.local_httpbin.shared", "treq.test.local_httpbin.test.test_child", "treq.test.local_httpbin.test.test_parent", "treq.test.local_httpbin.test.test_shared", ] disallow_untyped_defs = false check_untyped_defs = false ignore_missing_imports = true [[tool.mypy.overrides]] module = [ "treq._multipart", ] disallow_untyped_defs = false ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3231888 treq-24.9.1/setup.cfg0000644000175100001660000000004614673126570014021 0ustar00runnerdocker[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/setup.py0000644000175100001660000000372214673126560013715 0ustar00runnerdockerfrom setuptools import find_packages, setup classifiers = [ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Framework :: Twisted", "Programming Language :: Python", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", ] if __name__ == "__main__": with open("README.rst") as f: readme = f.read() 
setup( name="treq", packages=find_packages("src"), package_dir={"": "src"}, setup_requires=["incremental"], use_incremental=True, python_requires=">=3.7", install_requires=[ "incremental", "requests >= 2.1.0", "hyperlink >= 21.0.0", "Twisted[tls] >= 22.10.0", # For #11635 "attrs", "typing_extensions >= 3.10.0", ], extras_require={ "dev": [ "pep8", "pyflakes", "httpbin==0.7.0", "werkzeug==2.0.3", ], "docs": [ "sphinx<7.0.0", # Removal of 'style' key breaks RTD. ], }, package_data={"treq": ["py.typed"]}, author="David Reid", author_email="dreid@dreid.org", maintainer="Tom Most", maintainer_email="twm@freecog.net", classifiers=classifiers, description="High-level Twisted HTTP Client API", license="MIT/X", url="https://github.com/twisted/treq", long_description=readme, long_description_content_type="text/x-rst", ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3101885 treq-24.9.1/src/0000755000175100001660000000000014673126570012767 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3181887 treq-24.9.1/src/treq/0000755000175100001660000000000014673126570013742 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/__init__.py0000644000175100001660000000061014673126560016047 0ustar00runnerdockerfrom treq.api import delete, get, head, patch, post, put, request from treq.content import collect, content, json_content, text_content from ._version import __version__ as _version __version__: str = _version.base() __all__ = [ "head", "get", "post", "put", "patch", "delete", "request", "collect", "content", "text_content", "json_content", ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/_agentspy.py0000644000175100001660000000642014673126560016306 0ustar00runnerdocker# Copyright (c) The treq Authors. # See LICENSE for details. 
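The ``_agentspy`` module that begins here records each call to ``IAgent.request()`` through a callback, pairing the spy object with the list its records land in. The shape of that pattern can be sketched framework-free; the names ``CallRecord``, ``CallSpy``, and ``call_spy`` below are illustrative stand-ins, not part of treq's API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass(frozen=True)
class CallRecord:
    """One recorded call, loosely mirroring RequestRecord below."""

    method: bytes
    uri: bytes


class CallSpy:
    """Validate arguments eagerly and hand each record to a callback."""

    def __init__(self, callback: Callable[[CallRecord], None]) -> None:
        self._callback = callback

    def request(self, method: bytes, uri: bytes) -> CallRecord:
        # Fail fast on misuse, as _AgentSpy.request() does, so that a test
        # which passes str instead of bytes errors at the call site.
        if not isinstance(method, bytes):
            raise TypeError(f"method must be bytes, not {method!r}")
        if not isinstance(uri, bytes):
            raise TypeError(f"uri must be bytes, not {uri!r}")
        record = CallRecord(method, uri)
        self._callback(record)
        return record


def call_spy() -> Tuple[CallSpy, List[CallRecord]]:
    # Pair the spy with the list its records are appended to, the same
    # two-tuple shape that agent_spy() returns below.
    records: List[CallRecord] = []
    return CallSpy(records.append), records
```

The callback indirection (rather than having the spy own the list) keeps the recording strategy pluggable, which is why ``_AgentSpy`` takes ``_callback`` instead of appending directly.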
from typing import Callable, List, Optional, Tuple

import attr
from twisted.internet.defer import Deferred
from twisted.web.http_headers import Headers
from twisted.web.iweb import IAgent, IBodyProducer, IResponse
from zope.interface import implementer


@attr.s(frozen=True, order=False, slots=True)
class RequestRecord:
    """
    The details of a call to :meth:`_AgentSpy.request`

    :ivar method: The *method* argument to :meth:`IAgent.request`
    :ivar uri: The *uri* argument to :meth:`IAgent.request`
    :ivar headers: The *headers* argument to :meth:`IAgent.request`
    :ivar bodyProducer: The *bodyProducer* argument to :meth:`IAgent.request`
    :ivar deferred: The :class:`Deferred` returned by :meth:`IAgent.request`
    """

    method: bytes = attr.field()
    uri: bytes = attr.field()
    headers: Optional[Headers] = attr.field()
    bodyProducer: Optional[IBodyProducer] = attr.field()
    deferred: "Deferred[IResponse]" = attr.field()


@implementer(IAgent)
@attr.s
class _AgentSpy:
    """
    An agent that records HTTP requests

    :ivar _callback: A function called with each :class:`RequestRecord`
    """

    _callback: Callable[[RequestRecord], None] = attr.ib()

    def request(
        self,
        method: bytes,
        uri: bytes,
        headers: Optional[Headers] = None,
        bodyProducer: Optional[IBodyProducer] = None,
    ) -> "Deferred[IResponse]":
        if not isinstance(method, bytes):
            raise TypeError(
                "method must be bytes, not {!r} of type {}".format(method, type(method))
            )
        if not isinstance(uri, bytes):
            raise TypeError(
                "uri must be bytes, not {!r} of type {}".format(uri, type(uri))
            )
        if headers is not None and not isinstance(headers, Headers):
            raise TypeError(
                "headers must be {}, not {!r} of type {}".format(
                    Headers, headers, type(headers)
                )
            )
        if bodyProducer is not None and not IBodyProducer.providedBy(bodyProducer):
            raise TypeError(
                (
                    "bodyProducer must implement IBodyProducer, but {!r} does not."
                    " Is the implementation marked with @implementer(IBodyProducer)?"
).format(bodyProducer) ) d: "Deferred[IResponse]" = Deferred() record = RequestRecord(method, uri, headers, bodyProducer, d) self._callback(record) return d def agent_spy() -> Tuple[IAgent, List[RequestRecord]]: """ Record HTTP requests made with an agent This is suitable for low-level testing of wrapper agents. It validates the parameters of each call to :meth:`IAgent.request` (synchronously raising :exc:`TypeError`) and captures them as a :class:`RequestRecord`, which can then be used to inspect the request or generate a response by firing the :attr:`~RequestRecord.deferred`. :returns: A two-tuple of: - An :class:`twisted.web.iweb.IAgent` - A list of calls made to the agent's :meth:`~twisted.web.iweb.IAgent.request()` method """ records: List[RequestRecord] = [] agent = _AgentSpy(records.append) return agent, records ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/_multipart.py0000644000175100001660000004413114673126560016476 0ustar00runnerdocker# -*- coding: utf-8 -*- """ Parser for multipart/form-data ============================== This module provides a parser for the multipart/form-data format. It can read from a file, a socket or a WSGI environment. The parser can be used to replace cgi.FieldStorage to work around its limitations. ..note:: Copyright (c) 2010, Marcel Hellkamp. Inspired by the Werkzeug library: http://werkzeug.pocoo.org/ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """ import re from io import BytesIO from typing import IO from tempfile import TemporaryFile from urllib.parse import parse_qs from wsgiref.headers import Headers from collections.abc import MutableMapping as DictMixin __author__ = "Marcel Hellkamp" __version__ = "0.2.5" __license__ = "MIT" __all__ = ["MultipartError", "MultipartParser", "MultipartPart", "parse_form_data"] ############################################################################## ################################ Helper & Misc ############################### ############################################################################## # Some of these were copied from bottle: https://bottlepy.org # --------- # MultiDict # --------- class MultiDict(DictMixin): """A dict that remembers old values for each key. HTTP headers may repeat with differing values, such as Set-Cookie. We need to remember all values. 
""" def __init__(self, *args, **kwargs): self.dict = dict() for k, v in dict(*args, **kwargs).items(): self[k] = v def __len__(self): return len(self.dict) def __iter__(self): return iter(self.dict) def __contains__(self, key): return key in self.dict def __delitem__(self, key): del self.dict[key] def keys(self): return self.dict.keys() def __getitem__(self, key): return self.get(key, KeyError, -1) def __setitem__(self, key, value): self.append(key, value) def append(self, key, value): self.dict.setdefault(key, []).append(value) def replace(self, key, value): self.dict[key] = [value] def getall(self, key): return self.dict.get(key) or [] def get(self, key, default=None, index=-1): if key not in self.dict and default != KeyError: return [default][index] return self.dict[key][index] def iterallitems(self): for key, values in self.dict.items(): for value in values: yield key, value def to_bytes(data, enc="utf8"): if isinstance(data, str): data = data.encode(enc) return data def copy_file(stream, target, maxread=-1, buffer_size=2**16): """Read from :stream and write to :target until :maxread or EOF.""" size, read = 0, stream.read while True: to_read = buffer_size if maxread < 0 else min(buffer_size, maxread - size) part = read(to_read) if not part: return size target.write(part) size += len(part) # ------------- # Header Parser # ------------- _special = re.escape('()<>@,;:"\\/[]?={} \t') _re_special = re.compile(r"[%s]" % _special) _quoted_string = r'"(?:\\.|[^"])*"' # Quoted string _value = r"(?:[^%s]+|%s)" % (_special, _quoted_string) # Save or quoted string _option = r"(?:;|^)\s*([^%s]+)\s*=\s*(%s)" % (_special, _value) _re_option = re.compile(_option) # key=value part of an Content-Type like header def header_quote(val): if not _re_special.search(val): return val return '"' + val.replace("\\", "\\\\").replace('"', '\\"') + '"' def header_unquote(val, filename=False): if val[0] == val[-1] == '"': val = val[1:-1] if val[1:3] == ":\\" or val[:2] == "\\\\": val = 
val.split("\\")[-1] # fix ie6 bug: full path --> filename return val.replace("\\\\", "\\").replace('\\"', '"') return val def parse_options_header(header, options=None): if ";" not in header: return header.lower().strip(), {} content_type, tail = header.split(";", 1) options = options or {} for match in _re_option.finditer(tail): key = match.group(1).lower() value = header_unquote(match.group(2), key == "filename") options[key] = value return content_type, options ############################################################################## ################################## Multipart ################################# ############################################################################## class MultipartError(ValueError): pass class MultipartParser(object): def __init__( self, stream, boundary, content_length=-1, disk_limit=2**30, mem_limit=2**20, memfile_limit=2**18, buffer_size=2**16, charset="latin1", ): """Parse a multipart/form-data byte stream. This object is an iterator over the parts of the message. :param stream: A file-like stream. Must implement ``.read(size)``. :param boundary: The multipart boundary as a byte string. :param content_length: The maximum number of bytes to read. 
""" self.stream = stream self.boundary = boundary self.content_length = content_length self.disk_limit = disk_limit self.memfile_limit = memfile_limit self.mem_limit = min(mem_limit, self.disk_limit) self.buffer_size = min(buffer_size, self.mem_limit) self.charset = charset if self.buffer_size - 6 < len(boundary): # "--boundary--\r\n" raise MultipartError("Boundary does not fit into buffer_size.") self._done = [] self._part_iter = None def __iter__(self): """Iterate over the parts of the multipart message.""" if not self._part_iter: self._part_iter = self._iterparse() for part in self._done: yield part for part in self._part_iter: self._done.append(part) yield part def parts(self): """Returns a list with all parts of the multipart message.""" return list(self) def get(self, name, default=None): """Return the first part with that name or a default value (None).""" for part in self: if name == part.name: return part return default def get_all(self, name): """Return a list of parts with that name.""" return [p for p in self if p.name == name] def _lineiter(self): """Iterate over a binary file-like object line by line. Each line is returned as a (line, line_ending) tuple. If the line does not fit into self.buffer_size, line_ending is empty and the rest of the line is returned with the next iteration. 
""" read = self.stream.read maxread, maxbuf = self.content_length, self.buffer_size buffer = b"" # buffer for the last (partial) line while True: data = read(maxbuf if maxread < 0 else min(maxbuf, maxread)) maxread -= len(data) lines = (buffer + data).splitlines(True) len_first_line = len(lines[0]) # be sure that the first line does not become too big if len_first_line > self.buffer_size: # at the same time don't split a '\r\n' accidentally if len_first_line == self.buffer_size + 1 and lines[0].endswith( b"\r\n" ): splitpos = self.buffer_size - 1 else: splitpos = self.buffer_size lines[:1] = [lines[0][:splitpos], lines[0][splitpos:]] if data: buffer = lines[-1] lines = lines[:-1] for line in lines: if line.endswith(b"\r\n"): yield line[:-2], b"\r\n" elif line.endswith(b"\n"): yield line[:-1], b"\n" elif line.endswith(b"\r"): yield line[:-1], b"\r" else: yield line, b"" if not data: break def _iterparse(self): lines, line = self._lineiter(), "" separator = b"--" + to_bytes(self.boundary) terminator = b"--" + to_bytes(self.boundary) + b"--" # Consume first boundary. Ignore any preamble, as required by RFC # 2046, section 5.1.1. for line, nl in lines: if line in (separator, terminator): break else: raise MultipartError("Stream does not contain boundary") # Check for empty data if line == terminator: for _ in lines: raise MultipartError("Data after end of stream") return # For each part in stream... 
mem_used, disk_used = 0, 0 # Track used resources to prevent DoS is_tail = False # True if the last line was incomplete (cutted) opts = { "buffer_size": self.buffer_size, "memfile_limit": self.memfile_limit, "charset": self.charset, } part = MultipartPart(**opts) for line, nl in lines: if line == terminator and not is_tail: part.file.seek(0) yield part break elif line == separator and not is_tail: if part.is_buffered(): mem_used += part.size else: disk_used += part.size part.file.seek(0) yield part part = MultipartPart(**opts) else: is_tail = not nl # The next line continues this one try: part.feed(line, nl) if part.is_buffered(): if part.size + mem_used > self.mem_limit: raise MultipartError("Memory limit reached.") elif part.size + disk_used > self.disk_limit: raise MultipartError("Disk limit reached.") except MultipartError: part.close() raise else: # If we run off the end of the loop, the current MultipartPart # will not have been yielded, so it's our responsibility to # close it. part.close() if line != terminator: raise MultipartError("Unexpected end of multipart stream.") class MultipartPart(object): file: IO[bytes] def __init__(self, buffer_size=2**16, memfile_limit=2**18, charset="latin1"): self.headerlist = [] self.headers = None self.file = False # type:ignore self.size = 0 self._buf = b"" self.disposition = None self.name = None self.filename = None self.content_type = None self.charset = charset self.memfile_limit = memfile_limit self.buffer_size = buffer_size def feed(self, line, nl=""): if self.file: return self.write_body(line, nl) return self.write_header(line, nl) def write_header(self, line, nl): line = line.decode(self.charset) if not nl: raise MultipartError("Unexpected end of line in header.") if not line.strip(): # blank line -> end of header segment self.finish_header() elif line[0] in " \t" and self.headerlist: name, value = self.headerlist.pop() self.headerlist.append((name, value + line.strip())) else: if ":" not in line: raise 
MultipartError("Syntax error in header: No colon.") name, value = line.split(":", 1) self.headerlist.append((name.strip(), value.strip())) def write_body(self, line, nl): if not line and not nl: return # This does not even flush the buffer self.size += len(line) + len(self._buf) self.file.write(self._buf + line) self._buf = nl if self.content_length > 0 and self.size > self.content_length: raise MultipartError("Size of body exceeds Content-Length header.") if self.size > self.memfile_limit and isinstance(self.file, BytesIO): # TODO: What about non-file uploads that exceed the memfile_limit? self.file, old = TemporaryFile(mode="w+b"), self.file old.seek(0) copy_file(old, self.file, self.size, self.buffer_size) def finish_header(self): self.file = BytesIO() self.headers = Headers(self.headerlist) content_disposition = self.headers.get("Content-Disposition", "") content_type = self.headers.get("Content-Type", "") if not content_disposition: raise MultipartError("Content-Disposition header is missing.") self.disposition, self.options = parse_options_header(content_disposition) self.name = self.options.get("name") self.filename = self.options.get("filename") self.content_type, options = parse_options_header(content_type) self.charset = options.get("charset") or self.charset self.content_length = int(self.headers.get("Content-Length", "-1")) def is_buffered(self): """Return true if the data is fully buffered in memory.""" return isinstance(self.file, BytesIO) @property def value(self): """Data decoded with the specified charset""" return self.raw.decode(self.charset) @property def raw(self): """Data without decoding""" pos = self.file.tell() self.file.seek(0) try: val = self.file.read() except IOError: raise finally: self.file.seek(pos) return val def save_as(self, path): with open(path, "wb") as fp: pos = self.file.tell() try: self.file.seek(0) size = copy_file(self.file, fp) finally: self.file.seek(pos) return size def close(self): if self.file: self.file.close() 
            self.file = False  # type:ignore


##############################################################################
#################################### WSGI ####################################
##############################################################################


def parse_form_data(environ, charset="utf8", strict=False, **kwargs):
    """Parse form data from an environ dict and return a (forms, files) tuple.

    Both tuple values are dictionaries with the form-field name as a key
    (unicode) and lists as values (multiple values per key are possible).
    The forms-dictionary contains form-field values as unicode strings. The
    files-dictionary contains :class:`MultipartPart` instances, either
    because the form-field was a file-upload or the value is too big to fit
    into memory limits.

    :param environ: A WSGI environment dict.
    :param charset: The charset to use if unsure. (default: utf8)
    :param strict: If True, raise :exc:`MultipartError` on any parsing
        errors. These are silently ignored by default.
    """
    forms, files = MultiDict(), MultiDict()

    try:
        if environ.get("REQUEST_METHOD", "GET").upper() not in ("POST", "PUT"):
            raise MultipartError("Request method other than POST or PUT.")
        content_length = int(environ.get("CONTENT_LENGTH", "-1"))

        content_type = environ.get("CONTENT_TYPE", "")
        if not content_type:
            raise MultipartError("Missing Content-Type header.")

        content_type, options = parse_options_header(content_type)
        stream = environ.get("wsgi.input") or BytesIO()
        kwargs["charset"] = charset = options.get("charset", charset)

        if content_type == "multipart/form-data":
            boundary = options.get("boundary", "")
            if not boundary:
                raise MultipartError("No boundary for multipart/form-data.")

            for part in MultipartParser(stream, boundary, content_length, **kwargs):
                if part.filename or not part.is_buffered():
                    files[part.name] = part
                else:  # TODO: Big form-fields are in the files dict. really?
                    forms[part.name] = part.value
        elif content_type in (
            "application/x-www-form-urlencoded",
            "application/x-url-encoded",
        ):
            mem_limit = kwargs.get("mem_limit", 2**20)
            if content_length > mem_limit:
                raise MultipartError("Request too big. Increase MAXMEM.")
            data = stream.read(mem_limit).decode(charset)
            if stream.read(1):  # There is more data than fits into mem_limit
                raise MultipartError("Request too big. Increase MAXMEM.")
            data = parse_qs(data, keep_blank_values=True, encoding=charset)
            for key, values in data.items():
                for value in values:
                    forms[key] = value
        else:
            raise MultipartError("Unsupported content type.")
    except MultipartError:
        if strict:
            for part in files.values():
                part.close()
            raise
    return forms, files

treq-24.9.1/src/treq/_types.py

# Copyright (c) The treq Authors.
# See LICENSE for details.

import io
from http.cookiejar import CookieJar
from typing import Any, Dict, Iterable, List, Mapping, Tuple, Union

from hyperlink import DecodedURL, EncodedURL
from twisted.internet.interfaces import (IReactorPluggableNameResolver,
                                         IReactorTCP, IReactorTime)
from twisted.web.http_headers import Headers
from twisted.web.iweb import IBodyProducer


class _ITreqReactor(IReactorTCP, IReactorTime, IReactorPluggableNameResolver):
    """
    The kind of reactor treq needs for type-checking purposes.

    This is an approximation of the actual requirement, which comes from the
    `twisted.internet.endpoints.HostnameEndpoint` used by the `Agent`
    implementation:

    > Provider of IReactorTCP, IReactorTime and either
    > IReactorPluggableNameResolver or IReactorPluggableResolver.

    We don't model the `IReactorPluggableResolver` option because it is
    deprecated.
""" _S = Union[bytes, str] _URLType = Union[ str, bytes, EncodedURL, DecodedURL, ] _ParamsType = Union[ Mapping[str, Union[str, Tuple[str, ...], List[str]]], List[Tuple[str, str]], ] _HeadersType = Union[ Headers, Dict[_S, _S], Dict[_S, List[_S]], ] _CookiesType = Union[ CookieJar, Mapping[str, str], ] _WholeBody = Union[ bytes, io.BytesIO, io.BufferedReader, IBodyProducer, ] """ Types that define the entire HTTP request body, including those coercible to `IBodyProducer`. """ # Concrete types are used here because the handling of the *data* parameter # does lots of isinstance checks. _BodyFields = Union[ Dict[str, str], List[Tuple[str, str]], ] """ Types that will be URL- or multipart-encoded before being sent as part of the HTTP request body. """ _DataType = Union[_WholeBody, _BodyFields] """ Values accepted for the *data* parameter Note that this is a simplification. Only `_BodyFields` may be supplied if the *files* parameter is passed. """ _FileValue = Union[ str, bytes, Tuple[str, str, IBodyProducer], ] """ Either a scalar string, or a file to upload as (filename, content type, IBodyProducer) """ _FilesType = Union[ Mapping[str, _FileValue], Iterable[Tuple[str, _FileValue]], ] """ Values accepted for the *files* parameter. """ # Soon... 🤞 https://github.com/python/mypy/issues/731 _JSONType = Any ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/_version.py0000644000175100001660000000034514673126560016141 0ustar00runnerdocker""" Provides treq version information. """ # This file is auto-generated! Do not edit! # Use `incremental` to change this file. 
from incremental import Version

__version__ = Version("treq", 24, 9, 1)
__all__ = ["__version__"]

treq-24.9.1/src/treq/api.py

from __future__ import absolute_import, division, print_function

from twisted.web.client import Agent, HTTPConnectionPool

from treq.client import HTTPClient


def head(url, **kwargs):
    """
    Make a ``HEAD`` request.

    See :py:func:`treq.request`
    """
    return _client(kwargs).head(url, _stacklevel=4, **kwargs)


def get(url, headers=None, **kwargs):
    """
    Make a ``GET`` request.

    See :py:func:`treq.request`
    """
    return _client(kwargs).get(url, headers=headers, _stacklevel=4, **kwargs)


def post(url, data=None, **kwargs):
    """
    Make a ``POST`` request.

    See :py:func:`treq.request`
    """
    return _client(kwargs).post(url, data=data, _stacklevel=4, **kwargs)


def put(url, data=None, **kwargs):
    """
    Make a ``PUT`` request.

    See :py:func:`treq.request`
    """
    return _client(kwargs).put(url, data=data, _stacklevel=4, **kwargs)


def patch(url, data=None, **kwargs):
    """
    Make a ``PATCH`` request.

    See :py:func:`treq.request`
    """
    return _client(kwargs).patch(url, data=data, _stacklevel=4, **kwargs)


def delete(url, **kwargs):
    """
    Make a ``DELETE`` request.

    See :py:func:`treq.request`
    """
    return _client(kwargs).delete(url, _stacklevel=4, **kwargs)


def request(method, url, **kwargs):
    """
    Make an HTTP request.

    :param str method: HTTP method. Example: ``'GET'``, ``'HEAD'``, ``'PUT'``,
        ``'POST'``.

    :param url: http or https URL, which may include query arguments.
    :type url: :class:`hyperlink.DecodedURL`, `str`, `bytes`, or
        :class:`hyperlink.EncodedURL`

    :param headers: Optional HTTP Headers to send with this request.
    :type headers: :class:`~twisted.web.http_headers.Headers` or None

    :param params: Optional parameters to be appended to the URL query
        string. Any query string parameters in the *url* will be preserved.
    :type params: dict w/ str or list/tuple of str values, list of 2-tuples,
        or None.

    :param data: Arbitrary request body data.

        If *files* is also passed this must be a :class:`dict`, a
        :class:`tuple` or :class:`list` of field tuples as accepted by
        :class:`MultiPartProducer`. The request is assigned a Content-Type of
        ``multipart/form-data``.

        If a :class:`dict`, :class:`list`, or :class:`tuple` it is
        URL-encoded and the request assigned a Content-Type of
        ``application/x-www-form-urlencoded``.

        Otherwise, any non-``None`` value is passed to the client's
        *data_to_body_producer* callable (by default, :class:`IBodyProducer`),
        which accepts :class:`bytes` and binary files as returned by
        ``open(..., "rb")``.

    :type data: `bytes`, `typing.BinaryIO`, `IBodyProducer`, or `None`

    :param files: Files to include in the request body, in any of several
        formats:

        - ``[("fieldname", binary_file)]``
        - ``[("fieldname", "filename", binary_file)]``
        - ``[("fieldname", "filename", "content-type", binary_file)]``

        Or a mapping:

        - ``{"fieldname": binary_file}``
        - ``{"fieldname": ("filename", binary_file)}``
        - ``{"fieldname": ("filename", "content-type", binary_file)}``

        Each ``binary_file`` is a file-like object open in binary mode (as
        returned by ``open("filename", "rb")``). The filename is taken from
        the file's ``name`` attribute if not specified. The Content-Type is
        guessed based on the filename using :func:`mimetypes.guess_type()` if
        not specified, falling back to ``application/octet-stream``.

        While uploading, treq will measure the length of seekable files to
        populate the Content-Length header of the file part.

        If *files* is given the request is assigned a Content-Type of
        ``multipart/form-data``. Additional fields may be given in the *data*
        argument.

    :param json: Optional JSON-serializable content for the request body.
        Mutually exclusive with *data* and *files*.
    :type json: `dict`, `list`, `tuple`, `int`, `str`, `bool`, or `None`

    :param auth: HTTP Basic Authentication information --- see
        :func:`treq.auth.add_auth`.
    :type auth: tuple of ``('username', 'password')``

    :param cookies: Cookies to send with this request. The HTTP kind, not the
        tasty kind.
    :type cookies: ``dict`` or ``http.cookiejar.CookieJar``

    :param int timeout: Request timeout seconds. If a response is not
        received within this timeframe, a connection is aborted with
        ``CancelledError``.

    :param bool allow_redirects: Follow HTTP redirects. Default: ``True``

    :param bool browser_like_redirects: Follow redirects like a web browser:
        when a 301 or 302 redirect is received in response to a POST request,
        convert the method to GET (see :rfc:`7231 <7231#section-6.4.3>` and
        :class:`~twisted.web.client.BrowserLikeRedirectAgent`).
        Default: ``False``

    :param bool unbuffered: Pass ``True`` to disable response buffering. By
        default treq buffers the entire response body in memory.

    :param reactor: Optional Twisted reactor.

    :param bool persistent: Use persistent HTTP connections. Default:
        ``True``

    :param agent: Provide your own custom agent. Use this to override things
        like ``connectTimeout`` or ``BrowserLikePolicyForHTTPS``. By default,
        treq will create its own Agent with reasonable defaults.
    :type agent: twisted.web.iweb.IAgent

    :rtype: Deferred that fires with an :class:`IResponse`

    .. versionchanged:: treq 20.9.0

        The *url* param now accepts :class:`hyperlink.DecodedURL` and
        :class:`hyperlink.EncodedURL` objects.
    """
    return _client(kwargs).request(method, url, _stacklevel=3, **kwargs)


#
# Private API
#


def default_reactor(reactor):
    """
    Return the specified reactor or the default.
""" if reactor is None: from twisted.internet import reactor return reactor _global_pool = [None] def get_global_pool(): return _global_pool[0] def set_global_pool(pool): _global_pool[0] = pool def default_pool(reactor, pool, persistent): """ Return the specified pool or a pool with the specified reactor and persistence. """ reactor = default_reactor(reactor) if pool is not None: return pool if persistent is False: return HTTPConnectionPool(reactor, persistent=persistent) if get_global_pool() is None: set_global_pool(HTTPConnectionPool(reactor, persistent=True)) return get_global_pool() def _client(kwargs): agent = kwargs.pop("agent", None) pool = kwargs.pop("pool", None) persistent = kwargs.pop("persistent", None) if agent is None: # "reactor" isn't removed from kwargs because it must also be passed # down for use in the timeout logic. reactor = default_reactor(kwargs.get("reactor")) pool = default_pool(reactor, pool, persistent) agent = Agent(reactor, pool=pool) return HTTPClient(agent) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/auth.py0000644000175100001660000000576714673126560015273 0ustar00runnerdocker# Copyright 2012-2020 The treq Authors. # See LICENSE for details. from __future__ import absolute_import, division, print_function import binascii from typing import Union from twisted.web.http_headers import Headers from twisted.web.iweb import IAgent from zope.interface import implementer class UnknownAuthConfig(Exception): """ The authentication config provided couldn't be interpreted. """ def __init__(self, config): super(Exception, self).__init__( '{0!r} not of a known type.'.format(config)) @implementer(IAgent) class _RequestHeaderSetterAgent: """ Wrap an agent to set request headers :ivar _agent: The wrapped agent. :ivar _request_headers: Headers to set on each request before forwarding it to the wrapped agent. 
""" def __init__(self, agent, headers): self._agent = agent self._headers = headers def request(self, method, uri, headers=None, bodyProducer=None): if headers is None: requestHeaders = self._headers else: requestHeaders = headers.copy() for header, values in self._headers.getAllRawHeaders(): requestHeaders.setRawHeaders(header, values) return self._agent.request( method, uri, headers=requestHeaders, bodyProducer=bodyProducer) def add_basic_auth( agent: IAgent, username: Union[str, bytes], password: Union[str, bytes] ) -> IAgent: """ Wrap an agent to add HTTP basic authentication The returned agent sets the *Authorization* request header according to the basic authentication scheme described in :rfc:`7617`. This header contains the given *username* and *password* in plaintext, and thus should only be used over an encrypted transport (HTTPS). Note that the colon (``:``) is used as a delimiter between the *username* and *password*, so if either parameter includes a colon the interpretation of the *Authorization* header is server-defined. :param agent: Agent to wrap. :param username: The username. :param password: The password. :returns: :class:`~twisted.web.iweb.IAgent` """ if not isinstance(username, bytes): username = username.encode('utf-8') if not isinstance(password, bytes): password = password.encode('utf-8') creds = binascii.b2a_base64(b'%s:%s' % (username, password)).rstrip(b'\n') return _RequestHeaderSetterAgent( agent, Headers({b'Authorization': [b'Basic ' + creds]}), ) def add_auth(agent, auth_config): """ Wrap an agent to perform authentication :param agent: Agent to wrap. :param auth_config: A ``('username', 'password')`` tuple --- see :func:`add_basic_auth`. :returns: :class:`~twisted.web.iweb.IAgent` :raises UnknownAuthConfig: When the format *auth_config* isn't supported. 
""" if isinstance(auth_config, tuple): return add_basic_auth(agent, auth_config[0], auth_config[1]) raise UnknownAuthConfig(auth_config) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/client.py0000644000175100001660000004172314673126560015600 0ustar00runnerdockerimport io import mimetypes import uuid from collections import abc from http.cookiejar import CookieJar from json import dumps as json_dumps from typing import ( Any, Callable, Iterable, Iterator, List, Mapping, Optional, Tuple, Union, ) from urllib.parse import quote_plus from urllib.parse import urlencode as _urlencode from hyperlink import DecodedURL, EncodedURL from requests.cookies import merge_cookies from treq.cookies import scoped_cookie from twisted.internet.defer import Deferred from twisted.internet.interfaces import IProtocol from twisted.python.components import proxyForInterface, registerAdapter from twisted.python.filepath import FilePath from twisted.web.client import ( BrowserLikeRedirectAgent, ContentDecoderAgent, CookieAgent, FileBodyProducer, GzipDecoder, IAgent, RedirectAgent, ) from twisted.web.http_headers import Headers from twisted.web.iweb import IBodyProducer, IResponse from treq import multipart from treq._types import ( _CookiesType, _DataType, _FilesType, _FileValue, _HeadersType, _ITreqReactor, _JSONType, _ParamsType, _URLType, ) from treq.auth import add_auth from treq.response import _Response class _Nothing: """Type of the sentinel `_NOTHING`""" _NOTHING = _Nothing() def urlencode(query: _ParamsType, doseq: bool) -> bytes: s = _urlencode(query, doseq) return s.encode("ascii") def _scoped_cookiejar_from_dict( url_object: EncodedURL, cookie_dict: Optional[Mapping[str, str]] ) -> CookieJar: """ Create a CookieJar from a dictionary whose cookies are all scoped to the given URL's origin. @note: This does not scope the cookies to any particular path, only the host, port, and scheme of the given URL. 
""" cookie_jar = CookieJar() if cookie_dict is None: return cookie_jar for k, v in cookie_dict.items(): cookie_jar.set_cookie(scoped_cookie(url_object, k, v)) return cookie_jar class _BodyBufferingProtocol(proxyForInterface(IProtocol)): # type: ignore def __init__(self, original, buffer, finished): self.original = original self.buffer = buffer self.finished = finished def dataReceived(self, data: bytes) -> None: self.buffer.append(data) self.original.dataReceived(data) def connectionLost(self, reason: Exception) -> None: self.original.connectionLost(reason) self.finished.errback(reason) class _BufferedResponse(proxyForInterface(IResponse)): # type: ignore def __init__(self, original): self.original = original self._buffer = [] self._waiters = [] self._waiting = None self._finished = False self._reason = None def _deliverWaiting(self, reason): self._reason = reason self._finished = True for waiter in self._waiters: for segment in self._buffer: waiter.dataReceived(segment) waiter.connectionLost(reason) def deliverBody(self, protocol): if self._waiting is None and not self._finished: self._waiting = Deferred() self._waiting.addBoth(self._deliverWaiting) self.original.deliverBody( _BodyBufferingProtocol(protocol, self._buffer, self._waiting) ) elif self._finished: for segment in self._buffer: protocol.dataReceived(segment) protocol.connectionLost(self._reason) else: self._waiters.append(protocol) class HTTPClient: def __init__( self, agent: IAgent, cookiejar: Optional[CookieJar] = None, data_to_body_producer: Callable[[Any], IBodyProducer] = IBodyProducer, ) -> None: self._agent = agent if cookiejar is None: cookiejar = CookieJar() self._cookiejar = cookiejar self._data_to_body_producer = data_to_body_producer def get(self, url: _URLType, **kwargs: Any) -> "Deferred[_Response]": """ See :func:`treq.get()`. 
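`_BufferedResponse` above records delivered chunks so later `deliverBody` calls replay the exact same bytes. A synchronous toy version of that replay pattern (illustrative names, no Twisted involved):

```python
class ReplayBuffer:
    """Toy, synchronous sketch of the buffering in _BufferedResponse:
    record chunks as they arrive so subscribers that show up late are
    replayed the same bytes as early ones."""

    def __init__(self):
        self._chunks = []
        self._waiters = []
        self._done = False

    def data_received(self, data: bytes) -> None:
        self._chunks.append(data)

    def finish(self) -> None:
        self._done = True
        for callback in self._waiters:
            callback(b"".join(self._chunks))
        self._waiters = []

    def subscribe(self, callback) -> None:
        if self._done:
            callback(b"".join(self._chunks))  # replay the buffered body
        else:
            self._waiters.append(callback)
```

Both early and late subscribers observe the full body, which is what lets treq hand one response to several readers.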
""" kwargs.setdefault("_stacklevel", 3) return self.request("GET", url, **kwargs) def put( self, url: _URLType, data: Optional[_DataType] = None, **kwargs: Any ) -> "Deferred[_Response]": """ See :func:`treq.put()`. """ kwargs.setdefault("_stacklevel", 3) return self.request("PUT", url, data=data, **kwargs) def patch( self, url: _URLType, data: Optional[_DataType] = None, **kwargs: Any ) -> "Deferred[_Response]": """ See :func:`treq.patch()`. """ kwargs.setdefault("_stacklevel", 3) return self.request("PATCH", url, data=data, **kwargs) def post( self, url: _URLType, data: Optional[_DataType] = None, **kwargs: Any ) -> "Deferred[_Response]": """ See :func:`treq.post()`. """ kwargs.setdefault("_stacklevel", 3) return self.request("POST", url, data=data, **kwargs) def head(self, url: _URLType, **kwargs: Any) -> "Deferred[_Response]": """ See :func:`treq.head()`. """ kwargs.setdefault("_stacklevel", 3) return self.request("HEAD", url, **kwargs) def delete(self, url: _URLType, **kwargs: Any) -> "Deferred[_Response]": """ See :func:`treq.delete()`. """ kwargs.setdefault("_stacklevel", 3) return self.request("DELETE", url, **kwargs) def request( self, method: str, url: _URLType, *, params: Optional[_ParamsType] = None, headers: Optional[_HeadersType] = None, data: Optional[_DataType] = None, files: Optional[_FilesType] = None, json: Union[_JSONType, _Nothing] = _NOTHING, auth: Optional[Tuple[Union[str, bytes], Union[str, bytes]]] = None, cookies: Optional[_CookiesType] = None, allow_redirects: bool = True, browser_like_redirects: bool = False, unbuffered: bool = False, reactor: Optional[_ITreqReactor] = None, timeout: Optional[float] = None, _stacklevel: int = 2, ) -> "Deferred[_Response]": """ See :func:`treq.request()`. 
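`request()` merges the *params* argument into whatever query string the URL already carries. treq does this with hyperlink; a rough stdlib-only equivalent (`merge_params` is my name, and it assumes plain `str` keys and values) looks like:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def merge_params(url: str, params: dict) -> str:
    """Append params to the query string the URL already carries,
    as treq's request() does (simplified: str keys and values only)."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    pairs = parse_qsl(query, keep_blank_values=True) + list(params.items())
    return urlunsplit((scheme, netloc, path, urlencode(pairs), fragment))

print(merge_params("https://example.com/a?x=1", {"y": "2"}))
# https://example.com/a?x=1&y=2
```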
""" method_: bytes = method.encode("ascii").upper() if isinstance(url, DecodedURL): parsed_url = url.encoded_url elif isinstance(url, EncodedURL): parsed_url = url elif isinstance(url, str): # We use hyperlink in lazy mode so that users can pass arbitrary # bytes in the path and querystring. parsed_url = EncodedURL.from_text(url) else: parsed_url = EncodedURL.from_text(url.decode("ascii")) # Join parameters provided in the URL # and the ones passed as argument. if params: parsed_url = parsed_url.replace( query=parsed_url.query + tuple(_coerced_query_params(params)) ) url = parsed_url.to_uri().to_text().encode("ascii") headers = self._request_headers(headers, _stacklevel + 1) bodyProducer, contentType = self._request_body( data, files, json, stacklevel=_stacklevel + 1 ) if contentType is not None: headers.setRawHeaders(b"Content-Type", [contentType]) if not isinstance(cookies, CookieJar): cookies = _scoped_cookiejar_from_dict(parsed_url, cookies) merge_cookies(self._cookiejar, cookies) wrapped_agent: IAgent = CookieAgent(self._agent, self._cookiejar) if allow_redirects: if browser_like_redirects: wrapped_agent = BrowserLikeRedirectAgent(wrapped_agent) else: wrapped_agent = RedirectAgent(wrapped_agent) wrapped_agent = ContentDecoderAgent(wrapped_agent, [(b"gzip", GzipDecoder)]) if auth: wrapped_agent = add_auth(wrapped_agent, auth) d = wrapped_agent.request( method_, url, headers=headers, bodyProducer=bodyProducer ) if reactor is None: from twisted.internet import reactor # type: ignore assert reactor is not None if timeout: delayedCall = reactor.callLater(timeout, d.cancel) def gotResult(result): if delayedCall.active(): delayedCall.cancel() return result d.addBoth(gotResult) if not unbuffered: d.addCallback(_BufferedResponse) return d.addCallback(_Response, self._cookiejar) def _request_headers( self, headers: Optional[_HeadersType], stacklevel: int ) -> Headers: """ Convert the *headers* argument to a :class:`Headers` instance """ if isinstance(headers, dict): h = 
Headers({}) for k, v in headers.items(): if isinstance(v, (bytes, str)): h.addRawHeader(k, v) elif isinstance(v, list): h.setRawHeaders(k, v) else: raise TypeError( "The value of headers key {!r} has non-string type {}.".format( k, type(v) ) ) return h if isinstance(headers, Headers): return headers if headers is None: return Headers({}) raise TypeError( ( "headers must be a dict, twisted.web.http_headers.Headers, or None," " but found {}." ).format(type(headers)) ) def _request_body( self, data: Optional[_DataType], files: Optional[_FilesType], json: Union[_JSONType, _Nothing], stacklevel: int, ) -> Tuple[Optional[IBodyProducer], Optional[bytes]]: """ Here we choose a right producer based on the parameters passed in. :params data: Arbitrary request body data. If *files* is also passed this must be a :class:`dict`, a :class:`tuple` or :class:`list` of field tuples as accepted by :class:`MultiPartProducer`. The request is assigned a Content-Type of ``multipart/form-data``. If a :class:`dict`, :class:`list`, or :class:`tuple` it is URL-encoded and the request assigned a Content-Type of ``application/x-www-form-urlencoded``. Otherwise, any non-``None`` value is passed to the client's *data_to_body_producer* callable (by default, :class:`IBodyProducer`), which accepts file-like objects. :params files: Files to include in the request body, in any of the several formats described in :func:`_convert_files()`. :params json: JSON-encodable data, or the sentinel `_NOTHING`. The sentinel is necessary because ``None`` is a valid JSON value. """ if json is not _NOTHING: if files or data: raise TypeError( "Argument 'json' cannot be combined with '{}'.".format( "data" if data else "files" ) ) return ( self._data_to_body_producer( json_dumps(json, separators=(",", ":")).encode("utf-8"), ), b"application/json; charset=UTF-8", ) if files: # If the files keyword is present we will issue a # multipart/form-data request as it suits better for cases # with files and/or large objects. 
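`_request_headers` accepts a dict whose values are a single string or a list of strings and rejects everything else with `TypeError`. A Twisted-free sketch of that normalization (the dict-of-lists output is my simplification of the `Headers` class):

```python
def normalize_headers(headers):
    """Flatten the dict form accepted by _request_headers into a
    {name: [values]} mapping (a stand-in for Twisted's Headers),
    raising TypeError for anything that is not str/bytes or a list."""
    if headers is None:
        return {}
    if not isinstance(headers, dict):
        raise TypeError(f"headers must be a dict or None, not {type(headers)}")
    result = {}
    for name, value in headers.items():
        if isinstance(value, (bytes, str)):
            result[name] = [value]
        elif isinstance(value, list):
            result[name] = list(value)
        else:
            raise TypeError(f"header {name!r} has non-string value {type(value)}")
    return result
```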
fields: List[Tuple[str, _FileValue]] = [] if data: for field in _convert_params(data): fields.append(field) for field in _convert_files(files): fields.append(field) boundary = str(uuid.uuid4()).encode("ascii") return ( multipart.MultiPartProducer(fields, boundary=boundary), b"multipart/form-data; boundary=" + boundary, ) # Otherwise stick to x-www-form-urlencoded format # as it's generally faster for smaller requests. if isinstance(data, (dict, list, tuple)): return ( # FIXME: The use of doseq here is not permitted in the types, and # sequence values aren't supported in the files codepath. It is # maintained here for backwards compatibility. See # https://github.com/twisted/treq/issues/360. self._data_to_body_producer(urlencode(data, doseq=True)), b"application/x-www-form-urlencoded", ) elif data: return ( self._data_to_body_producer(data), None, ) return None, None def _convert_params(params: _DataType) -> Iterable[Tuple[str, str]]: items_method = getattr(params, "items", None) if items_method: return list(sorted(items_method())) elif isinstance(params, (tuple, list)): return list(params) else: raise ValueError("Unsupported format") def _convert_files(files): """ Files can be passed in a variety of formats: * {"fieldname": open("bla.f", "rb")} * {"fieldname": ("filename", open("bla.f", "rb"))} * {"fieldname": ("filename", "content-type", open("bla.f", "rb"))} * Anything that has iteritems method, e.g. MultiDict: MultiDict([(name, open()), (name, open())] Our goal is to standardize it to unified form of: * [(param, (file name, content type, producer))] """ if hasattr(files, "iteritems"): files = files.iteritems() elif hasattr(files, "items"): files = files.items() for param, val in files: file_name, content_type, fobj = (None, None, None) if isinstance(val, tuple): if len(val) == 2: file_name, fobj = val elif len(val) == 3: file_name, content_type, fobj = val else: # NB: This is TypeError for backward compatibility. 
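The urlencoded branch above relies on `doseq=True`, which expands a sequence value into repeated fields (and, per the form encoding, turns spaces into ``+``):

```python
from urllib.parse import urlencode

# doseq=True expands a list value into repeated fields, which is what
# the x-www-form-urlencoded branch above depends on.
body = urlencode({"tag": ["a", "b"], "q": "x y"}, doseq=True).encode("ascii")
print(body)  # b'tag=a&tag=b&q=x+y'
```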
This case # used to fall through to `IBodyProducer`, below, which raised # TypeError about being unable to coerce None. raise TypeError( ( "`files` argument must be a sequence of tuples of" " (file_name, file_obj) or" " (file_name, content_type, file_obj)," " but the {!r} tuple has length {}: {!r}" ).format(param, len(val), val), ) else: fobj = val if hasattr(fobj, "name"): file_name = FilePath(fobj.name).basename() if not content_type: content_type = _guess_content_type(file_name) # XXX: Shouldn't this call self._data_to_body_producer? yield (param, (file_name, content_type, IBodyProducer(fobj))) def _query_quote(v: Any) -> str: """ Percent-encode a querystring name or value. :param v: A value. :returns: The value, coerced to a string and percent-encoded as appropriate for a querystring (with space as ``+``). """ if not isinstance(v, (str, bytes)): v = str(v) if not isinstance(v, bytes): v = v.encode("utf-8") q = quote_plus(v) return q def _coerced_query_params(params: _ParamsType) -> Iterator[Tuple[str, str]]: """ Carefully coerce *params* in the same way as `urllib.parse.urlencode()` Parameter names and values are coerced to unicode, which is encoded as UTF-8 and then percent-encoded. As a special case, `bytes` are directly percent-encoded. :param params: A mapping or sequence of (name, value) two-tuples. The value may be a list or tuple of multiple values. Names and values may be pretty much any type. :returns: A generator that yields two-tuples containing percent-encoded text strings. 
""" items: Iterable[Tuple[str, Union[str, Tuple[str, ...], List[str]]]] if isinstance(params, abc.Mapping): items = params.items() else: items = params for key, values in items: key_quoted = _query_quote(key) if not isinstance(values, (list, tuple)): values = (values,) for value in values: yield key_quoted, _query_quote(value) def _from_bytes(orig_bytes: bytes) -> IBodyProducer: return FileBodyProducer(io.BytesIO(orig_bytes)) def _from_file(orig_file: Union[io.BytesIO, io.BufferedReader]) -> IBodyProducer: return FileBodyProducer(orig_file) def _guess_content_type(filename: str) -> Optional[str]: if filename: guessed = mimetypes.guess_type(filename)[0] else: guessed = None return guessed or "application/octet-stream" registerAdapter(_from_bytes, bytes, IBodyProducer) registerAdapter(_from_file, io.BytesIO, IBodyProducer) # file()/open() equiv registerAdapter(_from_file, io.BufferedReader, IBodyProducer) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/content.py0000644000175100001660000001250714673126560015772 0ustar00runnerdocker""" Utilities related to retrieving the contents of the response-body. """ import json from typing import Any, Callable, FrozenSet, List, Optional, cast from ._multipart import parse_options_header from twisted.internet.defer import Deferred, succeed from twisted.internet.protocol import Protocol, connectionDone from twisted.python.failure import Failure from twisted.web.client import ResponseDone from twisted.web.http import PotentialDataLoss from twisted.web.http_headers import Headers from twisted.web.iweb import IResponse """Characters that are valid in a charset name per RFC 2978. 
See https://www.rfc-editor.org/errata/eid5433 """ _MIME_CHARSET_CHARS: FrozenSet[str] = frozenset( "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" # ALPHA "0123456789" # DIGIT "!#$%&+-^_`~" # symbols ) def _encoding_from_headers(headers: Headers) -> Optional[str]: content_types = headers.getRawHeaders("content-type") if content_types is None: return None # This seems to be the choice browsers make when encountering multiple # content-type headers. media_type, params = parse_options_header(content_types[-1]) charset = params.get("charset") if charset: assert isinstance(charset, str) # for MyPy charset = charset.strip("'\"").lower() if not charset: return None if not set(charset).issubset(_MIME_CHARSET_CHARS): return None return charset if media_type == "application/json": return "utf-8" return None class _BodyCollector(Protocol): finished: "Optional[Deferred[None]]" def __init__( self, finished: "Deferred[None]", collector: Callable[[bytes], None] ) -> None: self.finished = finished self.collector = collector def dataReceived(self, data: bytes) -> None: try: self.collector(data) except BaseException: if self.transport: self.transport.loseConnection() if self.finished: self.finished.errback(Failure()) self.finished = None def connectionLost(self, reason: Failure = connectionDone) -> None: if self.finished is None: return if reason.check(ResponseDone): self.finished.callback(None) elif reason.check(PotentialDataLoss): # http://twistedmatrix.com/trac/ticket/4840 self.finished.callback(None) else: self.finished.errback(reason) def collect( response: IResponse, collector: Callable[[bytes], None] ) -> "Deferred[None]": """ Incrementally collect the body of the response. This function may only be called **once** for a given response. If the ``collector`` raises an exception, it will be set as the error value on response ``Deferred`` returned from this function, and the underlying HTTP transport will be closed. 
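`_encoding_from_headers` only trusts a charset token whose characters all come from the RFC 2978 alphabet; the check reduces to a set-subset test:

```python
# A charset token is accepted only if every character is in the RFC
# 2978 alphabet, mirroring the _MIME_CHARSET_CHARS check above.
MIME_CHARSET_CHARS = frozenset(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789!#$%&+-^_`~"
)

def valid_charset(token: str) -> bool:
    return bool(token) and set(token) <= MIME_CHARSET_CHARS

print(valid_charset("utf-8"))    # True
print(valid_charset('utf-8"'))   # False: quote is not a token char
```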
:param IResponse response: The HTTP response to collect the body from. :param collector: A callable to be called each time data is available from the response body. :type collector: single argument callable :rtype: Deferred that fires with None when the entire body has been read. """ if response.length == 0: return succeed(None) d: "Deferred[None]" = Deferred() response.deliverBody(_BodyCollector(d, collector)) return d def content(response: IResponse) -> "Deferred[bytes]": """ Read the contents of an HTTP response. This function may be called multiple times for a response, it uses a ``WeakKeyDictionary`` to cache the contents of the response. :param IResponse response: The HTTP Response to get the contents of. :rtype: Deferred that fires with the content as a str. """ _content: List[bytes] = [] d = collect(response, _content.append) return cast( "Deferred[bytes]", d.addCallback(lambda _: b"".join(_content)), ) def json_content(response: IResponse, **kwargs: Any) -> "Deferred[Any]": """ Read the contents of an HTTP response and attempt to decode it as JSON. This function relies on :py:func:`content` and so may be called more than once for a given response. :param IResponse response: The HTTP Response to get the contents of. :param kwargs: Any keyword arguments accepted by :py:func:`json.loads` :rtype: Deferred that fires with the decoded JSON. """ # RFC7159 (8.1): Default JSON character encoding is UTF-8 d = text_content(response, encoding="utf-8") return d.addCallback(lambda text: json.loads(text, **kwargs)) def text_content(response: IResponse, encoding: str = "ISO-8859-1") -> "Deferred[str]": """ Read the contents of an HTTP response and decode it with an appropriate charset, which may be guessed from the ``Content-Type`` header. :param IResponse response: The HTTP Response to get the contents of. :param str encoding: A charset, such as ``UTF-8`` or ``ISO-8859-1``, used if the response does not specify an encoding. 
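`content()` is built on `collect()`: append each chunk to a list, then join once at the end, which is cheaper than repeated bytes concatenation. Stripped of the Deferred machinery, the pattern is (names are mine):

```python
def gather(deliver) -> bytes:
    """content() above is this pattern minus the Deferreds: hand the
    collector an append, then join the chunks once at the end."""
    chunks = []
    deliver(chunks.append)  # `deliver` invokes the collector per chunk
    return b"".join(chunks)

body = gather(lambda collector: [collector(b"hello "), collector(b"world")])
print(body)  # b'hello world'
```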
:rtype: Deferred that fires with a unicode string. """ def _decode_content(c: bytes) -> str: e = _encoding_from_headers(response.headers) if e is not None: return c.decode(e) return c.decode(encoding) d = content(response) return cast("Deferred[str]", d.addCallback(_decode_content)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/cookies.py0000644000175100001660000000552014673126560015751 0ustar00runnerdocker""" Convenience helpers for :mod:`http.cookiejar` """ from typing import Union, Iterable, Optional from http.cookiejar import Cookie, CookieJar from hyperlink import EncodedURL def scoped_cookie(origin: Union[str, EncodedURL], name: str, value: str) -> Cookie: """ Create a cookie scoped to a given URL's origin. You can insert the result directly into a `CookieJar`, like:: jar = CookieJar() jar.set_cookie(scoped_cookie("https://example.tld", "flavor", "chocolate")) await treq.get("https://domain.example", cookies=jar) :param origin: A URL that specifies the domain and port number of the cookie. If the protocol is HTTP*S* the cookie is marked ``Secure``, meaning it will not be attached to HTTP requests. Otherwise the cookie will be attached to both HTTP and HTTPS requests :param name: Name of the cookie. :param value: Value of the cookie. .. note:: This does not scope the cookies to any particular path, only the host, port, and scheme of the given URL. """ if isinstance(origin, EncodedURL): url_object = origin else: url_object = EncodedURL.from_text(origin) secure = url_object.scheme == "https" port_specified = not ( (url_object.scheme == "https" and url_object.port == 443) or (url_object.scheme == "http" and url_object.port == 80) ) port = str(url_object.port) if port_specified else None domain = url_object.host netscape_domain = domain if "." 
in domain else domain + ".local" return Cookie( # Scoping domain=netscape_domain, port=port, secure=secure, port_specified=port_specified, # Contents name=name, value=value, # Constant/always-the-same stuff version=0, path="/", expires=None, discard=False, comment=None, comment_url=None, rfc2109=False, path_specified=False, domain_specified=False, domain_initial_dot=False, rest={}, ) def search( jar: CookieJar, *, domain: str, name: Optional[str] = None ) -> Iterable[Cookie]: """ Raid the cookie jar for matching cookies. This is O(n) on the number of cookies in the jar. :param jar: The `CookieJar` (or subclass thereof) to search. :param domain: Domain, as in the URL, to match. ``.local`` is appended to a bare hostname. Subdomains are not matched (i.e., searching for ``foo.bar.tld`` won't return a cookie set for ``bar.tld``). :param name: Cookie name to match (exactly) """ netscape_domain = domain if "." in domain else domain + ".local" for c in jar: if c.domain != netscape_domain: continue if name is not None and c.name != name: continue yield c ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/multipart.py0000644000175100001660000003432114673126560016337 0ustar00runnerdocker# Copyright (c) Twisted Matrix Laboratories. # See LICENSE for details. 
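`scoped_cookie` hand-builds an `http.cookiejar.Cookie`, appending ``.local`` to bare hostnames so Netscape-style domain matching works. A simplified stand-alone version under the same assumptions (`make_cookie` is a hypothetical name):

```python
from http.cookiejar import Cookie, CookieJar

def make_cookie(host: str, name: str, value: str, secure: bool) -> Cookie:
    """Hand-build a Netscape-style Cookie roughly as scoped_cookie
    does; bare hostnames get the ".local" suffix used for matching."""
    domain = host if "." in host else host + ".local"
    return Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=False, domain_initial_dot=False,
        path="/", path_specified=False,
        secure=secure, expires=None, discard=False,
        comment=None, comment_url=None, rest={}, rfc2109=False,
    )

jar = CookieJar()
jar.set_cookie(make_cookie("localhost", "flavor", "chocolate", secure=True))
```

The result can be passed straight to treq via the ``cookies=jar`` argument, as the `scoped_cookie` docstring shows.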
from contextlib import closing from io import BytesIO from typing import Any, Iterable, List, Mapping, Optional, Tuple, Union, cast from uuid import uuid4 from twisted.internet import task from twisted.internet.defer import Deferred from twisted.internet.interfaces import IConsumer from twisted.python.failure import Failure from twisted.web.iweb import UNKNOWN_LENGTH, IBodyProducer from typing_extensions import TypeAlias, Literal from zope.interface import implementer from treq._types import _S, _FilesType, _FileValue CRLF = b"\r\n" _Consumer: TypeAlias = "Union[IConsumer, _LengthConsumer]" _UnknownLength = Literal["'twisted.web.iweb.UNKNOWN_LENGTH'"] _Length: TypeAlias = Union[int, _UnknownLength] _FieldValue = Union[bytes, Tuple[str, str, IBodyProducer]] _Field: TypeAlias = Tuple[str, _FieldValue] @implementer(IBodyProducer) class MultiPartProducer: """ :class:`MultiPartProducer` takes parameters for a HTTP request and produces bytes in multipart/form-data format defined in :rfc:`2388` and :rfc:`2046`. The encoded request is produced incrementally and the bytes are written to a consumer. Fields should have form: ``[(parameter name, value), ...]`` Accepted values: * Unicode strings (in this case parameter will be encoded with utf-8) * Tuples with (file name, content-type, :class:`~twisted.web.iweb.IBodyProducer` objects) Since :class:`MultiPartProducer` can accept objects like :class:`~twisted.web.iweb.IBodyProducer` which cannot be read from in an event-driven manner it uses uses a :class:`~twisted.internet.task.Cooperator` instance to schedule reads from the underlying producers. Reading is also paused and resumed based on notifications from the :class:`IConsumer` provider being written to. :ivar _fields: Sorted parameters, where all strings are enforced to be unicode and file objects stacked on bottom (to produce a human readable form-data request) :ivar _cooperate: A method like `Cooperator.cooperate` which is used to schedule all reads. 
:ivar boundary: The generated boundary used in form-data encoding """ length: _Length boundary: bytes _currentProducer: Optional[IBodyProducer] = None _task: Optional[task.CooperativeTask] = None def __init__( self, fields: _FilesType, boundary: Optional[Union[str, bytes]] = None, cooperator: task.Cooperator = cast(task.Cooperator, task), ) -> None: self._fields = _sorted_by_type(_converted(fields)) self._cooperate = cooperator.cooperate if not boundary: boundary = uuid4().hex.encode("ascii") if isinstance(boundary, str): boundary = boundary.encode("ascii") self.boundary = boundary self.length = self._calculateLength() def startProducing(self, consumer: IConsumer) -> "Deferred[None]": """ Start a cooperative task which will read bytes from the input file and write them to `consumer`. Return a `Deferred` which fires after all bytes have been written. :param consumer: Any `IConsumer` provider """ self._task = self._cooperate(self._writeLoop(consumer)) # type: ignore # whenDone returns the iterator that was passed to cooperate, so who # cares what type it has? It's an edge signal; we ignore its value. d: "Deferred[Any]" = self._task.whenDone() def maybeStopped(reason: Failure) -> "Deferred[None]": reason.trap(task.TaskStopped) return Deferred() d = cast("Deferred[None]", d.addCallbacks(lambda ignored: None, maybeStopped)) return d def stopProducing(self) -> None: """ Permanently stop writing bytes from the file to the consumer by stopping the underlying `CooperativeTask`. """ assert self._task is not None if self._currentProducer: self._currentProducer.stopProducing() self._task.stop() def pauseProducing(self) -> None: """ Temporarily suspend copying bytes from the input file to the consumer by pausing the `CooperativeTask` which drives that activity. """ assert self._task is not None if self._currentProducer: # Having a current producer means that we are in # the paused state because we've returned # the deferred of the current producer to the # the cooperator. 
So this request # for pausing us is actually a request to pause # our underlying current producer. self._currentProducer.pauseProducing() else: self._task.pause() def resumeProducing(self) -> None: """ Undo the effects of a previous `pauseProducing` and resume copying bytes to the consumer by resuming the `CooperativeTask` which drives the write activity. """ assert self._task is not None if self._currentProducer: self._currentProducer.resumeProducing() else: self._task.resume() def _calculateLength(self) -> _Length: """ Determine how many bytes the overall form post would consume. The easiest way to calculate this is to generate the entire request body (assuming the underlying producers are not modified from this point on). If the determination cannot be made, return `UNKNOWN_LENGTH`. """ consumer = _LengthConsumer() for i in list(self._writeLoop(consumer)): pass return consumer.length def _getBoundary(self, final: bool = False) -> bytes: """ Returns a boundary line, either final (the one that ends the form-data request) or regular (the one that separates the parts), e.g. --this-is-my-boundary """ f = b"--" if final else b"" return b"--" + self.boundary + f def _writeLoop(self, consumer: _Consumer) -> Iterable[Optional[Deferred]]: """ Return an iterator which generates the multipart/form-data request including the encoded objects and writes them to the consumer each time it is iterated. """ for index, (name, value) in enumerate(self._fields): # We don't write the CRLF of the first boundary: # HTTP request headers are already separated with CRLF # from the request body, another newline is possible # and should be considered as an empty preamble per rfc2046, # but is generally confusing, so we omit it when generating # the request. We don't write the Content-Type: multipart/form-data # header here either, as it's defined in the context of the HTTP # request headers, not the producer, so we just generate # the body.
# It's also important to note that the boundary in the message # is defined not only by "--boundary-value" but # but with CRLF characters before it and after the line. # This is very important. # proper boundary is "CRLF--boundary-valueCRLF" consumer.write((CRLF if index != 0 else b"") + self._getBoundary() + CRLF) yield self._writeField(name, value, consumer) consumer.write(CRLF + self._getBoundary(final=True) + CRLF) def _writeField( self, name: str, value: _FieldValue, consumer: _Consumer ) -> Optional[Deferred]: if isinstance(value, bytes): self._writeString(name, value, consumer) return None else: filename, content_type, producer = value return self._writeFile(name, filename, content_type, producer, consumer) def _writeString(self, name: str, value: bytes, consumer: _Consumer) -> None: cdisp = _Header(b"Content-Disposition", b"form-data") cdisp.add_param(b"name", name) consumer.write(bytes(cdisp) + CRLF + CRLF) consumer.write(value) self._currentProducer = None def _writeFile( self, name: str, filename: str, content_type: str, producer: IBodyProducer, consumer: _Consumer, ) -> "Optional[Deferred[None]]": cdisp = _Header(b"Content-Disposition", b"form-data") cdisp.add_param(b"name", name) if filename: cdisp.add_param(b"filename", filename) consumer.write(bytes(cdisp) + CRLF) consumer.write(bytes(_Header(b"Content-Type", content_type)) + CRLF) if producer.length != UNKNOWN_LENGTH: consumer.write( bytes(_Header(b"Content-Length", str(producer.length))) + CRLF ) consumer.write(CRLF) if isinstance(consumer, _LengthConsumer): consumer.write(producer.length) return None else: self._currentProducer = producer def unset(val): self._currentProducer = None return val d = producer.startProducing(consumer) return cast("Deferred[None]", d.addCallback(unset)) def _escape(value: Union[str, bytes]) -> str: """ This function prevents header values from corrupting the request, a newline in the file name parameter makes form-data request unreadable for majority of parsers. 
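`_writeLoop` frames the body exactly as the comments describe: no CRLF before the first boundary, ``CRLF--boundaryCRLF`` between parts, and a closing ``CRLF--boundary--CRLF``. A bytes-only sketch of that framing (no streaming or headers beyond Content-Disposition; names are illustrative):

```python
CRLF = b"\r\n"

def form_data(fields, boundary: bytes) -> bytes:
    """Assemble a multipart/form-data body the way _writeLoop frames
    it: no CRLF before the first boundary, CRLF-delimited boundaries
    between parts, and a closing --boundary-- line. Bytes values only;
    the real producer also streams file parts."""
    out = []
    for index, (name, value) in enumerate(fields):
        out.append((CRLF if index else b"") + b"--" + boundary + CRLF)
        out.append(b'Content-Disposition: form-data; name="' + name + b'"')
        out.append(CRLF + CRLF + value)
    out.append(CRLF + b"--" + boundary + b"--" + CRLF)
    return b"".join(out)
```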
""" if isinstance(value, bytes): value = value.decode("utf-8") return value.replace("\r", "").replace("\n", "").replace('"', '\\"') def _enforce_unicode(value: Any) -> str: """ This function enforces the strings passed to be unicode, so we won't need to guess what's the encoding of the binary strings passed in. If someone needs to pass the binary string, use BytesIO and wrap it with `FileBodyProducer`. """ if isinstance(value, str): return value elif isinstance(value, bytes): # we got a byte string, and we have no idea what's the encoding of it # we can only assume that it's something cool try: return value.decode("utf-8") except UnicodeDecodeError: raise ValueError( "Supplied raw bytes that are not ASCII/UTF-8." " When supplying raw string make sure it's ASCII or UTF-8" ", or work with unicode if you are not sure" ) else: raise ValueError("Unsupported field type: %s" % (value.__class__.__name__,)) def _converted(fields: _FilesType) -> Iterable[_Field]: """ Convert any of the multitude of formats we accept for the *fields* parameter into the form we work with internally. """ fields_: Iterable[Tuple[str, _FileValue]] if hasattr(fields, "items"): assert isinstance(fields, Mapping) fields_ = fields.items() else: fields_ = fields for name, value in fields_: # NOTE: While `name` is typed as `str` we still support UTF-8 `bytes` here # for backward compatibility, thus this call to decode. 
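`_escape` is the header-injection guard: strip CR/LF so a crafted filename cannot smuggle extra header lines, and backslash-escape double quotes before the value lands in a ``Content-Disposition`` parameter:

```python
def escape_header_param(value: str) -> str:
    # Same sanitisation as _escape above: drop CR/LF so a crafted
    # filename cannot inject extra header lines, and escape quotes.
    return value.replace("\r", "").replace("\n", "").replace('"', '\\"')

print(escape_header_param('evil"\r\nname'))  # evil\"name
```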
name = _enforce_unicode(name) if isinstance(value, (tuple, list)): if len(value) != 3: raise ValueError("Expected tuple: (filename, content type, producer)") filename, content_type, producer = value filename = _enforce_unicode(filename) if filename else None yield name, (filename, content_type, producer) elif isinstance(value, str): yield name, value.encode("utf-8") elif isinstance(value, bytes): yield name, value else: raise ValueError( "Unsupported value, expected str, bytes, " "or tuple (filename, content type, IBodyProducer)" ) class _LengthConsumer: """ `_LengthConsumer` is used to calculate the length of the multi-part request. The easiest way to do that is to consume all the fields, but instead writing them to the string just accumulate the request length. :ivar length: The length of the request. Can be `UNKNOWN_LENGTH` if consumer finds the field that has length that can not be calculated """ length: _Length def __init__(self) -> None: self.length = 0 def write(self, value: Union[bytes, _Length]) -> None: # this means that we have encountered # unknown length producer # so we need to stop attempts calculating if self.length == UNKNOWN_LENGTH: return assert isinstance(self.length, int) if value == UNKNOWN_LENGTH: self.length = cast(_UnknownLength, UNKNOWN_LENGTH) elif isinstance(value, int): self.length += value else: assert isinstance(value, bytes) self.length += len(value) class _Header: """ `_Header` This class is a tiny wrapper that produces request headers. We can't use standard python header class because it encodes unicode fields using =? bla bla ?= encoding, which is correct, but no one in HTTP world expects that, everyone wants utf-8 raw bytes. 
""" def __init__( self, name: bytes, value: _S, params: Optional[List[Tuple[_S, _S]]] = None, ): self.name = name self.value = value self.params = params or [] def add_param(self, name: _S, value: _S) -> None: self.params.append((name, value)) def __bytes__(self) -> bytes: with closing(BytesIO()) as h: h.write(self.name + b": " + _escape(self.value).encode("us-ascii")) if self.params: for (name, val) in self.params: h.write(b"; ") h.write(_escape(name).encode("us-ascii")) h.write(b"=") h.write(b'"' + _escape(val).encode("utf-8") + b'"') h.seek(0) return h.read() def _sorted_by_type(fields: Iterable[_Field]) -> List[_Field]: """Sorts params so that strings are placed before files. That makes a request more readable, as generally files are bigger. It also provides deterministic order of fields what is easier for testing. """ def key(p): key, val = p if isinstance(val, (bytes, str)): return (0, key) else: return (1, key) return sorted(fields, key=key) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/py.typed0000644000175100001660000000000014673126560015426 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/response.py0000644000175100001660000001011014673126560016142 0ustar00runnerdockerfrom typing import Any, Callable, List from requests.cookies import cookiejar_from_dict from http.cookiejar import CookieJar from twisted.internet.defer import Deferred from twisted.python import reflect from twisted.python.components import proxyForInterface from twisted.web.iweb import UNKNOWN_LENGTH, IResponse from treq.content import collect, content, json_content, text_content class _Response(proxyForInterface(IResponse)): # type: ignore """ A wrapper for :class:`twisted.web.iweb.IResponse` which manages cookies and adds a few convenience methods. 
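`_sorted_by_type` keeps requests readable by putting plain string fields before file parts. The same two-level sort key, minus treq's types:

```python
def sort_fields(fields):
    """Order form fields like _sorted_by_type: plain str/bytes values
    first, file tuples after, each group sorted by field name."""
    def key(pair):
        name, value = pair
        return (0 if isinstance(value, (bytes, str)) else 1, name)
    return sorted(fields, key=key)

ordered = sort_fields([
    ("z", "text"),
    ("a", ("f.txt", "text/plain", b"...")),
    ("b", b"raw"),
])
print([name for name, _ in ordered])  # ['b', 'z', 'a']
```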
""" original: IResponse _cookiejar: CookieJar def __init__(self, original: IResponse, cookiejar: CookieJar): self.original = original self._cookiejar = cookiejar def __repr__(self) -> str: """ Generate a representation of the response which includes the HTTP status code, Content-Type header, and body size, if available. """ if self.original.length == UNKNOWN_LENGTH: size = "unknown size" else: size = "{:,d} bytes".format(self.original.length) # Display non-ascii bits of the content-type header as backslash # escapes. content_type_bytes = b", ".join( self.original.headers.getRawHeaders(b"content-type", ()) ) content_type = repr(content_type_bytes).lstrip("b")[1:-1] return "<{} {} '{:.40s}' {}>".format( reflect.qual(self.__class__), self.original.code, content_type, size, ) def collect(self, collector: Callable[[bytes], None]) -> "Deferred[None]": """ Incrementally collect the body of the response, per :func:`treq.collect()`. :param collector: A single argument callable that will be called with chunks of body data as it is received. :returns: A `Deferred` that fires when the entire body has been received. """ return collect(self.original, collector) def content(self) -> "Deferred[bytes]": """ Read the entire body all at once, per :func:`treq.content()`. :returns: A `Deferred` that fires with a `bytes` object when the entire body has been received. """ return content(self.original) def json(self, **kwargs: Any) -> "Deferred[Any]": """ Collect the response body as JSON per :func:`treq.json_content()`. :param kwargs: Any keyword arguments accepted by :py:func:`json.loads` :rtype: Deferred that fires with the decoded JSON when the entire body has been read. """ return json_content(self.original, **kwargs) def text(self, encoding: str = "ISO-8859-1") -> "Deferred[str]": """ Read the entire body all at once as text, per :func:`treq.text_content()`. :rtype: A `Deferred` that fires with a unicode string when the entire body has been received. 
""" return text_content(self.original, encoding) def history(self) -> "List[_Response]": """ Get a list of all responses that (such as intermediate redirects), that ultimately ended in the current response. The responses are ordered chronologically. """ response = self history = [] while response.previousResponse is not None: history.append(_Response(response.previousResponse, self._cookiejar)) response = response.previousResponse history.reverse() return history def cookies(self) -> CookieJar: """ Get a copy of this response's cookies. """ # NB: This actually returns a RequestsCookieJar, but we type it as a # regular CookieJar because we want to ditch requests as a dependency. # Full deprecation deprecation will require a subclass or wrapper that # warns about the RequestCookieJar extensions. jar: CookieJar = cookiejar_from_dict({}) for cookie in self._cookiejar: jar.set_cookie(cookie) return jar ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3211887 treq-24.9.1/src/treq/test/0000755000175100001660000000000014673126570014721 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/__init__.py0000644000175100001660000000000014673126560017017 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3211887 treq-24.9.1/src/treq/test/local_httpbin/0000755000175100001660000000000014673126570017543 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/local_httpbin/__init__.py0000644000175100001660000000000014673126560021641 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/local_httpbin/child.py0000644000175100001660000002156414673126560021207 0ustar00runnerdocker""" A local ``httpbin`` server to run integration 
tests against. This ensures tests do not depend on `httpbin `_. """ from __future__ import print_function import argparse import datetime import sys import httpbin # type: ignore from twisted.internet.defer import Deferred, inlineCallbacks from twisted.internet.endpoints import TCP4ServerEndpoint, SSL4ServerEndpoint from twisted.internet.task import react from twisted.internet.ssl import (Certificate, CertificateOptions) from OpenSSL.crypto import PKey, X509 from twisted.python.threadpool import ThreadPool from twisted.web.server import Site from twisted.web.wsgi import WSGIResource from cryptography import x509 from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes from cryptography.hazmat.primitives.asymmetric import rsa from cryptography.x509.oid import NameOID from cryptography.hazmat.primitives.serialization import Encoding from .shared import _HTTPBinDescription def _certificates_for_authority_and_server(service_identity, key_size=2048): """ Create a self-signed CA certificate and server certificate signed by the CA. :param service_identity: The identity (hostname) of the server. :type service_identity: :py:class:`unicode` :param key_size: (optional) The size of CA's and server's private RSA keys. Defaults to 2048 bits, which is the minimum allowed by OpenSSL Contexts at the default security level. :type key_size: :py:class:`int` :return: a 3-tuple of ``(certificate_authority_certificate, server_private_key, server_certificate)``. 
:rtype: :py:class:`tuple` of (:py:class:`sslverify.Certificate`, :py:class:`OpenSSL.crypto.PKey`, :py:class:`OpenSSL.crypto.X509`) """ common_name_for_ca = x509.Name( [x509.NameAttribute(NameOID.COMMON_NAME, u'Testing Example CA')] ) common_name_for_server = x509.Name( [x509.NameAttribute(NameOID.COMMON_NAME, u'Testing Example Server')] ) one_day = datetime.timedelta(1, 0, 0) private_key_for_ca = rsa.generate_private_key( public_exponent=65537, key_size=key_size, backend=default_backend() ) public_key_for_ca = private_key_for_ca.public_key() ca_certificate = ( x509.CertificateBuilder() .subject_name(common_name_for_ca) .issuer_name(common_name_for_ca) .not_valid_before(datetime.datetime.today() - one_day) .not_valid_after(datetime.datetime.today() + one_day) .serial_number(x509.random_serial_number()) .public_key(public_key_for_ca) .add_extension( x509.BasicConstraints(ca=True, path_length=9), critical=True, ) .sign( private_key=private_key_for_ca, algorithm=hashes.SHA256(), backend=default_backend() ) ) private_key_for_server = rsa.generate_private_key( public_exponent=65537, key_size=key_size, backend=default_backend() ) public_key_for_server = private_key_for_server.public_key() server_certificate = ( x509.CertificateBuilder() .subject_name(common_name_for_server) .issuer_name(common_name_for_ca) .not_valid_before(datetime.datetime.today() - one_day) .not_valid_after(datetime.datetime.today() + one_day) .serial_number(x509.random_serial_number()) .public_key(public_key_for_server) .add_extension( x509.BasicConstraints(ca=False, path_length=None), critical=True, ) .add_extension( x509.SubjectAlternativeName( [x509.DNSName(service_identity)] ), critical=True, ) .sign( private_key=private_key_for_ca, algorithm=hashes.SHA256(), backend=default_backend() ) ) ca_self_cert = Certificate.loadPEM( ca_certificate.public_bytes(Encoding.PEM) ) pkey = PKey.from_cryptography_key(private_key_for_server) x509_server_certificate = X509.from_cryptography(server_certificate) 
return ca_self_cert, pkey, x509_server_certificate def _make_httpbin_site(reactor, threadpool_factory=ThreadPool): """ Return a :py:class:`Site` that hosts an ``httpbin`` WSGI application. :param reactor: The reactor. :param threadpool_factory: (optional) A callable that creates a :py:class:`ThreadPool`. :return: A :py:class:`Site` that hosts ``httpbin`` """ wsgi_threads = threadpool_factory() wsgi_threads.start() reactor.addSystemEventTrigger("before", "shutdown", wsgi_threads.stop) wsgi_resource = WSGIResource(reactor, wsgi_threads, httpbin.app) return Site(wsgi_resource) @inlineCallbacks def _serve_tls(reactor, host, port, site): """ Serve a site over TLS. :param reactor: The reactor. :param host: The host on which to listen. :type host: :py:class:`str` :param port: The port on which to listen. :type port: :py:class:`int` :param site: The :py:class:`Site` to serve. :return: A :py:class:`Deferred` that fires with a :py:class:`_HTTPBinDescription` """ ( ca_cert, private_key, certificate, ) = _certificates_for_authority_and_server(host) context_factory = CertificateOptions(privateKey=private_key, certificate=certificate) endpoint = SSL4ServerEndpoint(reactor, port, sslContextFactory=context_factory, interface=host) port = yield endpoint.listen(site) description = _HTTPBinDescription(host=host, port=port.getHost().port, cacert=ca_cert.dumpPEM().decode('ascii')) return description @inlineCallbacks def _serve_tcp(reactor, host, port, site): """ Serve a site over plain TCP. :param reactor: The reactor. :param host: The host on which to listen. :type host: :py:class:`str` :param port: The port on which to listen.
:type port: :py:class:`int` :param site: The :py:class:`Site` to serve. :return: A :py:class:`Deferred` that fires with a :py:class:`_HTTPBinDescription` """ endpoint = TCP4ServerEndpoint(reactor, port, interface=host) port = yield endpoint.listen(site) description = _HTTPBinDescription(host=host, port=port.getHost().port) return description def _output_process_description(description, stdout=sys.stdout): """ Write a process description to standard out. :param description: The process description. :type description: :py:class:`_HTTPBinDescription` :param stdout: (optional) Standard out. """ stdout.buffer.write(description.to_json_bytes() + b'\n') stdout.buffer.flush() def _forever_httpbin(reactor, argv, _make_httpbin_site=_make_httpbin_site, _serve_tcp=_serve_tcp, _serve_tls=_serve_tls, _output_process_description=_output_process_description): """ Run ``httpbin`` forever. :param reactor: The Twisted reactor. :param argv: The arguments with which the script was run. :type argv: :py:class:`list` of :py:class:`str` :return: a :py:class:`Deferred` that never fires. """ parser = argparse.ArgumentParser( description=""" Run httpbin forever. This writes a JSON object to standard out. The host and port properties contain the host and port on which httpbin listens. When run with HTTPS, the cacert property contains the PEM-encoded CA certificate that clients must trust.
""" ) parser.add_argument("--https", help="Serve HTTPS", action="store_const", dest='serve', const=_serve_tls, default=_serve_tcp) parser.add_argument("--host", help="The host on which the server will listen.", type=str, default="localhost") parser.add_argument("--port", help="The on which the server will listen.", type=int, default=0) arguments = parser.parse_args(argv) site = _make_httpbin_site(reactor) description_deferred = arguments.serve(reactor, arguments.host, arguments.port, site) description_deferred.addCallback(_output_process_description) description_deferred.addCallback(lambda _: Deferred()) return description_deferred if __name__ == '__main__': react(_forever_httpbin, (sys.argv[1:],)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/local_httpbin/parent.py0000644000175100001660000001231114673126560021403 0ustar00runnerdocker""" Spawn and monitor an ``httpbin`` child process. """ import attr import signal import sys import os from twisted.protocols import basic, policies from twisted.internet import protocol, endpoints, error from twisted.internet.defer import Deferred, succeed from .shared import _HTTPBinDescription class _HTTPBinServerProcessProtocol(basic.LineOnlyReceiver): """ Manage the lifecycle of an ``httpbin`` process. """ delimiter = b'\n' def __init__(self, all_data_received, terminated): """ Manage the lifecycle of an ``httpbin`` process. :param all_data_received: A Deferred that will be called back with an :py:class:`_HTTPBinDescription` object :type all_data_received: :py:class:`Deferred` :param terminated: A Deferred that will be called back when the process has ended. 
:type terminated: :py:class:`Deferred` """ self._all_data_received = all_data_received self._received = False self._terminated = terminated def lineReceived(self, line): if self._received: raise RuntimeError("Unexpected line: {!r}".format(line)) description = _HTTPBinDescription.from_json_bytes(line) self._received = True # Remove readers and writers that leave the reactor in a dirty # state after a test. self.transport.closeStdin() self.transport.closeStdout() self.transport.closeStderr() self._all_data_received.callback(description) def connectionLost(self, reason): if not self._received: self._all_data_received.errback(reason) self._terminated.errback(reason) @attr.s class _HTTPBinProcess: """ Manage an ``httpbin`` server process. :ivar _all_data_received: See :py:attr:`_HTTPBinServerProcessProtocol.all_data_received` :ivar _terminated: See :py:attr:`_HTTPBinServerProcessProtocol.terminated` """ _https = attr.ib() _error_log_path = attr.ib(default='httpbin-server-error.log') _all_data_received = attr.ib(init=False, default=attr.Factory(Deferred)) _terminated = attr.ib(init=False, default=attr.Factory(Deferred)) _process = attr.ib(init=False, default=None) _process_description = attr.ib(init=False, default=None) _open = staticmethod(open) def _spawn_httpbin_process(self, reactor): """ Spawn an ``httpbin`` process, returning a :py:class:`Deferred` that fires with the process transport and result. """ server = _HTTPBinServerProcessProtocol( all_data_received=self._all_data_received, terminated=self._terminated ) argv = [ sys.executable, '-m', 'treq.test.local_httpbin.child', ] if self._https: argv.append('--https') with self._open(self._error_log_path, 'wb') as error_log: endpoint = endpoints.ProcessEndpoint( reactor, sys.executable, argv, env=os.environ, childFDs={ 1: 'r', 2: error_log.fileno(), }, ) # Processes are spawned synchronously. spawned = endpoint.connect( # ProtocolWrapper, WrappingFactory's protocol, has a # disconnecting attribute. 
See # https://twistedmatrix.com/trac/ticket/6606 policies.WrappingFactory( protocol.Factory.forProtocol(lambda: server), ), ) def wait_for_protocol(connected_protocol): process = connected_protocol.transport return self._all_data_received.addCallback( return_result_and_process, process, ) def return_result_and_process(description, process): return description, process return spawned.addCallback(wait_for_protocol) def server_description(self, reactor): """ Return a :py:class:`Deferred` that fires with the process' :py:class:`_HTTPBinDescription`, spawning the process if necessary. """ if self._process is None: ready = self._spawn_httpbin_process(reactor) def store_and_schedule_termination(description_and_process): description, process = description_and_process self._process = process self._process_description = description reactor.addSystemEventTrigger("before", "shutdown", self.kill) return self._process_description return ready.addCallback(store_and_schedule_termination) else: return succeed(self._process_description) def kill(self): """ Kill the ``httpbin`` process. """ if not self._process: return self._process.signalProcess("KILL") def suppress_process_terminated(exit_failure): exit_failure.trap(error.ProcessTerminated) if exit_failure.value.signal != signal.SIGKILL: return exit_failure return self._terminated.addErrback(suppress_process_terminated) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/local_httpbin/shared.py0000644000175100001660000000201514673126560021360 0ustar00runnerdocker""" Things shared between the ``httpbin`` child and parent processes """ import attr import json @attr.s class _HTTPBinDescription: """ Describe an ``httpbin`` process. :param host: The host on which the process listens. :type host: :py:class:`str` :param port: The port on which the process listens.
:type port: :py:class:`int` :param cacert: (optional) The PEM-encoded certificate authority's certificate. The calling process' treq must trust this when running HTTPS tests. :type cacert: :py:class:`bytes` or :py:class:`None` """ host = attr.ib() port = attr.ib() cacert = attr.ib(default=None) @classmethod def from_json_bytes(cls, json_data): """ Deserialize an instance from JSON bytes. """ return cls(**json.loads(json_data.decode('ascii'))) def to_json_bytes(self): """ Serialize an instance to JSON bytes. """ return json.dumps(attr.asdict(self), sort_keys=True).encode('ascii') ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3221886 treq-24.9.1/src/treq/test/local_httpbin/test/0000755000175100001660000000000014673126570020522 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/local_httpbin/test/__init__.py0000644000175100001660000000000014673126560022620 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/local_httpbin/test/test_child.py0000644000175100001660000003045714673126560023226 0ustar00runnerdocker""" Tests for :py:mod:`treq.test.local_httpbin.child` """ import attr from cryptography.hazmat.primitives.asymmetric import padding import functools import io from twisted.trial.unittest import SynchronousTestCase try: from twisted.internet.testing import MemoryReactor except ImportError: from twisted.test.proto_helpers import MemoryReactor from twisted.internet import defer from treq.test.util import skip_on_windows_because_of_199 from twisted.web.server import Site from twisted.web.resource import Resource from service_identity.cryptography import verify_certificate_hostname from ..
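`_HTTPBinDescription`'s JSON round-trip can be sketched with a stdlib dataclass in place of ``attr`` (the `Description` class here is hypothetical, same idea): serialize with ``sort_keys`` so the bytes are deterministic, and deserialize by splatting the parsed dict back into the constructor.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class Description:
    host: str
    port: int
    cacert: Optional[str] = None

    def to_json_bytes(self) -> bytes:
        # sort_keys yields byte-for-byte deterministic output, handy in tests.
        return json.dumps(asdict(self), sort_keys=True).encode("ascii")

    @classmethod
    def from_json_bytes(cls, data: bytes) -> "Description":
        # Keys in the JSON object must match the constructor's parameter names.
        return cls(**json.loads(data.decode("ascii")))


original = Description(host="localhost", port=8080)
print(original.to_json_bytes())
# b'{"cacert": null, "host": "localhost", "port": 8080}'
print(Description.from_json_bytes(original.to_json_bytes()) == original)  # True
```

Deterministic serialization is what allows the child-process tests to compare the emitted line against an exact expected byte string.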
import child, shared skip = skip_on_windows_because_of_199() class CertificatesForAuthorityAndServerTests(SynchronousTestCase): """ Tests for :py:func:`child._certificates_for_authority_and_server` """ def setUp(self): self.hostname = ".example.org" ( self.ca_cert, self.server_private_key, self.server_x509_cert, ) = child._certificates_for_authority_and_server( self.hostname, ) def test_pkey_x509_paired(self): """ The returned private key corresponds to the X.509 certificate's public key. """ server_private_key = self.server_private_key.to_cryptography_key() server_x509_cert = self.server_x509_cert.to_cryptography() plaintext = b"plaintext" ciphertext = server_x509_cert.public_key().encrypt( plaintext, padding.PKCS1v15(), ) self.assertEqual( server_private_key.decrypt( ciphertext, padding.PKCS1v15(), ), plaintext, ) def test_ca_signed_x509(self): """ The returned X.509 certificate was signed by the returned certificate authority's certificate. """ ca_cert = self.ca_cert.original.to_cryptography() server_x509_cert = self.server_x509_cert.to_cryptography() # Raises an InvalidSignature exception on failure. ca_cert.public_key().verify( server_x509_cert.signature, server_x509_cert.tbs_certificate_bytes, padding.PKCS1v15(), server_x509_cert.signature_hash_algorithm, ) def test_x509_matches_hostname(self): """ The returned X.509 certificate is valid for the hostname. """ verify_certificate_hostname( self.server_x509_cert.to_cryptography(), self.hostname, ) @attr.s class FakeThreadPoolState(object): """ State for :py:class:`FakeThreadPool`. 
""" init_call_count = attr.ib(default=0) start_call_count = attr.ib(default=0) @attr.s class FakeThreadPool(object): """ A fake :py:class:`twisted.python.threadpool.ThreadPool` """ _state = attr.ib() def init(self): self._state.init_call_count += 1 return self def start(self): """ See :py:meth:`twisted.python.threadpool.ThreadPool.start` """ self._state.start_call_count += 1 def stop(self): """ See :py:meth:`twisted.python.threadpool.ThreadPool.stop` """ class MakeHTTPBinSiteTests(SynchronousTestCase): """ Tests for :py:func:`_make_httpbin_site`. """ def setUp(self): self.fake_threadpool_state = FakeThreadPoolState() self.fake_threadpool = FakeThreadPool(self.fake_threadpool_state) self.reactor = MemoryReactor() def test_threadpool_management(self): """ A thread pool is created that will be shut down when the reactor shuts down. """ child._make_httpbin_site( self.reactor, threadpool_factory=self.fake_threadpool.init, ) self.assertEqual(self.fake_threadpool_state.init_call_count, 1) self.assertEqual(self.fake_threadpool_state.start_call_count, 1) self.assertEqual(len(self.reactor.triggers["before"]["shutdown"]), 1) [(stop, _, _)] = self.reactor.triggers["before"]["shutdown"] self.assertEqual(stop, self.fake_threadpool.stop) class ServeTLSTests(SynchronousTestCase): """ Tests for :py:func:`_serve_tls` """ def setUp(self): self.reactor = MemoryReactor() self.site = Site(Resource()) def test_tls_listener_matches_description(self): """ An SSL listener is established on the requested host and port, and the host, port, and CA certificate are returned in its description. 
""" expected_host = "host" expected_port = 123 description_deferred = child._serve_tls( self.reactor, host=expected_host, port=expected_port, site=self.site, ) self.assertEqual(len(self.reactor.sslServers), 1) [(actual_port, actual_site, _, _, actual_host)] = self.reactor.sslServers self.assertEqual(actual_host, expected_host) self.assertEqual(actual_port, expected_port) self.assertIs(actual_site, self.site) description = self.successResultOf(description_deferred) self.assertEqual(description.host, expected_host) self.assertEqual(description.port, expected_port) self.assertTrue(description.cacert) class ServeTCPTests(SynchronousTestCase): """ Tests for :py:func:`_serve_tcp` """ def setUp(self): self.reactor = MemoryReactor() self.site = Site(Resource) def test_tcp_listener_matches_description(self): """ A TCP listeneris established on the request host and port, and the host and port are returned in its description. """ expected_host = "host" expected_port = 123 description_deferred = child._serve_tcp( self.reactor, host=expected_host, port=expected_port, site=self.site, ) self.assertEqual(len(self.reactor.tcpServers), 1) [(actual_port, actual_site, _, actual_host)] = self.reactor.tcpServers self.assertEqual(actual_host, expected_host) self.assertEqual(actual_port, expected_port) self.assertIs(actual_site, self.site) description = self.successResultOf(description_deferred) self.assertEqual(description.host, expected_host) self.assertEqual(description.port, expected_port) self.assertFalse(description.cacert) @attr.s class FlushableBytesIOState(object): """ State for :py:class:`FlushableBytesIO` """ bio = attr.ib(default=attr.Factory(io.BytesIO)) flush_count = attr.ib(default=0) @attr.s class FlushableBytesIO(object): """ A :py:class:`io.BytesIO` wrapper that records flushes. 
""" _state = attr.ib() def write(self, data): self._state.bio.write(data) def flush(self): self._state.flush_count += 1 @attr.s class BufferedStandardOut(object): """ A standard out that whose ``buffer`` is a :py:class:`FlushableBytesIO` instance. """ buffer = attr.ib() class OutputProcessDescriptionTests(SynchronousTestCase): """ Tests for :py:func:`_output_process_description` """ def setUp(self): self.stdout_state = FlushableBytesIOState() self.stdout = BufferedStandardOut(FlushableBytesIO(self.stdout_state)) def test_description_written(self): """ An :py:class:`shared._HTTPBinDescription` is written to standard out and the line flushed. """ description = shared._HTTPBinDescription(host="host", port=123, cacert="cacert") child._output_process_description(description, self.stdout) written = self.stdout_state.bio.getvalue() self.assertEqual( written, b'{"cacert": "cacert", "host": "host", "port": 123}' + b"\n", ) self.assertEqual(self.stdout_state.flush_count, 1) class ForeverHTTPBinTests(SynchronousTestCase): """ Tests for :py:func:`_forever_httpbin` """ def setUp(self): self.make_httpbin_site_returns = Site(Resource()) self.serve_tcp_calls = [] self.serve_tcp_returns = defer.Deferred() self.serve_tls_calls = [] self.serve_tls_returns = defer.Deferred() self.output_process_description_calls = [] self.output_process_description_returns = None self.reactor = MemoryReactor() self.forever_httpbin = functools.partial( child._forever_httpbin, _make_httpbin_site=self.make_httpbin_site, _serve_tcp=self.serve_tcp, _serve_tls=self.serve_tls, _output_process_description=self.output_process_description, ) def make_httpbin_site(self, reactor, *args, **kwargs): """ A fake :py:func:`child._make_httpbin_site`. """ return self.make_httpbin_site_returns def serve_tcp(self, reactor, host, port, site): """ A fake :py:func:`child._serve_tcp`. 
""" self.serve_tcp_calls.append((reactor, host, port, site)) return self.serve_tcp_returns def serve_tls(self, reactor, host, port, site): """ A fake :py:func:`child._serve_tls`. """ self.serve_tls_calls.append((reactor, host, port, site)) return self.serve_tls_returns def output_process_description(self, description, *args, **kwargs): """ A fake :py:func:`child._output_process_description` """ self.output_process_description_calls.append(description) return self.output_process_description_returns def assertDescriptionAndDeferred(self, description_deferred, forever_deferred): """ Assert that firing ``description_deferred`` outputs the description but that ``forever_deferred`` never fires. """ description_deferred.callback("description") self.assertEqual(self.output_process_description_calls, ["description"]) self.assertNoResult(forever_deferred) def test_default_arguments(self): """ The default command line arguments host ``httpbin`` on ``localhost`` and a randomly-assigned port, returning a :py:class:`Deferred` that never fires. """ deferred = self.forever_httpbin(self.reactor, []) self.assertEqual( self.serve_tcp_calls, [(self.reactor, "localhost", 0, self.make_httpbin_site_returns)], ) self.assertDescriptionAndDeferred( description_deferred=self.serve_tcp_returns, forever_deferred=deferred, ) def test_https(self): """ The ``--https`` command line argument serves ``httpbin`` over HTTPS, returning a :py:class:`Deferred` that never fires. """ deferred = self.forever_httpbin(self.reactor, ["--https"]) self.assertEqual( self.serve_tls_calls, [(self.reactor, "localhost", 0, self.make_httpbin_site_returns)], ) self.assertDescriptionAndDeferred( description_deferred=self.serve_tls_returns, forever_deferred=deferred, ) def test_host(self): """ The ``--host`` command line argument serves ``httpbin`` on provided host, returning a :py:class:`Deferred` that never fires. 
""" deferred = self.forever_httpbin(self.reactor, ["--host", "example.org"]) self.assertEqual( self.serve_tcp_calls, [ ( self.reactor, "example.org", 0, self.make_httpbin_site_returns, ) ], ) self.assertDescriptionAndDeferred( description_deferred=self.serve_tcp_returns, forever_deferred=deferred, ) def test_port(self): """ The ``--port`` command line argument serves ``httpbin`` on the provided port, returning a :py:class:`Deferred` that never fires. """ deferred = self.forever_httpbin(self.reactor, ["--port", "91"]) self.assertEqual( self.serve_tcp_calls, [(self.reactor, "localhost", 91, self.make_httpbin_site_returns)], ) self.assertDescriptionAndDeferred( description_deferred=self.serve_tcp_returns, forever_deferred=deferred, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/local_httpbin/test/test_parent.py0000644000175100001660000003675014673126560023436 0ustar00runnerdocker""" Tests for :py:mod:`treq.test.local_httpbin.parent` """ import attr import json import signal import sys from twisted.internet import defer from twisted.internet.interfaces import (IProcessTransport, IReactorCore, IReactorProcess) from twisted.python.failure import Failure from treq.test.util import skip_on_windows_because_of_199 from twisted.internet.error import ProcessTerminated, ConnectionDone try: from twisted.internet.testing import MemoryReactor, StringTransport except ImportError: from twisted.test.proto_helpers import MemoryReactor, StringTransport from twisted.trial.unittest import SynchronousTestCase from zope.interface import implementer, verify from .. import parent, shared skip = skip_on_windows_because_of_199() @attr.s class FakeProcessTransportState(object): """ State for :py:class:`FakeProcessTransport`. 
""" standard_in_closed = attr.ib(default=False) standard_out_closed = attr.ib(default=False) standard_error_closed = attr.ib(default=False) signals = attr.ib(default=attr.Factory(list)) @implementer(IProcessTransport) @attr.s class FakeProcessTransport(StringTransport, object): """ A fake process transport. """ pid = 1234 _state = attr.ib() def closeStdin(self): """ Close standard in. """ self._state.standard_in_closed = True def closeStdout(self): """ Close standard out. """ self._state.standard_out_closed = True def closeStderr(self): """ Close standard error. """ self._state.standard_error_closed = True def closeChildFD(self, descriptor): """ Close a child's file descriptor. :param descriptor: See :py:class:`IProcessProtocol.closeChildFD` """ def writeToChild(self, childFD, data): """ Write data to a child's file descriptor. :param childFD: See :py:class:`IProcessProtocol.writeToChild` :param data: See :py:class:`IProcessProtocol.writeToChild` """ def signalProcess(self, signalID): """ Send a signal. :param signalID: See :py:class:`IProcessProtocol.signalProcess` """ self._state.signals.append(signalID) class FakeProcessTransportTests(SynchronousTestCase): """ Tests for :py:class:`FakeProcessTransport`. """ def setUp(self): self.state = FakeProcessTransportState() self.transport = FakeProcessTransport(self.state) def test_provides_interface(self): """ Instances provide :py:class:`IProcessTransport`. """ verify.verifyObject(IProcessTransport, self.transport) def test_closeStdin(self): """ Closing standard in updates the state instance. """ self.assertFalse(self.state.standard_in_closed) self.transport.closeStdin() self.assertTrue(self.state.standard_in_closed) def test_closeStdout(self): """ Closing standard out updates the state instance. """ self.assertFalse(self.state.standard_out_closed) self.transport.closeStdout() self.assertTrue(self.state.standard_out_closed) def test_closeStderr(self): """ Closing standard error updates the state instance. 
""" self.assertFalse(self.state.standard_error_closed) self.transport.closeStderr() self.assertTrue(self.state.standard_error_closed) class HTTPServerProcessProtocolTests(SynchronousTestCase): """ Tests for :py:class:`parent._HTTPBinServerProcessProtocol` """ def setUp(self): self.transport_state = FakeProcessTransportState() self.transport = FakeProcessTransport(self.transport_state) self.all_data_received = defer.Deferred() self.terminated = defer.Deferred() self.protocol = parent._HTTPBinServerProcessProtocol( all_data_received=self.all_data_received, terminated=self.terminated, ) self.protocol.makeConnection(self.transport) def assertStandardInputAndOutputClosed(self): """ The transport's standard in, out, and error are closed. """ self.assertTrue(self.transport_state.standard_in_closed) self.assertTrue(self.transport_state.standard_out_closed) self.assertTrue(self.transport_state.standard_error_closed) def test_receive_http_description(self): """ Receiving a serialized :py:class:`_HTTPBinDescription` fires the ``all_data_received`` :py:class:`Deferred`. """ self.assertNoResult(self.all_data_received) description = shared._HTTPBinDescription("host", 1234, "cert") self.protocol.lineReceived( json.dumps(attr.asdict(description)).encode('ascii') ) self.assertStandardInputAndOutputClosed() self.assertEqual(self.successResultOf(self.all_data_received), description) def test_receive_unexpected_line(self): """ Receiving a line after the description synchronously raises in :py:class:`RuntimeError` """ self.test_receive_http_description() with self.assertRaises(RuntimeError): self.protocol.lineReceived(b"unexpected") def test_connection_lost_before_receiving_data(self): """ If the process terminates before its data is received, both ``all_data_received`` and ``terminated`` errback. 
""" self.assertNoResult(self.all_data_received) self.protocol.connectionLost(Failure(ConnectionDone("done"))) self.assertIsInstance( self.failureResultOf(self.all_data_received).value, ConnectionDone, ) self.assertIsInstance( self.failureResultOf(self.terminated).value, ConnectionDone, ) def test_connection_lost(self): """ ``terminated`` fires when the connection is lost. """ self.test_receive_http_description() self.protocol.connectionLost(Failure(ConnectionDone("done"))) self.assertIsInstance( self.failureResultOf(self.terminated).value, ConnectionDone, ) @attr.s class SpawnedProcess(object): """ A call to :py:class:`MemoryProcessReactor.spawnProcess`. """ process_protocol = attr.ib() executable = attr.ib() args = attr.ib() env = attr.ib() path = attr.ib() uid = attr.ib() gid = attr.ib() use_pty = attr.ib() child_fds = attr.ib() returned_process_transport = attr.ib() returned_process_transport_state = attr.ib() def send_stdout(self, data): """ Send data from the process' standard out. :param data: The standard out data. """ self.process_protocol.childDataReceived(1, data) def end_process(self, reason): """ End the process. :param reason: The reason. :type reason: :py:class:`Failure` """ self.process_protocol.processEnded(reason) @implementer(IReactorCore, IReactorProcess) class MemoryProcessReactor(MemoryReactor): """ A fake :py:class:`IReactorProcess` and :py:class:`IReactorCore` provider to be used in tests. """ def __init__(self): MemoryReactor.__init__(self) self.spawnedProcesses = [] def spawnProcess(self, processProtocol, executable, args=(), env={}, path=None, uid=None, gid=None, usePTY=0, childFDs=None): """ :ivar process_protocol: Stores the protocol passed to the reactor. :return: An L{IProcessTransport} provider. 
""" transport_state = FakeProcessTransportState() transport = FakeProcessTransport(transport_state) self.spawnedProcesses.append(SpawnedProcess( process_protocol=processProtocol, executable=executable, args=args, env=env, path=path, uid=uid, gid=gid, use_pty=usePTY, child_fds=childFDs, returned_process_transport=transport, returned_process_transport_state=transport_state, )) processProtocol.makeConnection(transport) return transport class MemoryProcessReactorTests(SynchronousTestCase): """ Tests for :py:class:`MemoryProcessReactor` """ def test_provides_interfaces(self): """ :py:class:`MemoryProcessReactor` instances provide :py:class:`IReactorCore` and :py:class:`IReactorProcess`. """ reactor = MemoryProcessReactor() verify.verifyObject(IReactorCore, reactor) verify.verifyObject(IReactorProcess, reactor) class HTTPBinProcessTests(SynchronousTestCase): """ Tests for :py:class:`_HTTPBinProcesss`. """ def setUp(self): self.reactor = MemoryProcessReactor() self.opened_file_descriptors = [] def fd_recording_open(self, *args, **kwargs): """ Record the file descriptors of files opened by :py:func:`open`. :return: A file object. """ fobj = open(*args, **kwargs) self.opened_file_descriptors.append(fobj.fileno()) return fobj def spawned_process(self): """ Assert that ``self.reactor`` has spawned only one process and return the :py:class:`SpawnedProcess` representing it. :return: The :py:class:`SpawnedProcess`. """ self.assertEqual(len(self.reactor.spawnedProcesses), 1) return self.reactor.spawnedProcesses[0] def assertSpawnAndDescription(self, process, args, description): """ Assert that spawning the given process invokes the command with the given args, that standard error is redirected, that it is killed at reactor shutdown, and that it returns a description that matches the provided one. :param process: :py:class:`_HTTPBinProcesss` instance. :param args: The arguments with which to execute the child process. 
:type args: :py:class:`tuple` of :py:class:`str` :param description: The expected :py:class:`_HTTPBinDescription`. :return: The returned :py:class:`_HTTPBinDescription` """ process._open = self.fd_recording_open description_deferred = process.server_description(self.reactor) spawned_process = self.spawned_process() self.assertEqual(spawned_process.args, args) self.assertEqual(len(self.opened_file_descriptors), 1) [error_log_fd] = self.opened_file_descriptors self.assertEqual(spawned_process.child_fds.get(2), error_log_fd) self.assertNoResult(description_deferred) spawned_process.send_stdout(description.to_json_bytes() + b'\n') before_shutdown = self.reactor.triggers["before"]["shutdown"] self.assertEqual(len(before_shutdown), 1) [(before_shutdown_function, _, _)] = before_shutdown self.assertEqual(before_shutdown_function, process.kill) self.assertEqual(self.successResultOf(description_deferred), description) def test_server_description_spawns_process(self): """ :py:class:`_HTTPBinProcess.server_description` spawns an ``httpbin`` child process that it monitors with :py:class:`_HTTPBinServerProcessProtocol`, and redirects its standard error to a log file. """ httpbin_process = parent._HTTPBinProcess(https=False) description = shared._HTTPBinDescription(host="host", port=1234) self.assertSpawnAndDescription( httpbin_process, [ sys.executable, '-m', 'treq.test.local_httpbin.child' ], description) def test_server_description_spawns_process_https(self): """ :py:class:`_HTTPBinProcess.server_description` spawns an ``httpbin`` child process that listens over HTTPS, that it monitors with :py:class:`_HTTPBinServerProcessProtocol`, and redirects the process' standard error to a log file. 
""" httpbin_process = parent._HTTPBinProcess(https=True) description = shared._HTTPBinDescription(host="host", port=1234, cacert="cert") self.assertSpawnAndDescription( httpbin_process, [ sys.executable, '-m', 'treq.test.local_httpbin.child', '--https', ], description) def test_server_description_caches_description(self): """ :py:class:`_HTTPBinProcess.server_description` spawns an ``httpbin`` child process only once, after which it returns a cached :py:class:`_HTTPBinDescription`. """ httpbin_process = parent._HTTPBinProcess(https=False) description_deferred = httpbin_process.server_description(self.reactor) self.spawned_process().send_stdout( shared._HTTPBinDescription(host="host", port=1234).to_json_bytes() + b'\n' ) description = self.successResultOf(description_deferred) cached_description_deferred = httpbin_process.server_description( self.reactor, ) cached_description = self.successResultOf(cached_description_deferred) self.assertIs(description, cached_description) def test_kill_before_spawn(self): """ Killing a process before it has been spawned has no effect. """ parent._HTTPBinProcess(https=False).kill() def test_kill(self): """ Kill terminates the process as quickly as the platform allows, and the termination failure is suppressed. 
""" httpbin_process = parent._HTTPBinProcess(https=False) httpbin_process.server_description(self.reactor) spawned_process = self.spawned_process() spawned_process.send_stdout( shared._HTTPBinDescription(host="host", port=1234).to_json_bytes() + b'\n' ) termination_deferred = httpbin_process.kill() self.assertEqual( spawned_process.returned_process_transport_state.signals, ['KILL'], ) spawned_process.end_process( Failure(ProcessTerminated(1, signal=signal.SIGKILL)), ) self.successResultOf(termination_deferred) def test_kill_unexpected_exit(self): """ The :py:class:`Deferred` returned by :py:meth:`_HTTPBinProcess.kill` errbacks with the failure when it is not :py:class:`ProcessTerminated`, or its signal does not match the expected signal. """ for error in [ProcessTerminated(1, signal=signal.SIGIO), ConnectionDone("Bye")]: httpbin_process = parent._HTTPBinProcess(https=False) httpbin_process.server_description(self.reactor) spawned_process = self.reactor.spawnedProcesses[-1] spawned_process.send_stdout( shared._HTTPBinDescription(host="host", port=1234).to_json_bytes() + b'\n' ) termination_deferred = httpbin_process.kill() spawned_process.end_process(Failure(error)) self.assertIs(self.failureResultOf(termination_deferred).value, error) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/local_httpbin/test/test_shared.py0000644000175100001660000000243114673126560023400 0ustar00runnerdocker""" Tests for :py:mod:`treq.test.local_httpbin.shared` """ from twisted.trial import unittest from .. 
import shared


class HTTPBinDescriptionTests(unittest.SynchronousTestCase):
    """
    Tests for :py:class:`shared._HTTPBinDescription`.
    """

    def test_round_trip(self):
        """
        :py:class:`shared._HTTPBinDescription.from_json_bytes` can
        deserialize the output of
        :py:class:`shared._HTTPBinDescription.to_json_bytes`.
        """
        original = shared._HTTPBinDescription(host="host", port=123)

        round_tripped = shared._HTTPBinDescription.from_json_bytes(
            original.to_json_bytes(),
        )

        self.assertEqual(original, round_tripped)

    def test_round_trip_cacert(self):
        """
        :py:class:`shared._HTTPBinDescription.from_json_bytes` can
        deserialize the output of
        :py:class:`shared._HTTPBinDescription.to_json_bytes` when ``cacert``
        is set.
        """
        original = shared._HTTPBinDescription(host="host", port=123,
                                              cacert='cacert')

        round_tripped = shared._HTTPBinDescription.from_json_bytes(
            original.to_json_bytes(),
        )

        self.assertEqual(original, round_tripped)


treq-24.9.1/src/treq/test/test_agentspy.py

# Copyright (c) The treq Authors.
# See LICENSE for details.
from io import BytesIO

from twisted.trial.unittest import SynchronousTestCase
from twisted.web.client import FileBodyProducer
from twisted.web.http_headers import Headers
from twisted.web.iweb import IAgent

from treq._agentspy import RequestRecord, agent_spy


class APISpyTests(SynchronousTestCase):
    """
    The agent_spy API provides an agent that records each request made to it.
    """

    def test_provides_iagent(self) -> None:
        """
        The agent returned by agent_spy() provides the IAgent interface.
        """
        agent, _ = agent_spy()

        self.assertTrue(IAgent.providedBy(agent))

    def test_records(self) -> None:
        """
        Each request made with the agent is recorded.
""" agent, requests = agent_spy() body = FileBodyProducer(BytesIO(b"...")) d1 = agent.request(b"GET", b"https://foo") d2 = agent.request(b"POST", b"http://bar", Headers({})) d3 = agent.request(b"PUT", b"https://baz", None, bodyProducer=body) self.assertEqual( requests, [ RequestRecord(b"GET", b"https://foo", None, None, d1), RequestRecord(b"POST", b"http://bar", Headers({}), None, d2), RequestRecord(b"PUT", b"https://baz", None, body, d3), ], ) def test_record_attributes(self) -> None: """ Each parameter passed to `request` is available as an attribute of the RequestRecord. Additionally, the deferred returned by the call is available. """ agent, requests = agent_spy() headers = Headers() body = FileBodyProducer(BytesIO(b"...")) deferred = agent.request(b"method", b"uri", headers=headers, bodyProducer=body) [rr] = requests self.assertIs(rr.method, b"method") self.assertIs(rr.uri, b"uri") self.assertIs(rr.headers, headers) self.assertIs(rr.bodyProducer, body) self.assertIs(rr.deferred, deferred) def test_type_validation(self) -> None: """ The request method enforces correctness by raising TypeError when passed parameters of the wrong type. 
""" agent, _ = agent_spy() self.assertRaises(TypeError, agent.request, "method not bytes", b"uri") self.assertRaises(TypeError, agent.request, b"method", "uri not bytes") self.assertRaises( TypeError, agent.request, b"method", b"uri", {"not": "headers"} ) self.assertRaises( TypeError, agent.request, b"method", b"uri", None, b"not ibodyproducer" ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/test_api.py0000644000175100001660000001636314673126560017113 0ustar00runnerdockerfrom __future__ import absolute_import, division from twisted.internet import defer from twisted.trial.unittest import TestCase from twisted.web.client import HTTPConnectionPool from twisted.web.iweb import IAgent from zope.interface import implementer import treq from treq.api import (default_pool, default_reactor, get_global_pool, set_global_pool) try: from twisted.internet.testing import MemoryReactorClock except ImportError: from twisted.test.proto_helpers import MemoryReactorClock class SyntacticAbominationHTTPConnectionPool: """ A HTTP connection pool that always fails to return a connection, but counts the number of requests made. """ requests = 0 def getConnection(self, key, endpoint): """ Count each request, then fail with `IndentationError`. """ self.requests += 1 return defer.fail(TabError()) class TreqAPITests(TestCase): def test_default_pool(self) -> None: """ The module-level API uses the global connection pool by default. """ pool = SyntacticAbominationHTTPConnectionPool() set_global_pool(pool) d = treq.get("http://test.com") self.assertEqual(pool.requests, 1) self.failureResultOf(d, TabError) def test_cached_pool(self) -> None: """ The first use of the module-level API populates the global connection pool, which is used for all subsequent requests. 
""" pool = SyntacticAbominationHTTPConnectionPool() self.patch(treq.api, "HTTPConnectionPool", lambda reactor, persistent: pool) self.failureResultOf(treq.head("http://test.com"), TabError) self.failureResultOf(treq.get("http://test.com"), TabError) self.failureResultOf(treq.post("http://test.com"), TabError) self.failureResultOf(treq.put("http://test.com"), TabError) self.failureResultOf(treq.delete("http://test.com"), TabError) self.failureResultOf(treq.request("OPTIONS", "http://test.com"), TabError) self.assertEqual(pool.requests, 6) def test_custom_pool(self) -> None: """ `treq.post()` accepts a *pool* argument to use for the request. The global pool is unaffected. """ pool = SyntacticAbominationHTTPConnectionPool() d = treq.post("http://foo", data=b"bar", pool=pool) self.assertEqual(pool.requests, 1) self.failureResultOf(d, TabError) self.assertIsNot(pool, get_global_pool()) def test_custom_agent(self) -> None: """ A custom Agent is used if specified. """ @implementer(IAgent) class CounterAgent: requests = 0 def request(self, method, uri, headers=None, bodyProducer=None): self.requests += 1 return defer.Deferred() custom_agent = CounterAgent() d = treq.get("https://www.example.org/", agent=custom_agent) self.assertNoResult(d) self.assertEqual(1, custom_agent.requests) def test_request_invalid_param(self) -> None: """ `treq.request()` raises `TypeError` when it receives unknown keyword arguments. """ with self.assertRaises(TypeError) as c: treq.request( "GET", "https://foo.bar", invalid=True, pool=SyntacticAbominationHTTPConnectionPool(), ) self.assertIn("invalid", str(c.exception)) def test_post_json_with_data(self) -> None: """ `treq.post()` raises TypeError when the *data* and *json* arguments are mixed. 
""" with self.assertRaises(TypeError) as c: treq.post( "https://test.example/", data={"hello": "world"}, json={"goodnight": "moon"}, pool=SyntacticAbominationHTTPConnectionPool(), ) self.assertEqual( "Argument 'json' cannot be combined with 'data'.", str(c.exception), ) class DefaultReactorTests(TestCase): """ Test `treq.api.default_reactor()` """ def test_passes_reactor(self) -> None: """ `default_reactor()` returns any reactor passed. """ reactor = MemoryReactorClock() self.assertIs(default_reactor(reactor), reactor) def test_uses_default_reactor(self) -> None: """ `default_reactor()` returns the global reactor when passed ``None``. """ from twisted.internet import reactor self.assertEqual(default_reactor(None), reactor) class DefaultPoolTests(TestCase): """ Test `treq.api.default_pool`. """ def setUp(self) -> None: set_global_pool(None) self.reactor = MemoryReactorClock() def test_persistent_false(self) -> None: """ When *persistent=False* is passed a non-persistent pool is created. """ pool = default_pool(self.reactor, None, False) self.assertTrue(isinstance(pool, HTTPConnectionPool)) self.assertFalse(pool.persistent) def test_persistent_false_not_stored(self) -> None: """ When *persistent=False* is passed the resulting pool is not stored as the global pool. """ pool = default_pool(self.reactor, None, persistent=False) self.assertIsNot(pool, get_global_pool()) def test_persistent_false_new(self) -> None: """ When *persistent=False* is passed a new pool is returned each time. """ pool1 = default_pool(self.reactor, None, persistent=False) pool2 = default_pool(self.reactor, None, persistent=False) self.assertIsNot(pool1, pool2) def test_pool_none_persistent_none(self) -> None: """ When *persistent=None* is passed a _persistent_ pool is created for backwards compatibility. 
""" pool = default_pool(self.reactor, None, None) self.assertTrue(pool.persistent) def test_pool_none_persistent_true(self) -> None: """ When *persistent=True* is passed a persistent pool is created and stored as the global pool. """ pool = default_pool(self.reactor, None, True) self.assertTrue(isinstance(pool, HTTPConnectionPool)) self.assertTrue(pool.persistent) def test_cached_global_pool(self) -> None: """ When *persistent=True* or *persistent=None* is passed the pool created is cached as the global pool. """ pool1 = default_pool(self.reactor, None, None) pool2 = default_pool(self.reactor, None, True) self.assertEqual(pool1, pool2) def test_specified_pool(self) -> None: """ When the user passes a pool it is returned directly. The *persistent* argument is ignored. It is not cached as the global pool. """ user_pool = HTTPConnectionPool(self.reactor, persistent=True) pool1 = default_pool(self.reactor, user_pool, None) pool2 = default_pool(self.reactor, user_pool, True) pool3 = default_pool(self.reactor, user_pool, False) self.assertIs(pool1, user_pool) self.assertIs(pool2, user_pool) self.assertIs(pool3, user_pool) self.assertIsNot(get_global_pool(), user_pool) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/test_auth.py0000644000175100001660000001053214673126560017273 0ustar00runnerdocker# Copyright (c) The treq Authors. # See LICENSE for details. 
from twisted.trial.unittest import SynchronousTestCase from twisted.web.http_headers import Headers from twisted.web.iweb import IAgent from treq._agentspy import agent_spy from treq.auth import _RequestHeaderSetterAgent, add_auth, UnknownAuthConfig class RequestHeaderSetterAgentTests(SynchronousTestCase): def setUp(self): self.agent, self.requests = agent_spy() def test_sets_headers(self): agent = _RequestHeaderSetterAgent( self.agent, Headers({b'X-Test-Header': [b'Test-Header-Value']}), ) agent.request(b'method', b'uri') self.assertEqual( self.requests[0].headers, Headers({b'X-Test-Header': [b'Test-Header-Value']}), ) def test_overrides_per_request_headers(self): agent = _RequestHeaderSetterAgent( self.agent, Headers({b'X-Test-Header': [b'Test-Header-Value']}) ) agent.request( b'method', b'uri', Headers({b'X-Test-Header': [b'Unwanted-Value']}) ) self.assertEqual( self.requests[0].headers, Headers({b'X-Test-Header': [b'Test-Header-Value']}), ) def test_no_mutation(self): """ The agent never mutates the headers passed to its request method. This reproduces https://github.com/twisted/treq/issues/314 """ requestHeaders = Headers({}) agent = _RequestHeaderSetterAgent( self.agent, Headers({b'Added': [b'1']}), ) agent.request(b'method', b'uri', headers=requestHeaders) self.assertEqual(requestHeaders, Headers({})) class AddAuthTests(SynchronousTestCase): def test_add_basic_auth(self): """ add_auth() wraps the given agent with one that adds an ``Authorization: Basic ...`` HTTP header that contains the given credentials. """ agent, requests = agent_spy() authAgent = add_auth(agent, ('username', 'password')) authAgent.request(b'method', b'uri') self.assertTrue(IAgent.providedBy(authAgent)) self.assertEqual( requests[0].headers, Headers({b'authorization': [b'Basic dXNlcm5hbWU6cGFzc3dvcmQ=']}) ) def test_add_basic_auth_huge(self): """ The Authorization header doesn't include linebreaks, even if the credentials are so long that Python's base64 implementation inserts them. 
""" agent, requests = agent_spy() pwd = ('verylongpasswordthatextendsbeyondthepointwheremultiplel' 'inesaregenerated') expectedAuth = ( b'Basic dXNlcm5hbWU6dmVyeWxvbmdwYXNzd29yZHRoYXRleHRlbmRzY' b'mV5b25kdGhlcG9pbnR3aGVyZW11bHRpcGxlbGluZXNhcmVnZW5lcmF0ZWQ=' ) authAgent = add_auth(agent, ('username', pwd)) authAgent.request(b'method', b'uri') self.assertEqual( requests[0].headers, Headers({b'authorization': [expectedAuth]}), ) def test_add_basic_auth_utf8(self): """ Basic auth username and passwords given as `str` are encoded as UTF-8. https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication#Character_encoding_of_HTTP_authentication """ agent, requests = agent_spy() auth = (u'\u16d7', u'\u16b9') authAgent = add_auth(agent, auth) authAgent.request(b'method', b'uri') self.assertEqual( requests[0].headers, Headers({b'Authorization': [b'Basic 4ZuXOuGauQ==']}), ) def test_add_basic_auth_bytes(self): """ Basic auth can be passed as `bytes`, allowing the user full control over the encoding. """ agent, requests = agent_spy() auth = (b'\x01\x0f\xff', b'\xff\xf0\x01') authAgent = add_auth(agent, auth) authAgent.request(b'method', b'uri') self.assertEqual( requests[0].headers, Headers({b'Authorization': [b'Basic AQ//Ov/wAQ==']}), ) def test_add_unknown_auth(self): """ add_auth() raises UnknownAuthConfig when given anything other than a tuple. 
""" agent, _ = agent_spy() invalidAuth = 1234 self.assertRaises(UnknownAuthConfig, add_auth, agent, invalidAuth) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/test_client.py0000644000175100001660000007244414673126560017622 0ustar00runnerdockerfrom collections import OrderedDict from io import BytesIO from unittest import mock from hyperlink import DecodedURL, EncodedURL from twisted.internet.defer import Deferred, succeed, CancelledError from twisted.internet.protocol import Protocol from twisted.python.failure import Failure from twisted.trial.unittest import TestCase from twisted.web.client import Agent, ResponseFailed from twisted.web.http_headers import Headers from treq.test.util import with_clock from treq.client import ( HTTPClient, _BodyBufferingProtocol, _BufferedResponse ) class HTTPClientTests(TestCase): def setUp(self): self.agent = mock.Mock(Agent) self.client = HTTPClient(self.agent) self.fbp_patcher = mock.patch('treq.client.FileBodyProducer') self.FileBodyProducer = self.fbp_patcher.start() self.addCleanup(self.fbp_patcher.stop) self.mbp_patcher = mock.patch('treq.multipart.MultiPartProducer') self.MultiPartProducer = self.mbp_patcher.start() self.addCleanup(self.mbp_patcher.stop) def assertBody(self, expected): body = self.FileBodyProducer.mock_calls[0][1][0] self.assertEqual(body.read(), expected) def test_post(self): self.client.post('http://example.com/') self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_uri_idn(self): self.client.request('GET', u'http://č.net') self.agent.request.assert_called_once_with( b'GET', b'http://xn--bea.net', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_uri_decodedurl(self): """ A URL may be passed as a `hyperlink.DecodedURL` object. It is converted to bytes when passed to the underlying agent. 
""" url = DecodedURL.from_text(u"https://example.org/foo") self.client.request("GET", url) self.agent.request.assert_called_once_with( b"GET", b"https://example.org/foo", Headers({b"accept-encoding": [b"gzip"]}), None, ) def test_request_uri_encodedurl(self): """ A URL may be passed as a `hyperlink.EncodedURL` object. It is converted to bytes when passed to the underlying agent. """ url = EncodedURL.from_text(u"https://example.org/foo") self.client.request("GET", url) self.agent.request.assert_called_once_with( b"GET", b"https://example.org/foo", Headers({b"accept-encoding": [b"gzip"]}), None, ) def test_request_uri_bytes_pass(self): """ The URL parameter may contain path segments or querystring parameters that are not valid UTF-8. These pass through. """ # This URL is http://example.com/hello?who=you, but "hello", "who", and # "you" are encoded as UTF-16. The particulars of the encoding aren't # important; what matters is that those segments can't be decoded by # Hyperlink's UTF-8 default. self.client.request( "GET", ( "http://example.com/%FF%FEh%00e%00l%00l%00o%00" "?%FF%FEw%00h%00o%00=%FF%FEy%00o%00u%00" ), ) self.agent.request.assert_called_once_with( b'GET', ( b'http://example.com/%FF%FEh%00e%00l%00l%00o%00' b'?%FF%FEw%00h%00o%00=%FF%FEy%00o%00u%00' ), Headers({b'accept-encoding': [b'gzip']}), None, ) def test_request_uri_plus_pass(self): """ URL parameters may contain spaces encoded as ``+``. These remain as such and are not mangled. This reproduces `Klein #339 `_. """ self.client.request( "GET", "https://example.com/?foo+bar=baz+biff", ) self.agent.request.assert_called_once_with( b'GET', b"https://example.com/?foo+bar=baz+biff", Headers({b'accept-encoding': [b'gzip']}), None, ) def test_request_uri_idn_params(self): """ A URL that contains non-ASCII characters can be augmented with querystring parameters. This reproduces treq #264. 
""" self.client.request('GET', u'http://č.net', params={'foo': 'bar'}) self.agent.request.assert_called_once_with( b'GET', b'http://xn--bea.net/?foo=bar', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_uri_hyperlink_params(self): """ The *params* argument augments an instance of `hyperlink.DecodedURL` passed as the *url* parameter, just as if it were a string. """ self.client.request( method="GET", url=DecodedURL.from_text(u"http://č.net"), params={"foo": "bar"}, ) self.agent.request.assert_called_once_with( b"GET", b"http://xn--bea.net/?foo=bar", Headers({b"accept-encoding": [b"gzip"]}), None, ) def test_request_case_insensitive_methods(self): self.client.request('gEt', 'http://example.com/') self.agent.request.assert_called_once_with( b'GET', b'http://example.com/', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_query_params(self): self.client.request('GET', 'http://example.com/', params={'foo': ['bar']}) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?foo=bar', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_tuple_query_values(self): self.client.request('GET', 'http://example.com/', params={'foo': ('bar',)}) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?foo=bar', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_tuple_query_value_coercion(self): """ treq coerces non-string values passed to *params* like `urllib.urlencode()` """ self.client.request('GET', 'http://example.com/', params=[ ('text', u'A\u03a9'), ('text-seq', [u'A\u03a9']), ('bytes', [b'ascii']), ('bytes-seq', [b'ascii']), ('native', ['native']), ('native-seq', ['aa', 'bb']), ('int', 1), ('int-seq', (1, 2, 3)), ('none', None), ('none-seq', [None, None]), ]) self.agent.request.assert_called_once_with( b'GET', ( b'http://example.com/?' 
b'text=A%CE%A9&text-seq=A%CE%A9' b'&bytes=ascii&bytes-seq=ascii' b'&native=native&native-seq=aa&native-seq=bb' b'&int=1&int-seq=1&int-seq=2&int-seq=3' b'&none=None&none-seq=None&none-seq=None' ), Headers({b'accept-encoding': [b'gzip']}), None, ) def test_request_tuple_query_param_coercion(self): """ treq coerces non-string param names passed to *params* like `urllib.urlencode()` """ # A value used to test that it is never encoded or decoded. # It should be invalid UTF-8 or UTF-32 (at least). raw_bytes = b"\x00\xff\xfb" self.client.request('GET', 'http://example.com/', params=[ (u'text', u'A\u03a9'), (b'bytes', ['ascii', raw_bytes]), ('native', 'native'), (1, 'int'), (None, ['none']), ]) self.agent.request.assert_called_once_with( b'GET', ( b'http://example.com/' b'?text=A%CE%A9&bytes=ascii&bytes=%00%FF%FB' b'&native=native&1=int&None=none' ), Headers({b'accept-encoding': [b'gzip']}), None, ) def test_request_query_param_seps(self): """ When the characters ``&`` and ``#`` are passed to *params* as param names or values they are percent-escaped in the URL. 
This reproduces https://github.com/twisted/treq/issues/282 """ self.client.request('GET', 'http://example.com/', params=( ('ampersand', '&'), ('&', 'ampersand'), ('octothorpe', '#'), ('#', 'octothorpe'), )) self.agent.request.assert_called_once_with( b'GET', ( b'http://example.com/' b'?ampersand=%26' b'&%26=ampersand' b'&octothorpe=%23' b'&%23=octothorpe' ), Headers({b'accept-encoding': [b'gzip']}), None, ) def test_request_merge_query_params(self): self.client.request('GET', 'http://example.com/?baz=bax', params={'foo': ['bar', 'baz']}) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?baz=bax&foo=bar&foo=baz', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_merge_tuple_query_params(self): self.client.request('GET', 'http://example.com/?baz=bax', params=[('foo', 'bar')]) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?baz=bax&foo=bar', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_dict_single_value_query_params(self): self.client.request('GET', 'http://example.com/', params={'foo': 'bar'}) self.agent.request.assert_called_once_with( b'GET', b'http://example.com/?foo=bar', Headers({b'accept-encoding': [b'gzip']}), None) def test_request_data_dict(self): self.client.request('POST', 'http://example.com/', data={'foo': ['bar', 'baz']}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/x-www-form-urlencoded'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'foo=bar&foo=baz') def test_request_data_single_dict(self): self.client.request('POST', 'http://example.com/', data={'foo': 'bar'}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/x-www-form-urlencoded'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'foo=bar') def test_request_data_tuple(self): 
self.client.request('POST', 'http://example.com/', data=[('foo', 'bar')]) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/x-www-form-urlencoded'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'foo=bar') def test_request_data_file(self): temp_fn = self.mktemp() with open(temp_fn, "wb") as temp_file: temp_file.write(b'hello') self.client.request('POST', 'http://example.com/', data=open(temp_fn, 'rb')) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'hello') def test_request_json_dict(self): self.client.request('POST', 'http://example.com/', json={'foo': 'bar'}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/json; charset=UTF-8'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'{"foo":"bar"}') def test_request_json_tuple(self): self.client.request('POST', 'http://example.com/', json=('foo', 1)) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/json; charset=UTF-8'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'["foo",1]') def test_request_json_number(self): self.client.request('POST', 'http://example.com/', json=1.) 
self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/json; charset=UTF-8'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'1.0') def test_request_json_string(self): self.client.request('POST', 'http://example.com/', json='hello') self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/json; charset=UTF-8'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'"hello"') def test_request_json_bool(self): self.client.request('POST', 'http://example.com/', json=True) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/json; charset=UTF-8'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'true') def test_request_json_none(self): self.client.request('POST', 'http://example.com/', json=None) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({b'Content-Type': [b'application/json; charset=UTF-8'], b'accept-encoding': [b'gzip']}), self.FileBodyProducer.return_value) self.assertBody(b'null') @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid")) def test_request_no_name_attachment(self): self.client.request( 'POST', 'http://example.com/', files={"name": BytesIO(b"hello")}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({ b'accept-encoding': [b'gzip'], b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}), self.MultiPartProducer.return_value) FP = self.FileBodyProducer.return_value self.assertEqual( mock.call( [('name', (None, 'application/octet-stream', FP))], boundary=b'heyDavid'), self.MultiPartProducer.call_args) @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid")) def test_request_named_attachment(self): self.client.request( 'POST', 
            'http://example.com/', files={
                "name": ('image.jpg', BytesIO(b"hello"))})

        self.agent.request.assert_called_once_with(
            b'POST', b'http://example.com/',
            Headers({
                b'accept-encoding': [b'gzip'],
                b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}),
            self.MultiPartProducer.return_value)

        FP = self.FileBodyProducer.return_value
        self.assertEqual(
            mock.call(
                [('name', ('image.jpg', 'image/jpeg', FP))],
                boundary=b'heyDavid'),
            self.MultiPartProducer.call_args)

    @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid"))
    def test_request_named_attachment_and_ctype(self):
        self.client.request(
            'POST', 'http://example.com/', files={
                "name": ('image.jpg', 'text/plain', BytesIO(b"hello"))})

        self.agent.request.assert_called_once_with(
            b'POST', b'http://example.com/',
            Headers({
                b'accept-encoding': [b'gzip'],
                b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}),
            self.MultiPartProducer.return_value)

        FP = self.FileBodyProducer.return_value
        self.assertEqual(
            mock.call(
                [('name', ('image.jpg', 'text/plain', FP))],
                boundary=b'heyDavid'),
            self.MultiPartProducer.call_args)

    def test_request_files_tuple_too_short(self):
        """
        The `HTTPClient.request()` *files* argument requires tuples of
        length 2 or 3. It raises `TypeError` when the tuple is too short.
        """
        with self.assertRaises(TypeError) as c:
            self.client.request(
                "POST",
                b"http://example.com/",
                files=[("t1", ("foo.txt",))],
            )
        self.assertIn("'t1' tuple has length 1", str(c.exception))

    def test_request_files_tuple_too_long(self):
        """
        The `HTTPClient.request()` *files* argument requires tuples of
        length 2 or 3. It raises `TypeError` when the tuple is too long.
""" with self.assertRaises(TypeError) as c: self.client.request( "POST", b"http://example.com/", files=[ ("t4", ("foo.txt", "text/plain", BytesIO(b"...\n"), "extra!")), ], ) self.assertIn("'t4' tuple has length 4", str(c.exception)) @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid")) def test_request_mixed_params(self): class NamedFile(BytesIO): def __init__(self, val): BytesIO.__init__(self, val) self.name = "image.png" self.client.request( 'POST', 'http://example.com/', data=[("a", "b"), ("key", "val")], files=[ ("file1", ('image.jpg', BytesIO(b"hello"))), ("file2", NamedFile(b"yo"))]) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({ b'accept-encoding': [b'gzip'], b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}), self.MultiPartProducer.return_value) FP = self.FileBodyProducer.return_value self.assertEqual( mock.call([ ('a', 'b'), ('key', 'val'), ('file1', ('image.jpg', 'image/jpeg', FP)), ('file2', ('image.png', 'image/png', FP))], boundary=b'heyDavid'), self.MultiPartProducer.call_args) @mock.patch('treq.client.uuid.uuid4', mock.Mock(return_value="heyDavid")) def test_request_mixed_params_dict(self): self.client.request( 'POST', 'http://example.com/', data={"key": "a", "key2": "b"}, files={"file1": BytesIO(b"hey")}) self.agent.request.assert_called_once_with( b'POST', b'http://example.com/', Headers({ b'accept-encoding': [b'gzip'], b'Content-Type': [b'multipart/form-data; boundary=heyDavid']}), self.MultiPartProducer.return_value) FP = self.FileBodyProducer.return_value self.assertEqual( mock.call([ ('key', 'a'), ('key2', 'b'), ('file1', (None, 'application/octet-stream', FP))], boundary=b'heyDavid'), self.MultiPartProducer.call_args) def test_request_unsupported_params_combination(self): self.assertRaises(ValueError, self.client.request, 'POST', 'http://example.com/', data=BytesIO(b"yo"), files={"file1": BytesIO(b"hey")}) def test_request_json_with_data(self): """ Passing 
        `HTTPClient.request()` both *data* and *json* parameters is invalid
        because they conflict, producing `TypeError`.
        """
        with self.assertRaises(TypeError):
            self.client.request(
                "POST",
                "http://example.com/",
                data=BytesIO(b"..."),
                json=None,  # NB: None is a valid value. It encodes to b'null'.
            )

    def test_request_json_with_files(self):
        """
        Passing `HTTPClient.request()` both *files* and *json* parameters is
        invalid because they conflict, producing `TypeError`.
        """
        with self.assertRaises(TypeError):
            self.client.request(
                "POST",
                "http://example.com/",
                files={"f1": ("foo.txt", "text/plain", BytesIO(b"...\n"))},
                json=["this is ignored"],
            )

    def test_request_dict_headers(self):
        self.client.request('GET', 'http://example.com/', headers={
            'User-Agent': 'treq/0.1dev',
            'Accept': ['application/json', 'text/plain']
        })

        self.agent.request.assert_called_once_with(
            b'GET', b'http://example.com/',
            Headers({b'User-Agent': [b'treq/0.1dev'],
                     b'accept-encoding': [b'gzip'],
                     b'Accept': [b'application/json', b'text/plain']}),
            None)

    def test_request_headers_object(self):
        """
        The *headers* parameter accepts a
        `twisted.web.http_headers.Headers` instance.
        """
        self.client.request(
            "GET",
            "https://example.com",
            headers=Headers({"X-Foo": ["bar"]}),
        )

        self.agent.request.assert_called_once_with(
            b"GET",
            b"https://example.com",
            Headers({
                "X-Foo": ["bar"],
                "Accept-Encoding": ["gzip"],
            }),
            None,
        )

    def test_request_headers_invalid_type(self):
        """
        `HTTPClient.request()` raises `TypeError` when the *headers*
        argument is of an unsupported type, such as a list.
        """
        with self.assertRaises(TypeError):
            self.client.request('GET', 'http://example.com', headers=[])

    def test_request_dict_headers_invalid_values(self):
        """
        `HTTPClient.request()` raises `TypeError` when a *headers* dict
        contains non-string (`str` or `bytes`) values.
""" with self.assertRaises(TypeError): self.client.request('GET', 'http://example.com', headers=OrderedDict([ ('none', None), ('one', 1), ('ok', 'string'), ])) def test_request_invalid_param(self): """ `HTTPClient.request()` rejects invalid keyword parameters with `TypeError`. """ self.assertRaises( TypeError, self.client.request, "GET", b"http://example.com", invalid=True, ) @with_clock def test_request_timeout_fired(self, clock): """ Verify the request is cancelled if a response is not received within specified timeout period. """ self.agent.request.return_value = d = Deferred() self.client.request('GET', 'http://example.com', timeout=2) # simulate we haven't gotten a response within timeout seconds clock.advance(3) # a deferred should have been cancelled self.failureResultOf(d, CancelledError) @with_clock def test_request_timeout_cancelled(self, clock): """ Verify timeout is cancelled if a response is received before timeout period elapses. """ self.agent.request.return_value = d = Deferred() self.client.request('GET', 'http://example.com', timeout=2) # simulate a response d.callback(mock.Mock(code=200, headers=Headers({}))) # now advance the clock but since we already got a result, # a cancellation timer should have been cancelled clock.advance(3) self.successResultOf(d) def test_response_is_buffered(self): response = mock.Mock(deliverBody=mock.Mock(), headers=Headers({})) self.agent.request.return_value = succeed(response) d = self.client.get('http://www.example.com') result = self.successResultOf(d) protocol = mock.Mock(Protocol) result.deliverBody(protocol) self.assertEqual(response.deliverBody.call_count, 1) result.deliverBody(protocol) self.assertEqual(response.deliverBody.call_count, 1) def test_response_buffering_is_disabled_with_unbufferred_arg(self): response = mock.Mock(headers=Headers({})) self.agent.request.return_value = succeed(response) d = self.client.get('http://www.example.com', unbuffered=True) # YOLO public attribute. 
        self.assertEqual(self.successResultOf(d).original, response)

    def test_request_post_redirect_denied(self):
        response = mock.Mock(code=302, headers=Headers({'Location': ['/']}))
        self.agent.request.return_value = succeed(response)
        d = self.client.post('http://www.example.com')
        self.failureResultOf(d, ResponseFailed)

    def test_request_browser_like_redirects(self):
        response = mock.Mock(code=302, headers=Headers({'Location': ['/']}))

        self.agent.request.return_value = succeed(response)

        raw = mock.Mock(return_value=[])
        final_resp = mock.Mock(code=200, headers=mock.Mock(getRawHeaders=raw))
        with mock.patch('twisted.web.client.RedirectAgent._handleRedirect',
                        return_value=final_resp):
            d = self.client.post('http://www.google.com',
                                 browser_like_redirects=True,
                                 unbuffered=True)

        self.assertEqual(self.successResultOf(d).original, final_resp)


class BodyBufferingProtocolTests(TestCase):
    def test_buffers_data(self):
        buffer = []
        protocol = _BodyBufferingProtocol(
            mock.Mock(Protocol),
            buffer,
            None
        )

        protocol.dataReceived(b"foo")
        self.assertEqual(buffer, [b"foo"])

        protocol.dataReceived(b"bar")
        self.assertEqual(buffer, [b"foo", b"bar"])

    def test_propagates_data_to_destination(self):
        destination = mock.Mock(Protocol)
        protocol = _BodyBufferingProtocol(
            destination,
            [],
            None
        )

        protocol.dataReceived(b"foo")
        destination.dataReceived.assert_called_once_with(b"foo")

        protocol.dataReceived(b"bar")
        destination.dataReceived.assert_called_with(b"bar")

    def test_fires_finished_deferred(self):
        finished = Deferred()
        protocol = _BodyBufferingProtocol(
            mock.Mock(Protocol),
            [],
            finished
        )

        class TestResponseDone(Exception):
            pass

        protocol.connectionLost(TestResponseDone())

        self.failureResultOf(finished, TestResponseDone)

    def test_propagates_connectionLost_reason(self):
        destination = mock.Mock(Protocol)
        protocol = _BodyBufferingProtocol(
            destination,
            [],
            Deferred().addErrback(lambda ign: None)
        )

        class TestResponseDone(Exception):
            pass

        reason = TestResponseDone()
        protocol.connectionLost(reason)
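The buffering behavior exercised by the response tests above can be sketched without Twisted. This is a simplified illustration of the idea, not treq's `_BufferedResponse` implementation; `BufferedBody` and `read_body` are hypothetical names:

```python
class BufferedBody:
    """Cache a body so the underlying response is read at most once."""

    def __init__(self, read_body):
        # read_body: a callable returning the body bytes (hypothetical).
        self._read_body = read_body
        self._cache = None

    def deliver(self):
        if self._cache is None:
            # The real read happens only on the first delivery.
            self._cache = self._read_body()
        return self._cache


calls = []


def read():
    calls.append(1)
    return b"payload"


buffered = BufferedBody(read)
assert buffered.deliver() == b"payload"
assert buffered.deliver() == b"payload"  # replayed from the cache
assert len(calls) == 1  # the underlying body was read only once
```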
        destination.connectionLost.assert_called_once_with(reason)


class BufferedResponseTests(TestCase):
    def test_wraps_protocol(self):
        wrappers = []
        wrapped = mock.Mock(Protocol)
        response = mock.Mock(deliverBody=mock.Mock(wraps=wrappers.append))

        br = _BufferedResponse(response)

        br.deliverBody(wrapped)
        response.deliverBody.assert_called_once_with(wrappers[0])
        self.assertNotEqual(wrapped, wrappers[0])

    def test_concurrent_receivers(self):
        wrappers = []
        wrapped = mock.Mock(Protocol)
        unwrapped = mock.Mock(Protocol)
        response = mock.Mock(deliverBody=mock.Mock(wraps=wrappers.append))

        br = _BufferedResponse(response)

        br.deliverBody(wrapped)
        br.deliverBody(unwrapped)
        response.deliverBody.assert_called_once_with(wrappers[0])

        wrappers[0].dataReceived(b"foo")
        wrapped.dataReceived.assert_called_once_with(b"foo")

        self.assertEqual(unwrapped.dataReceived.call_count, 0)

        class TestResponseDone(Exception):
            pass

        done = Failure(TestResponseDone())

        wrappers[0].connectionLost(done)
        wrapped.connectionLost.assert_called_once_with(done)
        unwrapped.dataReceived.assert_called_once_with(b"foo")
        unwrapped.connectionLost.assert_called_once_with(done)

    def test_receiver_after_finished(self):
        wrappers = []
        finished = mock.Mock(Protocol)
        response = mock.Mock(deliverBody=mock.Mock(wraps=wrappers.append))

        br = _BufferedResponse(response)
        br.deliverBody(mock.Mock(Protocol))

        wrappers[0].dataReceived(b"foo")

        class TestResponseDone(Exception):
            pass

        done = Failure(TestResponseDone())
        wrappers[0].connectionLost(done)

        br.deliverBody(finished)

        finished.dataReceived.assert_called_once_with(b"foo")
        finished.connectionLost.assert_called_once_with(done)


# ===== treq-24.9.1/src/treq/test/test_content.py =====

import unittest
from unittest import mock
from typing import Optional

from twisted.python.failure import Failure
from twisted.internet.error import ConnectionDone
from twisted.trial.unittest import TestCase
from twisted.web.http_headers import Headers
from twisted.web.client import ResponseDone, ResponseFailed
from twisted.web.http import PotentialDataLoss
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET

from treq import collect, content, json_content, text_content
from treq.content import _encoding_from_headers
from treq.client import _BufferedResponse
from treq.testing import StubTreq


class ContentTests(TestCase):
    def setUp(self):
        self.response = mock.Mock()
        self.protocol = None

        def deliverBody(protocol):
            self.protocol = protocol

        self.response.deliverBody.side_effect = deliverBody
        self.response = _BufferedResponse(self.response)

    def test_collect(self):
        data = []

        d = collect(self.response, data.append)

        self.protocol.dataReceived(b'{')
        self.protocol.dataReceived(b'"msg": "hell')
        self.protocol.dataReceived(b'o"}')
        self.protocol.connectionLost(Failure(ResponseDone()))

        self.assertEqual(self.successResultOf(d), None)
        self.assertEqual(data, [b'{', b'"msg": "hell', b'o"}'])

    def test_collect_failure(self):
        data = []

        d = collect(self.response, data.append)

        self.protocol.dataReceived(b'foo')
        self.protocol.connectionLost(Failure(ResponseFailed("test failure")))

        self.failureResultOf(d, ResponseFailed)

        self.assertEqual(data, [b'foo'])

    def test_collect_failure_potential_data_loss(self):
        """
        PotentialDataLoss failures are treated as success.
""" data = [] d = collect(self.response, data.append) self.protocol.dataReceived(b'foo') self.protocol.connectionLost(Failure(PotentialDataLoss())) self.assertEqual(self.successResultOf(d), None) self.assertEqual(data, [b'foo']) def test_collect_0_length(self): self.response.length = 0 d = collect( self.response, lambda d: self.fail("Unexpectedly called with: {0}".format(d))) self.assertEqual(self.successResultOf(d), None) def test_content(self): d = content(self.response) self.protocol.dataReceived(b'foo') self.protocol.dataReceived(b'bar') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d), b'foobar') def test_content_cached(self): d1 = content(self.response) self.protocol.dataReceived(b'foo') self.protocol.dataReceived(b'bar') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d1), b'foobar') def _fail_deliverBody(protocol): self.fail("deliverBody unexpectedly called.") self.response.original.deliverBody.side_effect = _fail_deliverBody d3 = content(self.response) self.assertEqual(self.successResultOf(d3), b'foobar') self.assertNotIdentical(d1, d3) def test_content_multiple_waiters(self): d1 = content(self.response) d2 = content(self.response) self.protocol.dataReceived(b'foo') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d1), b'foo') self.assertEqual(self.successResultOf(d2), b'foo') self.assertNotIdentical(d1, d2) def test_json_content(self): self.response.headers = Headers() d = json_content(self.response) self.protocol.dataReceived(b'{"msg":"hello!"}') self.protocol.connectionLost(Failure(ResponseDone())) self.assertEqual(self.successResultOf(d), {"msg": "hello!"}) def test_json_content_unicode(self): """ When Unicode JSON content is received, the JSON text should be correctly decoded. RFC7159 (8.1): "JSON text SHALL be encoded in UTF-8, UTF-16, or UTF-32. 
        The default encoding is UTF-8"
        """
        self.response.headers = Headers()
        d = json_content(self.response)

        self.protocol.dataReceived(u'{"msg":"hëlló!"}'.encode('utf-8'))
        self.protocol.connectionLost(Failure(ResponseDone()))

        self.assertEqual(self.successResultOf(d), {u'msg': u'hëlló!'})

    def test_json_content_utf16(self):
        """
        JSON received is decoded according to the charset given in the
        Content-Type header.
        """
        self.response.headers = Headers({
            b'Content-Type': [b"application/json; charset='UTF-16LE'"],
        })
        d = json_content(self.response)

        self.protocol.dataReceived(u'{"msg":"hëlló!"}'.encode('UTF-16LE'))
        self.protocol.connectionLost(Failure(ResponseDone()))

        self.assertEqual(self.successResultOf(d), {u'msg': u'hëlló!'})

    def test_text_content(self):
        self.response.headers = Headers(
            {b'Content-Type': [b'text/plain; charset=utf-8']})

        d = text_content(self.response)

        self.protocol.dataReceived(b'\xe2\x98\x83')
        self.protocol.connectionLost(Failure(ResponseDone()))

        self.assertEqual(self.successResultOf(d), u'\u2603')

    def test_text_content_default_encoding_no_param(self):
        self.response.headers = Headers(
            {b'Content-Type': [b'text/plain']})

        d = text_content(self.response)

        self.protocol.dataReceived(b'\xa1')
        self.protocol.connectionLost(Failure(ResponseDone()))

        self.assertEqual(self.successResultOf(d), u'\xa1')

    def test_text_content_default_encoding_no_header(self):
        self.response.headers = Headers()

        d = text_content(self.response)

        self.protocol.dataReceived(b'\xa1')
        self.protocol.connectionLost(Failure(ResponseDone()))

        self.assertEqual(self.successResultOf(d), u'\xa1')

    def test_content_application_json_default_encoding(self):
        self.response.headers = Headers(
            {b'Content-Type': [b'application/json']})

        d = text_content(self.response)

        self.protocol.dataReceived(b'gr\xc3\xbcn')
        self.protocol.connectionLost(Failure(ResponseDone()))

        self.assertEqual(self.successResultOf(d), u'grün')

    def test_text_content_unicode_headers(self):
        """
        Parsing of the Content-Type header is robust in the face of
        Unicode gunk in the header value, which is ignored.
        """
        self.response.headers = Headers(
            {
                b"Content-Type": [
                    'text/plain; charset="UTF-16BE"; u=ᛃ'.encode("utf-8")
                ],
            }
        )
        d = text_content(self.response)

        self.protocol.dataReceived(u'ᚠᚡ'.encode('UTF-16BE'))
        self.protocol.connectionLost(Failure(ResponseDone()))

        self.assertEqual(self.successResultOf(d), u'ᚠᚡ')


class UnfinishedResponse(Resource):
    """Write some data, but never finish."""

    isLeaf = True

    def __init__(self):
        Resource.__init__(self)
        # Track how requests finished.
        self.request_finishes = []

    def render(self, request):
        request.write(b"HELLO")
        request.notifyFinish().addBoth(self.request_finishes.append)
        return NOT_DONE_YET


class MoreRealisticContentTests(TestCase):
    """Tests involving less mocking."""

    def test_exception_handling(self):
        """
        An exception in the collector function:

        1. Always gets returned in the result ``Deferred`` from
           ``treq.collect()``.

        2. Closes the transport.
        """
        resource = UnfinishedResponse()
        stub = StubTreq(resource)
        response = self.successResultOf(
            stub.request("GET", "http://127.0.0.1/"))
        self.assertEqual(response.code, 200)

        def error(data):
            1 / 0

        d = collect(response, error)

        # Exceptions in the collector are passed on to the caller via the
        # response Deferred:
        self.failureResultOf(d, ZeroDivisionError)

        # An exception in the protocol results in the transport for the
        # request being closed.
        stub.flush()
        self.assertEqual(len(resource.request_finishes), 1)
        self.assertIsInstance(
            resource.request_finishes[0].value, ConnectionDone)


class EncodingFromHeadersTests(unittest.TestCase):
    def _encodingFromContentType(self, content_type: str) -> Optional[str]:
        """
        Invoke `_encoding_from_headers()` for a header value.

        :param content_type: A Content-Type header value.
        :returns: The result of `_encoding_from_headers()`
        """
        h = Headers({"Content-Type": [content_type]})
        return _encoding_from_headers(h)

    def test_rfcExamples(self):
        """
        The examples from RFC 9110 § 8.3.1 are normalized to canonical
        (lowercase) form.
        """
        for example in [
            "text/html;charset=utf-8",
            'Text/HTML;Charset="utf-8"',
            'text/html; charset="utf-8"',
            "text/html;charset=UTF-8",
        ]:
            self.assertEqual("utf-8", self._encodingFromContentType(example))

    def test_multipleParams(self):
        """The charset parameter is extracted even if mixed with other params."""
        for example in [
            "a/b;c=d;charSet=ascii",
            "a/b;c=d;charset=ascii; e=f",
            "a/b;c=d; charsEt=ascii;e=f",
            "a/b;c=d; charset=ascii; e=f",
        ]:
            self.assertEqual("ascii", self._encodingFromContentType(example))

    def test_quotedString(self):
        """Any quotes that surround the value of the charset param are removed."""
        self.assertEqual(
            "ascii", self._encodingFromContentType("foo/bar; charset='ASCII'")
        )
        self.assertEqual(
            "shift_jis", self._encodingFromContentType('a/b; charset="Shift_JIS"')
        )

    def test_noCharset(self):
        """None is returned when no valid charset parameter is found."""
        for example in [
            "application/octet-stream",
            "text/plain;charset=",
            "text/plain;charset=''",
            "text/plain;charset=\"'\"",
            "text/plain;charset=🙃",
        ]:
            self.assertIsNone(self._encodingFromContentType(example))


# ===== treq-24.9.1/src/treq/test/test_cookies.py =====

from http.cookiejar import CookieJar, Cookie

import attrs
from twisted.internet.testing import StringTransport
from twisted.internet.interfaces import IProtocol
from twisted.trial.unittest import SynchronousTestCase
from twisted.python.failure import Failure
from twisted.web.client import ResponseDone
from twisted.web.http_headers import Headers
from twisted.web.iweb import IClientRequest, IResponse
from zope.interface import implementer

from treq._agentspy import agent_spy, RequestRecord
from treq.client import HTTPClient
from treq.cookies import scoped_cookie, search


@implementer(IClientRequest)
@attrs.define
class _ClientRequest:
    absoluteURI: bytes
    headers: Headers
    method: bytes


@implementer(IResponse)
class QuickResponse:
    """A response that immediately delivers the body."""

    version = (b"HTTP", 1, 1)
    code = 200
    phrase = "OK"
    previousResponse = None

    def __init__(
        self, record: RequestRecord, headers: Headers, body: bytes = b""
    ) -> None:
        self.request = _ClientRequest(
            record.uri, record.headers or Headers(), record.method
        )
        self.headers = headers
        self.length = len(body)
        self._body = body

    def deliverBody(self, protocol: IProtocol) -> None:
        t = StringTransport()
        protocol.makeConnection(t)
        if t.producerState != "producing":
            raise NotImplementedError("pausing IPushProducer")
        protocol.dataReceived(self._body)
        protocol.connectionLost(Failure(ResponseDone()))

    def setPreviousResponse(self, response: IResponse) -> None:
        raise NotImplementedError


class ScopedCookieTests(SynchronousTestCase):
    """Test `treq.cookies.scoped_cookie()`"""

    def test_http(self) -> None:
        """Scoping to an HTTP origin produces a non-Secure cookie."""
        c = scoped_cookie("http://foo.bar", "x", "y")
        self.assertEqual(c.domain, "foo.bar")
        self.assertIsNone(c.port)
        self.assertFalse(c.port_specified)
        self.assertFalse(c.secure)

    def test_https(self) -> None:
        """
        Scoping to an HTTPS origin produces a Secure cookie that won't be
        sent to HTTP origins.
        """
        c = scoped_cookie("https://foo.bar", "x", "y")
        self.assertEqual(c.domain, "foo.bar")
        self.assertIsNone(c.port)
        self.assertFalse(c.port_specified)
        self.assertTrue(c.secure)

    def test_port(self) -> None:
        """
        Setting a non-default port produces a cookie with that port.
""" c = scoped_cookie("https://foo.bar:4433", "x", "y") self.assertEqual(c.domain, "foo.bar") self.assertEqual(c.port, "4433") self.assertTrue(c.port_specified) self.assertTrue(c.secure) def test_hostname(self) -> None: """ When the origin has a bare hostname, a ``.local`` suffix is applied to form the cookie domain. """ c = scoped_cookie("http://mynas", "x", "y") self.assertEqual(c.domain, "mynas.local") class SearchTests(SynchronousTestCase): """Test `treq.cookies.search()`""" def test_domain(self) -> None: """`search()` filters by domain.""" jar = CookieJar() jar.set_cookie(scoped_cookie("http://an.example", "http", "a")) jar.set_cookie(scoped_cookie("https://an.example", "https", "b")) jar.set_cookie(scoped_cookie("https://f.an.example", "subdomain", "c")) jar.set_cookie(scoped_cookie("https://f.an.example", "https", "d")) jar.set_cookie(scoped_cookie("https://host", "v", "n")) self.assertEqual( {(c.name, c.value) for c in search(jar, domain="an.example")}, {("http", "a"), ("https", "b")}, ) self.assertEqual( {(c.name, c.value) for c in search(jar, domain="f.an.example")}, {("subdomain", "c"), ("https", "d")}, ) self.assertEqual( {(c.name, c.value) for c in search(jar, domain="host")}, {("v", "n")}, ) def test_name(self) -> None: """`search()` filters by cookie name.""" jar = CookieJar() jar.set_cookie(scoped_cookie("https://host", "a", "1")) jar.set_cookie(scoped_cookie("https://host", "b", "2")) self.assertEqual({c.value for c in search(jar, domain="host", name="a")}, {"1"}) self.assertEqual({c.value for c in search(jar, domain="host", name="b")}, {"2"}) class HTTPClientCookieTests(SynchronousTestCase): """Test how HTTPClient's request methods handle the *cookies* argument.""" def setUp(self) -> None: self.agent, self.requests = agent_spy() self.cookiejar = CookieJar() self.client = HTTPClient(self.agent, self.cookiejar) def test_cookies_in_jars(self) -> None: """ Issuing a request with cookies merges them into the client's cookie jar. 
        Cookies received in a response are also merged into the client's
        cookie jar.
        """
        self.cookiejar.set_cookie(
            Cookie(
                domain="twisted.example",
                port=None,
                secure=True,
                port_specified=False,
                name="a",
                value="b",
                version=0,
                path="/",
                expires=None,
                discard=False,
                comment=None,
                comment_url=None,
                rfc2109=False,
                path_specified=False,
                domain_specified=False,
                domain_initial_dot=False,
                rest={},
            )
        )
        d = self.client.request(
            "GET", "https://twisted.example", cookies={"b": "c"})
        self.assertNoResult(d)

        [request] = self.requests
        assert request.headers is not None
        self.assertEqual(request.headers.getRawHeaders("Cookie"), ["a=b; b=c"])

        request.deferred.callback(
            QuickResponse(request, Headers({"Set-Cookie": ["a=c"]}))
        )
        response = self.successResultOf(d)

        expected = {"a": "c", "b": "c"}
        self.assertEqual({c.name: c.value for c in self.cookiejar}, expected)
        self.assertEqual(
            {c.name: c.value for c in response.cookies()}, expected)

    def test_cookies_pass_jar(self) -> None:
        """
        Passing the *cookies* argument to `HTTPClient.request()` updates the
        client's cookie jar and sends cookies with the request.

        Upon receipt of the response the client's cookie jar is updated.
""" self.cookiejar.set_cookie(scoped_cookie("https://tx.example", "a", "a")) self.cookiejar.set_cookie(scoped_cookie("http://tx.example", "p", "q")) self.cookiejar.set_cookie(scoped_cookie("https://rx.example", "b", "b")) jar = CookieJar() jar.set_cookie(scoped_cookie("https://tx.example", "a", "b")) jar.set_cookie(scoped_cookie("https://rx.example", "a", "c")) d = self.client.request("GET", "https://tx.example", cookies=jar) self.assertNoResult(d) self.assertEqual( {(c.domain, c.name, c.value) for c in self.cookiejar}, { ("tx.example", "a", "b"), ("tx.example", "p", "q"), ("rx.example", "a", "c"), ("rx.example", "b", "b"), }, ) [request] = self.requests assert request.headers is not None self.assertEqual(request.headers.getRawHeaders("Cookie"), ["a=b; p=q"]) def test_cookies_dict(self) -> None: """ Passing a dict for the *cookies* argument to `HTTPClient.request()` creates cookies that are bound to the the client's cookie jar and sends cookies with the request. Upon receipt of the response the client's cookie jar is updated. """ d = self.client.request("GET", "https://twisted.example", cookies={"a": "b"}) self.assertNoResult(d) [cookie] = self.cookiejar self.assertEqual(cookie.name, "a") self.assertEqual(cookie.value, "b") # Attributes inferred from the URL: self.assertEqual(cookie.domain, "twisted.example") self.assertFalse(cookie.port_specified) self.assertTrue(cookie.secure) [request] = self.requests assert request.headers is not None self.assertEqual(request.headers.getRawHeaders("Cookie"), ["a=b"]) def test_response_cookies(self) -> None: """ The `_Request.cookies()` method returns a copy of the request cookiejar merged with any cookies from the response. This jar matches the client cookiejar at the instant the request was received. 
""" self.cookiejar.set_cookie(scoped_cookie("http://twisted.example", "a", "1")) self.cookiejar.set_cookie(scoped_cookie("https://twisted.example", "b", "1")) d = self.client.request("GET", "https://twisted.example") [request] = self.requests request.deferred.callback( QuickResponse(request, Headers({"Set-Cookie": ["a=2; Secure"]})) ) response = self.successResultOf(d) # The client jar was updated. [a] = search(self.cookiejar, domain="twisted.example", name="a") self.assertEqual(a.value, "2") self.assertTrue(a.secure, True) responseJar = response.cookies() self.assertIsNot(self.cookiejar, responseJar) # It's a copy. self.assertIsNot(self.cookiejar, response.cookies()) # Another copy. # They contain the same cookies. self.assertEqual( {(c.name, c.value, c.secure) for c in self.cookiejar}, {(c.name, c.value, c.secure) for c in response.cookies()}, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/test_multipart.py0000644000175100001660000005322214673126560020356 0ustar00runnerdocker# Copyright (c) Twisted Matrix Laboratories. # See LICENSE for details. from typing import cast, AnyStr from io import BytesIO from .._multipart import MultipartParser from twisted.trial import unittest from zope.interface.verify import verifyObject from twisted.internet import task from twisted.internet.testing import StringTransport from twisted.web.client import FileBodyProducer from twisted.web.iweb import UNKNOWN_LENGTH, IBodyProducer from treq.multipart import MultiPartProducer, _LengthConsumer class MultiPartProducerTestCase(unittest.TestCase): """ Tests for the L{MultiPartProducer} which gets dictionary like object with post parameters, converts them to multipart/form-data format and feeds them to an L{IConsumer}. """ def _termination(self): """ This method can be used as the C{terminationPredicateFactory} for a L{Cooperator}. 
        It returns a predicate which immediately returns C{False}, indicating
        that no more work should be done this iteration.  This has the result
        of only allowing one iteration of a cooperative task to be run per
        L{Cooperator} iteration.
        """
        return lambda: True

    def setUp(self):
        """
        Create a L{Cooperator} hooked up to an easily controlled,
        deterministic scheduler to use with L{MultiPartProducer}.
        """
        self._scheduled = []
        self.cooperator = task.Cooperator(
            self._termination, self._scheduled.append)

    def getOutput(self, producer, with_producer=False):
        """
        A convenience function to consume and return output.
        """
        consumer = output = BytesIO()

        producer.startProducing(consumer)

        while self._scheduled:
            self._scheduled.pop(0)()

        if with_producer:
            return (output.getvalue(), producer)
        else:
            return output.getvalue()

    def newLines(self, value: AnyStr) -> AnyStr:
        if isinstance(value, str):
            return value.replace(u"\n", u"\r\n")
        else:
            return value.replace(b"\n", b"\r\n")

    def test_interface(self):
        """
        L{MultiPartProducer} instances provide L{IBodyProducer}.
        """
        self.assertTrue(
            verifyObject(
                IBodyProducer, MultiPartProducer({})))

    def test_unknownLength(self) -> None:
        """
        If the L{MultiPartProducer} is constructed with a file-like object
        passed as a parameter without either a C{seek} or C{tell} method,
        its C{length} attribute is set to C{UNKNOWN_LENGTH}.
        """
        class CantTell:
            def seek(self, offset, whence):
                """
                A C{seek} method that is never called because there is no
                matching C{tell} method.
                """

        class CantSeek:
            def tell(self):
                """
                A C{tell} method that is never called because there is no
                matching C{seek} method.
""" producer = MultiPartProducer( {"f": ("name", "application/octet-stream", FileBodyProducer(CantTell()))}) self.assertEqual(UNKNOWN_LENGTH, producer.length) producer = MultiPartProducer( {"f": ("name", "application/octet-stream", FileBodyProducer(CantSeek()))}) self.assertEqual(UNKNOWN_LENGTH, producer.length) def test_knownLengthOnFile(self) -> None: """ If the L{MultiPartProducer} is constructed with a file-like object with both C{seek} and C{tell} methods, its C{length} attribute is set to the size of the file as determined by those methods. """ inputBytes = b"here are some bytes" inputFile = BytesIO(inputBytes) inputFile.seek(5) producer = MultiPartProducer({ "field": ('file name', "application/octet-stream", FileBodyProducer( inputFile, cooperator=self.cooperator))}) # Make sure we are generous enough not to alter seek position: self.assertEqual(inputFile.tell(), 5) # Total length is hard to calculate manually # as it contains a lot of headers parameters, newlines and boundaries # let's assert for now that it's no less than the input parameter self.assertNotEqual(producer.length, UNKNOWN_LENGTH) self.assertTrue(cast(int, producer.length) > len(inputBytes)) # Calculating length should not touch producers self.assertTrue(producer._currentProducer is None) def test_defaultCooperator(self) -> None: """ If no L{Cooperator} instance is passed to L{MultiPartProducer}, the global cooperator is used. """ producer = MultiPartProducer({ "field": ("file name", "application/octet-stream", FileBodyProducer( BytesIO(b"yo"), cooperator=self.cooperator)) }) self.assertEqual(task.cooperate, producer._cooperate) def test_startProducing(self) -> None: """ L{MultiPartProducer.startProducing} starts writing bytes from the input file to the given L{IConsumer} and returns a L{Deferred} which fires when they have all been written. 
""" consumer = output = StringTransport() # We historically accepted bytes for field names and continue to allow # it for compatibility, but the types don't permit it because it makes # them even more complicated and awful. So here we verify that that works. field = cast(str, b"field") producer = MultiPartProducer({ field: ("file name", "text/hello-world", FileBodyProducer( BytesIO(b"Hello, World"), cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") complete = producer.startProducing(consumer) iterations = 0 while self._scheduled: iterations += 1 self._scheduled.pop(0)() self.assertTrue(iterations > 1) self.assertEqual(self.newLines(b"""--heyDavid Content-Disposition: form-data; name="field"; filename="file name" Content-Type: text/hello-world Content-Length: 12 Hello, World --heyDavid-- """), output.value()) self.assertEqual(None, self.successResultOf(complete)) def test_inputClosedAtEOF(self) -> None: """ When L{MultiPartProducer} reaches end-of-file on the input file given to it, the input file is closed. """ inputFile = BytesIO(b"hello, world!") consumer = StringTransport() producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( inputFile, cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") producer.startProducing(consumer) while self._scheduled: self._scheduled.pop(0)() self.assertTrue(inputFile.closed) def test_failedReadWhileProducing(self) -> None: """ If a read from the input file fails while producing bytes to the consumer, the L{Deferred} returned by L{MultiPartProducer.startProducing} fires with a L{Failure} wrapping that exception. 
""" class BrokenFile: def read(self, count): raise IOError("Simulated bad thing") producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( BrokenFile(), cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") complete = producer.startProducing(StringTransport()) while self._scheduled: self._scheduled.pop(0)() self.failureResultOf(complete).trap(IOError) def test_stopProducing(self): """ L{MultiPartProducer.stopProducing} stops the underlying L{IPullProducer} and the cooperative task responsible for calling C{resumeProducing} and closes the input file but does not cause the L{Deferred} returned by C{startProducing} to fire. """ inputFile = BytesIO(b"hello, world!") consumer = BytesIO() producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( inputFile, cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") complete = producer.startProducing(consumer) self._scheduled.pop(0)() producer.stopProducing() self.assertTrue(inputFile.closed) self._scheduled.pop(0)() self.assertNoResult(complete) def test_pauseProducing(self) -> None: """ L{MultiPartProducer.pauseProducing} temporarily suspends writing bytes from the input file to the given L{IConsumer}. """ inputFile = BytesIO(b"hello, world!") consumer = output = StringTransport() producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( inputFile, cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") complete = producer.startProducing(consumer) self._scheduled.pop(0)() currentValue = output.value() self.assertTrue(currentValue) producer.pauseProducing() # Sort of depends on an implementation detail of Cooperator: even # though the only task is paused, there's still a scheduled call. If # this were to go away because Cooperator became smart enough to cancel # this call in this case, that would be fine. 
self._scheduled.pop(0)() # Since the producer is paused, no new data should be here. self.assertEqual(output.value(), currentValue) self.assertNoResult(complete) def test_resumeProducing(self) -> None: """ L{MultiPartProducer.resumeProducing} re-commences writing bytes from the input file to the given L{IConsumer} after it was previously paused with L{MultiPartProducer.pauseProducing}. """ inputFile = BytesIO(b"hello, world!") consumer = output = StringTransport() producer = MultiPartProducer({ "field": ( "file name", "text/hello-world", FileBodyProducer( inputFile, cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid") producer.startProducing(consumer) self._scheduled.pop(0)() currentValue = output.value() self.assertTrue(currentValue) producer.pauseProducing() producer.resumeProducing() self._scheduled.pop(0)() # make sure we started producing new data after resume self.assertTrue(len(currentValue) < len(output.value())) def test_unicodeString(self) -> None: """ Make sure a unicode string is passed through properly. """ output, producer = self.getOutput( MultiPartProducer({ "afield": u"Это моя строчечка\r\n", }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(u"""--heyDavid Content-Disposition: form-data; name="afield" Это моя строчечка --heyDavid-- """.encode("utf-8")) self.assertEqual(producer.length, len(expected)) self.assertEqual(expected, output) def test_bytesPassThrough(self) -> None: """ If a byte string is passed as a param, it is passed through unchanged.
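The exact framing asserted below can be reproduced with a few lines of stdlib code; C{encode_field} is a hypothetical helper written for illustration, not part of treq:

```python
def encode_field(boundary: bytes, name: str, value: bytes) -> bytes:
    # Frame one simple (non-file) form field.  multipart/form-data
    # requires CRLF line endings throughout.
    return (
        b"--" + boundary + b"\r\n"
        + b'Content-Disposition: form-data; name="' + name.encode("utf-8") + b'"\r\n'
        + b"\r\n"
        + value + b"\r\n"
    )

body = encode_field(b"heyDavid", "bfield", b"\x00\x01\x02\x03") + b"--heyDavid--\r\n"
```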
""" output, producer = self.getOutput( MultiPartProducer({ "bfield": b'\x00\x01\x02\x03', }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = ( b"--heyDavid\r\n" b'Content-Disposition: form-data; name="bfield"\r\n' b'\r\n' b'\x00\x01\x02\x03\r\n' b'--heyDavid--\r\n' ) self.assertEqual(producer.length, len(expected)) self.assertEqual(expected, output) def test_failOnUnknownParams(self) -> None: """ If byte string is passed as a param and we don't know the encoding, fail early to prevent corrupted form posts """ # unknown key self.assertRaises( ValueError, MultiPartProducer, { (1, 2): BytesIO(b"yo"), }, cooperator=self.cooperator, boundary=b"heyDavid") # tuple length self.assertRaises( ValueError, MultiPartProducer, { "a": (1,), }, cooperator=self.cooperator, boundary=b"heyDavid") # unknown value type self.assertRaises( ValueError, MultiPartProducer, { "a": {"a": "b"}, }, cooperator=self.cooperator, boundary=b"heyDavid") def test_twoFields(self) -> None: """ Make sure multiple fields are rendered properly. """ output = self.getOutput( MultiPartProducer({ "afield": "just a string\r\n", "bfield": "another string" }, cooperator=self.cooperator, boundary=b"heyDavid")) self.assertEqual(self.newLines(b"""--heyDavid Content-Disposition: form-data; name="afield" just a string --heyDavid Content-Disposition: form-data; name="bfield" another string --heyDavid-- """), output) def test_fieldsAndAttachment(self): """ Make sure multiple fields are rendered properly. 
""" output, producer = self.getOutput( MultiPartProducer({ "bfield": "just a string\r\n", "cfield": "another string", "afield": ( "file name", "text/hello-world", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes"), cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(b"""--heyDavid Content-Disposition: form-data; name="bfield" just a string --heyDavid Content-Disposition: form-data; name="cfield" another string --heyDavid Content-Disposition: form-data; name="afield"; filename="file name" Content-Type: text/hello-world Content-Length: 15 my lovely bytes --heyDavid-- """) self.assertEqual(producer.length, len(expected)) self.assertEqual(output, expected) def test_multipleFieldsAndAttachments(self): """ Make sure multiple fields, attachments etc are rendered properly. """ output, producer = self.getOutput( MultiPartProducer({ "cfield": "just a string\r\n", "bfield": "another string", "efield": ( "ef", "text/html", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes2"), cooperator=self.cooperator)), "xfield": ( "xf", "text/json", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes219"), cooperator=self.cooperator)), "afield": ( "af", "text/xml", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes22"), cooperator=self.cooperator)) }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(b"""--heyDavid Content-Disposition: form-data; name="bfield" another string --heyDavid Content-Disposition: form-data; name="cfield" just a string --heyDavid Content-Disposition: form-data; name="afield"; filename="af" Content-Type: text/xml Content-Length: 17 my lovely bytes22 --heyDavid Content-Disposition: form-data; name="efield"; filename="ef" Content-Type: text/html Content-Length: 16 my lovely bytes2 --heyDavid Content-Disposition: form-data; name="xfield"; filename="xf" Content-Type: text/json Content-Length: 18 my lovely bytes219 --heyDavid-- """) 
self.assertEqual(producer.length, len(expected)) self.assertEqual(output, expected) def test_unicodeAttachmentName(self): """ Make sure unicode attachment names are supported. """ output, producer = self.getOutput( MultiPartProducer({ "field": ( u'Так себе имя.jpg', "image/jpeg", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes"), cooperator=self.cooperator ) ) }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(u"""--heyDavid Content-Disposition: form-data; name="field"; filename="Так себе имя.jpg" Content-Type: image/jpeg Content-Length: 15 my lovely bytes --heyDavid-- """.encode("utf-8")) self.assertEqual(len(expected), producer.length) self.assertEqual(expected, output) def test_missingAttachmentName(self): """ Make sure attachments without names are supported """ output, producer = self.getOutput( MultiPartProducer({ "field": ( None, "image/jpeg", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes"), cooperator=self.cooperator, ) ) }, cooperator=self.cooperator, boundary=b"heyDavid"), with_producer=True) expected = self.newLines(b"""--heyDavid Content-Disposition: form-data; name="field" Content-Type: image/jpeg Content-Length: 15 my lovely bytes --heyDavid-- """) self.assertEqual(len(expected), producer.length) self.assertEqual(expected, output) def test_newLinesInParams(self): """ Make sure we generate proper format even with newlines in attachments """ output = self.getOutput( MultiPartProducer({ "field": ( u'\r\noops.j\npg', "image/jp\reg\n", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes"), cooperator=self.cooperator ) ) }, cooperator=self.cooperator, boundary=b"heyDavid" ) ) self.assertEqual(self.newLines(u"""--heyDavid Content-Disposition: form-data; name="field"; filename="oops.jpg" Content-Type: image/jpeg Content-Length: 15 my lovely bytes --heyDavid-- """.encode("utf-8")), output) def test_worksWithMultipart(self): """ Make sure the stuff we generated can actually be parsed by the 
`multipart` module. """ output = self.getOutput( MultiPartProducer([ ("cfield", "just a string\r\n"), ("cfield", "another string"), ("efield", ('ef', "text/html", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes2"), cooperator=self.cooperator, ))), ("xfield", ('xf', "text/json", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes219"), cooperator=self.cooperator, ))), ("afield", ('af', "text/xml", FileBodyProducer( inputFile=BytesIO(b"my lovely bytes22"), cooperator=self.cooperator, ))) ], cooperator=self.cooperator, boundary=b"heyDavid" ) ) form = MultipartParser( stream=BytesIO(output), boundary=b"heyDavid", content_length=len(output), ) self.assertEqual( [b'just a string\r\n', b'another string'], [f.raw for f in form.get_all('cfield')], ) self.assertEqual(b'my lovely bytes2', form.get('efield').raw) self.assertEqual(b'my lovely bytes219', form.get('xfield').raw) self.assertEqual(b'my lovely bytes22', form.get('afield').raw) class LengthConsumerTestCase(unittest.TestCase): """ Tests for the _LengthConsumer, an L{IConsumer} which is used to compute the length of a produced content. """ def test_scalarsUpdateCounter(self): """ When an int is written, _LengthConsumer updates its internal counter. 
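The counting behaviour under test amounts to a write-only sink that sums sizes instead of storing data. A minimal sketch (C{LengthConsumerSketch} is an illustrative stand-in invented for this note, not treq's actual C{_LengthConsumer}):

```python
class LengthConsumerSketch:
    # A write-only "consumer" that records how much was written rather
    # than the data itself.  Producers that already know a sub-part's
    # size report it as an int; byte strings contribute their len().
    def __init__(self):
        self.length = 0

    def write(self, data):
        if isinstance(data, int):
            self.length += data
        else:
            self.length += len(data)
```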
""" consumer = _LengthConsumer() self.assertEqual(consumer.length, 0) consumer.write(1) self.assertEqual(consumer.length, 1) consumer.write(2147483647) self.assertEqual(consumer.length, 2147483648) def test_stringUpdatesCounter(self): """ Use the written string length to update the internal counter """ a = (b"Cantami, o Diva, del Pelide Achille\n l'ira funesta che " b"infiniti addusse\n lutti agli Achei") consumer = _LengthConsumer() self.assertEqual(consumer.length, 0) consumer.write(a) self.assertEqual(consumer.length, 89) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/test_response.py0000644000175100001660000001110014673126560020160 0ustar00runnerdockerfrom decimal import Decimal from twisted.trial.unittest import SynchronousTestCase from twisted.python.failure import Failure from twisted.web.client import ResponseDone from twisted.web.iweb import UNKNOWN_LENGTH from twisted.web.http_headers import Headers from treq.response import _Response class FakeResponse: def __init__(self, code, headers, body=()): self.code = code self.headers = headers self.previousResponse = None self._body = body self.length = sum(len(c) for c in body) def setPreviousResponse(self, response): self.previousResponse = response def deliverBody(self, protocol): for chunk in self._body: protocol.dataReceived(chunk) protocol.connectionLost(Failure(ResponseDone())) class ResponseTests(SynchronousTestCase): def test_repr_content_type(self): """ When the response has a Content-Type header its value is included in the response. """ headers = Headers({'Content-Type': ['text/html']}) original = FakeResponse(200, headers, body=[b'']) self.assertEqual( "", repr(_Response(original, None)), ) def test_repr_content_type_missing(self): """ A request with no Content-Type just displays an empty field. 
""" original = FakeResponse(204, Headers(), body=[b'']) self.assertEqual( "", repr(_Response(original, None)), ) def test_repr_content_type_hostile(self): """ Garbage in the Content-Type still produces a reasonable representation. """ headers = Headers({'Content-Type': [u'\u2e18', ' x/y']}) original = FakeResponse(418, headers, body=[b'']) self.assertEqual( r"", repr(_Response(original, None)), ) def test_repr_unknown_length(self): """ A HTTP 1.0 or chunked response displays an unknown length. """ headers = Headers({'Content-Type': ['text/event-stream']}) original = FakeResponse(200, headers) original.length = UNKNOWN_LENGTH self.assertEqual( "", repr(_Response(original, None)), ) def test_collect(self): original = FakeResponse(200, Headers(), body=[b'foo', b'bar', b'baz']) calls = [] _Response(original, None).collect(calls.append) self.assertEqual([b'foo', b'bar', b'baz'], calls) def test_content(self): original = FakeResponse(200, Headers(), body=[b'foo', b'bar', b'baz']) self.assertEqual( b'foobarbaz', self.successResultOf(_Response(original, None).content()), ) def test_json(self): original = FakeResponse(200, Headers(), body=[b'{"foo": ', b'"bar"}']) self.assertEqual( {'foo': 'bar'}, self.successResultOf(_Response(original, None).json()), ) def test_json_customized(self): original = FakeResponse(200, Headers(), body=[b'{"foo": ', b'1.0000000000000001}']) self.assertEqual( self.successResultOf(_Response(original, None).json( parse_float=Decimal) )["foo"], Decimal("1.0000000000000001") ) def test_text(self): headers = Headers({b'content-type': [b'text/plain;charset=utf-8']}) original = FakeResponse(200, headers, body=[b'\xe2\x98', b'\x83']) self.assertEqual( u'\u2603', self.successResultOf(_Response(original, None).text()), ) def test_history(self): redirect1 = FakeResponse( 301, Headers({'location': ['http://example.com/']}) ) redirect2 = FakeResponse( 302, Headers({'location': ['https://example.com/']}) ) redirect2.setPreviousResponse(redirect1) final = 
FakeResponse(200, Headers({})) final.setPreviousResponse(redirect2) wrapper = _Response(final, None) history = wrapper.history() self.assertEqual(wrapper.code, 200) self.assertEqual(history[0].code, 301) self.assertEqual(history[1].code, 302) def test_no_history(self): wrapper = _Response(FakeResponse(200, Headers({})), None) self.assertEqual(wrapper.history(), []) treq-24.9.1/src/treq/test/test_testing.py """ In-memory treq returns stubbed responses. """ from functools import partial from inspect import getmembers, isfunction from json import dumps from unittest.mock import ANY from twisted.trial.unittest import TestCase from twisted.web.client import ResponseFailed from twisted.web.error import SchemeNotSupported from twisted.web.resource import Resource from twisted.web.server import NOT_DONE_YET import treq from treq.testing import ( HasHeaders, RequestSequence, StringStubbingResource, StubTreq ) class _StaticTestResource(Resource): """Resource that always returns 418 "I'm a teapot""" isLeaf = True def render(self, request): request.setResponseCode(418) request.setHeader(b"x-teapot", b"teapot!") return b"I'm a teapot" class _RedirectResource(Resource): """ Resource that redirects to a different domain.
""" isLeaf = True def render(self, request): if b'redirected' not in request.uri: request.redirect(b'https://example.org/redirected') return dumps( { key.decode("charmap"): [ value.decode("charmap") for value in values ] for key, values in request.requestHeaders.getAllRawHeaders()} ).encode("utf-8") class _NonResponsiveTestResource(Resource): """Resource that returns NOT_DONE_YET and never finishes the request""" isLeaf = True def render(self, request): return NOT_DONE_YET class _EventuallyResponsiveTestResource(Resource): """ Resource that returns NOT_DONE_YET and stores the request so that something else can finish the response later. """ isLeaf = True def render(self, request): self.stored_request = request return NOT_DONE_YET class _SessionIdTestResource(Resource): """ Resource that returns the current session ID. """ isLeaf = True def __init__(self): super().__init__() # keep track of all sessions created, so we can manually expire them later self.sessions = [] def render(self, request): session = request.getSession() if session not in self.sessions: # new session, add to internal list self.sessions.append(session) uid = session.uid return uid def expire_sessions(self): """ Manually expire all sessions created by this resource. """ for session in self.sessions: session.expire() self.sessions = [] class StubbingTests(TestCase): """ Tests for :class:`StubTreq`. """ def test_stubtreq_provides_all_functions_in_treq_all(self): """ Every single function and attribute exposed by :obj:`treq.__all__` is provided by :obj:`StubTreq`. """ treq_things = [(name, obj) for name, obj in getmembers(treq) if name in treq.__all__] stub = StubTreq(_StaticTestResource()) api_things = [(name, obj) for name, obj in treq_things if obj.__module__ == "treq.api"] content_things = [(name, obj) for name, obj in treq_things if obj.__module__ == "treq.content"] # sanity checks - this test should fail if treq exposes a new API # without changes being made to StubTreq and this test. 
msg = ("At the time this test was written, StubTreq only knew about " "treq exposing functions from treq.api and treq.content. If " "this has changed, StubTreq will need to be updated, as will " "this test.") self.assertTrue(all(isfunction(obj) for name, obj in treq_things), msg) self.assertEqual(set(treq_things), set(api_things + content_things), msg) for name, obj in api_things: self.assertTrue( isfunction(getattr(stub, name, None)), "StubTreq.{0} should be a function.".format(name)) for name, obj in content_things: self.assertIs( getattr(stub, name, None), obj, "StubTreq.{0} should just expose treq.{0}".format(name)) def test_providing_resource_to_stub_treq(self): """ The resource provided to StubTreq responds to every request no matter what the URI or parameters or data. """ verbs = ('GET', 'PUT', 'HEAD', 'PATCH', 'DELETE', 'POST') urls = ( 'http://supports-http.com', 'https://supports-https.com', 'http://this/has/a/path/and/invalid/domain/name', 'https://supports-https.com:8080', 'http://supports-http.com:8080', ) params = (None, {}, {b'page': [1]}) headers = (None, {}, {b'x-random-header': [b'value', b'value2']}) data = (None, b"", b'some data', b'{"some": "json"}') stub = StubTreq(_StaticTestResource()) combos = ( (verb, {"url": url, "params": p, "headers": h, "data": d}) for verb in verbs for url in urls for p in params for h in headers for d in data ) for combo in combos: verb, kwargs = combo deferreds = (stub.request(verb, **kwargs), getattr(stub, verb.lower())(**kwargs)) for d in deferreds: resp = self.successResultOf(d) self.assertEqual(418, resp.code) self.assertEqual([b'teapot!'], resp.headers.getRawHeaders(b'x-teapot')) self.assertEqual(b"" if verb == "HEAD" else b"I'm a teapot", self.successResultOf(stub.content(resp))) def test_handles_invalid_schemes(self): """ Invalid URLs errback with a :obj:`SchemeNotSupported` failure, and does so even after a successful request. 
""" stub = StubTreq(_StaticTestResource()) self.failureResultOf(stub.get("x-unknown-1:"), SchemeNotSupported) self.successResultOf(stub.get("http://url.com")) self.failureResultOf(stub.get("x-unknown-2:"), SchemeNotSupported) def test_files_are_rejected(self): """ StubTreq does not handle files yet - it should reject requests which attempt to pass files. """ stub = StubTreq(_StaticTestResource()) self.assertRaises( AssertionError, stub.request, 'method', 'http://url', files=b'some file') def test_passing_in_strange_data_is_rejected(self): """ StubTreq rejects data that isn't list/dictionary/tuple/bytes/unicode. """ stub = StubTreq(_StaticTestResource()) self.assertRaises( AssertionError, stub.request, 'method', 'http://url', data=object()) self.successResultOf(stub.request('method', 'http://url', data={})) self.successResultOf(stub.request('method', 'http://url', data=[])) self.successResultOf(stub.request('method', 'http://url', data=())) self.successResultOf( stub.request('method', 'http://url', data=b"")) self.successResultOf( stub.request('method', 'http://url', data="")) def test_handles_failing_asynchronous_requests(self): """ Handle a resource returning NOT_DONE_YET and then canceling the request. """ stub = StubTreq(_NonResponsiveTestResource()) d = stub.request('method', 'http://url', data=b"1234") self.assertNoResult(d) d.cancel() self.failureResultOf(d, ResponseFailed) def test_handles_successful_asynchronous_requests(self): """ Handle a resource returning NOT_DONE_YET and then later finishing the response. """ rsrc = _EventuallyResponsiveTestResource() stub = StubTreq(rsrc) d = stub.request('method', 'http://example.com/', data=b"1234") self.assertNoResult(d) rsrc.stored_request.finish() stub.flush() resp = self.successResultOf(d) self.assertEqual(resp.code, 200) def test_handles_successful_asynchronous_requests_with_response_data(self): """ Handle a resource returning NOT_DONE_YET and then sending some data in the response. 
""" rsrc = _EventuallyResponsiveTestResource() stub = StubTreq(rsrc) d = stub.request('method', 'http://example.com/', data=b"1234") self.assertNoResult(d) chunks = [] rsrc.stored_request.write(b'spam ') rsrc.stored_request.write(b'eggs') stub.flush() resp = self.successResultOf(d) d = stub.collect(resp, chunks.append) self.assertNoResult(d) self.assertEqual(b''.join(chunks), b'spam eggs') rsrc.stored_request.finish() stub.flush() self.successResultOf(d) def test_handles_successful_asynchronous_requests_with_streaming(self): """ Handle a resource returning NOT_DONE_YET and then streaming data back gradually over time. """ rsrc = _EventuallyResponsiveTestResource() stub = StubTreq(rsrc) d = stub.request('method', 'http://example.com/', data="1234") self.assertNoResult(d) chunks = [] rsrc.stored_request.write(b'spam ') rsrc.stored_request.write(b'eggs') stub.flush() resp = self.successResultOf(d) d = stub.collect(resp, chunks.append) self.assertNoResult(d) self.assertEqual(b''.join(chunks), b'spam eggs') del chunks[:] rsrc.stored_request.write(b'eggs\r\nspam\r\n') stub.flush() self.assertNoResult(d) self.assertEqual(b''.join(chunks), b'eggs\r\nspam\r\n') rsrc.stored_request.finish() stub.flush() self.successResultOf(d) def test_session_persistence_between_requests(self): """ Calling request.getSession() in the wrapped resource will return a session with the same ID, until the sessions are cleaned; in other words, cookies are propagated between requests when the result of C{response.cookies()} is passed to the next request. 
""" rsrc = _SessionIdTestResource() stub = StubTreq(rsrc) # request 1, getting original session ID d = stub.request("method", "http://example.com/") resp = self.successResultOf(d) cookies = resp.cookies() sid_1 = self.successResultOf(resp.content()) # request 2, ensuring session ID stays the same d = stub.request("method", "http://example.com/", cookies=cookies) resp = self.successResultOf(d) sid_2 = self.successResultOf(resp.content()) self.assertEqual(sid_1, sid_2) # request 3, ensuring the session IDs are different after cleaning # or expiring the sessions # manually expire the sessions. rsrc.expire_sessions() d = stub.request("method", "http://example.com/") resp = self.successResultOf(d) cookies = resp.cookies() sid_3 = self.successResultOf(resp.content()) self.assertNotEqual(sid_1, sid_3) # request 4, ensuring that once again the session IDs are the same d = stub.request("method", "http://example.com/", cookies=cookies) resp = self.successResultOf(d) sid_4 = self.successResultOf(resp.content()) self.assertEqual(sid_3, sid_4) def test_cookies_not_sent_to_different_domains(self): """ Cookies manually specified as part of a dictionary are not relayed through redirects to different domains. (This is really more of a test for scoping of cookies within treq itself, rather than just for testing.) """ rsrc = _RedirectResource() stub = StubTreq(rsrc) d = stub.request( "GET", "http://example.com/", cookies={"not-across-redirect": "nope"} ) resp = self.successResultOf(d) received = self.successResultOf(resp.json()) self.assertNotIn('not-across-redirect', received.get('Cookie', [''])[0]) def test_cookies_sent_for_same_domain(self): """ Cookies manually specified as part of a dictionary are relayed through redirects to the same domain. (This is really more of a test for scoping of cookies within treq itself, rather than just for testing.) 
""" rsrc = _RedirectResource() stub = StubTreq(rsrc) d = stub.request( "GET", "https://example.org/", cookies={'sent-to-same-domain': 'yes'} ) resp = self.successResultOf(d) received = self.successResultOf(resp.json()) self.assertIn('sent-to-same-domain', received.get('Cookie', [''])[0]) def test_cookies_sent_with_explicit_port(self): """ Cookies will be sent for URLs that specify a non-default port for their scheme. (This is really more of a test for scoping of cookies within treq itself, rather than just for testing.) """ rsrc = _RedirectResource() stub = StubTreq(rsrc) d = stub.request( "GET", "http://example.org:8080/redirected", cookies={'sent-to-non-default-port': 'yes'} ) resp = self.successResultOf(d) received = self.successResultOf(resp.json()) self.assertIn('sent-to-non-default-port', received.get('Cookie', [''])[0]) d = stub.request( "GET", "https://example.org:8443/redirected", cookies={'sent-to-non-default-port': 'yes'} ) resp = self.successResultOf(d) received = self.successResultOf(resp.json()) self.assertIn('sent-to-non-default-port', received.get('Cookie', [''])[0]) class HasHeadersTests(TestCase): """ Tests for :obj:`HasHeaders`. """ def test_equality_and_strict_subsets_succeed(self): """ The :obj:`HasHeaders` returns True if both sets of headers are equivalent, or the first is a strict subset of the second. """ self.assertEqual(HasHeaders({'one': ['two', 'three']}), {'one': ['two', 'three']}, "Equivalent headers do not match.") self.assertEqual(HasHeaders({'one': ['two', 'three']}), {'one': ['two', 'three', 'four'], 'ten': ['six']}, "Strict subset headers do not match") def test_partial_or_zero_intersection_subsets_fail(self): """ The :obj:`HasHeaders` returns False if both sets of headers overlap but the first is not a strict subset of the second. It also returns False if there is no overlap. 
""" self.assertNotEqual(HasHeaders({'one': ['two', 'three']}), {'one': ['three', 'four']}, "Partial value overlap matches") self.assertNotEqual(HasHeaders({'one': ['two', 'three']}), {'one': ['two']}, "Missing value matches") self.assertNotEqual(HasHeaders({'one': ['two', 'three']}), {'ten': ['six']}, "Complete inequality matches") def test_case_insensitive_keys(self): """ The :obj:`HasHeaders` equality function ignores the case of the header keys. """ self.assertEqual(HasHeaders({b'A': [b'1'], b'b': [b'2']}), {b'a': [b'1'], b'B': [b'2']}) def test_case_sensitive_values(self): """ The :obj:`HasHeaders` equality function does care about the case of the header value. """ self.assertNotEqual(HasHeaders({b'a': [b'a']}), {b'a': [b'A']}) def test_bytes_encoded_forms(self): """ The :obj:`HasHeaders` equality function compares the bytes-encoded forms of both sets of headers. """ self.assertEqual(HasHeaders({b'a': [b'a']}), {u'a': [u'a']}) self.assertEqual(HasHeaders({u'b': [u'b']}), {b'b': [b'b']}) def test_repr(self): """ :obj:`HasHeaders` returns a nice string repr. """ self.assertEqual( "HasHeaders({b'a': [b'b']})", repr(HasHeaders({b"A": [b"b"]})), ) class StringStubbingTests(TestCase): """ Tests for :obj:`StringStubbingResource`. """ def _get_response_for(self, expected_args, response): """ Make a :obj:`IStringResponseStubs` that checks the expected args and returns the given response. """ method, url, params, headers, data = expected_args def get_response_for(_method, _url, _params, _headers, _data): self.assertEqual((method, url, params, data), (_method, _url, _params, _data)) self.assertEqual(HasHeaders(headers), _headers) return response return get_response_for def test_interacts_successfully_with_istub(self): """ The :obj:`IStringResponseStubs` is passed the correct parameters with which to evaluate the response, and the response is returned. 
""" resource = StringStubbingResource(self._get_response_for( (b'DELETE', 'http://what/a/thing', {b'page': [b'1']}, {b'x-header': [b'eh']}, b'datastr'), (418, {b'x-response': b'responseheader'}, b'response body'))) stub = StubTreq(resource) d = stub.delete('http://what/a/thing', headers={b'x-header': b'eh'}, params={b'page': b'1'}, data=b'datastr') resp = self.successResultOf(d) self.assertEqual(418, resp.code) self.assertEqual([b'responseheader'], resp.headers.getRawHeaders(b'x-response')) self.assertEqual(b'response body', self.successResultOf(stub.content(resp))) class RequestSequenceTests(TestCase): """ Tests for :obj:`RequestSequence`. """ def setUp(self): """ Set up a way to report failures asynchronously. """ self.async_failures = [] def test_mismatched_request_causes_failure(self): """ If a request is made that is not expected as the next request, causes a failure. """ sequence = RequestSequence( [((b'get', 'https://anything/', {b'1': [b'2']}, HasHeaders({b'1': [b'1']}), b'what'), (418, {}, b'body')), ((b'get', 'http://anything', {}, HasHeaders({b'2': [b'1']}), b'what'), (202, {}, b'deleted'))], async_failure_reporter=self.async_failures.append) stub = StubTreq(StringStubbingResource(sequence)) get = partial(stub.get, 'https://anything?1=2', data=b'what', headers={b'1': b'1'}) resp = self.successResultOf(get()) self.assertEqual(418, resp.code) self.assertEqual(b'body', self.successResultOf(stub.content(resp))) self.assertEqual([], self.async_failures) resp = self.successResultOf(get()) self.assertEqual(500, resp.code) self.assertEqual(1, len(self.async_failures)) self.assertIn("Expected the next request to be", self.async_failures[0]) self.assertFalse(sequence.consumed()) def test_unexpected_number_of_request_causes_failure(self): """ If there are no more expected requests, making a request causes a failure. 
""" sequence = RequestSequence( [], async_failure_reporter=self.async_failures.append) stub = StubTreq(StringStubbingResource(sequence)) d = stub.get('https://anything', data=b'what', headers={b'1': b'1'}) resp = self.successResultOf(d) self.assertEqual(500, resp.code) self.assertEqual(b'StubbingError', self.successResultOf(resp.content())) self.assertEqual(1, len(self.async_failures)) self.assertIn("No more requests expected, but request", self.async_failures[0]) # the expected requests have all been made self.assertTrue(sequence.consumed()) def test_works_with_mock_any(self): """ :obj:`mock.ANY` can be used with the request parameters. """ sequence = RequestSequence( [((ANY, ANY, ANY, ANY, ANY), (418, {}, b'body'))], async_failure_reporter=self.async_failures.append) stub = StubTreq(StringStubbingResource(sequence)) with sequence.consume(sync_failure_reporter=self.fail): d = stub.get('https://anything', data=b'what', headers={b'1': b'1'}) resp = self.successResultOf(d) self.assertEqual(418, resp.code) self.assertEqual(b'body', self.successResultOf(stub.content(resp))) self.assertEqual([], self.async_failures) # the expected requests have all been made self.assertTrue(sequence.consumed()) def test_consume_context_manager_fails_on_remaining_requests(self): """ If the `consume` context manager is used, if there are any remaining expecting requests, the test case will be failed. """ sequence = RequestSequence( [((ANY, ANY, ANY, ANY, ANY), (418, {}, b'body'))] * 2, async_failure_reporter=self.async_failures.append) stub = StubTreq(StringStubbingResource(sequence)) consume_failures = [] with sequence.consume(sync_failure_reporter=consume_failures.append): self.successResultOf(stub.get('https://anything', data=b'what', headers={b'1': b'1'})) self.assertEqual(1, len(consume_failures)) self.assertIn( "Not all expected requests were made. 
Still expecting:", consume_failures[0]) self.assertIn( "{0}(url={0}, params={0}, headers={0}, data={0})".format( repr(ANY)), consume_failures[0]) # no asynchronous failures (mismatches, etc.) self.assertEqual([], self.async_failures) def test_async_failures_logged(self): """ When no `async_failure_reporter` is passed async failures are logged by default. """ sequence = RequestSequence([]) stub = StubTreq(StringStubbingResource(sequence)) with sequence.consume(self.fail): self.successResultOf(stub.get('https://example.com')) [failure] = self.flushLoggedErrors() self.assertIsInstance(failure.value, AssertionError) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/test_treq_integration.py0000644000175100001660000002303014673126560021705 0ustar00runnerdockerfrom io import BytesIO from twisted.python.url import URL from twisted.trial.unittest import TestCase from twisted.internet.defer import CancelledError, inlineCallbacks from twisted.internet.task import deferLater from twisted.internet import reactor from twisted.internet.tcp import Client from twisted.internet.ssl import Certificate, trustRootFromCertificates from twisted.web.client import (Agent, BrowserLikePolicyForHTTPS, HTTPConnectionPool, ResponseFailed) from treq.test.util import DEBUG, skip_on_windows_because_of_199 from .local_httpbin.parent import _HTTPBinProcess import treq skip = skip_on_windows_because_of_199() @inlineCallbacks def print_response(response): if DEBUG: print() print('---') print(response.code) print(response.headers) print(response.request.headers) text = yield treq.text_content(response) print(text) print('---') def with_baseurl(method): def _request(self, url, *args, **kwargs): return method(self.baseurl + url, *args, agent=self.agent, pool=self.pool, **kwargs) return _request class TreqIntegrationTests(TestCase): get = with_baseurl(treq.get) head = with_baseurl(treq.head) post = with_baseurl(treq.post) put = 
with_baseurl(treq.put) patch = with_baseurl(treq.patch) delete = with_baseurl(treq.delete) _httpbin_process = _HTTPBinProcess(https=False) @inlineCallbacks def setUp(self): description = yield self._httpbin_process.server_description( reactor) self.baseurl = URL(scheme=u"http", host=description.host, port=description.port).asText() self.agent = Agent(reactor) self.pool = HTTPConnectionPool(reactor, False) def tearDown(self): def _check_fds(_): # This appears to only be necessary for HTTPS tests. # For the normal HTTP tests then closeCachedConnections is # sufficient. fds = set(reactor.getReaders() + reactor.getReaders()) if not [fd for fd in fds if isinstance(fd, Client)]: return return deferLater(reactor, 0, _check_fds, None) return self.pool.closeCachedConnections().addBoth(_check_fds) @inlineCallbacks def assert_data(self, response, expected_data): body = yield treq.json_content(response) self.assertIn('data', body) self.assertEqual(body['data'], expected_data) @inlineCallbacks def assert_sent_header(self, response, header, expected_value): body = yield treq.json_content(response) self.assertIn(header, body['headers']) self.assertEqual(body['headers'][header], expected_value) @inlineCallbacks def test_get(self): response = yield self.get('/get') self.assertEqual(response.code, 200) yield print_response(response) @inlineCallbacks def test_get_headers(self): response = yield self.get('/get', {b'X-Blah': [b'Foo', b'Bar']}) self.assertEqual(response.code, 200) yield self.assert_sent_header(response, 'X-Blah', 'Foo,Bar') yield print_response(response) @inlineCallbacks def test_get_headers_unicode(self): response = yield self.get('/get', {u'X-Blah': [u'Foo', b'Bar']}) self.assertEqual(response.code, 200) yield self.assert_sent_header(response, 'X-Blah', 'Foo,Bar') yield print_response(response) @inlineCallbacks def test_get_302_absolute_redirect(self): response = yield self.get( '/redirect-to?url={0}/get'.format(self.baseurl)) self.assertEqual(response.code, 200) 
yield print_response(response) @inlineCallbacks def test_get_302_relative_redirect(self): response = yield self.get('/relative-redirect/1') self.assertEqual(response.code, 200) yield print_response(response) @inlineCallbacks def test_get_302_redirect_disallowed(self): response = yield self.get('/redirect/1', allow_redirects=False) self.assertEqual(response.code, 302) yield print_response(response) @inlineCallbacks def test_head(self): response = yield self.head('/get') body = yield treq.content(response) self.assertEqual(b'', body) yield print_response(response) @inlineCallbacks def test_head_302_absolute_redirect(self): response = yield self.head( '/redirect-to?url={0}/get'.format(self.baseurl)) self.assertEqual(response.code, 200) yield print_response(response) @inlineCallbacks def test_head_302_relative_redirect(self): response = yield self.head('/relative-redirect/1') self.assertEqual(response.code, 200) yield print_response(response) @inlineCallbacks def test_head_302_redirect_disallowed(self): response = yield self.head('/redirect/1', allow_redirects=False) self.assertEqual(response.code, 302) yield print_response(response) @inlineCallbacks def test_post(self): response = yield self.post('/post', b'Hello!') self.assertEqual(response.code, 200) yield self.assert_data(response, 'Hello!') yield print_response(response) @inlineCallbacks def test_multipart_post(self): class FileLikeObject(BytesIO): def __init__(self, val): BytesIO.__init__(self, val) self.name = "david.png" def read(*args, **kwargs): return BytesIO.read(*args, **kwargs) response = yield self.post( '/post', data={"a": "b"}, files={"file1": FileLikeObject(b"file")}) self.assertEqual(response.code, 200) body = yield treq.json_content(response) self.assertEqual('b', body['form']['a']) self.assertEqual('file', body['files']['file1']) yield print_response(response) @inlineCallbacks def test_post_headers(self): response = yield self.post( '/post', b'{msg: "Hello!"}', headers={'Content-Type': 
['application/json']} ) self.assertEqual(response.code, 200) yield self.assert_sent_header( response, 'Content-Type', 'application/json') yield self.assert_data(response, '{msg: "Hello!"}') yield print_response(response) @inlineCallbacks def test_put(self): response = yield self.put('/put', data=b'Hello!') yield print_response(response) @inlineCallbacks def test_patch(self): response = yield self.patch('/patch', data=b'Hello!') self.assertEqual(response.code, 200) yield self.assert_data(response, 'Hello!') yield print_response(response) @inlineCallbacks def test_delete(self): response = yield self.delete('/delete') self.assertEqual(response.code, 200) yield print_response(response) @inlineCallbacks def test_gzip(self): response = yield self.get('/gzip') self.assertEqual(response.code, 200) yield print_response(response) json = yield treq.json_content(response) self.assertTrue(json['gzipped']) @inlineCallbacks def test_basic_auth(self): response = yield self.get('/basic-auth/treq/treq', auth=('treq', 'treq')) self.assertEqual(response.code, 200) yield print_response(response) json = yield treq.json_content(response) self.assertTrue(json['authenticated']) self.assertEqual(json['user'], 'treq') @inlineCallbacks def test_failed_basic_auth(self): response = yield self.get('/basic-auth/treq/treq', auth=('not-treq', 'not-treq')) self.assertEqual(response.code, 401) yield print_response(response) @inlineCallbacks def test_timeout(self): """ Verify a timeout fires if a request takes too long. 
""" yield self.assertFailure(self.get('/delay/2', timeout=1), CancelledError, ResponseFailed) @inlineCallbacks def test_cookie(self): response = yield self.get('/cookies', cookies={'hello': 'there'}) self.assertEqual(response.code, 200) yield print_response(response) json = yield treq.json_content(response) self.assertEqual(json['cookies']['hello'], 'there') @inlineCallbacks def test_set_cookie(self): response = yield self.get('/cookies/set', allow_redirects=False, params={'hello': 'there'}) # self.assertEqual(response.code, 200) yield print_response(response) self.assertEqual(response.cookies()['hello'], 'there') class HTTPSTreqIntegrationTests(TreqIntegrationTests): _httpbin_process = _HTTPBinProcess(https=True) @inlineCallbacks def setUp(self): description = yield self._httpbin_process.server_description( reactor) self.baseurl = URL(scheme=u"https", host=description.host, port=description.port).asText() root = trustRootFromCertificates( [Certificate.loadPEM(description.cacert)], ) self.agent = Agent( reactor, contextFactory=BrowserLikePolicyForHTTPS(root), ) self.pool = HTTPConnectionPool(reactor, False) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/test/util.py0000644000175100001660000000143314673126560016250 0ustar00runnerdockerimport os import platform from unittest import mock from twisted.internet import reactor from twisted.internet.task import Clock DEBUG = os.getenv("TREQ_DEBUG", False) == "true" is_pypy = platform.python_implementation() == 'PyPy' def with_clock(fn): def wrapper(*args, **kwargs): clock = Clock() with mock.patch.object(reactor, 'callLater', clock.callLater): return fn(*(args + (clock,)), **kwargs) return wrapper def skip_on_windows_because_of_199(): """ Return a skip describing issue #199 under Windows. :return: A :py:class:`str` skip reason. """ if platform.system() == 'Windows': return ("HTTPBin process cannot run under Windows." 
" See https://github.com/twisted/treq/issues/199") return None ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786928.0 treq-24.9.1/src/treq/testing.py0000644000175100001660000004746714673126560016012 0ustar00runnerdocker""" In-memory version of treq for testing. """ from contextlib import contextmanager from functools import wraps try: from twisted.internet.testing import MemoryReactorClock except ImportError: from twisted.test.proto_helpers import MemoryReactorClock from twisted.test import iosim from twisted.internet.address import IPv4Address from twisted.internet.defer import succeed from twisted.internet.interfaces import ISSLTransport from twisted.logger import Logger from twisted.python.failure import Failure from twisted.python.urlpath import URLPath from twisted.internet.endpoints import TCP4ClientEndpoint from twisted.web.client import Agent from twisted.web.error import SchemeNotSupported from twisted.web.iweb import IAgent, IAgentEndpointFactory, IBodyProducer from twisted.web.resource import Resource from twisted.web.server import Session, Site from zope.interface import directlyProvides, implementer import treq from treq.client import HTTPClient import attr @implementer(IAgentEndpointFactory) @attr.s class _EndpointFactory: """ An endpoint factory used by :class:`RequestTraversalAgent`. :ivar reactor: The agent's reactor. :type reactor: :class:`MemoryReactorClock` """ reactor = attr.ib() def endpointForURI(self, uri): """ Create an endpoint that represents an in-memory connection to a URI. Note: This always creates a :class:`~twisted.internet.endpoints.TCP4ClientEndpoint` on the assumption :class:`RequestTraversalAgent` ignores everything about the endpoint but its port. :param uri: The URI to connect to. :type uri: :class:`~twisted.web.client.URI` :return: The endpoint. :rtype: An :class:`~twisted.internet.interfaces.IStreamClientEndpoint` provider. 
""" if uri.scheme not in {b'http', b'https'}: raise SchemeNotSupported("Unsupported scheme: %r" % (uri.scheme,)) return TCP4ClientEndpoint(self.reactor, "127.0.0.1", uri.port) @implementer(IAgent) class RequestTraversalAgent: """ :obj:`~twisted.web.iweb.IAgent` implementation that issues an in-memory request rather than going out to a real network socket. """ def __init__(self, rootResource): """ :param rootResource: The Twisted `IResource` at the root of the resource tree. """ self._memoryReactor = MemoryReactorClock() self._realAgent = Agent.usingEndpointFactory( reactor=self._memoryReactor, endpointFactory=_EndpointFactory(self._memoryReactor)) self._rootResource = rootResource self._serverFactory = Site(self._rootResource, reactor=self._memoryReactor) self._serverFactory.sessionFactory = lambda site, uid: Session( site, uid, reactor=self._memoryReactor, ) self._pumps = set() def request(self, method, uri, headers=None, bodyProducer=None): """ Implement IAgent.request. """ # We want to use Agent to parse the HTTP response, so let's ask it to # make a request against our in-memory reactor. response = self._realAgent.request(method, uri, headers, bodyProducer) # If the request has already finished, just propagate the result. In # reality this would only happen in failure, but if the agent ever adds # a local cache this might be a success. already_called = [] def check_already_called(r): already_called.append(r) return r response.addBoth(check_already_called) if already_called: return response # That will try to establish an HTTP connection with the reactor's # connectTCP method, and MemoryReactor will place Agent's factory into # the tcpClients list. Alternately, it will try to establish an HTTPS # connection with the reactor's connectSSL method, and MemoryReactor # will place it into the sslClients list. We'll extract that. 
scheme = URLPath.fromBytes(uri).scheme host, port, factory, timeout, bindAddress = ( self._memoryReactor.tcpClients[-1]) serverAddress = IPv4Address('TCP', '127.0.0.1', port) clientAddress = IPv4Address('TCP', '127.0.0.1', 31337) # Create the protocol and fake transport for the client and server, # using the factory that was passed to the MemoryReactor for the # client, and a Site around our rootResource for the server. serverProtocol = self._serverFactory.buildProtocol(clientAddress) serverTransport = iosim.FakeTransport( serverProtocol, isServer=True, hostAddress=serverAddress, peerAddress=clientAddress) clientProtocol = factory.buildProtocol(None) clientTransport = iosim.FakeTransport( clientProtocol, isServer=False, hostAddress=clientAddress, peerAddress=serverAddress) if scheme == b"https": # Provide ISSLTransport on both transports, so everyone knows that # this is HTTPS. directlyProvides(serverTransport, ISSLTransport) directlyProvides(clientTransport, ISSLTransport) # Make a pump for wiring the client and server together. pump = iosim.connect( serverProtocol, serverTransport, clientProtocol, clientTransport) self._pumps.add(pump) return response def flush(self): """ Flush all data between pending client/server pairs. This is only necessary if a :obj:`Resource` under test returns :obj:`NOT_DONE_YET` from its ``render`` method, making a response asynchronous. In that case, after each write from the server, :meth:`flush()` must be called so the client can see it. """ old_pumps = self._pumps new_pumps = self._pumps = set() for p in old_pumps: p.flush() if p.clientIO.disconnected and p.serverIO.disconnected: continue new_pumps.add(p) @implementer(IBodyProducer) class _SynchronousProducer: """ A partial implementation of an :obj:`IBodyProducer` which produces its entire payload immediately. There is no way to access to an instance of this object from :obj:`RequestTraversalAgent` or :obj:`StubTreq`, or even a :obj:`Resource: passed to :obj:`StubTreq`. 
This does not implement the :func:`IBodyProducer.stopProducing` method, because that is very difficult to trigger. (The request from `RequestTraversalAgent` would have to be canceled while it is still in the transmitting state), and the intent is to use `RequestTraversalAgent` to make synchronous requests. """ def __init__(self, body): """ Create a synchronous producer with some bytes. """ self.body = body msg = ("StubTreq currently only supports url-encodable types, bytes, " "or unicode as data.") assert isinstance(body, (bytes, str)), msg if isinstance(body, str): self.body = body.encode('utf-8') self.length = len(body) def startProducing(self, consumer): """ Immediately produce all data. """ consumer.write(self.body) return succeed(None) def stopProducing(self): raise NotImplementedError() def pauseProducing(self): raise NotImplementedError() def resumeProducing(self): raise NotImplementedError() def _reject_files(f): """ Decorator that rejects the 'files' keyword argument to the request functions, because that is not handled by this yet. """ @wraps(f) def wrapper(*args, **kwargs): if 'files' in kwargs: raise AssertionError("StubTreq cannot handle files.") return f(*args, **kwargs) return wrapper class StubTreq: """ A fake version of the treq module that can be used for testing that provides all the function calls exposed in :obj:`treq.__all__`. """ def __init__(self, resource): """ Construct a client, and pass through client methods and/or treq.content functions. 
:param resource: A :obj:`Resource` object that provides the fake responses """ self._agent = RequestTraversalAgent(resource) _client = HTTPClient(agent=self._agent, data_to_body_producer=_SynchronousProducer) for function_name in treq.__all__: function = getattr(_client, function_name, None) if function is None: function = getattr(treq, function_name) else: function = _reject_files(function) setattr(self, function_name, function) self.flush = self._agent.flush class StringStubbingResource(Resource): """ A resource that takes a callable with 5 parameters ``(method, url, params, headers, data)`` and returns ``(code, headers, body)``. The resource uses the callable to return a real response as a result of a request. The parameters for the callable are: - ``method``, the HTTP method as `bytes`. - ``url``, the full URL of the request as text. - ``params``, a dictionary of query parameters mapping query keys lists of values (sorted alphabetically). - ``headers``, a dictionary of headers mapping header keys to a list of header values (sorted alphabetically). - ``data``, the request body as `bytes`. The callable must return a ``tuple`` of (code, headers, body) where the code is the HTTP status code, the headers is a dictionary of bytes (unlike the `headers` parameter, which is a dictionary of lists), and body is a string that will be returned as the response body. If there is a stubbing error, the return value is undefined (if an exception is raised, :obj:`~twisted.web.resource.Resource` will just eat it and return 500 in its place). The callable, or whomever creates the callable, should have a way to handle error reporting. """ isLeaf = True def __init__(self, get_response_for): """ See :class:`StringStubbingResource`. """ Resource.__init__(self) self._get_response_for = get_response_for def render(self, request): """ Produce a response according to the stubs provided. 
""" params = request.args headers = {} for k, v in request.requestHeaders.getAllRawHeaders(): headers[k] = v for dictionary in (params, headers): for k in dictionary: dictionary[k] = sorted(dictionary[k]) # The incoming request does not have the absoluteURI property, because # an incoming request is a IRequest, not an IClientRequest, so it # the absolute URI needs to be synthesized. # But request.URLPath() only returns the scheme and hostname, because # that is the URL for this resource (because this resource handles # everything from the root on down). # So we need to add the request.path (not request.uri, which includes # the query parameters) absoluteURI = str(request.URLPath().click(request.path)) status_code, headers, body = self._get_response_for( request.method, absoluteURI, params, headers, request.content.read()) request.setResponseCode(status_code) for k, v in headers.items(): request.setHeader(k, v) return body def _maybeEncode(someStr): """ Encode `someStr` to ASCII if required. """ if isinstance(someStr, str): return someStr.encode('ascii') return someStr def _maybeEncodeHeaders(headers): """ Convert a headers mapping to its bytes-encoded form. """ return {_maybeEncode(k).lower(): [_maybeEncode(v) for v in vs] for k, vs in headers.items()} class HasHeaders: """ Since Twisted adds headers to a request, such as the host and the content length, it's necessary to test whether request headers CONTAIN the expected headers (the ones that are not automatically added by Twisted). This wraps a set of headers, and can be used in an equality test against a superset if the provided headers. The headers keys are lowercased, and keys and values are compared in their bytes-encoded forms. Headers should be provided as a mapping from strings or bytes to a list of strings or bytes. 
""" def __init__(self, headers): self._headers = _maybeEncodeHeaders(headers) def __repr__(self): return "HasHeaders({0})".format(repr(self._headers)) def __eq__(self, other_headers): compare_to = _maybeEncodeHeaders(other_headers) return (set(self._headers.keys()).issubset(set(compare_to.keys())) and all([set(v).issubset(set(compare_to[k])) for k, v in self._headers.items()])) def __ne__(self, other_headers): return not self.__eq__(other_headers) class RequestSequence: """ For an example usage, see :meth:`RequestSequence.consume`. Takes a sequence of:: [((method, url, params, headers, data), (code, headers, body)), ...] Expects the requests to arrive in sequence order. If there are no more responses, or the request's parameters do not match the next item's expected request parameters, calls `sync_failure_reporter` or `async_failure_reporter`. For the expected request tuples: - ``method`` should be :class:`bytes` normalized to lowercase. - ``url`` should be a `str` normalized as per the `transformations in that (usually) preserve semantics `_. A URL to `http://something-that-looks-like-a-directory` would be normalized to `http://something-that-looks-like-a-directory/` and a URL to `http://something-that-looks-like-a-page/page.html` remains unchanged. - ``params`` is a dictionary mapping :class:`bytes` to :class:`list` of :class:`bytes`. - ``headers`` is a dictionary mapping :class:`bytes` to :class:`list` of :class:`bytes` -- note that :class:`twisted.web.client.Agent` may add its own headers which are not guaranteed to be present (for instance, `user-agent` or `content-length`), so it's better to use some kind of matcher like :class:`HasHeaders`. - ``data`` is a :class:`bytes`. For the response tuples: - ``code`` is an integer representing the HTTP status code to return. - ``headers`` is a dictionary mapping :class:`bytes` to :class:`bytes` or :class:`str`. Note that the value is *not* a list. - ``body`` is a :class:`bytes`. 
:ivar list sequence: A sequence of (request tuple, response tuple) two-tuples, as described above. :ivar async_failure_reporter: An optional callable that takes a :class:`str` message indicating a failure. It's asynchronous because it cannot just raise an exception—if it does, :meth:`Resource.render ` will just convert that into a 500 response, and there will be no other failure reporting mechanism. When the `async_failure_reporter` parameter is not passed, async failures will be reported via a :class:`twisted.logger.Logger` instance, which Trial's test case classes (:class:`twisted.trial.unittest.TestCase` and :class:`~twisted.trial.unittest.SynchronousTestCase`) will translate into a test failure. .. note:: Some versions of :class:`twisted.trial.unittest.SynchronousTestCase` report logged errors on the wrong test: see `Twisted #9267 `_. .. TODO Update the above note to say what version of SynchronousTestCase is fixed once Twisted >17.5.0 is released. When not subclassing Trial's classes you must pass `async_failure_reporter` and implement equivalent behavior or errors will pass silently. For example:: async_failures = [] sequence_stubs = RequestSequence([...], async_failures.append) stub_treq = StubTreq(StringStubbingResource(sequence_stubs)) with sequence_stubs.consume(self.fail): # self = unittest.TestCase stub_treq.get('http://fakeurl.com') self.assertEqual([], async_failures) """ _log = Logger() def __init__(self, sequence, async_failure_reporter=None): self._sequence = sequence self._async_reporter = async_failure_reporter or self._log_async_error def _log_async_error(self, message): """ The default async failure reporter—see `async_failure_reporter`. Logs a failure which wraps an :ex:`AssertionError`. :param str message: Failure message """ # Passing message twice may look redundant, but Trial only preserves # the Failure, not the log message. 
self._log.failure( "RequestSequence async error: {message}", message=message, failure=Failure(AssertionError(message)), ) def consumed(self): """ :return: `bool` representing whether the entire sequence has been consumed. This is useful in tests to assert that the expected requests have all been made. """ return len(self._sequence) == 0 @contextmanager def consume(self, sync_failure_reporter): """ Usage:: sequence_stubs = RequestSequence([...]) stub_treq = StubTreq(StringStubbingResource(sequence_stubs)) # self = twisted.trial.unittest.SynchronousTestCase with sequence_stubs.consume(self.fail): stub_treq.get('http://fakeurl.com') stub_treq.get('http://another-fake-url.com') If there are still remaining expected requests to be made in the sequence, fails the provided test case. :param sync_failure_reporter: A callable that takes a single message reporting failures. This can just raise an exception - it does not need to be asynchronous, since the exception would not get raised within a Resource. :return: a context manager that can be used to ensure all expected requests have been made. """ yield if not self.consumed(): sync_failure_reporter("\n".join( ["Not all expected requests were made. Still expecting:"] + ["- {0}(url={1}, params={2}, headers={3}, data={4})".format( *expected) for expected, _ in self._sequence])) def __call__(self, method, url, params, headers, data): """ :return: the next response in the sequence, provided that the parameters match the next in the sequence. 
""" if len(self._sequence) == 0: self._async_reporter( "No more requests expected, but request {0!r} made.".format( (method, url, params, headers, data))) return (500, {}, b"StubbingError") expected, response = self._sequence[0] e_method, e_url, e_params, e_headers, e_data = expected checks = [ (e_method == method.lower(), "method"), (e_url == url, "url"), (e_params == params, 'parameters'), (e_headers == headers, "headers"), (e_data == data, "data") ] mismatches = [param for success, param in checks if not success] if mismatches: self._async_reporter( "\nExpected the next request to be: {0!r}" "\nGot request : {1!r}\n" "\nMismatches: {2!r}" .format(expected, (method, url, params, headers, data), mismatches)) return (500, {}, b"StubbingError") self._sequence = self._sequence[1:] return response ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1726786936.3221886 treq-24.9.1/src/treq.egg-info/0000755000175100001660000000000014673126570015434 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726786936.0 treq-24.9.1/src/treq.egg-info/PKG-INFO0000644000175100001660000000666614673126570016547 0ustar00runnerdockerMetadata-Version: 2.1 Name: treq Version: 24.9.1 Summary: High-level Twisted HTTP Client API Home-page: https://github.com/twisted/treq Author: David Reid Author-email: dreid@dreid.org Maintainer: Tom Most Maintainer-email: twm@freecog.net License: MIT/X Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Framework :: Twisted Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Requires-Python: >=3.7
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: incremental
Requires-Dist: requests>=2.1.0
Requires-Dist: hyperlink>=21.0.0
Requires-Dist: Twisted[tls]>=22.10.0
Requires-Dist: attrs
Requires-Dist: typing_extensions>=3.10.0
Provides-Extra: dev
Requires-Dist: pep8; extra == "dev"
Requires-Dist: pyflakes; extra == "dev"
Requires-Dist: httpbin==0.7.0; extra == "dev"
Requires-Dist: werkzeug==2.0.3; extra == "dev"
Provides-Extra: docs
Requires-Dist: sphinx<7.0.0; extra == "docs"

treq: High-level Twisted HTTP Client API
========================================

.. |pypi| image:: https://img.shields.io/pypi/v/treq.svg
   :alt: PyPI
   :target: https://pypi.org/project/treq/

.. |calver| image:: https://img.shields.io/badge/calver-YY.MM.MICRO-22bfda.svg
   :alt: calver: YY.MM.MICRO
   :target: https://calver.org/

.. |coverage| image:: https://coveralls.io/repos/github/twisted/treq/badge.svg
   :alt: Coverage
   :target: https://coveralls.io/github/twisted/treq

.. |documentation| image:: https://readthedocs.org/projects/treq/badge/
   :alt: Documentation
   :target: https://treq.readthedocs.org

|pypi| |calver| |coverage| |documentation|

``treq`` is an HTTP library inspired by `requests `_ but written on top of
`Twisted `_'s `Agents `_.

It provides a simple, higher level API for making HTTP requests when using
Twisted.

.. code-block:: python

    >>> import treq

    >>> async def main(reactor):
    ...     response = await treq.get("https://github.com")
    ...     print(response.code)
    ...     body = await response.text()
    ...     print("" in body)

    >>> from twisted.internet.task import react
    >>> react(main)
    200
    True

For more info `read the docs `_.

Contributing
------------

``treq`` development is hosted on `GitHub `_.

We welcome contributions: feel free to fork and send contributions over.
See `CONTRIBUTING.rst `_ for more info.

Code of Conduct
---------------

Refer to the `Twisted code of conduct `_.

Copyright and License
---------------------

``treq`` is made available under the MIT license.  See `LICENSE <./LICENSE>`_
for legal details and copyright notices.

treq-24.9.1/src/treq.egg-info/SOURCES.txt

.coveragerc
CHANGELOG.rst
CONTRIBUTING.rst
LICENSE
MANIFEST.in
README.rst
SECURITY.md
pyproject.toml
setup.py
docs/Makefile
docs/api.rst
docs/changelog.rst
docs/conf.py
docs/howto.rst
docs/index.rst
docs/make.bat
docs/testing.rst
docs/_static/.keepme
docs/examples/_utils.py
docs/examples/basic_auth.py
docs/examples/basic_get.py
docs/examples/basic_post.py
docs/examples/basic_url.py
docs/examples/custom_agent.py
docs/examples/disable_redirects.py
docs/examples/download_file.py
docs/examples/iresource.py
docs/examples/json_post.py
docs/examples/query_params.py
docs/examples/redirects.py
docs/examples/response_history.py
docs/examples/testing_seq.py
docs/examples/using_cookies.py
src/treq/__init__.py
src/treq/_agentspy.py
src/treq/_multipart.py
src/treq/_types.py
src/treq/_version.py
src/treq/api.py
src/treq/auth.py
src/treq/client.py
src/treq/content.py
src/treq/cookies.py
src/treq/multipart.py
src/treq/py.typed
src/treq/response.py
src/treq/testing.py
src/treq.egg-info/PKG-INFO
src/treq.egg-info/SOURCES.txt
src/treq.egg-info/dependency_links.txt
src/treq.egg-info/requires.txt
src/treq.egg-info/top_level.txt
src/treq/test/__init__.py
src/treq/test/test_agentspy.py
src/treq/test/test_api.py
src/treq/test/test_auth.py
src/treq/test/test_client.py
src/treq/test/test_content.py
src/treq/test/test_cookies.py
src/treq/test/test_multipart.py
src/treq/test/test_response.py
src/treq/test/test_testing.py
src/treq/test/test_treq_integration.py
src/treq/test/util.py
src/treq/test/local_httpbin/__init__.py
src/treq/test/local_httpbin/child.py
src/treq/test/local_httpbin/parent.py
src/treq/test/local_httpbin/shared.py
src/treq/test/local_httpbin/test/__init__.py
src/treq/test/local_httpbin/test/test_child.py
src/treq/test/local_httpbin/test/test_parent.py
src/treq/test/local_httpbin/test/test_shared.py

treq-24.9.1/src/treq.egg-info/requires.txt

incremental
requests>=2.1.0
hyperlink>=21.0.0
Twisted[tls]>=22.10.0
attrs
typing_extensions>=3.10.0

[dev]
pep8
pyflakes
httpbin==0.7.0
werkzeug==2.0.3

[docs]
sphinx<7.0.0

treq-24.9.1/src/treq.egg-info/top_level.txt

treq
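The header-matching semantics that ``treq.testing.HasHeaders`` implements (lowercased keys, bytes-encoded comparison, subset rather than exact equality) can be condensed into a few standalone functions. This is an illustrative sketch that mirrors the logic of ``_maybeEncode``/``_maybeEncodeHeaders`` and ``HasHeaders.__eq__`` from ``treq/testing.py``; the names ``contains_headers`` and ``_encode_headers`` are invented for this example and are not part of treq's API.

```python
def _maybe_encode(s):
    # Mirrors treq.testing._maybeEncode: encode str keys/values to ASCII bytes.
    return s.encode("ascii") if isinstance(s, str) else s


def _encode_headers(headers):
    # Lowercase the keys and bytes-encode both keys and values, as
    # _maybeEncodeHeaders does.
    return {_maybe_encode(k).lower(): [_maybe_encode(v) for v in vs]
            for k, vs in headers.items()}


def contains_headers(expected, actual):
    # True when every expected key is present in `actual` and every expected
    # value appears among that key's values (subset semantics, like
    # HasHeaders.__eq__ against a real request's headers).
    e, a = _encode_headers(expected), _encode_headers(actual)
    return (set(e) <= set(a) and
            all(set(vs) <= set(a[k]) for k, vs in e.items()))


# Twisted adds headers such as Host and Content-Length, so subset matching
# still succeeds when extra headers are present:
assert contains_headers({"A": ["1"]}, {b"a": [b"1"], b"host": [b"x"]})
# Values, unlike keys, are compared case-sensitively:
assert not contains_headers({b"a": [b"a"]}, {b"a": [b"A"]})
```

The subset comparison is what makes ``HasHeaders`` practical in tests: asserting exact equality against a real request's headers would fail on every header the Agent adds automatically.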
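The dispatch logic in ``RequestSequence.__call__`` reduces to a small state machine: return the stubbed response and advance if the request matches the next expected tuple, otherwise report a failure and answer 500. Below is a hedged standalone sketch of that logic (the function name ``consume_next`` and the ``report`` callback are invented for illustration; the real implementation also lowercases the method and reports a formatted diff of the mismatched fields):

```python
def consume_next(sequence, request, report):
    # `sequence` is a mutable list of (expected_request, response) pairs, in
    # the same shape RequestSequence takes; `request` is a
    # (method, url, params, headers, data) tuple.
    if not sequence:
        report("No more requests expected, but request {!r} made.".format(
            request))
        return (500, {}, b"StubbingError")

    expected, response = sequence[0]
    names = ("method", "url", "parameters", "headers", "data")
    mismatches = [n for n, e, g in zip(names, expected, request) if e != g]
    if mismatches:
        report("Expected {!r}, got {!r}; mismatches: {!r}".format(
            expected, request, mismatches))
        return (500, {}, b"StubbingError")

    # Matched: consume this entry and hand back the stubbed response.
    del sequence[0]
    return response
```

Failures are routed through the ``report`` callback rather than raised because, as the ``async_failure_reporter`` docstring explains, an exception raised inside ``Resource.render`` would simply be swallowed and converted into a 500 response.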