trio-0.19.0/CODE_OF_CONDUCT.md:

For the Trio code of conduct, see: https://trio.readthedocs.io/en/latest/code-of-conduct.html

trio-0.19.0/CONTRIBUTING.md:

For the Trio contributing guide, see: https://trio.readthedocs.io/en/latest/contributing.html

trio-0.19.0/LICENSE:

This software is made available under the terms of *either* of the licenses found in LICENSE.APACHE2 or LICENSE.MIT. Contributions to Trio are made under the terms of *both* these licenses.

trio-0.19.0/LICENSE.APACHE2:

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

trio-0.19.0/LICENSE.MIT:

The MIT License (MIT)

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

trio-0.19.0/MANIFEST.in:

include LICENSE LICENSE.MIT LICENSE.APACHE2
include README.rst
include CODE_OF_CONDUCT.md CONTRIBUTING.md
include test-requirements.txt
recursive-include trio/tests/test_ssl_certs *.pem
recursive-include docs *
prune docs/build

trio-0.19.0/PKG-INFO:

Metadata-Version: 2.1
Name: trio
Version: 0.19.0
Summary: A friendly Python library for async concurrency and I/O
Home-page: https://github.com/python-trio/trio
Author: Nathaniel J. Smith
Author-email: njs@pobox.com
License: MIT -or- Apache License 2.0
Keywords: async,io,networking,trio
Platform: UNKNOWN
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: POSIX :: BSD
Classifier: Operating System :: Microsoft :: Windows
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Topic :: System :: Networking
Classifier: Framework :: Trio
Requires-Python: >=3.6
License-File: LICENSE.APACHE2
License-File: LICENSE.MIT
License-File: LICENSE

.. image:: https://raw.githubusercontent.com/python-trio/trio/9b0bec646a31e0d0f67b8b6ecc6939726faf3e17/logo/logo-with-background.svg
   :width: 200px
   :align: right

The Trio project's goal is to produce a production-quality, `permissively licensed `__, async/await-native I/O library for Python. Like all async libraries, its main purpose is to help you write programs that do **multiple things at the same time** with **parallelized I/O**. A web spider that wants to fetch lots of pages in parallel, a web server that needs to juggle lots of downloads and websocket connections at the same time, a process supervisor monitoring multiple subprocesses... that sort of thing.

Compared to other libraries, Trio attempts to distinguish itself with an obsessive focus on **usability** and **correctness**. Concurrency is complicated; we try to make it *easy* to get things *right*.

Trio was built from the ground up to take advantage of the `latest Python features `__, and draws inspiration from `many sources `__, in particular Dave Beazley's `Curio `__. The resulting design is radically simpler than older competitors like `asyncio `__ and `Twisted `__, yet just as capable. Trio is the Python I/O library I always wanted; I find it makes building I/O-oriented programs easier, less error-prone, and just plain more fun. `Perhaps you'll find the same `__.
This project is young and still somewhat experimental: the overall design is solid and the existing features are fully tested and documented, but you may encounter missing functionality or rough edges. We *do* encourage you to use it, but you should `read and subscribe to issue #1 `__ to get a warning and a chance to give feedback about any compatibility-breaking changes.

Vital statistics:

* Supported environments: Linux, macOS, or Windows running some kind of Python 3.6-or-better (either CPython or PyPy3 is fine). \*BSD and illumos likely work too, but are not tested.
* Install: ``python3 -m pip install -U trio`` (or on Windows, maybe ``py -3 -m pip install -U trio``). No compiler needed.
* Tutorial and reference manual: https://trio.readthedocs.io
* Bug tracker and source code: https://github.com/python-trio/trio
* Real-time chat: https://gitter.im/python-trio/general
* Discussion forum: https://trio.discourse.group
* License: MIT or Apache 2, your choice
* Contributor guide: https://trio.readthedocs.io/en/latest/contributing.html
* Code of conduct: Contributors are requested to follow our `code of conduct `_ in all project spaces.

trio-0.19.0/README.rst:

.. image:: https://img.shields.io/badge/chat-join%20now-blue.svg
   :target: https://gitter.im/python-trio/general
   :alt: Join chatroom

.. image:: https://img.shields.io/badge/forum-join%20now-blue.svg
   :target: https://trio.discourse.group
   :alt: Join forum

.. image:: https://img.shields.io/badge/docs-read%20now-blue.svg
   :target: https://trio.readthedocs.io
   :alt: Documentation

.. image:: https://img.shields.io/pypi/v/trio.svg
   :target: https://pypi.org/project/trio
   :alt: Latest PyPi version

.. image:: https://img.shields.io/conda/vn/conda-forge/trio.svg
   :target: https://anaconda.org/conda-forge/trio
   :alt: Latest conda-forge version

.. image:: https://codecov.io/gh/python-trio/trio/branch/master/graph/badge.svg
   :target: https://codecov.io/gh/python-trio/trio
   :alt: Test coverage

Trio – a friendly Python library for async concurrency and I/O
===============================================================

.. image:: https://raw.githubusercontent.com/python-trio/trio/9b0bec646a31e0d0f67b8b6ecc6939726faf3e17/logo/logo-with-background.svg
   :width: 200px
   :align: right

The Trio project aims to produce a production-quality, `permissively licensed `__, async/await-native I/O library for Python. Like all async libraries, its main purpose is to help you write programs that do **multiple things at the same time** with **parallelized I/O**. A web spider that wants to fetch lots of pages in parallel, a web server that needs to juggle lots of downloads and websocket connections simultaneously, a process supervisor monitoring multiple subprocesses... that sort of thing.

Compared to other libraries, Trio attempts to distinguish itself with an obsessive focus on **usability** and **correctness**. Concurrency is complicated; we try to make it *easy* to get things *right*.

Trio was built from the ground up to take advantage of the `latest Python features `__, and draws inspiration from `many sources `__, in particular Dave Beazley's `Curio `__. The resulting design is radically simpler than older competitors like `asyncio `__ and `Twisted `__, yet just as capable. Trio is the Python I/O library I always wanted; I find it makes building I/O-oriented programs easier, less error-prone, and just plain more fun.
`Perhaps you'll find the same `__.

This project is young and still somewhat experimental: the overall design is solid, and the existing features are fully tested and documented, but you may encounter missing functionality or rough edges. We *do* encourage you to use it, but you should `read and subscribe to issue #1 `__ to get a warning and a chance to give feedback about any compatibility-breaking changes.

Where to next?
--------------

**I want to try it out!** Awesome! We have a `friendly tutorial `__ to get you started; no prior experience with async coding is required.

**Ugh, I don't want to read all that – show me some code!** If you're impatient, then here's a `simple concurrency example `__, an `echo client `__, and an `echo server `__.

**How does Trio make programs easier to read and reason about than competing approaches?** Trio is based on a new way of thinking that we call "structured concurrency". The best theoretical introduction is the article `Notes on structured concurrency, or: Go statement considered harmful `__. Or, `check out this talk at PyCon 2018 `__ to see a demonstration of implementing the "Happy Eyeballs" algorithm in an older library versus Trio.

**Cool, but will it work on my system?** Probably! As long as you have some kind of Python 3.6-or-better (CPython or the latest PyPy3 are both fine), and are using Linux, macOS, Windows, or FreeBSD, then Trio will work. Other environments might work too, but those are the ones we test on. And all of our dependencies are pure Python, except for CFFI on Windows, which has wheels available, so installation should be easy (no C compiler needed).

**I tried it, but it's not working.** Sorry to hear that! You can try asking for help in our `chat room `__ or `forum `__, `filing a bug `__, or `posting a question on StackOverflow `__, and we'll do our best to help you out.

**Trio is awesome, and I want to help make it more awesome!** You're the best! There's tons of work to do – filling in missing functionality, building up an ecosystem of Trio-using libraries, usability testing (e.g., maybe try teaching yourself or a friend to use Trio and make a list of every error message you hit and every place where you got confused?), improving the docs, ... check out our `guide for contributors `__!

**I don't have any immediate plans to use it, but I love geeking out about I/O library design!** That's a little weird? But let's be honest, you'll fit in great around here. We have a `whole sub-forum for discussing structured concurrency `__ (developers of other systems welcome!). Or check out our `discussion of design choices `__, `reading list `__, and `issues tagged design-discussion `__.

**I want to make sure my company's lawyers won't get angry at me!** No worries, Trio is permissively licensed under your choice of MIT or Apache 2. See `LICENSE `__ for details.

Code of conduct
---------------

Contributors are requested to follow our `code of conduct `__ in all project spaces.
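As a quick taste of the "structured concurrency" style described above, here is a minimal, hypothetical sketch using Trio's public nursery API – it is not one of the official tutorial examples linked under "Where to next?", just an illustration of the shape of a Trio program:

.. code-block:: python

   import trio

   async def child(name, seconds):
       # Each child sleeps concurrently with its siblings.
       await trio.sleep(seconds)
       print(f"{name} finished after {seconds} seconds")

   async def main():
       # A nursery is Trio's structured-concurrency primitive: the
       # "async with" block does not exit until every task started
       # inside it has finished (or been cancelled).
       async with trio.open_nursery() as nursery:
           nursery.start_soon(child, "task one", 1)
           nursery.start_soon(child, "task two", 2)
       print("both children are done")

   trio.run(main)

Both children run concurrently, so this takes about two seconds rather than three; and if either child raised an exception, the nursery would cancel its sibling and propagate the error – the property that makes Trio programs easier to reason about.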
trio-0.19.0/docs/Makefile:

# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = Trio
SOURCEDIR     = source
BUILDDIR      = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

trio-0.19.0/docs/make.bat:

@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
set SPHINXPROJ=Trio

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%

:end
popd

trio-0.19.0/docs/notes.txt:

it's possible from extension/configuration modules to get sphinx to know about new roles and even new autodoc types.

See curio docs, which cribbed from the python docs and have a link to them

and also: https://github.com/aio-libs/sphinxcontrib-asyncio/pull/1/files which added autodoc hooks to sphinxcontrib-asyncio

it looks like there's a table of cross-reference roles in sphinx/domains/python.py (look for PyXRefRole), which inherits from sphinx.roles.XRefRole, which has some notes on how to subclass and change rendering (see 'result_nodes' method)

so..... it might even be possible to give async functions/methods their own color :-)

trio-0.19.0/docs/source/_static/.gitkeep:

(empty file)

trio-0.19.0/docs/source/_static/favicon-32.png:

[binary PNG image data omitted]

trio-0.19.0/docs/source/_static/favicon.svg:

[SVG image data (image/svg+xml) omitted]

trio-0.19.0/docs/source/_static/hackrtd.css:

/* Temporary hack to work around bug in rtd theme 2.0 through 2.4
 * See https://github.com/rtfd/sphinx_rtd_theme/pull/382 */
pre {
    line-height: normal !important;
}

/* Make .. deprecation:: blocks visible
 * (by default they're entirely unstyled) */
.deprecated {
    background-color: #ffe13b;
}

/* Add a snakey triskelion ornament to <hr>
 * https://stackoverflow.com/questions/8862344/css-hr-with-ornament/18541258#18541258
 * but only do it to <hr>s
 * in the content box, b/c the RTD popup control panel
 * thingummy also has an <hr>
 * in it, and putting the ornament on that looks
 * *really weird*. (In particular, the background color is wrong.) */
.rst-content hr:after {
    /* This .svg gets displayed on top of the middle of the hrule. It has a box
     * behind the logo that's colored to match the RTD theme body background
     * color (#fcfcfc), which hides the middle part of the hrule to make it
     * look like there's a gap in it. The size of the box determines the size
     * of the gap. */
    background: url('ornament.svg') no-repeat top center;
    background-size: contain;
    content: "";
    display: block;
    height: 30px;
    position: relative;
    top: -15px;
}

/* Hacks to make the upper-left logo area look nicer */
.wy-side-nav-search {
    /* Lighter background color to match logo */
    background-color: #d2e7fa !important;
}
.wy-side-nav-search > a {
    color: #306998 !important;
}
.wy-side-nav-search > a.logo {
    display: block !important;
    padding-bottom: 0.809em !important;
}
.wy-side-nav-search > a img.logo {
    display: inline !important;
    padding: 0 !important;
}
.trio-version {
    display: inline;
    /* I *cannot* figure out how to get the version text vertically
       centered on the logo. Oh well...
    height: 32px;
    line-height: 32px;
    */
}
.wy-side-nav-search > a {
    /* Mostly this is just to simplify things, so we don't have margin/padding
     * on both the <a> and the <img> inside it */
    margin: 0 !important;
    padding: 0 !important;
}

/* Get rid of the weird super dark "Contents" label that wastes vertical space */
.wy-menu-vertical > p.caption {
    display: none !important;
}

/* I do not like RTD's use of Roboto Slab for headlines. So force it back to
 * Lato (or whatever fallback it's using if Lato isn't available for some
 * reason). I also experimented with using Montserrat to be extra obnoxiously
 * on brand, but honestly you couldn't really tell so there wasn't much point
 * in adding page weight for that, and this is going to match the body text
 * better. (Montserrat for body text *definitely* didn't look good, alas.) */
h1, h2, h3, h4, h5, h6, legend, .rst-content .toctree-wrapper p.caption {
    font-family: inherit !important;
}

/* Get rid of the horrible red for literal content */
.rst-content tt.literal, .rst-content tt.literal, .rst-content code.literal {
    color: #222 !important;
}

/* Style the "Need help?" <p>
 * text just underneath the search box */
.trio-help-hint {
    line-height: normal;
    margin-bottom: 0;
    /* font-size: 12px; */
    font-size: 80%;  /* matches the "Search docs" box */
    padding-top: 6px;
    color: #306998;
    text-align: center;
}
a.trio-help-hint, .trio-help-hint a:link, .trio-help-hint a:visited {
    color: inherit;
    /* Like text-decoration: underline, but with a thinner line */
    text-decoration: none;
    border-bottom: 1px solid;
}

trio-0.19.0/docs/source/_static/ornament.svg:

[SVG image data (image/svg+xml) omitted]

trio-0.19.0/docs/source/_templates/.gitkeep:

(empty file)

trio-0.19.0/docs/source/_templates/layout.html:

{# https://stackoverflow.com/questions/25243482/how-to-add-sphinx-generated-index-to-the-sidebar-when-using-read-the-docs-theme #}
{% extends "!layout.html" %}

{% block sidebartitle %}
{% include "searchbox.html" %}
<p class="trio-help-hint">
Need help? <a class="trio-help-hint" href="https://gitter.im/python-trio/general">Live chat</a>, <a class="trio-help-hint" href="https://trio.discourse.group">forum</a>, StackOverflow.
</p>
{% endblock %}

trio-0.19.0/docs/source/awesome-trio-libraries.rst:

Awesome Trio Libraries
======================

.. List of Trio Libraries
   A list of libraries that support Trio, similar to the awesome-python
   list here: https://github.com/vinta/awesome-python/

.. currentmodule:: trio

You have completed the tutorial, and are enthusiastic about building great new applications and libraries with async functionality. However, to get much useful work done you will want to use some of the great libraries that support Trio-flavoured concurrency. This list is not complete, but gives a starting point. Another great way to find Trio-compatible libraries is to search on PyPI for the ``Framework :: Trio`` tag -> `PyPI Search `__

Getting Started
---------------

* `cookiecutter-trio `__ - This is a cookiecutter template for Python projects that use Trio. It makes it easy to start a new project, by providing a bunch of preconfigured boilerplate.
* `pytest-trio `__ - Pytest plugin to test async-enabled Trio functions.
* `sphinxcontrib-trio `__ - Make Sphinx better at documenting Python functions and methods. In particular, it makes it easy to document async functions.

Web and HTML
------------

* `httpx `__ - HTTPX is a fully featured HTTP client for Python 3, which provides sync and async APIs, and support for both HTTP/1.1 and HTTP/2.
* `trio-websocket `__ - A WebSocket client and server implementation striving for safety, correctness, and ergonomics.
* `quart-trio `__ - Like Flask, but for Trio. A simple and powerful framework for building async web applications and REST APIs. Tip: this is an ASGI-based framework, so you'll also need an HTTP server with ASGI support.
* `hypercorn `__ - An HTTP server for hosting your ASGI apps. Supports HTTP/1.1, HTTP/2, HTTP/3, and Websockets. Can be run as a standalone server, or embedded in a larger Trio app. Use it with ``quart-trio``, or any other Trio-compatible ASGI framework.
* `DeFramed `__ - DeFramed is a Web non-framework that supports a 99%-server-centric approach to Web coding, including support for the `Remi `__ GUI library.
* `pura `__ - A simple web framework for embedding realtime graphical visualization into Trio apps, enabling inspection and manipulation of program state during development.
* `pyscalpel `__ - A fast and powerful webscraping library.
* `muffin `_ - Muffin is a fast, simple ASGI web framework.
* `asgi-tools `_ - Tools to quickly build lightweight ASGI apps (also contains a test client with lifespan and websocket support).

Database
--------

* `triopg `__ - PostgreSQL client for Trio based on asyncpg.
* `trio-mysql `__ - Pure Python MySQL client.
* `sqlalchemy_aio `__ - Add asyncio and Trio support to SQLAlchemy core, derived from alchimia.
* `redio `__ - Redis client, pure Python and Trio.
* `trio_redis `__ - A Redis client for Trio. Depends on hiredis-py.
* `asyncakumuli `__ - Client for the `Akumuli `__ time series database.

IOT
---

* `DistMQTT `__ - DistMQTT is an open source MQTT client and broker implementation. It is a fork of hbmqtt with support for anyio and DistKV.
* `asyncgpio `__ - Allows easy access to the GPIO pins on your Raspberry Pi or similar embedded computer.
* `asyncowfs `__ - High-level, object-oriented access to 1wire sensors and actors.
* `DistKV `__ - A persistent, distributed, master-less key/value storage with async notification and some IoT-related plug-ins.

Building Command Line Apps
--------------------------

* `trio-click `__ - Python composable command line utility, trio-compatible version.
* `urwid `__ - Urwid is a console user interface library for Python.

Building GUI Apps
-----------------

* `QTrio `__ - Integration between Trio and either the PyQt or PySide Qt wrapper libraries. Uses Trio's :ref:`guest mode `.

Multi-Core/Multiprocessing
--------------------------

* `tractor `__ - An experimental, trionic (aka structured concurrent) "actor model" for distributed multi-core Python.
* `Trio run_in_process `__ - Trio based API for running code in a separate process.
* `trio-parallel `__ - CPU parallelism for Trio.

RPC
---

* `purepc `__ - Asynchronous pure Python gRPC client and server implementation using anyio.
* `trio-jsonrpc `__ - JSON-RPC v2.0 for Trio.

Testing
-------

* `pytest-trio `__ - Pytest plugin for Trio.
* `hypothesis-trio `__ - Hypothesis plugin for Trio.
* `trustme `__ - #1 quality TLS certs while you wait, for the discerning tester.
* `pytest-aio `_ - Pytest plugin with support for trio, curio, and asyncio.

Tools and Utilities
-------------------

* `trio-typing `__ - Type hints for Trio and related projects.
* `trio-util `__ - An assortment of utilities for the Trio async/await framework.
* `tricycle `__ - This is a library of interesting-but-maybe-not-yet-fully-proven extensions to Trio.
* `tenacity `__ - Retrying library for Python with async/await support.
* `perf-timer `__ - A code timer with Trio async support (see ``TrioPerfTimer``). Collects execution time of a block of code excluding time when the coroutine isn't scheduled, such as during blocking I/O and sleep. Also offers ``trio_perf_counter()`` for low-level timing.

Trio/Asyncio Interoperability
-----------------------------

* `anyio `__ - AnyIO is an asynchronous compatibility API that allows applications and libraries written against it to run unmodified on asyncio, curio and trio.
* `sniffio `__ - This is a tiny package whose only purpose is to let you detect which async library your code is running under.
* `trio-asyncio `__ - Trio-Asyncio lets you use many asyncio libraries from your Trio app.

trio-0.19.0/docs/source/code-of-conduct.rst:

.. _code-of-conduct:

Code of Conduct
===============

This code of conduct applies to the Trio project, and all associated projects in the `python-trio `__ organization.

.. _coc-when-something-happens:

When Something Happens
----------------------

If you see a Code of Conduct violation, follow these steps:

1. Let the person know that what they did is not appropriate and ask them to stop and/or edit their message(s) or commits.
2. That person should immediately stop the behavior and correct the issue.
3. If this doesn't happen, or if you're uncomfortable speaking up, :ref:`contact the maintainers <coc-contacting-maintainers>`.
4. As soon as possible, a maintainer will look into the issue, and take :ref:`further action (see below) <coc-further-enforcement>`, starting with a warning, then temporary block, then long-term repo or organization ban.

When reporting, please include any relevant details, links, screenshots, context, or other information that may be used to better understand and resolve the situation.

**The maintainer team will prioritize the well-being and comfort of the recipients of the violation over the comfort of the violator.** See :ref:`some examples below <coc-enforcement-examples>`.

Our Pledge
----------

In the interest of fostering an open and welcoming environment, we as contributors and maintainers of this project pledge to making participation in our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, technical preferences, nationality, personal appearance, race, religion, or sexual identity and orientation.

Our Standards
-------------

Examples of behavior that contributes to creating a positive environment include:

- Using welcoming and inclusive language.
- Being respectful of differing viewpoints and experiences.
- Gracefully accepting constructive feedback.
- Focusing on what is best for the community.
- Showing empathy and kindness towards other community members.
- Encouraging and raising up your peers in the project so you can all bask in hacks and glory.

Examples of unacceptable behavior by participants include:

- The use of sexualized language or imagery and unwelcome sexual attention or advances, including when simulated online. The only exception to sexual topics is channels/spaces specifically for topics of sexual identity.
- Casual mention of slavery or indentured servitude and/or false comparisons of one's occupation or situation to slavery. Please consider using or asking about alternate terminology when referring to such metaphors in technology.
- Making light of/making mocking comments about trigger warnings and content warnings.
- Trolling, insulting/derogatory comments, and personal or political attacks.
- Public or private harassment, deliberate intimidation, or threats.
- Publishing others' private information, such as a physical or electronic address, without explicit permission. This includes any sort of "outing" of any aspect of someone's identity without their consent.
- Publishing private screenshots or quotes of interactions in the context of this project without all quoted users' *explicit* consent.
- Publishing of private communication that doesn't have to do with reporting harassment.
- Any of the above even when `presented as "ironic" or "joking" `__.
- Any attempt to present "reverse-ism" versions of the above as violations. Examples of reverse-isms are "reverse racism", "reverse sexism", "heterophobia", and "cisphobia".
- Unsolicited explanations under the assumption that someone doesn't already know it. Ask before you teach! Don't assume what people's knowledge gaps are.
- `Feigning or exaggerating surprise `__ when someone admits to not knowing something.
- "`Well-actuallies `__"
- Other conduct which could reasonably be considered inappropriate in a professional or community setting.

Scope
-----

This Code of Conduct applies both within spaces involving this project and in other spaces involving community members. This includes the repository, its Pull Requests and Issue tracker, its Twitter community, private email communications in the context of the project, and any events where members of the project are participating, as well as adjacent communities and venues affecting the project's members.

Depending on the violation, the maintainers may decide that violations of this code of conduct that have happened outside of the scope of the community make an individual unwelcome, and take appropriate action to maintain the comfort and safety of its members.

.. _coc-other-community-standards:

Other Community Standards
~~~~~~~~~~~~~~~~~~~~~~~~~

As a project on GitHub, this project is additionally covered by the `GitHub Community Guidelines `__. Enforcement of those guidelines after violations overlapping with the above is the responsibility of those entities, and enforcement may happen in any or all of the services/communities.

Maintainer Enforcement Process
------------------------------

Once the maintainers get involved, they will follow a documented series of steps and do their best to preserve the well-being of project members. This section covers actual concrete steps.

.. _coc-contacting-maintainers:

Contacting Maintainers
~~~~~~~~~~~~~~~~~~~~~~

As a small and young project, we don't yet have a Code of Conduct enforcement team. Hopefully that will be addressed as we grow, but for now, any issues should be addressed to `Nathaniel J. Smith `__, via `email `__ or any other medium that you feel comfortable with. Using words like "Trio code of conduct" in your subject will help make sure your message is noticed quickly.

.. _coc-further-enforcement:

Further Enforcement
~~~~~~~~~~~~~~~~~~~

If you've already followed the :ref:`initial enforcement steps <coc-when-something-happens>`, these are the steps maintainers will take for further enforcement, as needed:

1. Repeat the request to stop.
2. If the person doubles down, they will have offending messages removed or edited by a maintainer and given an official warning. The PR or Issue may be locked.
3. If the behavior continues or is repeated later, the person will be blocked from participating for 24 hours.
4. If the behavior continues or is repeated after the temporary block, a long-term (6-12mo) ban will be used.
5. If after this the behavior still continues, a permanent ban may be enforced.

On top of this, maintainers may remove any offending messages, images, contributions, etc, as they deem necessary.

Maintainers reserve full rights to skip any of these steps, at their discretion, if the violation is considered to be a serious and/or immediate threat to the health and well-being of members of the community. These include any threats, serious physical or verbal attacks, and other such behavior that would be completely unacceptable in any social setting that puts our members at risk.

Members expelled from events or venues with any sort of paid attendance will not be refunded.

Who Watches the Watchers?
~~~~~~~~~~~~~~~~~~~~~~~~~

Maintainers and other leaders who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. These may include anything from removal from the maintainer team to a permanent ban from the community.

Additionally, as a project hosted on GitHub, :ref:`their Code of Conduct may be applied against maintainers of this project <coc-other-community-standards>`, externally of this project's procedures.

.. _coc-enforcement-examples:

Enforcement Examples
--------------------

The Best Case
~~~~~~~~~~~~~

The vast majority of situations work out like this. This interaction is common, and generally positive.

Alex: "Yeah I used X and it was really crazy!"

Patt (not a maintainer): "Hey, could you not use that word? What about 'ridiculous' instead?"

Alex: "oh sorry, sure." -> edits old comment to say "it was really confusing!"

The Maintainer Case
~~~~~~~~~~~~~~~~~~~

Sometimes, though, you need to get maintainers involved. Maintainers will do their best to resolve conflicts, but people who were harmed by something **will take priority**.

Patt: "Honestly, sometimes I just really hate using $library and anyone who uses it probably sucks at their job." Alex: "Whoa there, could you dial it back a bit? There's a CoC thing about attacking folks' tech use like that." Patt: "I'm not attacking anyone, what's your problem?" Alex: "@maintainers hey uh. Can someone look at this issue? Patt is getting a bit aggro. I tried to nudge them about it, but nope." KeeperOfCommitBits: (on issue) "Hey Patt, maintainer here. Could you tone it down? This sort of attack is really not okay in this space." Patt: "Leave me alone I haven't said anything bad wtf is wrong with you." KeeperOfCommitBits: (deletes user's comment), "@patt I mean it. Please refer to the CoC over at (URL to this CoC) if you have questions, but you can consider this an actual warning. I'd appreciate it if you reworded your messages in this thread, since they made folks there uncomfortable. Let's try and be kind, yeah?" Patt: "@keeperofbits Okay sorry. I'm just frustrated and I'm kinda burnt out and I guess I got carried away. I'll DM Alex a note apologizing and edit my messages. Sorry for the trouble." KeeperOfCommitBits: "@patt Thanks for that. I hear you on the stress. Burnout sucks :/. Have a good one!" The Nope Case ~~~~~~~~~~~~~ PepeTheFrog🐸: "Hi, I am a literal actual nazi and I think white supremacists are quite fashionable." Patt: "NOOOOPE. OH NOPE NOPE." Alex: "JFC NO. NOPE. @keeperofbits NOPE NOPE LOOK HERE" KeeperOfCommitBits: "👀 Nope. NOPE NOPE NOPE. 🔥" PepeTheFrog🐸 has been banned from all organization or user repositories belonging to KeeperOfCommitBits. Attribution ----------- This Code of Conduct was generated using `WeAllJS Code of Conduct Generator `__, which is based on the `WeAllJS Code of Conduct `__, which is itself based on `Contributor Covenant `__, version 1.4, available at http://contributor-covenant.org/version/1/4, and the LGBTQ in Technology Slack `Code of Conduct `__. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/conf.py0000755000076500000000000001654300000000000015663 0ustar00quentinwheel#!/usr/bin/env python3 # -*- coding: utf-8 -*- # # Trio documentation build configuration file, created by # sphinx-quickstart on Sat Jan 21 19:11:14 2017. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. 
#
import os
import sys

# For our local_customization module
sys.path.insert(0, os.path.abspath('.'))
# For trio itself
sys.path.insert(0, os.path.abspath('../..'))

# https://docs.readthedocs.io/en/stable/builds.html#build-environment
if "READTHEDOCS" in os.environ:
    import glob

    if glob.glob("../../newsfragments/*.*.rst"):
        print("-- Found newsfragments; running towncrier --", flush=True)
        import subprocess

        subprocess.run(
            ["towncrier", "--yes", "--date", "not released yet"],
            cwd="../..",
            check=True,
        )

# Warn about all references to unknown targets
nitpicky = True
# Except for these ones, which we expect to point to unknown targets:
nitpick_ignore = [
    ("py:class", "CapacityLimiter-like object"),
    ("py:class", "bytes-like"),
    ("py:class", "None"),
    # Was removed but still shows up in changelog
    ("py:class", "trio.lowlevel.RunLocal"),
    # trio.abc is documented at random places scattered throughout the docs
    ("py:mod", "trio.abc"),
    ("py:class", "math.inf"),
    ("py:exc", "Anything else"),
    ("py:class", "async function"),
    ("py:class", "sync function"),
    # https://github.com/sphinx-doc/sphinx/issues/7722
    ("py:class", "SendType"),
    ("py:class", "ReceiveType"),
]

autodoc_inherit_docstrings = False
default_role = "obj"

# XX hack the RTD theme until
# https://github.com/rtfd/sphinx_rtd_theme/pull/382
# is shipped (should be in the release after 0.2.4)
# ...note that this has since grown to contain a bunch of other CSS hacks too
# though.
def setup(app):
    app.add_css_file("hackrtd.css")

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'sphinx.ext.coverage',
    'sphinx.ext.napoleon',
    'sphinxcontrib_trio',
    'local_customization',
]

intersphinx_mapping = {
    "python": ('https://docs.python.org/3', None),
    "outcome": ('https://outcome.readthedocs.io/en/latest/', None),
}

autodoc_member_order = "bysource"

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = 'Trio'
copyright = '2017, Nathaniel J. Smith'
author = 'Nathaniel J. Smith'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
import trio
version = trio.__version__
# The full version, including alpha/beta/rc tags.
release = version

html_favicon = "_static/favicon-32.png"
html_logo = "../../logo/wordmark-transparent.svg"
# & down below in html_theme_options we set logo_only=True

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# These patterns also affect html_static_path and html_extra_path
exclude_patterns = []

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'default'

highlight_language = 'python3'

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False

# This avoids a warning by the epub builder that it can't figure out
# the MIME type for our favicon.
suppress_warnings = ["epub.unknown_project_files"]

# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
#html_theme = 'alabaster'

# We have to set this ourselves, not only because it's useful for local
# testing, but also because if we don't then RTD will throw away our
# html_theme_options.
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

# Theme options are theme-specific and customize the look and feel of a theme
# further.  For a list of options available for each theme, see the
# documentation.
#
html_theme_options = {
    # default is 2
    # show deeper nesting in the RTD theme's sidebar TOC
    # https://stackoverflow.com/questions/27669376/
    # I'm not 100% sure this actually does anything with our current
    # versions/settings...
    "navigation_depth": 4,
    "logo_only": True,
    'prev_next_buttons_location': 'both'
}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'Triodoc'

# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'Trio.tex', 'Trio Documentation',
     'Nathaniel J. Smith', 'manual'),
]

# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'trio', 'Trio Documentation',
     [author], 1)
]

# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'Trio', 'Trio Documentation',
     author, 'Trio', 'One line description of project.',
     'Miscellaneous'),
]

trio-0.19.0/docs/source/contributing.rst:

.. _contributing:

Contributing to Trio and related projects
=========================================

So you're interested in contributing to Trio or `one of our associated projects `__? That's awesome!

Trio is an open-source project maintained by an informal group of volunteers. Our goal is to make async I/O in Python more fun, easy, and reliable, and we can't do it without help from people like you. We welcome contributions from anyone willing to work in good faith with other contributors and the community (see also our :ref:`code-of-conduct`).

There are many ways to contribute, no contribution is too small, and all contributions are valued. For example, you could:

- Hang out in our `chatroom `__ and help people with questions.
- Sign up for our `forum `__, set up your notifications so you notice interesting conversations, and join in.
- Answer questions on StackOverflow (`recent questions `__).
- Use Trio in a project, and give us feedback on what worked and what didn't.
- Write a blog post about your experiences with Trio, good or bad.
- Release open-source programs and libraries that use Trio.
- Improve documentation.
- Comment on issues.
- Add tests.
- Fix bugs.
- Add features.

We want contributing to be enjoyable and mutually beneficial; this document tries to give you some tips to help that happen, and applies to all of the projects under the `python-trio organization on Github `__. If you have thoughts on how it can be improved then please let us know.

Getting started
---------------

If you're new to open source in general, you might find it useful to check out `opensource.guide's How to Contribute to Open Source tutorial `__, or if video's more your thing, `egghead.io has a short free video course `__.

Trio and associated projects are developed on GitHub, under the `python-trio `__ organization. Code and documentation changes are made through pull requests (see :ref:`preparing-pull-requests` below).

We also have an unusual policy for managing commit rights: anyone whose pull request is merged is automatically invited to join the GitHub organization, and gets commit rights to all of our repositories. See :ref:`joining-the-team` below for more details.

If you're looking for a good place to start, then check out our issues labeled `good first issue `__, or feel free to ask `on the forum `__ or `in chat `__.

Providing support
-----------------

When helping others use Trio, please remember that you are representing our community, and we want this to be a friendly and welcoming place.

Concurrency is *really confusing* when you're first learning. When talking to beginners, remember that you were a beginner once too, and the whole goal here is to make a top-tier concurrency library that's accessible to everyone and a joy to use. If people are showing up with beginner questions, *that means we're succeeding*. How we respond to questions is part of that developer experience, just as much as our API, documentation, or testing tools. And as a bonus, helping beginners is often the best way to discover ideas for improvements.

If you start getting burned out and cranky, we've all been there, and it's OK to take a break until you feel better. But it's not OK to take that out on random users.

Please remember that the authors and users of competing projects are smart, thoughtful people doing their best to balance complicated and conflicting requirements, just like us.
Of course it's totally fine to make specific technical critiques ("In project X, this is handled by doing Y, Trio does Z instead, which I prefer because...") or talk about your personal experience ("I tried using X but I got super frustrated and confused"), but refrain from generic statements like "X sucks" or "I can't believe anyone uses X".

Please try not to make assumptions about people's gender, and in particular remember that we're not all dudes. If you don't have a specific reason to assume otherwise, then `singular they `__ makes a fine pronoun, and there are plenty of gender-neutral collective terms: "Hey folks", "Hi all", ...

We also like the Recurse Center's `social rules `__:

* no feigning surprise (also available in a `sweet comic version `__)
* no well-actually's
* no subtle -isms (`more details `__)

.. _preparing-pull-requests:

Preparing pull requests
-----------------------

If you want to submit a documentation or code change to one of the Trio projects, then that's done by preparing a Github pull request (or "PR" for short). We'll do our best to review your PR quickly. If it's been a week or two and you're still waiting for a response, feel free to post a comment poking us. (This can just be a comment with the single word "ping"; it's not rude at all.)

Here's a quick checklist for putting together a good PR, with details in separate sections below:

* :ref:`pull-request-scope`: Does your PR address a single, self-contained issue?

* :ref:`pull-request-tests`: Are your tests passing? Did you add any necessary tests? Code changes pretty much always require test changes, because if it's worth fixing the code then it's worth adding a test to make sure it stays fixed.

* :ref:`pull-request-formatting`: If you changed Python code, then did you run ``black setup.py trio``? (Or for other packages, replace ``trio`` with the package name.)

* :ref:`pull-request-release-notes`: If your change affects user-visible functionality, then did you add a release note to the ``newsfragments/`` directory?

* :ref:`pull-request-docs`: Did you make any necessary documentation updates?

* License: by submitting a PR to a Trio project, you're offering your changes under that project's license. For most projects, that's dual MIT/Apache 2, except for cookiecutter-trio, which is CC0.

.. _pull-request-scope:

What to put in a PR
~~~~~~~~~~~~~~~~~~~

Each PR should, as much as possible, address just one issue and be self-contained. If you have ten small, unrelated changes, then go ahead and submit ten PRs – it's much easier to review ten small changes than one big change with them all mixed together, and this way if there's some problem with one of the changes it won't hold up all the others.

If you're uncertain about whether a change is a good idea and want some feedback before putting time into it, feel free to ask in an issue or in the chat room. If you have a partial change that you want to get feedback on, feel free to submit it as a PR. (In this case it's traditional to start the PR title with ``[WIP]``, for "work in progress".)

When you are submitting your PR, you can include ``Closes #123``, ``Fixes: #123`` or `some variation `__ in either your commit message or the PR description, in order to automatically close the referenced issue when the PR is merged. This keeps us closer to the desired state where each open issue reflects some work that still needs to be done.

.. _pull-request-tests:

Tests
~~~~~

We use `pytest `__ for testing. To run the tests locally, you should run:
.. code-block:: shell

   cd path/to/trio/checkout/
   pip install -r test-requirements.txt  # possibly using a virtualenv
   pytest trio

This doesn't try to be completely exhaustive – it only checks that things work on your machine, and it may skip some slow tests. But it's a good way to quickly check that things seem to be working, and we'll automatically run the full test suite when your PR is submitted, so you'll have a chance to see and fix any remaining issues then.

Every change should have 100% coverage for both code and tests. But, you can use ``# pragma: no cover`` to mark lines where lack-of-coverage isn't something that we'd want to fix (as opposed to it being merely hard to fix). For example::

   else:  # pragma: no cover
       raise AssertionError("this can't happen!")

We use Codecov to track coverage, because it makes it easy to combine coverage from running in different configurations. Running coverage locally can be useful (``pytest --cov=PACKAGENAME --cov-report=html``), but don't be surprised if you get lower coverage than when looking at Codecov reports, because there are some lines that are only executed on Windows, or macOS, or PyPy, or CPython, or... you get the idea. After you create a PR, Codecov will automatically report back with the coverage, so you can check how you're really doing. (But note that the results can be inaccurate until all the tests are passing. If the tests failed, then fix that before worrying about coverage.)

Some rules for writing good tests:

* `Tests MUST pass deterministically `__. Flaky tests make for miserable developers. One common source of indeterminism is scheduler ordering; if you're having trouble with this, then :mod:`trio.testing` provides powerful tools to help control ordering, like :func:`trio.testing.wait_all_tasks_blocked`, :class:`trio.testing.Sequencer`, and :class:`trio.testing.MockClock` (usually used as a fixture: ``async def test_whatever(autojump_clock): ...``). And if you need more tools than this then we should add them.

* (Trio package only) Slow tests – anything that takes more than about 0.25 seconds – should be marked with ``@slow``. This makes it so they only run if you do ``pytest trio --run-slow``. Our CI scripts do run slow tests, so you can be sure that the code will still be thoroughly tested, and this way you don't have to sit around waiting for a few irrelevant multi-second tests to run while you're iterating on a change locally. You can check for slow tests by passing ``--durations=10`` to pytest. Most tests should take 0.01 seconds or less.

* Speaking of waiting around for tests: Tests should never sleep unless *absolutely* necessary. However, calling :func:`trio.sleep` when using ``autojump_clock`` is fine, because that's not really sleeping, and doesn't waste developers' time waiting for the test to run.

* We like tests to exercise real functionality. For example, if you're adding subprocess spawning functionality, then your tests should spawn at least one process! Sometimes this is tricky – for example, Trio's :class:`KeyboardInterrupt` tests have to jump through quite some hoops to generate real SIGINT signals at the right times to exercise different paths. But it's almost always worth it.

* For cases where real testing isn't relevant or sufficient, we strongly prefer fakes or stubs over mocks (see the sketch after this list). Useful articles:

  * `Test Doubles - Fakes, Mocks and Stubs `__
  * `Mocks aren't stubs `__
  * `Write test doubles you can trust using verified fakes `__
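To make this concrete, here's a minimal sketch of our own (not taken from Trio's actual test suite) of a deterministic test against a fake in-memory transport; it assumes pytest-trio or a similar plugin is set up to collect ``async def`` tests:

.. code-block:: python

   import trio
   import trio.testing


   async def test_echo_over_fake_transport():
       # memory_stream_pair() gives two connected in-memory streams: a fake
       # transport, so no real sockets or OS-level timing are involved.
       client, server = trio.testing.memory_stream_pair()

       async def echo_server():
           async for chunk in server:  # the loop exits cleanly at EOF
               await server.send_all(chunk)

       async with trio.open_nursery() as nursery:
           nursery.start_soon(echo_server)
           await client.send_all(b"ping")
           data = b""
           while len(data) < len(b"ping"):  # reads may come back in pieces
               data += await client.receive_some()
           assert data == b"ping"
           await client.aclose()  # sends EOF, shutting the echo server down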
Most major features have both real tests and tests using fakes or stubs. For example, :class:`~trio.SSLStream` has some tests that use Trio to make a real socket connection to a real SSL server implemented using blocking I/O, because it sure would be embarrassing if that didn't work. And then there are also a bunch of tests that use a fake in-memory transport stream where we have complete control over timing and can make sure all the subtle edge cases work correctly.

Writing reliable tests for obscure corner cases is often harder than implementing a feature in the first place, but stick with it: it's worth it! And don't be afraid to ask for help. Sometimes a fresh pair of eyes can be helpful when trying to come up with devious tricks.

.. _pull-request-formatting:

Code formatting
~~~~~~~~~~~~~~~

Instead of wasting time arguing about code formatting, we use `black `__ to automatically format all our code to a standard style. While you're editing code you can be as sloppy as you like about whitespace; and then before you commit, just run::

   pip install -U black
   black setup.py trio

to fix it up. (And don't worry if you forget – when you submit a pull request then we'll automatically check and remind you.) Hopefully this will let you focus on more important style issues like choosing good names, writing useful comments, and making sure your docstrings are nicely formatted. (black doesn't reformat comments or docstrings.)

Very occasionally, you'll want to override black formatting. To do so, you can add ``# fmt: off`` and ``# fmt: on`` comments.

If you want to see what changes black will make, you can use::

   black --diff setup.py trio

(``--diff`` displays a diff, versus the default mode which fixes files in-place.)

.. _pull-request-release-notes:

Release notes
~~~~~~~~~~~~~

We use `towncrier `__ to manage our `release notes `__. Basically, every pull request that has a user-visible effect should add a short file to the ``newsfragments/`` directory describing the change, with a name like ``<ISSUE NUMBER>.<TYPE>.rst``. See `newsfragments/README.rst `__ for details. This way we can keep a good list of changes as we go, which makes the release manager happy, which means we get more frequent releases, which means your change gets into users' hands faster.

.. _pull-request-commit-messages:

Commit messages
~~~~~~~~~~~~~~~

We don't enforce any particular format on commit messages. In your commit messages, try to give the context to explain *why* a change was made.

The target audience for release notes is users, who want to find out about changes that might affect how they use the library, or who are trying to figure out why something changed after they upgraded. The target audience for commit messages is some hapless developer (think: you in six months... or five years) who is trying to figure out why some code looks the way it does. Including links to issues and any other discussion that led up to the commit is *strongly* recommended.

.. _pull-request-docs:

Documentation
~~~~~~~~~~~~~

We take pride in providing friendly and comprehensive documentation. Documentation is stored in ``docs/source/*.rst`` and is rendered using `Sphinx `__ with the `sphinxcontrib-trio `__ extension. Documentation is hosted at `Read the Docs `__, who take care of automatically rebuilding it after every commit.

For docstrings, we use `the Google docstring format `__. If you add a new function or class, there's no mechanism for automatically adding that to the docs: you'll have to at least add a line like ``.. autofunction:: <your function>`` in the appropriate place.
In many cases it's also nice to add some longer-form narrative documentation around that.

We enable Sphinx's "nitpick mode", which turns dangling references into an error – this helps catch typos. (This will be automatically checked when your PR is submitted.) If you intentionally want to allow a dangling reference, you can add it to the `nitpick_ignore `__ whitelist in ``docs/source/conf.py``.

To build the docs locally, use our handy ``docs-requirements.txt`` file to install all of the required packages (possibly using a virtualenv). After that, build the docs using ``make html`` in the docs directory. The whole process might look something like this::

   cd path/to/project/checkout/
   pip install -r docs-requirements.txt
   cd docs
   make html

You can then browse the docs using Python's builtin http server: run ``python -m http.server 8000 --bind 127.0.0.1 --directory build/html`` and then open ``http://127.0.0.1:8000/`` in your web browser.

.. _joining-the-team:

Joining the team
----------------

After your first PR is merged, you should receive a Github invitation to join the ``python-trio`` organization. If you don't, that's not your fault, it's because we made a mistake on our end. Give us a nudge on chat or `send @njsmith an email `__ and we'll fix it.

It's totally up to you whether you accept or not, and if you do accept, you're welcome to participate as much or as little as you want. We're offering the invitation because we'd love for you to join us in making Python concurrency more friendly and robust, but there's no pressure: life is too short to spend volunteer time on things that you don't find fulfilling.

At this point people tend to have questions.

**How can you trust me with this kind of power? What if I mess everything up?!?** Relax, you got this! And we've got your back. Remember, it's just software, and everything's in version control: worst case we'll just roll things back and brainstorm ways to avoid the issue happening again. We think it's more important to welcome people and help them grow than to worry about the occasional minor mishap.

**I don't think I really deserve this.** It's up to you, but we wouldn't be offering if we didn't think you did.

**What exactly happens if I accept? Does it mean I'll break everything if I click the wrong button?** Concretely, if you accept the invitation, this does three things:

* It lets you manage incoming issues on all of the ``python-trio`` projects by labelling them, closing them, etc.

* It lets you merge pull requests on all of the ``python-trio`` projects by clicking Github's big green "Merge" button, but only if all their tests have passed.

* It automatically subscribes you to notifications on the ``python-trio`` repositories (but you can unsubscribe again if you want through the Github interface)

Note that it does *not* allow you to push changes directly to Github without submitting a PR, and it doesn't let you merge broken PRs – this is enforced through Github's "branch protection" feature, and it applies to everyone from the newest contributor up to the project founder.

**Okay, that's what I CAN do, but what SHOULD I do?** Short answer: whatever you feel comfortable with.

We do have one rule, which is the same one most F/OSS projects use: don't merge your own PRs. We find that having another person look at each PR leads to better quality.

Beyond that, it all comes down to what you feel up to.
If you don't feel like you know enough to review a complex code change, then you don't have to – you can just look it over and make some comments, even if you don't feel up to making the final merge/no-merge decision. Or you can just stick to merging trivial doc fixes and adding tags to issues, that's helpful too. If after hanging around for a while you start to feel like you have a better handle on how things work and want to start doing more, that's excellent; if it doesn't happen, that's fine too.

If at any point you're unsure about whether doing something would be appropriate, feel free to ask. For example, it's *totally OK* if the first time you review a PR, you want someone else to check over your work before you hit the merge button.

The best essay I know about reviewing pull requests is Sage Sharp's `The gentle art of patch review `__. The `node.js guide `__ also has some good suggestions, and `so does this blog post `__.

Managing issues
---------------

As issues come in, they need to be responded to, tracked, and – hopefully! – eventually closed.

As a general rule, each open issue should represent some kind of task that we need to do. Sometimes that task might be "figure out what to do here", or even "figure out whether we want to address this issue"; sometimes it will be "answer this person's question". But if there's no followup to be done, then the issue should be closed.

Issue labels
~~~~~~~~~~~~

The Trio repository in particular uses a number of labels to try and keep track of issues. The current list is somewhat ad hoc, and may or may not remain useful over time – if you think of a new label that would be useful, a better name for an existing label, or think a label has outlived its usefulness, then speak up.

* `good first issue `__: Used to mark issues that are relatively straightforward, and could be good places for a new contributor to start.

* `todo soon `__: This marks issues where there aren't questions left about whether or how to do it, it's just waiting for someone to dig in and do the work.

* `missing piece `__: This generally marks significant self-contained chunks of missing functionality. If you're looking for a more ambitious project to work on, this might be useful.

* `potential API breaker `__: What it says. This is useful because these are issues that we'll want to make sure to review aggressively as Trio starts to stabilize, and certainly before we reach 1.0.

* `design discussion `__: This marks issues where there's significant design questions to be discussed; if you like meaty theoretical debates and discussions of API design, then browsing this might be interesting.

* `polish `__: Marks issues that it'd be nice to resolve eventually, because it's the Right Thing To Do, but it's addressing a kind of edge case thing that isn't necessary for a minimum viable product. Sometimes overlaps with "user happiness".

* `user happiness `__: From the name alone, this could apply to any bug (users certainly are happier when you fix bugs!), but that's not what we mean. This label is used for issues involving places where users stub their toes, or for the kinds of quality-of-life features that leave users surprised and excited – e.g. fancy testing tools that Just Work.

Governance
----------

`Nathaniel J. Smith `__ is the Trio `BDFL `__. If the project grows to the point where we'd benefit from more structure, then we'll figure something out.

..
   Possible references for future additions:

   """
   Jumping into an unfamiliar codebase (or any for that matter) for the first
   time can be scary. Plus, if it's your first time contributing to open
   source, it can even be scarier! But, we at webpack believe:

   Any (even non-technical) individual should feel welcome to contribute.
   However you decide to contribute, it should be fun and enjoyable for you!
   Even after your first commit, you will walk away understanding more about
   webpack or JavaScript. Consequently, you could become a better developer,
   writer, designer, etc. along the way, and we are committed to helping
   foster this growth.
   """

   imposter syndrome disclaimer
   https://github.com/Unidata/MetPy#contributing

   checklist
   https://github.com/nayafia/contributing-template/blob/master/CONTRIBUTING-template.md

   https://medium.com/the-node-js-collection/healthy-open-source-967fa8be7951

   http://sweng.the-davies.net/Home/rustys-api-design-manifesto

Design and internals
====================

.. currentmodule:: trio

Here we'll discuss Trio's overall design and architecture: how it fits together and why we made the decisions we did. If all you want to do is use Trio, then you don't need to read this – though you might find it interesting. The main target audience here is (a) folks who want to read the code and potentially contribute, (b) anyone working on similar libraries who want to understand what we're up to, (c) anyone interested in I/O library design generally.

There are many valid approaches to writing an async I/O library. This is ours.

High-level design principles
----------------------------

Trio's two overriding goals are **usability** and **correctness**: we want to make it *easy* to get things *right*.

Of course there are lots of other things that matter too, like speed, maintainability, etc. We want those too, as much as we can get. But sometimes these things come in conflict, and when that happens, these are our priorities.

In some sense the entire rest of this document is a description of how these play out, but to give a simple example: Trio's ``KeyboardInterrupt`` handling machinery is a bit tricky and hard to test, so it scores poorly on simplicity and maintainability. But we think the usability+correctness gains outweigh this.

There are some subtleties here. Notice that it's specifically "easy to get things right". There are situations (e.g. writing one-off scripts) where the most "usable" tool is the one that will happily ignore errors and keep going no matter what, or that doesn't bother with resource cleanup. (Cf. the success of PHP.) This is a totally valid use case and valid definition of usability, but it's not the one we use: we think it's easier to build reliable and correct systems if exceptions propagate until handled and if the system `catches you when you make potentially dangerous resource handling errors `__, so that's what we optimize for.

It's also worth saying something about speed, since it often looms large in comparisons between I/O libraries. This is a rather subtle and complex topic.

In general, speed is certainly important – but the fact that people sometimes use Python instead of C is a pretty good indicator that usability often trumps speed in practice.
We want to make Trio fast, but it's not an accident that it's left off our list of overriding goals at the top: if necessary we are willing to accept some slowdowns in the service of usability and reliability.

To break things down in more detail:

First of all, there are the cases where speed directly impacts correctness, like when you hit an accidental ``O(N**2)`` algorithm and your program effectively locks up. Trio is very careful to use algorithms and data structures that have good worst-case behavior (even if this might mean sacrificing a few percentage points of speed in the average case).

Similarly, when there's a conflict, we care more about 99th percentile latencies than we do about raw throughput, because insufficient throughput – if it's consistent! – can often be budgeted for and handled with horizontal scaling, but once you lose latency it's gone forever, and latency spikes can easily cross over to become a correctness issue (e.g., an RPC server that responds slowly enough to trigger timeouts is effectively non-functional). Again, of course, this doesn't mean we don't care about throughput – but sometimes engineering requires making trade-offs, especially for early-stage projects that haven't had time to optimize for all use cases yet.

And finally: we care about speed on real-world applications quite a bit, but speed on microbenchmarks is just about our lowest priority. We aren't interested in competing to build "the fastest echo server in the West". I mean, it's nice if it happens or whatever, and microbenchmarks are an invaluable tool for understanding a system's behavior. But if you play that game to win then it's very easy to get yourself into a situation with seriously misaligned incentives, where you have to start compromising on features and correctness in order to get a speedup that's totally irrelevant to real-world applications. In most cases (we suspect) it's the application code that's the bottleneck, and you'll get more of a win out of running the whole app under PyPy than out of any heroic optimizations to the I/O layer. (And this is why Trio *does* place a priority on PyPy compatibility.)

As a matter of tactics, we also note that at this stage in Trio's lifecycle, it'd probably be a mistake to worry about speed too much. It doesn't make sense to spend lots of effort optimizing an API whose semantics are still in flux.

User-level API principles
-------------------------

Basic principles
~~~~~~~~~~~~~~~~

Trio is very much a continuation of the ideas explored in `this blog post `__, and in particular the `principles identified there `__ that make curio easier to use correctly than asyncio. So Trio also adopts these rules, in particular:

* The only form of concurrency is the task.

* Tasks are guaranteed to run to completion.

* Task spawning is always explicit. No callbacks, no implicit concurrency, no futures/deferreds/promises/other APIs that involve callbacks. All APIs are `"causal" `__ except for those that are explicitly used for task spawning.

* Exceptions are used for error handling; ``try``/``finally`` and ``with`` blocks for handling cleanup.

Cancel points and schedule points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The first major place that Trio departs from curio is in its decision to make a much larger fraction of the API use sync functions rather than async functions, and to provide strong conventions about cancel points and schedule points. (At this point, there are a lot of ways that Trio and curio have diverged.
But this was really the origin – the tipping point where I realized that exploring these ideas would require a new library, and couldn't be done inside curio.) The full reasoning here takes some unpacking.

First, some definitions: a *cancel point* is a point where your code checks if it has been cancelled – e.g., due to a timeout having expired – and potentially raises a :exc:`Cancelled` error. A *schedule point* is a point where the current task can potentially be suspended, and another task allowed to run.

In curio, the convention is that all operations that interact with the run loop in any way are syntactically async, and it's undefined which of these operations are cancel/schedule points; users are instructed to assume that any of them *might* be cancel/schedule points, but with a few exceptions there's no guarantee that any of them are unless they actually block. (I.e., whether a given call acts as a cancel/schedule point is allowed to vary across curio versions and also depending on runtime factors like network load.)

But when using an async library, there are good reasons why you need to be aware of cancel and schedule points. They introduce a set of complex and partially conflicting constraints on your code:

You need to make sure that every task passes through a cancel point regularly, because otherwise timeouts become ineffective and your code becomes subject to DoS attacks and other problems. So for correctness, it's important to make sure you have enough cancel points.

But... every cancel point also increases the chance of subtle bugs in your program, because it's a place where you have to be prepared to handle a :exc:`Cancelled` exception and clean up properly. And while we try to make this as easy as possible, these kinds of clean-up paths are notorious for getting missed in testing and harboring subtle bugs. So the more cancel points you have, the harder it is to make sure your code is correct.

Similarly, you need to make sure that every task passes through a schedule point regularly, because otherwise this task could end up hogging the event loop and preventing other code from running, causing a latency spike. So for correctness, it's important to make sure you have enough schedule points.

But... you have to be careful here too, because every schedule point is a point where arbitrary other code could run, and alter your program's state out from under you, introducing classic concurrency bugs. So as you add more schedule points, it `becomes exponentially harder to reason about how your code is interleaved and be sure that it's correct `__.

So an important question for an async I/O library is: how do we help the user manage these trade-offs?

Trio's answer is informed by two further observations:

First, any time a task blocks (e.g., because it does an ``await sock.recv()`` but there's no data available to receive), that has to be a cancel point (because if the I/O never arrives, we need to be able to time out), and it has to be a schedule point (because the whole idea of asynchronous programming is that when one task is waiting we can switch to another task to get something useful done).

And second, a function which sometimes counts as a cancel/schedule point, and sometimes doesn't, is the worst of both worlds: you have put in the effort to make sure your code handles cancellation or interleaving correctly, but you can't count on it to help meet latency requirements.
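To see why the cancel-point half of this matters in practice, here's a small sketch (our illustration, with a hypothetical CPU-bound helper): a timeout around code that never reaches a cancel point simply doesn't fire until the code finishes on its own, while one explicit checkpoint per iteration makes the same deadline effective::

   import trio


   def crunch():
       # Hypothetical CPU-bound helper: it never awaits, so it can never
       # notice a pending cancellation or let another task run.
       return sum(i * i for i in range(5_000_000))


   async def uncancellable():
       with trio.move_on_after(0.1):
           crunch()  # no cancel point inside, so the deadline is ignored
       print("finished long after the deadline passed")


   async def cancellable():
       with trio.move_on_after(0.1):
           for _ in range(1000):
               crunch()
               await trio.sleep(0)  # explicit cancel+schedule point
       print("stopped promptly at the deadline")


   trio.run(uncancellable)
   trio.run(cancellable)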
With all that in mind, Trio takes the following approach:

Rule 1: to reduce the number of concepts to keep track of, we collapse cancel points and schedule points together. Every point that is a cancel point is also a schedule point and vice versa. These are distinct concepts both theoretically and in the actual implementation, but we hide that distinction from the user so that there's only one concept they need to keep track of.

Rule 2: Cancel+schedule points are determined *statically*. A Trio primitive is either *always* a cancel+schedule point, or *never* a cancel+schedule point, regardless of runtime conditions. This is because we want it to be possible to determine whether some code has "enough" cancel/schedule points by reading the source code.

In fact, to make this even simpler, we make it so you don't even have to look at the function arguments: each *function* is either a cancel+schedule point on *every* call or on *no* calls.

(Pragmatic exception: a Trio primitive is not required to act as a cancel+schedule point when it raises an exception, even if it would act as one in the case of a successful return. See `issue 474 `__ for more details; basically, requiring checkpoints on all exception paths added a lot of implementation complexity with negligible user-facing benefit.)

Observation: since blocking is always a cancel+schedule point, rule 2 implies that any function that *sometimes* blocks is *always* a cancel+schedule point.

So that gives us a number of cancel+schedule points: all the functions that can block. Are there any others? Trio's answer is: no. It's easy to add new points explicitly (throw in a ``sleep(0)`` or whatever) but hard to get rid of them when you don't want them. (And this is a real issue – "too many potential cancel points" is definitely a tension `I've felt `__ while trying to build things like task supervisors in curio.) And we expect that most Trio programs will execute potentially-blocking operations "often enough" to produce reasonable behavior.

So, rule 3: the *only* cancel+schedule points are the potentially-blocking operations.

And now that we know where our cancel+schedule points are, there's the question of how to effectively communicate this information to the user. We want some way to mark out a category of functions that might block or trigger a task switch, so that they're clearly distinguished from functions that don't do this. Wouldn't it be nice if there were some Python feature that naturally divided functions into two categories, and maybe put some sort of special syntactic marking on the functions that can do weird things like block and task switch...? What a coincidence, that's exactly how async functions work!

Rule 4: in Trio, only the potentially blocking functions are async. So e.g. :meth:`Event.wait` is async, but :meth:`Event.set` is sync.
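As a concrete reading of rule 4 (a sketch of ours, not code from the Trio source)::

   import trio


   async def main():
       event = trio.Event()

       async def waiter():
           # wait() might block, so it's async -- and therefore it's a
           # cancel+schedule point on every call, even when the event is
           # already set and there's nothing to wait for.
           await event.wait()
           print("got the event")

       async with trio.open_nursery() as nursery:
           nursery.start_soon(waiter)
           # set() can never block, so it's a plain sync method: calling
           # it is never a cancel point and never switches tasks.
           event.set()


   trio.run(main)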
Summing up: out of what's actually a pretty vast space of design possibilities, we declare by fiat that when it comes to Trio primitives, all of these categories are identical:

* async functions
* functions that can, under at least some circumstances, block
* functions where the caller needs to be prepared to handle potential :exc:`Cancelled` exceptions
* functions that are guaranteed to notice any pending cancellation
* functions where you need to be prepared for a potential task switch
* functions that are guaranteed to take care of switching tasks if appropriate

This requires some non-trivial work internally – it actually takes a fair amount of care to make these cancel/schedule categories line up, and there are some shenanigans required to let sync and async APIs both interact with the run loop on an equal footing. But this is all invisible to the user, and we feel that it pays off in terms of usability and correctness.

There is one exception to these rules, for async context managers. Context managers are composed of two operations – enter and exit – and sometimes only one of these is potentially blocking. (Examples: ``async with lock:`` can block when entering but never when exiting; ``async with open_nursery() as ...:`` can block when exiting but never when entering.) But, Python doesn't have "half-asynchronous" context managers: either both operations are async-flavored, or neither is. In Trio we take a pragmatic approach: for this kind of async context manager, we enforce the above rules only on the potentially blocking operation, and the other operation is allowed to be syntactically ``async`` but semantically synchronous. And async context managers should always document which of their operations are schedule+cancel points.

Exceptions always propagate
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Another rule that Trio follows is that *exceptions must always propagate*. This is like the `zen `__ line about "Errors should never pass silently", except that in every other concurrency library for Python (threads, asyncio, curio, ...), it's fairly common to end up with an undeliverable exception, which just gets printed to stderr and then discarded. While we understand the pragmatic constraints that motivated these libraries to adopt this approach, we feel that there are far too many situations where no human will ever look at stderr and notice the problem, and insist that Trio APIs find a way to propagate exceptions "up the stack" – whatever that might mean.

This is often a challenging rule to follow – for example, the call soon code has to jump through some hoops to make it happen – but its most dramatic influence can be seen in Trio's task-spawning interface, where it motivates the use of "nurseries"::

   async def parent():
       async with trio.open_nursery() as nursery:
           # child runs concurrently, but must finish before the
           # ``async with`` block lets parent() return
           nursery.start_soon(child)

(See :ref:`tasks` for full details.)

If you squint you can see the conceptual influence of Erlang's "task linking" and "task tree" ideas here, though the details are different.

This design also turns out to enforce a remarkable, unexpected invariant.

In `the blog post `__ I called out a nice feature of curio's spawning API, which is that since spawning is the only way to break causality, and in curio ``spawn`` is async, sync functions in curio are guaranteed to be causal.
One limitation, though, is that this invariant is actually not very predictive: in curio there are lots of async functions that could spawn off children and violate causality, but most of them don't, and there's no clear marker for the ones that do.

Our API doesn't quite give that guarantee, but actually provides a better one. In Trio:

* Sync functions can't create nurseries, because nurseries require an ``async with``

* Any async function can create a nursery and start new tasks... but creating a nursery *allows task starting but does not permit causality breaking*, because the children have to exit before the function is allowed to return. So we can preserve causality without having to give up concurrency!

* The only way to violate causality (which is an important feature, just one that needs to be handled carefully) is to explicitly create a nursery object in one task and then pass it into another task. And this provides a very clear and precise signal about where the funny stuff is happening – just watch for the nursery object getting passed around.

Introspection, debugging, testing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tools for introspection and debugging are critical to achieving usability and correctness in practice, so they should be first-class considerations in Trio.

Similarly, the availability of powerful testing tools has a huge impact on usability and correctness; we consider testing helpers to be very much in scope for the Trio project.

Specific style guidelines
-------------------------

* As noted above, functions that don't block should be sync-colored, and functions that might block should be async-colored and unconditionally act as cancel+schedule points.

* Any function that takes a callable to run should have a signature like::

     def call_the_thing(fn, *args, kwonly1, kwonly2, ...):
         ...

  where ``fn(*args)`` is the thing to be called, and ``kwonly1``, ``kwonly2``, ... are keyword-only arguments that belong to ``call_the_thing``. This applies even if ``call_the_thing`` doesn't take any arguments of its own, i.e. in this case its signature looks like::

     def call_the_thing(fn, *args):
         ...

  This allows users to skip faffing about with :func:`functools.partial` in most cases, while still providing an unambiguous and extensible way to pass arguments to the callable. (Hat-tip to asyncio, who we stole this convention from.)

* Whenever it makes sense, Trio classes should have a method called ``statistics()`` which returns an immutable object with named fields containing internal statistics about the object that are useful for debugging or introspection (:ref:`examples `).

* Functions or methods whose purpose is to wait for a condition to become true should be called ``wait_<condition>``. This avoids ambiguities like "does ``await readable()`` *check* readability (returning a bool) or *wait for* readability?". Sometimes this leads to the slightly funny looking ``await wait_...``. Sorry. As far as I can tell all the alternatives are worse, and you get used to the convention pretty quick.

* If it's desirable to have both blocking and non-blocking versions of a function, then they look like::

     async def OPERATION(...):
         ...

     def OPERATION_nowait(...):
         ...

  and the ``nowait`` version raises :exc:`trio.WouldBlock` if it would block.

* ...we should, but currently don't, have a solid convention to distinguish between functions that take an async callable and those that take a sync callable. See `issue #68 `__.
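To see how several of these conventions fit together, here's a minimal sketch: a hypothetical ``TicketBox`` class of our own invention (``ParkingLot``, ``checkpoint``, and ``WouldBlock`` are the real Trio pieces it leans on)::

   import trio


   class TicketBox:
       def __init__(self):
           self._tickets = []
           self._lot = trio.lowlevel.ParkingLot()

       def put(self, ticket):
           # Never blocks: sync-colored, and never a checkpoint.
           self._tickets.append(ticket)
           self._lot.unpark()

       def get_nowait(self):
           # Non-blocking variant: raises WouldBlock instead of waiting.
           if not self._tickets:
               raise trio.WouldBlock
           return self._tickets.pop(0)

       async def get(self):
           # Might block: async-colored, and per the rules above it must
           # act as a cancel+schedule point on *every* call, even when a
           # ticket is already available.
           await trio.lowlevel.checkpoint()
           while True:
               try:
                   return self.get_nowait()
               except trio.WouldBlock:
                   await self._lot.park()

       def statistics(self):
           # Introspection data, per the ``statistics()`` convention (a real
           # implementation would return an immutable object with named
           # fields rather than a plain dict).
           return {
               "tickets_waiting": len(self._tickets),
               "tasks_waiting": self._lot.statistics().tasks_waiting,
           }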
A brief tour of Trio's internals
--------------------------------

If you want to understand how Trio is put together internally, then the first thing to know is that there's a very strict internal layering: the ``trio._core`` package is a fully self-contained implementation of the core scheduling/cancellation/IO handling logic, and then the other ``trio.*`` modules are implemented in terms of the API it exposes. (If you want to see what this API looks like, then ``import trio; print(trio._core.__all__)``). Everything exported from ``trio._core`` is *also* exported as part of the ``trio``, ``trio.lowlevel``, or ``trio.testing`` namespaces. (See their respective ``__init__.py`` files for details; there's a test to enforce this.)

Rationale: currently, Trio is a new project in a novel part of the design space, so we don't make any stability guarantees. But the goal is to reach the point where we *can* declare the API stable. It's unlikely that we'll be able to quickly explore all possible corners of the design space and cover all possible types of I/O. So instead, our strategy is to make sure that it's possible for independent packages to add new features on top of Trio. Enforcing the ``trio`` vs ``trio._core`` split is a way of `eating our own dogfood `__: basic functionality like :class:`trio.Lock` and :mod:`trio.socket` is actually implemented solely in terms of public APIs. And the hope is that by doing this, we increase the chances that someone who comes up with a better kind of queue or wants to add some new functionality like, say, file system change watching, will be able to do that on top of our public APIs without having to modify Trio internals.

Inside ``trio._core``
~~~~~~~~~~~~~~~~~~~~~

There are two notable sub-modules that are largely independent of the rest of Trio, and could (possibly should?) be extracted into their own independent packages:

* ``_multierror.py``: Implements :class:`MultiError` and associated infrastructure.

* ``_ki.py``: Implements the core infrastructure for safe handling of :class:`KeyboardInterrupt`.

The most important submodule, where everything is integrated, is ``_run.py``. (This is also by far the largest submodule; it'd be nice to factor bits of it out where possible, but it's tricky because the core functionality genuinely is pretty intertwined.) Notably, this is where cancel scopes, nurseries, and :class:`~trio.lowlevel.Task` are defined; it's also where the scheduler state and :func:`trio.run` live.

The one thing that *isn't* in ``_run.py`` is I/O handling. This is delegated to an ``IOManager`` class, of which there are currently three implementations:

* ``EpollIOManager`` in ``_io_epoll.py`` (used on Linux, illumos)

* ``KqueueIOManager`` in ``_io_kqueue.py`` (used on macOS, \*BSD)

* ``WindowsIOManager`` in ``_io_windows.py`` (used on Windows)

The epoll and kqueue backends take advantage of the epoll and kqueue wrappers in the stdlib :mod:`select` module. The Windows backend uses CFFI to access the Win32 API directly (see ``trio/_core/_windows_cffi.py``). In general, we prefer to go directly to the raw OS functionality rather than use :mod:`selectors`, for several reasons:

* Controlling our own fate: I/O handling is pretty core to what Trio is about, and :mod:`selectors` is (as of 2017-03-01) somewhat buggy (e.g. `issue 29256 `__, `issue 29255 `__). Which isn't a big deal on its own, but since :mod:`selectors` is part of the standard library we can't fix it and ship an updated version; we're stuck with whatever we get.
  We want more control over our users' experience than that.

* Impedance mismatch: the :mod:`selectors` API isn't particularly well-fitted to how we want to use it. For example, kqueue natively treats an interest in readability of some fd as a separate thing from an interest in that same fd's writability, which neatly matches Trio's model. :class:`selectors.KqueueSelector` goes to some effort internally to lump together all interests in a single fd, and to use it we'd then have to jump through more hoops to reverse this. Of course, the native epoll API is fd-centric in the same way as the :mod:`selectors` API so we do still have to write code to jump through these hoops, but the point is that the :mod:`selectors` abstractions aren't providing a lot of extra value.

* (Most important) Access to raw platform capabilities: :mod:`selectors` is highly inadequate on Windows, and even on Unix-like systems it hides a lot of power (e.g. kqueue can do a lot more than just check fd readability/writability!).

The ``IOManager`` layer provides a fairly raw exposure of the capabilities of each system, with public API functions that vary between different backends. (This is somewhat inspired by how :mod:`os` works.) These public APIs are then exported as part of :mod:`trio.lowlevel`, and higher-level APIs like :mod:`trio.socket` abstract over these system-specific APIs to provide a uniform experience.

Currently the choice of backend is made statically at import time, and there is no provision for "pluggable" backends. The intuition here is that we'd rather focus our energy on making one set of solid, official backends that provide a high-quality experience out-of-the-box on all supported systems.

:orphan:

.. _glossary:

********
Glossary
********

.. glossary::

   asynchronous file object
      This is an object with an API identical to a :term:`file object`, with
      the exception that all methods that do I/O are async functions. The
      main ways to create an asynchronous file object are by using the
      :func:`trio.open_file` function or the :meth:`trio.Path.open` method.
      See :ref:`async-file-io` for more details.

Release history
===============

.. currentmodule:: trio

.. towncrier release notes start

Trio 0.19.0 (2021-06-15)
------------------------

Features
~~~~~~~~

- Trio now supports Python 3.10. (`#1921 `__)

- Use slots for :class:`~.lowlevel.Task` which should make them slightly smaller and faster. (`#1927 `__)

- Make :class:`~.Event` more lightweight by using less objects (about 2 rather than 5, including a nested ParkingLot and attribute dicts) and simpler structures (set rather than OrderedDict). This may benefit applications that create a large number of event instances, such as with the "replace event object on every set()" idiom. (`#1948 `__)

Bugfixes
~~~~~~~~

- The event loop now holds on to references of coroutine frames for only the minimum necessary period of time. (`#1864 `__)

- The :class:`~.lowlevel.TrioToken` class can now be used as a target of a weak reference.
  (`#1924 `__)


Trio 0.18.0 (2021-01-11)
------------------------

Features
~~~~~~~~

- Add synchronous ``.close()`` methods and context manager (``with x``) support for `.MemorySendChannel` and `.MemoryReceiveChannel`. (`#1797 `__)

Bugfixes
~~~~~~~~

- Previously, on Windows, Trio programs using thousands of sockets at the same time could trigger extreme slowdowns in the Windows kernel. Now, Trio works around this issue, so you should be able to use as many sockets as you want. (`#1280 `__)

- :func:`trio.from_thread.run` no longer crashes the Trio run if it is executed after the system nursery has been closed but before the run has finished. Calls made at this time will now raise `trio.RunFinishedError`. This fixes a regression introduced in Trio 0.17.0. The window in question is only one scheduler tick long in most cases, but may be longer if async generators need to be cleaned up. (`#1738 `__)

- Fix a crash in pypy-3.7 (`#1765 `__)

- Trio now avoids creating cyclic garbage as often. This should have a minimal impact on most programs, but can slightly reduce how often the cycle collector GC runs on CPython, which can reduce latency spikes. (`#1770 `__)

Deprecations and removals
~~~~~~~~~~~~~~~~~~~~~~~~~

- Remove deprecated ``max_refill_bytes`` from :class:`SSLStream`. (`#959 `__)

- Remove the deprecated ``tiebreaker`` argument to `trio.testing.wait_all_tasks_blocked`. (`#1558 `__)

- Remove the deprecated ``trio.hazmat`` module. (`#1722 `__)

- Stop allowing subclassing public classes. This behavior was deprecated in 0.15.0. (`#1726 `__)


Trio 0.17.0 (2020-09-15)
------------------------

Headline features
~~~~~~~~~~~~~~~~~

- Trio now supports automatic :ref:`async generator finalization `, so more async generators will work even if you don't wrap them in ``async with async_generator.aclosing():`` blocks. Please see the documentation for important caveats; in particular, yielding within a nursery or cancel scope remains unsupported. (`#265 `__)

Features
~~~~~~~~

- `trio.open_tcp_stream` has a new ``local_address=`` keyword argument that can be used on machines with multiple IP addresses to control which IP is used for the outgoing connection. (`#275 `__)

- If you pass a raw IP address into ``sendto``, it no longer spends any time trying to resolve the hostname. If you're using UDP, this should substantially reduce your per-packet overhead. (`#1595 `__)

- `trio.lowlevel.checkpoint` is now much faster. (`#1613 `__)

- We switched to a new, lower-overhead data structure to track upcoming timeouts, which should make your programs faster. (`#1629 `__)

Bugfixes
~~~~~~~~

- On macOS and BSDs, explicitly close our wakeup socketpair when we're done with it. (`#1621 `__)

- Trio can now be imported when `sys.excepthook` is a `functools.partial` instance, which might occur in a ``pytest-qt`` test function. (`#1630 `__)

- The thread cache didn't release its reference to the previous job. (`#1638 `__)

- On Windows, Trio now works around the buggy behavior of certain Layered Service Providers (system components that can intercept network activity) that are built on top of a commercially available library called Komodia Redirector. This benefits users of products such as Astrill VPN and Qustodio parental controls. Previously, Trio would crash on startup when run on a system where such a product was installed.
  (`#1659 `__)

Deprecations and removals
~~~~~~~~~~~~~~~~~~~~~~~~~

- Remove ``wait_socket_*``, ``notify_socket_closing``, ``notify_fd_closing``, ``run_sync_in_worker_thread`` and ``current_default_worker_thread_limiter``. They were deprecated in 0.12.0. (`#1596 `__)

Miscellaneous internal changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- When using :ref:`instruments `, you now only "pay for what you use": if there are no instruments installed that override a particular hook such as :meth:`~trio.abc.Instrument.before_task_step`, then Trio doesn't waste any effort on checking its instruments when the event corresponding to that hook occurs. Previously, installing any instrument would incur all the instrumentation overhead, even for hooks no one was interested in. (`#1340 `__)


Trio 0.16.0 (2020-06-10)
------------------------

Headline features
~~~~~~~~~~~~~~~~~

- If you want to use Trio, but are stuck with some other event loop like Qt or PyGame, then good news: now you can have both. For details, see: :ref:`guest-mode`. (`#399 `__)

Features
~~~~~~~~

- To speed up `trio.to_thread.run_sync`, Trio now caches and re-uses worker threads. And in case you have some exotic use case where you need to spawn threads manually, but want to take advantage of Trio's cache, you can do that using the new `trio.lowlevel.start_thread_soon`. (`#6 `__)

- Tasks spawned with `nursery.start() ` aren't treated as direct children of their nursery until they call ``task_status.started()``. This is visible through the task tree introspection attributes such as `Task.parent_nursery `. Sometimes, though, you want to know where the task is going to wind up, even if it hasn't finished initializing yet. To support this, we added a new attribute `Task.eventual_parent_nursery `. For a task spawned with :meth:`~trio.Nursery.start` that hasn't yet called ``started()``, this is the nursery that the task was nominally started in, where it will be running once it finishes starting up. In all other cases, it is ``None``. (`#1558 `__)

Bugfixes
~~~~~~~~

- Added a helpful error message if an async function is passed to `trio.to_thread.run_sync`. (`#1573 `__)

Deprecations and removals
~~~~~~~~~~~~~~~~~~~~~~~~~

- Remove ``BlockingTrioPortal``: it was deprecated in 0.12.0. (`#1574 `__)

- The ``tiebreaker`` argument to `trio.testing.wait_all_tasks_blocked` has been deprecated. This is a highly obscure feature that was probably never used by anyone except `trio.testing.MockClock`, and `~trio.testing.MockClock` doesn't need it anymore. (`#1587 `__)

- Remove the deprecated ``trio.ssl`` and ``trio.subprocess`` modules. (`#1594 `__)

Miscellaneous internal changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- We refactored `trio.testing.MockClock` so that it no longer needs to run an internal task to manage autojumping. This should be mostly invisible to users, but there is one semantic change: the interaction between `trio.testing.wait_all_tasks_blocked` and the autojump clock was fixed. Now, the autojump will always wait until after all `~trio.testing.wait_all_tasks_blocked` calls have finished before firing, instead of it depending on which threshold values you passed. (`#1587 `__)


Trio 0.15.1 (2020-05-22)
------------------------

Bugfixes
~~~~~~~~

- Fix documentation build. (This must be a new release tag to get readthedocs "stable" to include the changes from 0.15.0.)

- Added a helpful error message if an async function is passed to `trio.from_thread.run_sync` or a sync function to `trio.from_thread.run`.
  (`#1244 `__)


Trio 0.15.0 (2020-05-19)
------------------------

Features
~~~~~~~~

- Previously, when `trio.run_process` was cancelled, it always killed the subprocess immediately. Now, on Unix, it first gives the process a chance to clean up by sending ``SIGTERM``, and only escalates to ``SIGKILL`` if the process is still running after 5 seconds. But if you prefer the old behavior, or want to adjust the timeout, then don't worry: you can now pass a custom ``deliver_cancel=`` argument to define your own process killing policy. (`#1104 `__)

- It turns out that creating a subprocess can block the parent process for a surprisingly long time. So `trio.open_process` now uses a worker thread to avoid blocking the event loop. (`#1109 `__)

- We've added FreeBSD to the list of platforms we support and test on. (`#1118 `__)

- On Linux kernels v5.3 or newer, `trio.Process.wait` now uses `the pidfd API `__ to track child processes. This shouldn't have any user-visible change, but it makes working with subprocesses faster and use less memory. (`#1241 `__)

- The `trio.Process.returncode` attribute is now automatically updated as needed, instead of only when you call `~trio.Process.poll` or `~trio.Process.wait`. Also, ``repr(process_object)`` now always contains up-to-date information about the process status. (`#1315 `__)

Bugfixes
~~~~~~~~

- On Ubuntu systems, the system Python includes a custom unhandled-exception hook to perform `crash reporting `__. Unfortunately, Trio wants to use the same hook to print nice `MultiError` tracebacks, causing a conflict. Previously, Trio would detect the conflict, print a warning, and you just wouldn't get nice `MultiError` tracebacks. Now, Trio has gotten clever enough to integrate its hook with Ubuntu's, so the two systems should Just Work together. (`#1065 `__)

- Fixed an over-strict test that caused failures on Alpine Linux. Started testing against Alpine in CI. (`#1499 `__)

- Calling `open_signal_receiver` with no arguments used to succeed without listening for any signals. This was confusing, so now it raises TypeError instead. (`#1526 `__)

Deprecations and Removals
~~~~~~~~~~~~~~~~~~~~~~~~~

- Remove support for Python 3.5. (`#75 `__)

- It turns out that everyone got confused by the name ``trio.hazmat``. So that name has been deprecated, and the new name is :mod:`trio.lowlevel`. (`#476 `__)

- Most of the public classes that Trio exports – like `trio.Lock`, `trio.SocketStream`, and so on – weren't designed with subclassing in mind. And we've noticed that some users were trying to subclass them anyway, and ending up with fragile code that we're likely to accidentally break in the future, or else be stuck unable to make changes for fear of breaking subclasses. There are also some classes that were explicitly designed to be subclassed, like the ones in ``trio.abc``. Subclassing these is still supported. However, for all other classes, attempts to subclass will now raise a deprecation warning, and in the future will raise an error. If this causes problems for you, feel free to drop by our `chat room `__ or file a bug, to discuss alternatives or make a case for why some particular class should be designed to support subclassing. (`#1044 `__)

- If you want to create a `trio.Process` object, you now have to call `trio.open_process`; calling ``trio.Process()`` directly was deprecated in v0.12.0 and has now been removed. (`#1109 `__)

- Remove ``clear`` method on `trio.Event`: it was deprecated in 0.12.0.
  (`#1498 `__)


Trio 0.14.0 (2020-04-27)
------------------------

Features
~~~~~~~~

- If you're using Trio's low-level interfaces like `trio.hazmat.wait_readable ` or similar, and then you close a socket or file descriptor, you're supposed to call `trio.hazmat.notify_closing ` first so Trio can clean up properly. But what if you forget? In the past, Trio would tend to either deadlock or explode spectacularly. Now, it's much more robust to this situation, and should generally survive. (But note that "survive" is not the same as "give you the results you were expecting", so you should still call `~trio.lowlevel.notify_closing` when appropriate. This is about harm reduction and making it easier to debug this kind of mistake, not something you should rely on.) If you're using higher-level interfaces outside of the `trio.hazmat ` module, then you don't need to worry about any of this; those interfaces already take care of calling `~trio.lowlevel.notify_closing` for you. (`#1272 `__)

Bugfixes
~~~~~~~~

- A bug related to the following methods was introduced in version 0.12.0:

  - `trio.Path.iterdir`
  - `trio.Path.glob`
  - `trio.Path.rglob`

  The iteration of the blocking generators produced by pathlib was performed in the trio thread. With this fix, the previous behavior is restored: the blocking generators are converted into lists in a thread dedicated to blocking IO calls. (`#1308 `__)

Deprecations and Removals
~~~~~~~~~~~~~~~~~~~~~~~~~

- Deprecate Python 3.5 (`#1408 `__)

- Remove ``trio.open_cancel_scope`` which was deprecated in 0.11.0. (`#1458 `__)


Trio 0.13.0 (2019-11-02)
------------------------

Features
~~~~~~~~

- On Windows, the `IOCP subsystem `__ is generally the best way to implement async I/O operations – but it's historically been weak at providing ``select``\-style readiness notifications, like `trio.hazmat.wait_readable ` and `~trio.lowlevel.wait_writable`. We aren't willing to give those up, so previously Trio's Windows backend used a hybrid of ``select`` + IOCP. This was complex, slow, and had `limited scalability `__. Fortunately, we found a way to implement ``wait_*`` with IOCP, so Trio's Windows backend has been completely rewritten, and now uses IOCP exclusively. As a user, the only difference you should notice is that Trio should now be faster on Windows, and can handle many more sockets. This also simplified the code internally, which should allow for more improvements in the future. However, this is somewhat experimental, so if you use Windows then please keep an eye out and let us know if you run into any problems! (`#52 `__)

- Use slots for memory channel state and statistics which should make memory channels slightly smaller and faster. (`#1195 `__)

Bugfixes
~~~~~~~~

- OpenSSL has a bug in its handling of TLS 1.3 session tickets that can cause deadlocks or data loss in some rare edge cases. These edge cases most frequently happen during tests. (Upstream bug reports: `openssl/openssl#7948 `__, `openssl/openssl#7967 `__.) `trio.SSLStream` now works around this issue, so you don't have to worry about it. (`#819 `__)

- Trio now uses `signal.set_wakeup_fd` on all platforms. This is mostly an internal refactoring with no user-visible effect, but in theory it should fix a few extremely-rare race conditions on Unix that could have caused signal delivery to be delayed. (`#109 `__)

- Trio no longer crashes when an async function is implemented in C or Cython and then passed directly to `trio.run` or ``nursery.start_soon``.
(`#550 `__, `#1191 `__) - When a Trio task makes improper use of a non-Trio async library, Trio now causes an exception to be raised within the task at the point of the error, rather than abandoning the task and raising an error in its parent. This improves debuggability and resolves the `TrioInternalError` that would sometimes result from the old strategy. (`#552 `__) - In 0.12.0 we deprecated ``trio.run_sync_in_worker_thread`` in favor of `trio.to_thread.run_sync`. But, the deprecation message listed the wrong name for the replacement. The message now gives the correct name. (`#810 `__) - Fix regression introduced with cancellation changes in 0.12.0, where a `trio.CancelScope` which isn't cancelled could catch a propagating `trio.Cancelled` exception if shielding were changed while the cancellation was propagating. (`#1175 `__) - Fix a crash that could happen when using ``MockClock`` with autojump enabled and a non-zero rate. (`#1190 `__) - If you nest >1000 cancel scopes within each other, Trio now handles that gracefully instead of crashing with a ``RecursionError``. (`#1235 `__) - Fixed the hash behavior of `trio.Path` to match `pathlib.Path`. Previously `trio.Path`'s hash was inherited from `object` instead of from `pathlib.PurePath`. Thus, hashing two `trio.Path`\'s or a `trio.Path` and a `pathlib.Path` with the same underlying path would yield different results. (`#1259 `__) Trio 0.12.1 (2019-08-01) ------------------------ Bugfixes ~~~~~~~~ - In v0.12.0, we accidentally moved ``BlockingTrioPortal`` from ``trio`` to ``trio.hazmat``. It's now been restored to its proper position. (It's still deprecated though, and will issue a warning if you use it.) (`#1167 `__) Trio 0.12.0 (2019-07-31) ------------------------ Features ~~~~~~~~ - If you have a `~trio.abc.ReceiveStream` object, you can now use ``async for data in stream: ...`` instead of calling `~trio.abc.ReceiveStream.receive_some`. Each iteration gives an arbitrary sized chunk of bytes. And the best part is, the loop automatically exits when you reach EOF, so you don't have to check for it yourself anymore. Relatedly, you no longer need to pick a magic buffer size value before calling `~trio.abc.ReceiveStream.receive_some`; you can ``await stream.receive_some()`` with no arguments, and the stream will automatically pick a reasonable size for you. (`#959 `__) - Threading interfaces have been reworked: ``run_sync_in_worker_thread`` is now `trio.to_thread.run_sync`, and instead of ``BlockingTrioPortal``, use `trio.from_thread.run` and `trio.from_thread.run_sync`. What's neat about this is that these cooperate, so if you're in a thread created by `to_thread.run_sync`, it remembers which Trio created it, and you can call ``trio.from_thread.*`` directly without having to pass around a ``BlockingTrioPortal`` object everywhere. (`#810 `__) - We cleaned up the distinction between the "abstract channel interface" and the "memory channel" concrete implementation. `trio.abc.SendChannel` and `trio.abc.ReceiveChannel` have been slimmed down, `trio.MemorySendChannel` and `trio.MemoryReceiveChannel` are now public types that can be used in type hints, and there's a new `trio.abc.Channel` interface for future bidirectional channels. (`#719 `__) - Add :func:`trio.run_process` as a high-level helper for running a process and waiting for it to finish, like the standard :func:`subprocess.run` does. 
(`#822 `__) - On Linux, when wrapping a bare file descriptor in a Trio socket object, Trio now auto-detects the correct ``family``, ``type``, and ``protocol``. This is useful, for example, when implementing `systemd socket activation `__. (`#251 `__) - Trio sockets have a new method `~trio.socket.SocketType.is_readable` that allows you to check whether a socket is readable. This is useful for HTTP/1.1 clients. (`#760 `__) - We no longer use runtime code generation to dispatch core functions like `current_time`. Static analysis tools like mypy and pylint should now be able to recognize and analyze all of Trio's top-level functions (though some class attributes are still dynamic... we're working on it). (`#805 `__) - Add `trio.hazmat.FdStream ` for wrapping a Unix file descriptor as a `~trio.abc.Stream`. (`#829 `__) - Trio now gives a reasonable traceback and error message in most cases when its invariants surrounding cancel scope nesting have been violated. (One common source of such violations is an async generator that yields within a cancel scope.) The previous behavior was an inscrutable chain of TrioInternalErrors. (`#882 `__) - MultiError now defines its ``exceptions`` attribute in ``__init__()`` to better support linters and code autocompletion. (`#1066 `__) - Use ``__slots__`` in more places internally, which should make Trio slightly faster. (`#984 `__) Bugfixes ~~~~~~~~ - Destructor methods (``__del__``) are now protected against ``KeyboardInterrupt``. (`#676 `__) - The :class:`trio.Path` methods :meth:`~trio.Path.glob` and :meth:`~trio.Path.rglob` now return iterables of :class:`trio.Path` (not :class:`pathlib.Path`). (`#917 `__) - Inspecting the :attr:`~trio.CancelScope.cancel_called` attribute of a not-yet-exited cancel scope whose deadline is in the past now always returns ``True``, like you might expect. (Previously it would return ``False`` for not-yet-entered cancel scopes, and for active cancel scopes until the first checkpoint after their deadline expiry.) (`#958 `__) - The :class:`trio.Path` classmethods, :meth:`~trio.Path.home` and :meth:`~trio.Path.cwd`, are now async functions. Previously, a bug in the forwarding logic meant :meth:`~trio.Path.cwd` was synchronous and :meth:`~trio.Path.home` didn't work at all. (`#960 `__) - An exception encapsulated within a :class:`MultiError` doesn't need to be hashable anymore. .. note:: This is only supported if you are running python >= 3.6.4. You can refer to `this github PR `_ for details. (`#1005 `__) Improved Documentation ~~~~~~~~~~~~~~~~~~~~~~ - To help any user reading through Trio's function implementations, start using public names (not _core) whenever possible. (`#1017 `__) Deprecations and Removals ~~~~~~~~~~~~~~~~~~~~~~~~~ - The ``clear`` method on `trio.Event` has been deprecated. (`#637 `__) - ``BlockingTrioPortal`` has been deprecated in favor of the new `trio.from_thread`. (`#810 `__) - ``run_sync_in_worker_thread`` is deprecated in favor of `trio.to_thread.run_sync`. (`#810 `__) - ``current_default_worker_thread_limiter`` is deprecated in favor of `trio.to_thread.current_default_thread_limiter`. (`#810 `__) - Give up on trying to have different low-level waiting APIs on Unix and Windows. All platforms now have `trio.hazmat.wait_readable `, `trio.hazmat.wait_writable `, and `trio.hazmat.notify_closing `. The old platform-specific synonyms ``wait_socket_*``, ``notify_socket_closing``, and ``notify_fd_closing`` have been deprecated. 
(`#878 `__) - It turns out that it's better to treat subprocess spawning as an async operation. Therefore, direct construction of `Process` objects has been deprecated. Use `trio.open_process` instead. (`#1109 `__) Miscellaneous internal changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - The plumbing of Trio's cancellation system has been substantially overhauled to improve performance and ease future planned improvements. Notably, there is no longer any internal concept of a "cancel stack", and checkpoints now take constant time regardless of the cancel scope nesting depth. (`#58 `__) - We've slightly relaxed our definition of which Trio operations act as :ref:`checkpoints `. A Trio async function that exits by throwing an exception is no longer guaranteed to execute a checkpoint; it might or might not. The rules are unchanged for async functions that don't exit with an exception, async iterators, and async context managers. :func:`trio.testing.assert_checkpoints` has been updated to reflect the new behavior: if its ``with`` block exits with an exception, no assertion is made. (`#474 `__) - Calling ``str`` on a :exc:`trio.Cancelled` exception object returns "Cancelled" instead of an empty string. (`#674 `__) - Change the default timeout in :func:`trio.open_tcp_stream` to 0.250 seconds, for consistency with RFC 8305. (`#762 `__) - On win32 we no longer set SO_EXCLUSIVEADDRUSE when binding a socket in :exc:`trio.open_tcp_listeners`. (`#928 `__) - Any attempt to inherit from `CancelScope` or `Nursery` now raises `TypeError`. (Trio has never been able to safely support subclassing here; this change just makes it more obvious.) Also exposed as public classes for type-checking, etc. (`#1021 `__) Trio 0.11.0 (2019-02-09) ------------------------ Features ~~~~~~~~ - Add support for "unbound cancel scopes": you can now construct a :class:`trio.CancelScope` without entering its context, e.g., so you can pass it to another task which will use it to wrap some work that you want to be able to cancel from afar. (`#607 `__) - The test suite now passes with openssl v1.1.1. Unfortunately this required temporarily disabling TLS v1.3 during tests; see openssl bugs `#7948 `__ and `#7967 `__. We believe TLS v1.3 should work in most real use cases, but will be monitoring the situation. (`#817 `__) - Add :attr:`trio.Process.stdio`, which is a :class:`~trio.StapledStream` of :attr:`~trio.Process.stdin` and :attr:`~trio.Process.stdout` if both of those are available, and ``None`` otherwise. This is intended to make it more ergonomic to speak a back-and-forth protocol with a subprocess. (`#862 `__) - :class:`trio.Process` on POSIX systems no longer accepts the error-prone combination of ``shell=False`` with a ``command`` that's a single string, or ``shell=True`` with a ``command`` that's a sequence of strings. These forms are accepted by the underlying :class:`subprocess.Popen` constructor but don't do what most users expect. Also, added an explanation of :ref:`quoting ` to the documentation. (`#863 `__) - Added an internal mechanism for pytest-trio's `Hypothesis `__ integration to make the task scheduler reproducible and avoid flaky tests. 
(`#890 `__) - :class:`~trio.abc.SendChannel`, :class:`~trio.abc.ReceiveChannel`, :class:`~trio.abc.Listener`, and :func:`~trio.open_memory_channel` can now be referenced using a generic type parameter (the type of object sent over the channel or produced by the listener) using PEP 484 syntax: ``trio.abc.SendChannel[bytes]``, ``trio.abc.Listener[trio.SocketStream]``, ``trio.open_memory_channel[MyMessage](5)``, etc. The added type information does not change the runtime semantics, but permits better integration with external static type checkers. (`#908 `__) Bugfixes ~~~~~~~~ - Fixed several bugs in the new Unix subprocess pipe support, where (a) operations on a closed pipe could accidentally affect another unrelated pipe due to internal file-descriptor reuse, (b) in very rare circumstances, two tasks calling ``send_all`` on the same pipe at the same time could end up with intermingled data instead of a :exc:`BusyResourceError`. (`#661 `__) - Stop :func:`trio.open_tcp_listeners` from crashing on systems that have disabled IPv6. (`#853 `__) - Fixed support for multiple tasks calling :meth:`trio.Process.wait` simultaneously; on kqueue platforms it would previously raise an exception. (`#854 `__) - :exc:`trio.Cancelled` exceptions now always propagate until they reach the outermost unshielded cancelled scope, even if more cancellations occur or shielding is changed between when the :exc:`~trio.Cancelled` is delivered and when it is caught. (`#860 `__) - If you have a :class:`SocketStream` that's already been closed, then ``await socket_stream.send_all(b"")`` will now correctly raise :exc:`ClosedResourceError`. (`#874 `__) - Simplified the Windows subprocess pipe ``send_all`` code, and in the process fixed a theoretical bug where closing a pipe at just the wrong time could produce errors or cause data to be redirected to the wrong pipe. (`#883 `__) Deprecations and Removals ~~~~~~~~~~~~~~~~~~~~~~~~~ - Deprecate ``trio.open_cancel_scope`` in favor of :class:`trio.CancelScope`, which more clearly reflects that creating a cancel scope is just an ordinary object construction and does not need to be immediately paired with entering it. (`#607 `__) - The submodules ``trio.ssl`` and ``trio.subprocess`` are now deprecated. Their nontrivial contents (:class:`~trio.Process`, :class:`~trio.SSLStream`, and :class:`~trio.SSLListener`) have been moved to the main :mod:`trio` namespace. For the numerous constants, exceptions, and other helpers that were previously reexported from the standard :mod:`ssl` and :mod:`subprocess` modules, you should now use those modules directly. (`#852 `__) - Remove all the APIs deprecated in 0.9.0 or earlier (``trio.Queue``, ``trio.catch_signals()``, ``trio.BrokenStreamError``, and ``trio.ResourceBusyError``), except for ``trio.hazmat.UnboundedQueue``, which stays for now since it is used by the obscure lowlevel functions ``monitor_completion_queue()`` and ``monitor_kevent()``. (`#918 `__) Miscellaneous internal changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Entering a cancel scope whose deadline is in the past now immediately cancels it, so :exc:`~trio.Cancelled` will be raised by the first checkpoint in the cancel scope rather than the second one. This also affects constructs like ``with trio.move_on_after(0):``. (`#320 `__) Trio 0.10.0 (2019-01-07) ------------------------ Features ~~~~~~~~ - Initial :ref:`subprocess support `. 
Add :class:`trio.subprocess.Process `, an async wrapper around the stdlib :class:`subprocess.Popen` class, which permits spawning subprocesses and communicating with them over standard Trio streams. ``trio.subprocess`` also reexports all the stdlib :mod:`subprocess` exceptions and constants for convenience. (`#4 `__) - You can now create an unbounded :class:`CapacityLimiter` by initializing with `math.inf` (`#618 `__) - New :mod:`trio.hazmat ` features to allow cleanly switching live coroutine objects between Trio and other coroutine runners. Frankly, we're not even sure this is a good idea, but we want to `try it out in trio-asyncio `__, so here we are. For details see :ref:`live-coroutine-handoff`. (`#649 `__) Bugfixes ~~~~~~~~ - Fixed a race condition on macOS, where Trio's TCP listener would crash if an incoming TCP connection was closed before the listener had a chance to accept it. (`#609 `__) - :func:`trio.open_tcp_stream()` has been refactored to clean up unsuccessful connection attempts more reliably. (`#809 `__) Deprecations and Removals ~~~~~~~~~~~~~~~~~~~~~~~~~ - Remove the APIs deprecated in 0.5.0. (``ClosedStreamError``, ``ClosedListenerError``, ``Result``) (`#812 `__) Miscellaneous internal changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - There are a number of methods on :class:`trio.ssl.SSLStream ` that report information about the negotiated TLS connection, like ``selected_alpn_protocol``, and thus cannot succeed until after the handshake has been performed. Previously, we returned None from these methods, like the stdlib :mod:`ssl` module does, but this is confusing, because that can also be a valid return value. Now we raise :exc:`trio.ssl.NeedHandshakeError ` instead. (`#735 `__) Trio 0.9.0 (2018-10-12) ----------------------- Features ~~~~~~~~ - New and improved APIs for inter-task communication: :class:`trio.abc.SendChannel`, :class:`trio.abc.ReceiveChannel`, and :func:`trio.open_memory_channel` (which replaces ``trio.Queue``). This interface uses separate "sender" and "receiver" objects, for consistency with other communication interfaces like :class:`~trio.abc.Stream`. Also, the two objects can now be closed individually, making it much easier to gracefully shut down a channel. Also, check out the nifty ``clone`` API to make it easy to manage shutdown in multiple-producer/multiple-consumer scenarios. Also, the API has been written to allow for future channel implementations that send objects across process boundaries. Also, it supports unbounded buffering if you really need it. Also, help I can't stop writing also. See :ref:`channels` for more details. (`#497 `__) Deprecations and Removals ~~~~~~~~~~~~~~~~~~~~~~~~~ - ``trio.Queue`` and ``trio.hazmat.UnboundedQueue`` have been deprecated, in favor of :func:`trio.open_memory_channel`. (`#497 `__) Trio 0.8.0 (2018-10-01) ----------------------- Features ~~~~~~~~ - Trio's default internal clock is now based on :func:`time.perf_counter` instead of :func:`time.monotonic`. This makes time-keeping more precise on Windows, and has no effect on other platforms. (`#33 `__) - Reworked :mod:`trio`, :mod:`trio.testing`, and :mod:`trio.socket` namespace construction, making them more understandable by static analysis tools. This should improve tab completion in editors, reduce false positives from pylint, and is a first step towards providing type hints. 
(`#542 `__)


Deprecations and Removals
~~~~~~~~~~~~~~~~~~~~~~~~~

- ``ResourceBusyError`` is now a deprecated alias for the new
  :exc:`BusyResourceError`, and ``BrokenStreamError`` is a deprecated alias
  for the new :exc:`BrokenResourceError`. (`#620 `__)


Trio 0.7.0 (2018-09-03)
-----------------------

Features
~~~~~~~~

- The length of typical exception traces coming from Trio has been greatly
  reduced. This was done by eliminating many of the exception frames
  related to details of the implementation. For examples, see the
  `blog post `__. (`#56 `__)

- New and improved signal catching API: :func:`open_signal_receiver`.
  (`#354 `__)

- The low-level functions ``trio.hazmat.wait_socket_readable``,
  ``wait_socket_writable``, and ``notify_socket_close`` now work on bare
  socket descriptors, instead of requiring a :func:`socket.socket` object.
  (`#400 `__)

- If you're using :func:`trio.hazmat.wait_task_rescheduled ` and other
  low-level routines to implement a new sleeping primitive, you can now use
  the new :data:`trio.hazmat.Task.custom_sleep_data ` attribute to pass
  arbitrary data between the sleeping task, abort function, and waking
  task. (`#616 `__)


Bugfixes
~~~~~~~~

- Prevent crashes when used with Sentry (raven-python).
  (`#599 `__)

- The nursery context manager was rewritten to avoid use of
  ``@asynccontextmanager`` and ``@async_generator``. This reduces
  extraneous frames in exception traces and addresses bugs regarding
  `StopIteration` and `StopAsyncIteration` exceptions not propagating
  correctly. (`#612 `__)

- Updates the formatting of exception messages raised by
  :func:`trio.open_tcp_stream` to correctly handle a hostname passed in as
  bytes, by converting the hostname to a string. (`#633 `__)


Deprecations and Removals
~~~~~~~~~~~~~~~~~~~~~~~~~

- ``trio.catch_signals`` has been deprecated in favor of
  :func:`open_signal_receiver`. The main differences are: it now takes
  \*-args to specify the list of signals (so
  ``open_signal_receiver(SIGINT)`` instead of ``catch_signals({SIGINT})``),
  and the async iterator now yields individual signals instead of
  "batches". (`#354 `__)

- Remove all the APIs deprecated in 0.3.0 and 0.4.0. (`#623 `__)


Trio 0.6.0 (2018-08-13)
-----------------------

Features
~~~~~~~~

- Add :func:`trio.hazmat.WaitForSingleObject ` async function to await
  Windows handles. (`#233 `__)

- The `sniffio `__ library can now detect when Trio is running.
  (`#572 `__)


Bugfixes
~~~~~~~~

- Make trio.socket._SocketType.connect *always* close the socket on
  cancellation. (`#247 `__)

- Fix a memory leak in :class:`trio.CapacityLimiter` that could occur when
  ``acquire`` or ``acquire_on_behalf_of`` was cancelled.
  (`#548 `__)

- Some versions of macOS have a buggy ``getaddrinfo`` that was causing
  spurious test failures; we now detect those systems and skip the relevant
  test when found. (`#580 `__)

- Prevent crashes when used with Sentry (raven-python).
  (`#599 `__)


Trio 0.5.0 (2018-07-20)
-----------------------

Features
~~~~~~~~

- Suppose one task is blocked trying to use a resource – for example,
  reading from a socket – and while it's doing this, another task closes
  the resource. Previously, this produced undefined behavior. Now, closing
  a resource causes pending operations on that resource to terminate
  immediately with a :exc:`ClosedResourceError`. ``ClosedStreamError`` and
  ``ClosedListenerError`` are now aliases for :exc:`ClosedResourceError`,
  and deprecated. For this to work, Trio needs to know when a resource has
  been closed.
To facilitate this, new functions have been added: ``trio.hazmat.notify_fd_close`` and ``trio.hazmat.notify_socket_close``. If you're using Trio's built-in wrappers like :class:`~trio.SocketStream` or :mod:`trio.socket`, then you don't need to worry about this, but if you're using the low-level functions like :func:`trio.hazmat.wait_readable `, you should make sure to call these functions at appropriate times. (`#36 `__) - Tasks created by :func:`~trio.lowlevel.spawn_system_task` now no longer inherit the creator's :mod:`contextvars` context, instead using one created at :func:`~trio.run`. (`#289 `__) - Add support for ``trio.Queue`` with ``capacity=0``. Queue's implementation is also faster now. (`#473 `__) - Switch to using standalone `Outcome `__ library for Result objects. (`#494 `__) Deprecations and Removals ~~~~~~~~~~~~~~~~~~~~~~~~~ - ``trio.hazmat.Result``, ``trio.hazmat.Value`` and ``trio.hazmat.Error`` have been replaced by the equivalent classes in the `Outcome `__ library. Trio 0.4.0 (2018-04-10) ----------------------- Features ~~~~~~~~ - Add unix client socket support. (`#401 `__) - Add support for :mod:`contextvars` (see :ref:`task-local storage `), and add :class:`trio.hazmat.RunVar ` as a similar API for run-local variables. Deprecate ``trio.TaskLocal`` and ``trio.hazmat.RunLocal`` in favor of these new APIs. (`#420 `__) - Add :func:`trio.hazmat.current_root_task ` to get the root task. (`#452 `__) Bugfixes ~~~~~~~~ - Fix KeyboardInterrupt handling when threading state has been modified by a 3rd-party library. (`#461 `__) Deprecations and Removals ~~~~~~~~~~~~~~~~~~~~~~~~~ - Attempting to explicitly raise :exc:`trio.Cancelled` will cause a :exc:`RuntimeError`. :meth:`cancel_scope.cancel() ` should be used instead. (`#342 `__) Miscellaneous internal changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Simplify implementation of primitive traps like :func:`~trio.lowlevel.wait_task_rescheduled` (`#395 `__) Trio 0.3.0 (2017-12-28) ----------------------- Features ~~~~~~~~ - **Simplified nurseries**: In Trio, the rule used to be that "parenting is a full time job", meaning that after a task opened a nursery and spawned some children into it, it had to immediately block in ``__aexit__`` to supervise the new children, or else exception propagation wouldn't work. Also there was some elaborate machinery to let you replace this supervision logic with your own custom supervision logic. Thanks to new advances in task-rearing technology, **parenting is no longer a full time job!** Now the supervision happens automatically in the background, and essentially the body of a ``async with trio.open_nursery()`` block acts just like a task running inside the nursery. This is important: it makes it possible for libraries to abstract over nursery creation. For example, if you have a Websocket library that needs to run a background task to handle Websocket pings, you can now do that with ``async with open_websocket(...) as ws: ...``, and that can run a task in the background without your users having to worry about parenting it. And don't worry, you can still make custom supervisors; it turned out all that spiffy machinery was actually redundant and didn't provide much value. (`#136 `__) - Trio socket methods like ``bind`` and ``connect`` no longer require "pre-resolved" numeric addresses; you can now pass regular hostnames and Trio will implicitly resolve them for you. (`#377 `__) Bugfixes ~~~~~~~~ - Fixed some corner cases in Trio socket method implicit name resolution to better match stdlib behavior. 
  Example: ``sock.bind(("", port))`` now binds to the wildcard address
  instead of raising an error. (`#277 `__)


Deprecations and Removals
~~~~~~~~~~~~~~~~~~~~~~~~~

- Removed everything that was deprecated in 0.2.0; see the 0.2.0 release
  notes below for details.

- As was foretold in the v0.2.0 release notes, the ``bind`` method on Trio
  sockets is now async. Please update your calls or – better yet – switch
  to our shiny new high-level networking API, like :func:`serve_tcp`.
  (`#241 `__)

- The ``resolve_local_address`` and ``resolve_remote_address`` methods on
  Trio sockets have been deprecated; these are unnecessary now that you can
  just pass your hostnames directly to the socket methods you want to use.
  (`#377 `__)


Trio 0.2.0 (2017-12-06)
-----------------------

Trio 0.2.0 contains changes from 14 contributors, and brings major new
features and bug fixes, as well as a number of deprecations and a very
small number of backwards-incompatible changes. We anticipate that these
should be easy to adapt to, but make sure to read about them below, and if
you're using Trio then remember to `read and subscribe to issue #1 `__.


Highlights
~~~~~~~~~~

* Added a comprehensive API for async filesystem I/O: see
  :ref:`async-file-io` (`gh-20 `__)

* The new nursery :meth:`~Nursery.start` method makes it easy to perform
  controlled start-up of long-running tasks. For example, given an
  appropriate ``http_server_on_random_open_port`` function, you could
  write::

      port = await nursery.start(http_server_on_random_open_port)

  and this would start the server running in the background in the
  nursery, and then give you back the random port it selected – but not
  until it had finished initializing and was ready to accept requests!

* Added a :ref:`new abstract API for byte streams `, and
  :mod:`trio.testing` gained helpers for creating fake streams for
  :ref:`testing your protocol implementation ` and checking that your
  custom stream implementation :ref:`follows the stream contract `.

* If you're currently using :mod:`trio.socket` then you should
  :ref:`switch to using our new high-level networking API instead `. It
  takes care of many tiresome details, it's fully integrated with the
  abstract stream API, and it provides niceties like a state-of-the-art
  `Happy Eyeballs implementation `__ in :func:`open_tcp_stream` and server
  helpers that integrate with ``nursery.start``.

* We've also added comprehensive support for SSL/TLS encryption, including
  SNI (both client and server side), STARTTLS, renegotiation during
  full-duplex usage (subject to OpenSSL limitations), and applying
  encryption to arbitrary :class:`~trio.abc.Stream`\s, which allows for
  interesting applications like `TLS-over-TLS `__. See:
  :func:`trio.open_ssl_over_tcp_stream`, :func:`trio.serve_ssl_over_tcp`,
  :func:`trio.open_ssl_over_tcp_listeners`, and ``trio.ssl``. Interesting
  fact: the test suite for ``trio.ssl`` has so far found bugs in CPython's
  ssl module, PyPy's ssl module, PyOpenSSL, and OpenSSL. (``trio.ssl``
  doesn't use PyOpenSSL.) Trio's test suite is fairly thorough.

* You know thread-local storage? Well, Trio now has an equivalent:
  :ref:`task-local storage `. There's also the related, but more obscure,
  run-local storage; see :class:`~trio.lowlevel.RunLocal`. (`#2 `__)

* Added a new :ref:`guide for contributors `.


Breaking changes and deprecations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Trio is a young and ambitious project, but it also aims to become a
stable, production-quality foundation for async I/O in Python.
Therefore, our approach for now is to provide deprecation warnings where-ever possible, but on a fairly aggressive cycle as we push towards stability. If you use Trio you should `read and subscribe to issue #1 `__. We'd also welcome feedback on how this approach is working, whether our deprecation warnings could be more helpful, or anything else. The tl;dr is: stop using ``socket.bind`` if you can, and then fix everything your test suite warns you about. Upcoming breaking changes without warnings (i.e., stuff that works in 0.2.0, but won't work in 0.3.0): * In the next release, the ``bind`` method on Trio socket objects will become async (`#241 `__). Unfortunately, there's no good way to provide a warning here. We recommend switching to the new highlevel networking APIs like :func:`serve_tcp`, which will insulate you from this change. Breaking changes (i.e., stuff that could theoretically break a program that worked on 0.1.0): * :mod:`trio.socket` no longer attempts to normalize or modernize socket options across different platforms. The high-level networking API now handles that, freeing :mod:`trio.socket` to focus on giving you raw, unadulterated BSD sockets. * When a socket ``sendall`` call was cancelled, it used to attach some metadata to the exception reporting how much data was actually sent. It no longer does this, because in common configurations like an :class:`~trio.SSLStream` wrapped around a :class:`~trio.SocketStream` it becomes ambiguous which "level" the partial metadata applies to, leading to confusion and bugs. There is no longer any way to tell how much data was sent after a ``sendall`` is cancelled. * The :func:`trio.socket.getprotobyname` function is now async, like it should have been all along. I doubt anyone will ever use it, but that's no reason not to get the details right. * The :mod:`trio.socket` functions ``getservbyport``, ``getservbyname``, and ``getfqdn`` have been removed, because they were obscure, buggy, and obsolete. Use :func:`~trio.socket.getaddrinfo` instead. Upcoming breaking changes with warnings (i.e., stuff that in 0.2.0 *will* work but will print loud complaints, and that won't work in 0.3.0): * For consistency with the new ``start`` method, the nursery ``spawn`` method is being renamed to ``start_soon`` (`#284 `__) * ``trio.socket.sendall`` is deprecated; use ``trio.open_tcp_stream`` and ``SocketStream.send_all`` instead (`#291 `__) * Trio now consistently uses ``run`` for functions that take and run an async function (like :func:`trio.run`!), and ``run_sync`` for functions that take and run a synchronous function. As part of this: * ``run_in_worker_thread`` is becoming ``run_sync_in_worker_thread`` * We took the opportunity to refactor ``run_in_trio_thread`` and ``await_in_trio_thread`` into the new class ``trio.BlockingTrioPortal`` * The hazmat function ``current_call_soon_thread_and_signal_safe`` is being replaced by :class:`trio.hazmat.TrioToken ` See `#68 `__ for details. * ``trio.Queue``\'s ``join`` and ``task_done`` methods are deprecated without replacement (`#321 `__) * Trio 0.1.0 provided a set of built-in mechanisms for waiting for and tracking the result of individual tasks. 
  We haven't yet found any cases where using this actually led to simpler
  code, though, and this feature is blocking useful improvements, so the
  following are being deprecated without replacement:

  * ``nursery.zombies``
  * ``nursery.monitor``
  * ``nursery.reap``
  * ``nursery.reap_and_unwrap``
  * ``task.result``
  * ``task.add_monitor``
  * ``task.discard_monitor``
  * ``task.wait``

  This also lets us move a number of lower-level features out of the main
  :mod:`trio` namespace and into :mod:`trio.hazmat `:

  * ``trio.Task`` → :class:`trio.hazmat.Task `
  * ``trio.current_task`` → :func:`trio.hazmat.current_task `
  * ``trio.Result`` → ``trio.hazmat.Result``
  * ``trio.Value`` → ``trio.hazmat.Value``
  * ``trio.Error`` → ``trio.hazmat.Error``
  * ``trio.UnboundedQueue`` → ``trio.hazmat.UnboundedQueue``

  In addition, several introspection attributes are being renamed:

  * ``nursery.children`` → ``nursery.child_tasks``
  * ``task.parent_task`` → use ``task.parent_nursery.parent_task`` instead

  See `#136 `__ for more details.

* To consolidate introspection functionality in :mod:`trio.hazmat `, the
  following functions are moving:

  * ``trio.current_clock`` → :func:`trio.hazmat.current_clock `
  * ``trio.current_statistics`` → :func:`trio.hazmat.current_statistics `

  See `#317 `__ for more details.

* It was decided that 0.1.0's "yield point" terminology was confusing; we
  now use :ref:`"checkpoint" ` instead. As part of this, the following
  functions in :mod:`trio.hazmat ` are changing names:

  * ``yield_briefly`` → :func:`~trio.hazmat.checkpoint `
  * ``yield_briefly_no_cancel`` →
    :func:`~trio.lowlevel.cancel_shielded_checkpoint`
  * ``yield_if_cancelled`` → :func:`~trio.lowlevel.checkpoint_if_cancelled`
  * ``yield_indefinitely`` → :func:`~trio.lowlevel.wait_task_rescheduled`

  In addition, the following functions in :mod:`trio.testing` are changing
  names:

  * ``assert_yields`` → :func:`~trio.testing.assert_checkpoints`
  * ``assert_no_yields`` → :func:`~trio.testing.assert_no_checkpoints`

  See `#157 `__ for more details.

* ``trio.format_exception`` is deprecated; use
  :func:`traceback.format_exception` instead (`#347 `__).

* ``trio.current_instruments`` is deprecated. For adding or removing
  instrumentation at run-time, see :func:`trio.hazmat.add_instrument ` and
  :func:`trio.hazmat.remove_instrument ` (`#257 `__)

Unfortunately, a limitation in PyPy3 5.8 breaks our deprecation handling
for some renames. (Attempting to use the old names will give an unhelpful
error instead of a helpful warning.) This does not affect CPython, or
PyPy3 5.9+.


Other changes
~~~~~~~~~~~~~

* ``run_sync_in_worker_thread`` now has a :ref:`robust mechanism for
  applying capacity limits to the number of concurrent threads `
  (`#10 `__, `#57 `__, `#156 `__)

* New support for tests to cleanly hook hostname lookup and socket
  operations: see :ref:`virtual-network-hooks`. In addition,
  ``trio.socket.SocketType`` is now an empty abstract base class, with the
  actual socket class made private. This shouldn't affect anyone, since the
  only thing you could directly use it for in the first place was
  ``isinstance`` checks, and those still work. (`#170 `__)

* New class :class:`StrictFIFOLock`

* New exception ``ResourceBusyError``

* The :class:`trio.hazmat.ParkingLot ` class (which is used to implement
  many of Trio's synchronization primitives) was rewritten to be simpler
  and faster (`#272 `__, `#287 `__)

* It's generally true that if you're using Trio you have to use Trio
  functions, if you're using asyncio you have to use asyncio functions,
  and so forth.
(See the discussion of the "async sandwich" in the Trio tutorial for more details.) So for example, this isn't going to work:: async def main(): # asyncio here await asyncio.sleep(1) # trio here trio.run(main) Trio now reliably detects if you accidentally do something like this, and gives a helpful error message. * Trio now also has special error messages for several other common errors, like doing ``trio.run(some_func())`` (should be ``trio.run(some_func)``). * :mod:`trio.socket` now handles non-ascii domain names using the modern IDNA 2008 standard instead of the obsolete IDNA 2003 standard (`#11 `__) * When an :class:`~trio.abc.Instrument` raises an unexpected error, we now route it through the :mod:`logging` module instead of printing it directly to stderr. Normally this produces exactly the same effect, but this way it's more configurable. (`#306 `__) * Fixed a minor race condition in IOCP thread shutdown on Windows (`#81 `__) * Control-C handling on Windows now uses :func:`signal.set_wakeup_fd` and should be more reliable (`#42 `__) * :func:`trio.run` takes a new keyword argument ``restrict_keyboard_interrupt_to_checkpoints`` * New attributes allow more detailed introspection of the task tree: ``nursery.child_tasks``, ``Task.child_nurseries``, ``nursery.parent_task``, ``Task.parent_nursery`` * :func:`trio.testing.wait_all_tasks_blocked` now takes a ``tiebreaker=`` argument. The main use is to allow :class:`~trio.testing.MockClock`\'s auto-jump functionality to avoid interfering with direct use of :func:`~trio.testing.wait_all_tasks_blocked` in the same test. * :meth:`MultiError.catch` now correctly preserves ``__context__``, despite Python's best attempts to stop us (`#165 `__) * It is now possible to take weakrefs to :class:`Lock` and many other classes (`#331 `__) * Fix ``sock.accept()`` for IPv6 sockets (`#164 `__) * PyCharm (and hopefully other IDEs) can now offer better completions for the :mod:`trio` and :mod:`trio.hazmat ` modules (`#314 `__) * Trio now uses `yapf `__ to standardize formatting across the source tree, so we never have to think about whitespace again. * Many documentation improvements Trio 0.1.0 (2017-03-10) ----------------------- * Initial release. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/index.rst0000644000076500000000000000677500000000000016230 0ustar00quentinwheel.. Trio documentation master file, created by sphinx-quickstart on Sat Jan 21 19:11:14 2017. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. ============================================================= Trio: a friendly Python library for async concurrency and I/O ============================================================= The Trio project's goal is to produce a production-quality, `permissively licensed `__, async/await-native I/O library for Python. Like all async libraries, its main purpose is to help you write programs that do **multiple things at the same time** with **parallelized I/O**. A web spider that wants to fetch lots of pages in parallel, a web server that needs to juggle lots of downloads and websocket connections at the same time, a process supervisor monitoring multiple subprocesses... that sort of thing. Compared to other libraries, Trio attempts to distinguish itself with an obsessive focus on **usability** and **correctness**. Concurrency is complicated; we try to make it *easy* to get things *right*. 
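For a quick taste of what that looks like, here's a minimal sketch (using
only the core APIs documented in this manual) that runs two tasks
concurrently and waits for both to finish::

    import trio

    async def child(name, seconds):
        print(f"{name}: started")
        await trio.sleep(seconds)
        print(f"{name}: finished")

    async def main():
        # A nursery runs its child tasks concurrently, and the
        # 'async with' block doesn't exit until they've all finished.
        async with trio.open_nursery() as nursery:
            nursery.start_soon(child, "task 1", 1)
            nursery.start_soon(child, "task 2", 2)

    trio.run(main)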
Trio was built from the ground up to take advantage of the `latest Python
features `__, and draws inspiration from `many sources `__, in particular
Dave Beazley's `Curio `__. The resulting design is radically simpler than
older competitors like `asyncio `__ and `Twisted `__, yet just as capable.
Trio is the Python I/O library I always wanted; I find it makes building
I/O-oriented programs easier, less error-prone, and just plain more fun.
Perhaps you'll find the same.

This project is young and still somewhat experimental: the overall design
is solid and the existing features are fully tested and documented, but
you may encounter missing functionality or rough edges. We *do* encourage
you to use it, but you should `read and subscribe to issue #1 `__ to get
warning of, and a chance to give feedback about, any compatibility-breaking
changes.

Vital statistics:

* Supported environments: We test on

  - Python: 3.6+ (CPython and PyPy)
  - Windows, macOS, Linux (glibc and musl), FreeBSD

  Other environments might also work; give it a try and see.

* Install: ``python3 -m pip install -U trio`` (or on Windows, maybe
  ``py -3 -m pip install -U trio``). No compiler needed.

* Tutorial and reference manual: https://trio.readthedocs.io

* Bug tracker and source code: https://github.com/python-trio/trio

* Real-time chat: https://gitter.im/python-trio/general

* Discussion forum: https://trio.discourse.group

* License: MIT or Apache 2, your choice

* Contributor guide: https://trio.readthedocs.io/en/latest/contributing.html

* Code of conduct: Contributors are requested to follow our `code of
  conduct `_ in all project spaces.

.. toctree::
   :maxdepth: 2
   :caption: Trio's friendly, yet comprehensive, manual:

   tutorial.rst
   awesome-trio-libraries.rst
   reference-core.rst
   reference-io.rst
   reference-testing.rst
   reference-lowlevel.rst
   design.rst
   history.rst
   contributing.rst
   releasing.rst
   code-of-conduct.rst

====================
 Indices and tables
====================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
* :ref:`glossary`

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/local_customization.py0000644000076500000000000000120600000000000021003 0ustar00quentinwheel

from docutils.parsers.rst import directives
from sphinx import addnodes
from sphinx.domains.python import PyClasslike
from sphinx.ext.autodoc import (
    FunctionDocumenter,
    MethodDocumenter,
    ClassLevelDocumenter,
    Options,
)

"""
.. interface:: The nursery interface

   .. attribute:: blahblah
"""


class Interface(PyClasslike):
    def handle_signature(self, sig, signode):
        signode += addnodes.desc_name(sig, sig)
        return sig, ""

    def get_index_text(self, modname, name_cls):
        return "{} (interface in {})".format(name_cls[0], modname)


def setup(app):
    app.add_directive_to_domain("py", "interface", Interface)

././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1623751952.4803247 trio-0.19.0/docs/source/reference-core/0000755000076500000000000000000000000000000017234 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-core/channels-backpressure.py0000644000076500000000000000163400000000000024074 0ustar00quentinwheel

# Simulate a producer that generates values 10x faster than the
# consumer can handle them.
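# (A note on this example: because main() below opens the channel with
# math.inf as its buffer size, send() never blocks, so nothing ever slows
# the producer down -- buffered messages just pile up without bound. This
# is the failure mode that bounded buffers are designed to prevent.)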
import trio import math async def producer(send_channel): count = 0 while True: # Pretend that we have to do some work to create this message, and it # takes 0.1 seconds: await trio.sleep(0.1) await send_channel.send(count) print("Sent message:", count) count += 1 async def consumer(receive_channel): async for value in receive_channel: print("Received message:", value) # Pretend that we have to do some work to handle this message, and it # takes 1 second await trio.sleep(1) async def main(): send_channel, receive_channel = trio.open_memory_channel(math.inf) async with trio.open_nursery() as nursery: nursery.start_soon(producer, send_channel) nursery.start_soon(consumer, receive_channel) trio.run(main) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-core/channels-mpmc-broken.py0000644000076500000000000000204700000000000023614 0ustar00quentinwheel# This example usually crashes! import trio import random async def main(): async with trio.open_nursery() as nursery: send_channel, receive_channel = trio.open_memory_channel(0) # Start two producers nursery.start_soon(producer, "A", send_channel) nursery.start_soon(producer, "B", send_channel) # And two consumers nursery.start_soon(consumer, "X", receive_channel) nursery.start_soon(consumer, "Y", receive_channel) async def producer(name, send_channel): async with send_channel: for i in range(3): await send_channel.send(f"{i} from producer {name}") # Random sleeps help trigger the problem more reliably await trio.sleep(random.random()) async def consumer(name, receive_channel): async with receive_channel: async for value in receive_channel: print(f"consumer {name} got value {value!r}") # Random sleeps help trigger the problem more reliably await trio.sleep(random.random()) trio.run(main) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-core/channels-mpmc-fixed.py0000644000076500000000000000226600000000000023436 0ustar00quentinwheelimport trio import random async def main(): async with trio.open_nursery() as nursery: send_channel, receive_channel = trio.open_memory_channel(0) async with send_channel, receive_channel: # Start two producers, giving each its own private clone nursery.start_soon(producer, "A", send_channel.clone()) nursery.start_soon(producer, "B", send_channel.clone()) # And two consumers, giving each its own private clone nursery.start_soon(consumer, "X", receive_channel.clone()) nursery.start_soon(consumer, "Y", receive_channel.clone()) async def producer(name, send_channel): async with send_channel: for i in range(3): await send_channel.send(f"{i} from producer {name}") # Random sleeps help trigger the problem more reliably await trio.sleep(random.random()) async def consumer(name, receive_channel): async with receive_channel: async for value in receive_channel: print(f"consumer {name} got value {value!r}") # Random sleeps help trigger the problem more reliably await trio.sleep(random.random()) trio.run(main) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-core/channels-shutdown.py0000644000076500000000000000106600000000000023255 0ustar00quentinwheelimport trio async def main(): async with trio.open_nursery() as nursery: send_channel, receive_channel = trio.open_memory_channel(0) nursery.start_soon(producer, send_channel) nursery.start_soon(consumer, receive_channel) async def 
producer(send_channel): async with send_channel: for i in range(3): await send_channel.send(f"message {i}") async def consumer(receive_channel): async with receive_channel: async for value in receive_channel: print(f"got value {value!r}") trio.run(main) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-core/channels-simple.py0000644000076500000000000000141500000000000022671 0ustar00quentinwheelimport trio async def main(): async with trio.open_nursery() as nursery: # Open a channel: send_channel, receive_channel = trio.open_memory_channel(0) # Start a producer and a consumer, passing one end of the channel to # each of them: nursery.start_soon(producer, send_channel) nursery.start_soon(consumer, receive_channel) async def producer(send_channel): # Producer sends 3 messages for i in range(3): # The producer sends using 'await send_channel.send(...)' await send_channel.send(f"message {i}") async def consumer(receive_channel): # The consumer uses an 'async for' loop to receive the values: async for value in receive_channel: print(f"got value {value!r}") trio.run(main) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-core/contextvar-example.py0000644000076500000000000000235300000000000023437 0ustar00quentinwheelimport random import trio import contextvars request_info = contextvars.ContextVar("request_info") # Example logging function that tags each line with the request identifier. def log(msg): # Read from task-local storage: request_tag = request_info.get() print(f"request {request_tag}: {msg}") # An example "request handler" that does some work itself and also # spawns some helper tasks to do some concurrent work. async def handle_request(tag): # Write to task-local storage: request_info.set(tag) log("Request handler started") await trio.sleep(random.random()) async with trio.open_nursery() as nursery: nursery.start_soon(concurrent_helper, "a") nursery.start_soon(concurrent_helper, "b") await trio.sleep(random.random()) log("Request received finished") async def concurrent_helper(job): log(f"Helper task {job} started") await trio.sleep(random.random()) log(f"Helper task {job} finished") # Spawn several "request handlers" simultaneously, to simulate a # busy server handling multiple requests at the same time. async def main(): async with trio.open_nursery() as nursery: for i in range(3): nursery.start_soon(handle_request, i) trio.run(main) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-core/from-thread-example.py0000644000076500000000000000252600000000000023454 0ustar00quentinwheelimport trio def thread_fn(receive_from_trio, send_to_trio): while True: # Since we're in a thread, we can't call methods on Trio # objects directly -- so we use trio.from_thread to call them. 
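        # (trio.from_thread.run takes an async function plus its arguments,
        # and blocks this worker thread until the Trio event loop has run
        # it to completion -- the mirror image of trio.to_thread.run_sync.)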
        try:
            request = trio.from_thread.run(receive_from_trio.receive)
        except trio.EndOfChannel:
            trio.from_thread.run(send_to_trio.aclose)
            return
        else:
            response = request + 1
            trio.from_thread.run(send_to_trio.send, response)


async def main():
    send_to_thread, receive_from_trio = trio.open_memory_channel(0)
    send_to_trio, receive_from_thread = trio.open_memory_channel(0)

    async with trio.open_nursery() as nursery:
        # In a background thread, run:
        #   thread_fn(receive_from_trio, send_to_trio)
        nursery.start_soon(
            trio.to_thread.run_sync, thread_fn, receive_from_trio, send_to_trio
        )

        # prints "1"
        await send_to_thread.send(0)
        print(await receive_from_thread.receive())

        # prints "2"
        await send_to_thread.send(1)
        print(await receive_from_thread.receive())

        # When we close the channel, it signals the thread to exit.
        await send_to_thread.aclose()

    # When we exit the nursery, it waits for the background thread to
    # exit.


trio.run(main)

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-core.rst0000644000076500000000000022043500000000000017774 0ustar00quentinwheel

Trio's core functionality
=========================

.. module:: trio

Entering Trio
-------------

If you want to use Trio, then the first thing you have to do is call
:func:`trio.run`:

.. autofunction:: run


General principles
------------------

.. _checkpoints:

Checkpoints
~~~~~~~~~~~

When writing code using Trio, it's very important to understand the
concept of a *checkpoint*. Many of Trio's functions act as checkpoints.

A checkpoint is two things:

1. It's a point where Trio checks for cancellation. For example, if the
   code that called your function set a timeout, and that timeout has
   expired, then the next time your function executes a checkpoint Trio
   will raise a :exc:`Cancelled` exception. See :ref:`cancellation` below
   for more details.

2. It's a point where the Trio scheduler checks its scheduling policy to
   see if it's a good time to switch to another task, and potentially does
   so. (Currently, this check is very simple: the scheduler always switches
   at every checkpoint. But `this might change in the future `__.)

When writing Trio code, you need to keep track of where your checkpoints
are. Why? First, because checkpoints require extra scrutiny: whenever you
execute a checkpoint, you need to be prepared to handle a :exc:`Cancelled`
error, or for another task to run and `rearrange some state out from under
you `__. And second, because you also need to make sure that you have
*enough* checkpoints: if your code doesn't pass through a checkpoint on a
regular basis, then it will be slow to notice and respond to cancellation
and – much worse – since Trio is a cooperative multi-tasking system where
the *only* place the scheduler can switch tasks is at checkpoints, it'll
also prevent the scheduler from fairly allocating time between different
tasks and adversely affect the response latency of all the other code
running in the same process. (Informally we say that a task that does this
is "hogging the run loop".)

So when you're doing code review on a project that uses Trio, one of the
things you'll want to think about is whether there are enough checkpoints,
and whether each one is handled correctly. Of course this means you need a
way to recognize checkpoints. How do you do that? The underlying principle
is that any operation that blocks has to be a checkpoint.
This makes sense: if an operation blocks, then it might block for a long time, and you'll want to be able to cancel it if a timeout expires; and in any case, while this task is blocked we want another task to be scheduled to run so our code can make full use of the CPU. But if we want to write correct code in practice, then this principle is a little too sloppy and imprecise to be useful. How do we know which functions might block? What if a function blocks sometimes, but not others, depending on the arguments passed / network speed / phase of the moon? How do we figure out where the checkpoints are when we're stressed and sleep deprived but still want to get this code review right, and would prefer to reserve our mental energy for thinking about the actual logic instead of worrying about checkpoints? .. _checkpoint-rule: Don't worry – Trio's got your back. Since checkpoints are important and ubiquitous, we make it as simple as possible to keep track of them. Here are the rules: * Regular (synchronous) functions never contain any checkpoints. * If you call an async function provided by Trio (``await ``), and it doesn't raise an exception, then it *always* acts as a checkpoint. (If it does raise an exception, it might act as a checkpoint or might not.) * This includes async iterators: If you write ``async for ... in ``, then there will be at least one checkpoint before each iteration of the loop and one checkpoint after the last iteration. * Partial exception for async context managers: Both the entry and exit of an ``async with`` block are defined as async functions; but for a particular type of async context manager, it's often the case that only one of them is able to block, which means only that one will act as a checkpoint. This is documented on a case-by-case basis. * Third-party async functions / iterators / context managers can act as checkpoints; if you see ``await `` or one of its friends, then that *might* be a checkpoint. So to be safe, you should prepare for scheduling or cancellation happening there. The reason we distinguish between Trio functions and other functions is that we can't make any guarantees about third party code. Checkpoint-ness is a transitive property: if function A acts as a checkpoint, and you write a function that calls function A, then your function also acts as a checkpoint. If you don't, then it isn't. So there's nothing stopping someone from writing a function like:: # technically legal, but bad style: async def why_is_this_async(): return 7 that never calls any of Trio's async functions. This is an async function, but it's not a checkpoint. But why make a function async if it never calls any async functions? It's possible, but it's a bad idea. If you have a function that's not calling any async functions, then you should make it synchronous. The people who use your function will thank you, because it makes it obvious that your function is not a checkpoint, and their code reviews will go faster. (Remember how in the tutorial we emphasized the importance of the :ref:`"async sandwich" `, and the way it means that ``await`` ends up being a marker that shows when you're calling a function that calls a function that ... eventually calls one of Trio's built-in async functions? The transitivity of async-ness is a technical requirement that Python imposes, but since it exactly matches the transitivity of checkpoint-ness, we're able to exploit it to help you keep track of checkpoints. Pretty sneaky, eh?) 
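To see the transitivity rule in action, here's a minimal sketch (the
helper names are just for illustration, not part of Trio's API).
``backoff_briefly`` awaits :func:`trio.sleep`, an unconditional
checkpoint, so it acts as a checkpoint itself, and so does any function
that awaits it::

    import trio

    async def backoff_briefly(attempt):
        # trio.sleep is an unconditional checkpoint, so this function
        # acts as a checkpoint too...
        await trio.sleep(0.1 * 2 ** attempt)

    async def fetch_with_retries(fetch, attempts=3):
        # ...and therefore so does this one, transitively: every retry
        # passes through the checkpoint inside backoff_briefly().
        for attempt in range(attempts):
            try:
                return await fetch()
            except OSError:
                await backoff_briefly(attempt)
        raise RuntimeError("all attempts failed")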
A slightly trickier case is a function like:: async def sleep_or_not(should_sleep): if should_sleep: await trio.sleep(1) else: pass Here the function acts as a checkpoint if you call it with ``should_sleep`` set to a true value, but not otherwise. This is why we emphasize that Trio's own async functions are *unconditional* checkpoints: they *always* check for cancellation and check for scheduling, regardless of what arguments they're passed. If you find an async function in Trio that doesn't follow this rule, then it's a bug and you should `let us know `__. Inside Trio, we're very picky about this, because Trio is the foundation of the whole system so we think it's worth the extra effort to make things extra predictable. It's up to you how picky you want to be in your code. To give you a more realistic example of what this kind of issue looks like in real life, consider this function:: async def recv_exactly(sock, nbytes): data = bytearray() while nbytes > 0: # recv() reads up to 'nbytes' bytes each time chunk = await sock.recv(nbytes) if not chunk: raise RuntimeError("socket unexpected closed") nbytes -= len(chunk) data += chunk return data If called with an ``nbytes`` that's greater than zero, then it will call ``sock.recv`` at least once, and ``recv`` is an async Trio function, and thus an unconditional checkpoint. So in this case, ``recv_exactly`` acts as a checkpoint. But if we do ``await recv_exactly(sock, 0)``, then it will immediately return an empty buffer without executing a checkpoint. If this were a function in Trio itself, then this wouldn't be acceptable, but you may decide you don't want to worry about this kind of minor edge case in your own code. If you do want to be careful, or if you have some CPU-bound code that doesn't have enough checkpoints in it, then it's useful to know that ``await trio.sleep(0)`` is an idiomatic way to execute a checkpoint without doing anything else, and that :func:`trio.testing.assert_checkpoints` can be used to test that an arbitrary block of code contains a checkpoint. Thread safety ~~~~~~~~~~~~~ The vast majority of Trio's API is *not* thread safe: it can only be used from inside a call to :func:`trio.run`. This manual doesn't bother documenting this on individual calls; unless specifically noted otherwise, you should assume that it isn't safe to call any Trio functions from anywhere except the Trio thread. (But :ref:`see below ` if you really do need to work with threads.) .. _time-and-clocks: Time and clocks --------------- Every call to :func:`run` has an associated clock. By default, Trio uses an unspecified monotonic clock, but this can be changed by passing a custom clock object to :func:`run` (e.g. for testing). You should not assume that Trio's internal clock matches any other clock you have access to, including the clocks of simultaneous calls to :func:`trio.run` happening in other processes or threads! The default clock is currently implemented as :func:`time.perf_counter` plus a large random offset. The idea here is to catch code that accidentally uses :func:`time.perf_counter` early, which should help keep our options open for `changing the clock implementation later `__, and (more importantly) make sure you can be confident that custom clocks like :class:`trio.testing.MockClock` will work with third-party libraries you don't control. .. autofunction:: current_time .. autofunction:: sleep .. autofunction:: sleep_until .. 
Thread safety
~~~~~~~~~~~~~

The vast majority of Trio's API is *not* thread safe: it can only be used
from inside a call to :func:`trio.run`. This manual doesn't bother
documenting this on individual calls; unless specifically noted otherwise,
you should assume that it isn't safe to call any Trio functions from
anywhere except the Trio thread. (But :ref:`see below <threads>` if you
really do need to work with threads.)

.. _time-and-clocks:

Time and clocks
---------------

Every call to :func:`run` has an associated clock.

By default, Trio uses an unspecified monotonic clock, but this can be
changed by passing a custom clock object to :func:`run` (e.g. for testing).

You should not assume that Trio's internal clock matches any other clock you
have access to, including the clocks of simultaneous calls to
:func:`trio.run` happening in other processes or threads!

The default clock is currently implemented as :func:`time.perf_counter` plus
a large random offset. The idea here is to catch code that accidentally uses
:func:`time.perf_counter` early, which should help keep our options open for
changing the clock implementation later, and (more importantly) make sure
you can be confident that custom clocks like :class:`trio.testing.MockClock`
will work with third-party libraries you don't control.

.. autofunction:: current_time

.. autofunction:: sleep

.. autofunction:: sleep_until

.. autofunction:: sleep_forever

If you're a mad scientist or otherwise feel the need to take direct control
over the PASSAGE OF TIME ITSELF, then you can implement a custom
:class:`~trio.abc.Clock` class:

.. autoclass:: trio.abc.Clock
   :members:

.. _cancellation:

Cancellation and timeouts
-------------------------

Trio has a rich, composable system for cancelling work, either explicitly or
when a timeout expires.

A simple timeout example
~~~~~~~~~~~~~~~~~~~~~~~~

In the simplest case, you can apply a timeout to a block of code::

   with trio.move_on_after(30):
       result = await do_http_get("https://...")
       print("result is", result)
   print("with block finished")

We refer to :func:`move_on_after` as creating a "cancel scope", which
contains all the code that runs inside the ``with`` block. If the HTTP
request takes more than 30 seconds to run, then it will be cancelled: we'll
abort the request and we *won't* see ``result is ...`` printed on the
console; instead we'll go straight to printing the ``with block finished``
message.

.. note::

   Note that this is a single 30 second timeout for the entire body of the
   ``with`` statement. This is different from what you might have seen with
   other Python libraries, where timeouts often refer to something more
   complicated. We think this way is easier to reason about.

How does this work? There's no magic here: Trio is built using ordinary
Python functionality, so we can't just abandon the code inside the ``with``
block. Instead, we take advantage of Python's standard way of aborting a
large and complex piece of code: we raise an exception.

Here's the idea: whenever you call a cancellable function like ``await
trio.sleep(...)`` or ``await sock.recv(...)`` – see :ref:`checkpoints` –
then the first thing that function does is to check if there's a surrounding
cancel scope whose timeout has expired, or otherwise been cancelled. If so,
then instead of performing the requested operation, the function fails
immediately with a :exc:`Cancelled` exception. In this example, this
probably happens somewhere deep inside the bowels of ``do_http_get``. The
exception then propagates out like any normal exception (you could even
catch it if you wanted, but that's generally a bad idea), until it reaches
the ``with move_on_after(...):``. And at this point, the :exc:`Cancelled`
exception has done its job – it's successfully unwound the whole cancelled
scope – so :func:`move_on_after` catches it, and execution continues as
normal after the ``with`` block. And this all works correctly even if you
have nested cancel scopes, because every :exc:`Cancelled` object carries an
invisible marker that makes sure that the cancel scope that triggered it is
the only one that will catch it.

Handling cancellation
~~~~~~~~~~~~~~~~~~~~~

Pretty much any code you write using Trio needs to have some strategy to
handle :exc:`Cancelled` exceptions – even if you didn't set a timeout, then
your caller might (and probably will).

You can catch :exc:`Cancelled`, but you shouldn't! Or more precisely, if
you do catch it, then you should do some cleanup and then re-raise it or
otherwise let it continue propagating (unless you encounter an error, in
which case it's OK to let that propagate instead). To help remind you of
this fact, :exc:`Cancelled` inherits from :exc:`BaseException`, like
:exc:`KeyboardInterrupt` and :exc:`SystemExit` do, so that it won't be
caught by catch-all ``except Exception:`` blocks.
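For example, a sketch of the "clean up, then let it keep propagating"
pattern might look like::

   async def do_work():
       try:
           await trio.sleep(10)
       except trio.Cancelled:
           # Do some quick, synchronous cleanup, then let the exception
           # continue on its way to the cancel scope that triggered it:
           print("cancelled! cleaning up")
           raise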
It's also important in any long-running code to make sure that you regularly
check for cancellation, because otherwise timeouts won't work! This happens
implicitly every time you call a cancellable operation; see :ref:`below
<cancellable-primitives>` for details. If you have a task that has to do a
lot of work without any I/O, then you can use ``await sleep(0)`` to insert
an explicit cancel+schedule point.

Here's a rule of thumb for designing good Trio-style ("trionic"?) APIs: if
you're writing a reusable function, then you shouldn't take a ``timeout=``
parameter, and instead let your caller worry about it. This has several
advantages. First, it leaves the caller's options open for deciding how
they prefer to handle timeouts – for example, they might find it easier to
work with absolute deadlines instead of relative timeouts. If they're the
ones calling into the cancellation machinery, then they get to pick, and
you don't have to worry about it. Second, and more importantly, this makes
it easier for others to reuse your code. If you write a ``http_get``
function, and then I come along later and write a ``log_in_to_twitter``
function that needs to internally make several ``http_get`` calls, I don't
want to have to figure out how to configure the individual timeouts on each
of those calls – and with Trio's timeout system, it's totally unnecessary.

Of course, this rule doesn't apply to APIs that need to impose internal
timeouts. For example, if you write a ``start_http_server`` function, then
you probably should give your caller some way to configure timeouts on
individual requests.

Cancellation semantics
~~~~~~~~~~~~~~~~~~~~~~

You can freely nest cancellation blocks, and each :exc:`Cancelled` exception
"knows" which block it belongs to. So long as you don't stop it, the
exception will keep propagating until it reaches the block that raised it,
at which point it will stop automatically.

Here's an example::

   print("starting...")
   with trio.move_on_after(5):
       with trio.move_on_after(10):
           await trio.sleep(20)
           print("sleep finished without error")
       print("move_on_after(10) finished without error")
   print("move_on_after(5) finished without error")

In this code, the outer scope will expire after 5 seconds, causing the
:func:`sleep` call to return early with a :exc:`Cancelled` exception. Then
this exception will propagate through the ``with move_on_after(10)`` line
until it's caught by the ``with move_on_after(5)`` context manager. So this
code will print:

.. code-block:: none

   starting...
   move_on_after(5) finished without error

The end result is that Trio has successfully cancelled exactly the work that
was happening within the scope that was cancelled.

Looking at this, you might wonder how you can tell whether the inner block
timed out – perhaps you want to do something different, like try a fallback
procedure or report a failure to your caller. To make this easier,
:func:`move_on_after`\ 's ``__enter__`` function returns an object
representing this cancel scope, which we can use to check whether this
scope caught a :exc:`Cancelled` exception::

   with trio.move_on_after(5) as cancel_scope:
       await trio.sleep(10)
   print(cancel_scope.cancelled_caught)  # prints "True"

The ``cancel_scope`` object also allows you to check or adjust this scope's
deadline, explicitly trigger a cancellation without waiting for the
deadline, check if the scope has already been cancelled, and so forth – see
:class:`CancelScope` below for the full details.
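For instance, here's a sketch of adjusting a deadline on the fly and
triggering an explicit cancellation (the helper functions here are
hypothetical)::

   with trio.move_on_after(5) as cancel_scope:
       while not finished():
           await do_some_work()
           if making_good_progress():
               cancel_scope.deadline += 10   # grant ourselves an extension
           if user_gave_up():
               cancel_scope.cancel()         # stop immediately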
.. _blocking-cleanup-example:

Cancellations in Trio are "level triggered", meaning that once a block has
been cancelled, *all* cancellable operations in that block will keep raising
:exc:`Cancelled`. This helps avoid some pitfalls around resource clean-up.
For example, imagine that we have a function that connects to a remote
server and sends some messages, and then cleans up on the way out::

   with trio.move_on_after(TIMEOUT):
       conn = make_connection()
       try:
           await conn.send_hello_msg()
       finally:
           await conn.send_goodbye_msg()

Now suppose that the remote server stops responding, so our call to ``await
conn.send_hello_msg()`` hangs forever. Fortunately, we were clever enough to
put a timeout around this code, so eventually the timeout will expire and
``send_hello_msg`` will raise :exc:`Cancelled`. But then, in the ``finally``
block, we make another blocking operation, which will also hang forever! At
this point, if we were using :mod:`asyncio` or another library with
"edge-triggered" cancellation, we'd be in trouble: since our timeout already
fired, it wouldn't fire again, and at this point our application would lock
up forever. But in Trio, this *doesn't* happen: the ``await
conn.send_goodbye_msg()`` call is still inside the cancelled block, so it
will also raise :exc:`Cancelled`.

Of course, if you really want to make another blocking call in your cleanup
handler, Trio will let you; it's trying to prevent you from accidentally
shooting yourself in the foot. Intentional foot-shooting is no problem (or
at least – it's not Trio's problem). To do this, create a new scope, and
set its :attr:`~CancelScope.shield` attribute to :data:`True`::

   with trio.move_on_after(TIMEOUT):
       conn = make_connection()
       try:
           await conn.send_hello_msg()
       finally:
           with trio.move_on_after(CLEANUP_TIMEOUT) as cleanup_scope:
               cleanup_scope.shield = True
               await conn.send_goodbye_msg()

So long as you're inside a scope with ``shield = True`` set, then you'll be
protected from outside cancellations. Note though that this *only* applies
to *outside* cancellations: if ``CLEANUP_TIMEOUT`` expires then ``await
conn.send_goodbye_msg()`` will still be cancelled, and if the ``await
conn.send_goodbye_msg()`` call uses any timeouts internally, then those
will continue to work normally as well. This is a pretty advanced feature
that most people probably won't use, but it's there for the rare cases
where you need it.

.. _cancellable-primitives:

Cancellation and primitive operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We've talked a lot about what happens when an operation is cancelled, and
how you need to be prepared for this whenever calling a cancellable
operation... but we haven't gone into the details about which operations
are cancellable, and how exactly they behave when they're cancelled.

Here's the rule: if it's in the ``trio`` namespace, and you use ``await``
to call it, then it's cancellable (see :ref:`checkpoints` above).
Cancellable means:

* If you try to call it when inside a cancelled scope, then it will raise
  :exc:`Cancelled`.

* If it blocks, and while it's blocked then one of the scopes around it
  becomes cancelled, it will return early and raise :exc:`Cancelled`.

* Raising :exc:`Cancelled` means that the operation *did not happen*. If a
  Trio socket's ``send`` method raises :exc:`Cancelled`, then no data was
  sent. If a Trio socket's ``recv`` method raises :exc:`Cancelled` then no
  data was lost – it's still sitting in the socket receive buffer waiting
  for you to call ``recv`` again. And so forth.
There are a few idiosyncratic cases where external constraints make it
impossible to fully implement these semantics. These are always documented.
There is also one systematic exception:

* Async cleanup operations – like ``__aexit__`` methods or async close
  methods – are cancellable just like anything else, *except* that if they
  are cancelled, they still perform a minimum level of cleanup before
  raising :exc:`Cancelled`.

For example, closing a TLS-wrapped socket normally involves sending a
notification to the remote peer, so that they can be cryptographically
assured that you really meant to close the socket, and your connection
wasn't just broken by a man-in-the-middle attacker. But handling this
robustly is a bit tricky. Remember our :ref:`example
<blocking-cleanup-example>` above where the blocking ``send_goodbye_msg``
caused problems? That's exactly how closing a TLS socket works: if the
remote peer has disappeared, then our code may never be able to actually
send our shutdown notification, and it would be nice if it didn't block
forever trying. Therefore, the method for closing a TLS-wrapped socket will
*try* to send that notification – and if it gets cancelled, then it will
give up on sending the message, but *will* still close the underlying
socket before raising :exc:`Cancelled`, so at least you don't leak that
resource.

Cancellation API details
~~~~~~~~~~~~~~~~~~~~~~~~

:func:`move_on_after` and all the other cancellation facilities provided by
Trio are ultimately implemented in terms of :class:`CancelScope` objects.

.. autoclass:: trio.CancelScope

   .. autoattribute:: deadline

   .. autoattribute:: shield

   .. automethod:: cancel()

   .. attribute:: cancelled_caught

      Readonly :class:`bool`. Records whether this scope caught a
      :exc:`~trio.Cancelled` exception. This requires two things: (1) the
      ``with`` block exited with a :exc:`~trio.Cancelled` exception, and
      (2) this scope is the one that was responsible for triggering this
      :exc:`~trio.Cancelled` exception.

   .. autoattribute:: cancel_called

Often there is no need to create a :class:`CancelScope` object yourself:
every task-related :class:`Nursery` object (nurseries are covered later in
this manual) already comes with a built-in
:attr:`~trio.Nursery.cancel_scope` attribute.

Trio also provides several convenience functions for the common situation
of just wanting to impose a timeout on some code:

.. autofunction:: move_on_after
   :with: cancel_scope

.. autofunction:: move_on_at
   :with: cancel_scope

.. autofunction:: fail_after
   :with: cancel_scope

.. autofunction:: fail_at
   :with: cancel_scope

Cheat sheet:

* If you want to impose a timeout on a function, but you don't care whether
  it timed out or not::

     with trio.move_on_after(TIMEOUT):
         await do_whatever()
     # carry on!

* If you want to impose a timeout on a function, and then do some recovery
  if it timed out::

     with trio.move_on_after(TIMEOUT) as cancel_scope:
         await do_whatever()
     if cancel_scope.cancelled_caught:
         # The operation timed out, try something else
         try_to_recover()

* If you want to impose a timeout on a function, and then if it times out
  then just give up and raise an error for your caller to deal with::

     with trio.fail_after(TIMEOUT):
         await do_whatever()

It's also possible to check what the current effective deadline is, which
is sometimes useful:

.. autofunction:: current_effective_deadline
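For example, a function might use it to report how much time remains before
it will be cancelled – a sketch::

   import math

   async def log_remaining_time():
       remaining = trio.current_effective_deadline() - trio.current_time()
       if math.isinf(remaining):
           print("no deadline currently in effect")
       else:
           print(f"~{remaining:.1f} seconds until cancellation")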
.. _tasks:

Tasks let you do multiple things at once
----------------------------------------

One of Trio's core design principles is: *no implicit concurrency*. Every
function executes in a straightforward, top-to-bottom manner, finishing
each operation before moving on to the next – *like Guido intended*.

But, of course, the entire point of an async library is to let you do
multiple things at once. The one and only way to do that in Trio is through
the task spawning interface. So if you want your program to walk *and* chew
gum, this is the section for you.

Nurseries and spawning
~~~~~~~~~~~~~~~~~~~~~~

Most libraries for concurrent programming let you start new child tasks (or
threads, or whatever) willy-nilly, whenever and where-ever you feel like
it. Trio is a bit different: you can't start a child task unless you're
prepared to be a responsible parent. The way you demonstrate your
responsibility is by creating a nursery::

   async with trio.open_nursery() as nursery:
       ...

And once you have a reference to a nursery object, you can start children
in that nursery::

   async def child():
       ...

   async def parent():
       async with trio.open_nursery() as nursery:
           # Make two concurrent calls to child()
           nursery.start_soon(child)
           nursery.start_soon(child)

This means that tasks form a tree: when you call :func:`run`, then this
creates an initial task, and all your other tasks will be children,
grandchildren, etc. of the initial task. Essentially, the body of the
``async with`` block acts like an initial task that's running inside the
nursery, and then each call to ``nursery.start_soon`` adds another task
that runs in parallel.

A few crucial things to keep in mind:

* If any task inside the nursery finishes with an unhandled exception, then
  the nursery immediately cancels all the tasks inside the nursery.

* Since all of the tasks are running concurrently inside the ``async with``
  block, the block does not exit until *all* tasks have completed. If
  you've used other concurrency frameworks, then you can think of it as,
  the de-indentation at the end of the ``async with`` automatically "joins"
  (waits for) all of the tasks in the nursery.

* Once all the tasks have finished, then:

  * The nursery is marked as "closed", meaning that no new tasks can be
    started inside it.

  * Any unhandled exceptions are re-raised inside the parent task. If there
    are multiple exceptions, then they're collected up into a single
    :exc:`MultiError` exception.

Since all tasks are descendants of the initial task, one consequence of
this is that :func:`run` can't finish until all tasks have finished.

.. note::

   A return statement will not cancel the nursery if it still has tasks
   running::

      async def main():
          async with trio.open_nursery() as nursery:
              nursery.start_soon(trio.sleep, 5)
              return

      trio.run(main)

   This code will wait 5 seconds (for the child task to finish), and then
   return.
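If you *do* want any still-running children cancelled at that point, you
can cancel the nursery's own cancel scope before returning – a sketch::

   async def main():
       async with trio.open_nursery() as nursery:
           nursery.start_soon(trio.sleep, 5)
           nursery.cancel_scope.cancel()  # cancel the child instead of waiting
           return

   trio.run(main)  # returns almost immediately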
Child tasks and cancellation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In Trio, child tasks inherit the parent nursery's cancel scopes. So in this
example, both the child tasks will be cancelled when the timeout expires::

   with trio.move_on_after(TIMEOUT):
       async with trio.open_nursery() as nursery:
           nursery.start_soon(child1)
           nursery.start_soon(child2)

Note that what matters here is the scopes that were active when
:func:`open_nursery` was called, *not* the scopes active when
``start_soon`` is called. So for example, the timeout block below does
nothing at all::

   async with trio.open_nursery() as nursery:
       with trio.move_on_after(TIMEOUT):  # don't do this!
           nursery.start_soon(child)

Why is this so? Well, ``start_soon()`` returns as soon as it has scheduled
the new task to start running. The flow of execution in the parent then
continues on to exit the ``with trio.move_on_after(TIMEOUT):`` block, at
which point Trio forgets about the timeout entirely. In order for the
timeout to apply to the child task, Trio must be able to tell that its
associated cancel scope will stay open for at least as long as the child
task is executing. And Trio can only know that for sure if the cancel scope
block is outside the nursery block.

You might wonder why Trio can't just remember "this task should be
cancelled in ``TIMEOUT`` seconds", even after the ``with
trio.move_on_after(TIMEOUT):`` block is gone. The reason has to do with
:ref:`how cancellation is implemented <cancellation>`. Recall that
cancellation is represented by a `Cancelled` exception, which eventually
needs to be caught by the cancel scope that caused it. (Otherwise, the
exception would take down your whole program!) In order to be able to
cancel the child tasks, the cancel scope has to be able to "see" the
`Cancelled` exceptions that they raise – and those exceptions come out of
the ``async with open_nursery()`` block, not out of the call to
``start_soon()``.

If you want a timeout to apply to one task but not another, then you need
to put the cancel scope in that individual task's function – ``child()``,
in this example.

Errors in multiple child tasks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Normally, in Python, only one thing happens at a time, which means that
only one thing can go wrong at a time. Trio has no such limitation.
Consider code like::

   async def broken1():
       d = {}
       return d["missing"]

   async def broken2():
       seq = range(10)
       return seq[20]

   async def parent():
       async with trio.open_nursery() as nursery:
           nursery.start_soon(broken1)
           nursery.start_soon(broken2)

``broken1`` raises ``KeyError``. ``broken2`` raises ``IndexError``.
Obviously ``parent`` should raise some error, but what? In some sense, the
answer should be "both of these at once", but in Python there can only be
one exception at a time.

Trio's answer is that it raises a :exc:`MultiError` object. This is a
special exception which encapsulates multiple exception objects – either
regular exceptions or nested :exc:`MultiError`\s. To make these easier to
work with, Trio installs a custom `sys.excepthook` that knows how to print
nice tracebacks for unhandled :exc:`MultiError`\s, and it also provides
some helpful utilities like :meth:`MultiError.catch`, which allows you to
catch "part of" a :exc:`MultiError`.

Spawning tasks without becoming a parent
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes it doesn't make sense for the task that starts a child to take on
responsibility for watching it. For example, a server task may want to
start a new task for each connection, but it can't listen for connections
and supervise children at the same time.

The solution here is simple once you see it: there's no requirement that a
nursery object stay in the task that created it! We can write code like
this::

   async def new_connection_listener(handler, nursery):
       while True:
           conn = await get_new_connection()
           nursery.start_soon(handler, conn)

   async def server(handler):
       async with trio.open_nursery() as nursery:
           nursery.start_soon(new_connection_listener, handler, nursery)

Notice that ``server`` opens a nursery and passes it to
``new_connection_listener``, and then ``new_connection_listener`` is able
to start new tasks as "siblings" of itself.
Of course, in this case, we could just as well have written::

   async def server(handler):
       async with trio.open_nursery() as nursery:
           while True:
               conn = await get_new_connection()
               nursery.start_soon(handler, conn)

...but sometimes things aren't so simple, and this trick comes in handy.

One thing to remember, though: cancel scopes are inherited from the
nursery, **not** from the task that calls ``start_soon``. So in this
example, the timeout does *not* apply to ``child`` (or to anything else)::

   async def do_spawn(nursery):
       with trio.move_on_after(TIMEOUT):  # don't do this, it has no effect
           nursery.start_soon(child)

   async with trio.open_nursery() as nursery:
       nursery.start_soon(do_spawn, nursery)

Custom supervisors
~~~~~~~~~~~~~~~~~~

The default cleanup logic is often sufficient for simple cases, but what if
you want a more sophisticated supervisor? For example, maybe you have
Erlang envy and want features like automatic restart of crashed tasks. Trio
itself doesn't provide these kinds of features, but you can build them on
top; Trio's goal is to enforce basic hygiene and then get out of your way.
(Specifically: Trio won't let you build a supervisor that exits and leaves
orphaned tasks behind, and if you have an unhandled exception due to bugs
or laziness then Trio will make sure they propagate.) And then you can wrap
your fancy supervisor up in a library and put it on PyPI, because
supervisors are tricky and there's no reason everyone should have to write
their own.

For example, here's a function that takes a list of functions, runs them
all concurrently, and returns the result from the one that finishes first::

   async def race(*async_fns):
       if not async_fns:
           raise ValueError("must pass at least one argument")

       winner = None

       async def jockey(async_fn, cancel_scope):
           nonlocal winner
           winner = await async_fn()
           cancel_scope.cancel()

       async with trio.open_nursery() as nursery:
           for async_fn in async_fns:
               nursery.start_soon(jockey, async_fn, nursery.cancel_scope)

       return winner

This works by starting a set of tasks which each try to run their function.
As soon as the first function completes its execution, the task will set
the nonlocal variable ``winner`` from the outer scope to the result of the
function, and cancel the other tasks using the passed-in cancel scope. Once
all tasks have been cancelled (which exits the nursery block), the variable
``winner`` will be returned.

Here if one or more of the racing functions raises an unhandled exception
then Trio's normal handling kicks in: it cancels the others and then
propagates the exception. If you want different behavior, you can get that
by adding a ``try`` block to the ``jockey`` function to catch exceptions
and handle them however you like.
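Here's a toy demonstration of how ``race`` might be used, with sleeps
standing in for real work::

   async def fast():
       await trio.sleep(1)
       return "fast"

   async def slow():
       await trio.sleep(2)
       return "slow"

   async def main():
       print(await race(fast, slow))  # prints "fast" after ~1 second

   trio.run(main)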
Task-related API details
~~~~~~~~~~~~~~~~~~~~~~~~

The nursery API
+++++++++++++++

.. autofunction:: open_nursery
   :async-with: nursery

.. autoclass:: Nursery()
   :members:

.. attribute:: TASK_STATUS_IGNORED

   See :meth:`~Nursery.start`.

Working with :exc:`MultiError`\s
++++++++++++++++++++++++++++++++

.. autoexception:: MultiError

   .. attribute:: exceptions

      The list of exception objects that this :exc:`MultiError` represents.

   .. automethod:: filter

   .. automethod:: catch
      :with:

Examples:

Suppose we have a handler function that discards :exc:`ValueError`\s::

   def handle_ValueError(exc):
       if isinstance(exc, ValueError):
           return None
       else:
           return exc

Then these both raise :exc:`KeyError`::

   with MultiError.catch(handle_ValueError):
       raise MultiError([KeyError(), ValueError()])

   with MultiError.catch(handle_ValueError):
       raise MultiError([
           ValueError(),
           MultiError([KeyError(), ValueError()]),
       ])

And both of these raise nothing at all::

   with MultiError.catch(handle_ValueError):
       raise MultiError([ValueError(), ValueError()])

   with MultiError.catch(handle_ValueError):
       raise MultiError([
           MultiError([ValueError(), ValueError()]),
           ValueError(),
       ])

You can also return a new or modified exception, for example::

   def convert_ValueError_to_MyCustomError(exc):
       if isinstance(exc, ValueError):
           # Similar to 'raise MyCustomError from exc'
           new_exc = MyCustomError(...)
           new_exc.__cause__ = exc
           return new_exc
       else:
           return exc

In the example above, we set ``__cause__`` as a form of explicit context
chaining. :meth:`MultiError.filter` and :meth:`MultiError.catch` also
perform implicit exception chaining – if you return a new exception object,
then the new object's ``__context__`` attribute will automatically be set
to the original exception.

We also monkey patch :class:`traceback.TracebackException` to be able to
handle formatting :exc:`MultiError`\s. This means that anything that
formats exception messages like :mod:`logging` will work out of the box::

   import logging

   logging.basicConfig()

   try:
       raise MultiError([ValueError("foo"), KeyError("bar")])
   except:
       logging.exception("Oh no!")
       raise

Will properly log the inner exceptions:

.. code-block:: none

   ERROR:root:Oh no!
   Traceback (most recent call last):
     File "<stdin>", line 2, in <module>
   trio.MultiError: ValueError('foo',), KeyError('bar',)

   Details of embedded exception 1:

     ValueError: foo

   Details of embedded exception 2:

     KeyError: 'bar'

.. _task-local-storage:

Task-local storage
------------------

Suppose you're writing a server that responds to network requests, and you
log some information about each request as you process it. If the server is
busy and there are multiple requests being handled at the same time, then
you might end up with logs like this:

.. code-block:: none

   Request handler started
   Request handler started
   Request handler finished
   Request handler finished

In this log, it's hard to know which lines came from which request. (Did
the request that started first also finish first, or not?) One way to solve
this is to assign each request a unique identifier, and then include this
identifier in each log message:

.. code-block:: none

   request 1: Request handler started
   request 2: Request handler started
   request 2: Request handler finished
   request 1: Request handler finished

This way we can see that request 1 was slow: it started before request 2
but finished afterwards. (You can also get much fancier, but this is enough
for an example.)

Now, here's the problem: how does the logging code know what the request
identifier is? One approach would be to explicitly pass it around to every
function that might want to emit logs... but that's basically every
function, because you never know when you might need to add a
``log.debug(...)`` call to some utility function buried deep in the call
stack, and when you're in the middle of debugging a nasty problem, the last
thing you want is to have to stop first and refactor everything to pass
through the request identifier!
Sometimes this is the right solution, but other times it would be much more
convenient if we could store the identifier in a global variable, so that
the logging function could look it up whenever it needed it. Except... a
global variable can only have one value at a time, so if we have multiple
handlers running at once then this isn't going to work. What we need is
something that's *like* a global variable, but that can have different
values depending on which request handler is accessing it.

To solve this problem, Python 3.7 added a new module to the standard
library: :mod:`contextvars`. And not only does Trio have built-in support
for :mod:`contextvars`, but if you're using an earlier version of Python,
then Trio makes sure that a backported version of :mod:`contextvars` is
installed. So you can assume :mod:`contextvars` is there and works
regardless of what version of Python you're using.

Here's a toy example demonstrating how to use :mod:`contextvars`:

.. literalinclude:: reference-core/contextvar-example.py

Example output (yours may differ slightly):

.. code-block:: none

   request 1: Request handler started
   request 2: Request handler started
   request 0: Request handler started
   request 2: Helper task a started
   request 2: Helper task b started
   request 1: Helper task a started
   request 1: Helper task b started
   request 0: Helper task b started
   request 0: Helper task a started
   request 2: Helper task b finished
   request 2: Helper task a finished
   request 2: Request received finished
   request 0: Helper task a finished
   request 1: Helper task a finished
   request 1: Helper task b finished
   request 1: Request received finished
   request 0: Helper task b finished
   request 0: Request received finished

For more information, read the `contextvar docs
<https://docs.python.org/3/library/contextvars.html>`__.

.. _synchronization:

Synchronizing and communicating between tasks
---------------------------------------------

Trio provides a standard set of synchronization and inter-task
communication primitives. These objects' APIs are generally modelled off of
the analogous classes in the standard library, but with some differences.

Blocking and non-blocking methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard library synchronization primitives have a variety of
mechanisms for specifying timeouts and blocking behavior, and of signaling
whether an operation returned due to success versus a timeout.

In Trio, we standardize on the following conventions:

* We don't provide timeout arguments. If you want a timeout, then use a
  cancel scope.

* For operations that have a non-blocking variant, the blocking and
  non-blocking variants are different methods with names like ``X`` and
  ``X_nowait``, respectively. (This is similar to :class:`queue.Queue`, but
  unlike most of the classes in :mod:`threading`.) We like this approach
  because it allows us to make the blocking version async and the
  non-blocking version sync.

* When a non-blocking method cannot succeed (the channel is empty, the lock
  is already held, etc.), then it raises :exc:`trio.WouldBlock`. There's no
  equivalent to the :exc:`queue.Empty` versus :exc:`queue.Full` distinction
  – we just have the one exception that we use consistently.
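As a sketch of what these conventions look like in practice
(``receive_channel`` is assumed to be a memory channel, introduced below)::

   try:
       value = receive_channel.receive_nowait()  # sync, never blocks
   except trio.WouldBlock:
       value = await receive_channel.receive()   # async, waits (a checkpoint)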
Fairness
~~~~~~~~

These classes are all guaranteed to be "fair", meaning that when it comes
time to choose who will be next to acquire a lock, get an item from a
queue, etc., then it always goes to the task which has been waiting
longest. It's not entirely clear whether this is the best choice, but for
now that's how it works.

As an example of what this means, here's a small program in which two tasks
compete for a lock. Notice that the task which releases the lock always
immediately attempts to re-acquire it, before the other task has a chance
to run. (And remember that we're doing cooperative multi-tasking here, so
it's actually *deterministic* that the task releasing the lock will call
:meth:`~Lock.acquire` before the other task wakes up; in Trio releasing a
lock is not a checkpoint.) With an unfair lock, this would result in the
same task holding the lock forever and the other task being starved out.
But if you run this, you'll see that the two tasks politely take turns::

   # fairness-demo.py

   import trio

   async def loopy_child(number, lock):
       while True:
           async with lock:
               print(f"Child {number} has the lock!")
               await trio.sleep(0.5)

   async def main():
       async with trio.open_nursery() as nursery:
           lock = trio.Lock()
           nursery.start_soon(loopy_child, 1, lock)
           nursery.start_soon(loopy_child, 2, lock)

   trio.run(main)

Broadcasting an event with :class:`Event`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: Event
   :members:
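For example, here's a small sketch where one task wakes up several waiting
tasks at once by setting an :class:`Event`::

   import trio

   async def waiter(name, event):
       await event.wait()
       print(f"{name} woke up!")

   async def main():
       event = trio.Event()
       async with trio.open_nursery() as nursery:
           nursery.start_soon(waiter, "waiter 1", event)
           nursery.start_soon(waiter, "waiter 2", event)
           await trio.sleep(1)
           event.set()  # wakes *all* the waiters at once

   trio.run(main)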
.. _channels:

Using channels to pass values between tasks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

*Channels* allow you to safely and conveniently send objects between
different tasks. They're particularly useful for implementing
producer/consumer patterns.

The core channel API is defined by the abstract base classes
:class:`trio.abc.SendChannel` and :class:`trio.abc.ReceiveChannel`. You can
use these to implement your own custom channels, that do things like pass
objects between processes or over the network. But in many cases, you just
want to pass objects between different tasks inside a single process, and
for that you can use :func:`trio.open_memory_channel`:

.. autofunction:: open_memory_channel(max_buffer_size)

.. note:: If you've used the :mod:`threading` or :mod:`asyncio` modules,
   you may be familiar with :class:`queue.Queue` or :class:`asyncio.Queue`.
   In Trio, :func:`open_memory_channel` is what you use when you're looking
   for a queue. The main difference is that Trio splits the classic queue
   interface up into two objects. The advantage of this is that it makes it
   possible to put the two ends in different processes without rewriting
   your code, and that we can close the two sides separately.

`MemorySendChannel` and `MemoryReceiveChannel` also expose several more
features beyond the core channel interface:

.. autoclass:: MemorySendChannel
   :members:

.. autoclass:: MemoryReceiveChannel
   :members:

A simple channel example
++++++++++++++++++++++++

Here's a simple example of how to use memory channels:

.. literalinclude:: reference-core/channels-simple.py

If you run this, it prints:

.. code-block:: none

   got value "message 0"
   got value "message 1"
   got value "message 2"

And then it hangs forever. (Use control-C to quit.)

.. _channel-shutdown:

Clean shutdown with channels
++++++++++++++++++++++++++++

Of course we don't generally like it when programs hang. What happened? The
problem is that the producer sent 3 messages and then exited, but the
consumer has no way to tell that the producer is gone: for all it knows,
another message might be coming along any moment. So it hangs forever
waiting for the 4th message.

Here's a new version that fixes this: it produces the same output as the
previous version, and then exits cleanly. The only change is the addition
of ``async with`` blocks inside the producer and consumer:

.. literalinclude:: reference-core/channels-shutdown.py
   :emphasize-lines: 10,15

The really important thing here is the producer's ``async with``. When the
producer exits, this closes the ``send_channel``, and that tells the
consumer that no more messages are coming, so it can cleanly exit its
``async for`` loop. Then the program shuts down because both tasks have
exited.

We also added an ``async with`` to the consumer. This isn't as important,
but it can help us catch mistakes or other problems. For example, suppose
that the consumer exited early for some reason – maybe because of a bug.
Then the producer would be sending messages into the void, and might get
stuck indefinitely. But, if the consumer closes its ``receive_channel``,
then the producer will get a :exc:`BrokenResourceError` to alert it that it
should stop sending messages because no-one is listening.

If you want to see the effect of the consumer exiting early, try adding a
``break`` statement to the ``async for`` loop – you should see a
:exc:`BrokenResourceError` from the producer.

.. _channel-mpmc:

Managing multiple producers and/or multiple consumers
+++++++++++++++++++++++++++++++++++++++++++++++++++++

You can also have multiple producers, and multiple consumers, all sharing
the same channel. However, this makes shutdown a little more complicated.

For example, consider this naive extension of our previous example, now
with two producers and two consumers:

.. literalinclude:: reference-core/channels-mpmc-broken.py

The two producers, A and B, send 3 messages apiece. These are then randomly
distributed between the two consumers, X and Y. So we're hoping to see some
output like:

.. code-block:: none

   consumer Y got value '0 from producer B'
   consumer X got value '0 from producer A'
   consumer Y got value '1 from producer A'
   consumer Y got value '1 from producer B'
   consumer X got value '2 from producer B'
   consumer X got value '2 from producer A'

However, on most runs, that's not what happens – the first part of the
output is OK, and then when we get to the end the program crashes with
:exc:`ClosedResourceError`. If you run the program a few times, you'll see
that sometimes the traceback shows ``send`` crashing, and other times it
shows ``receive`` crashing, and you might even find that on some runs it
doesn't crash at all.

Here's what's happening: suppose that producer A finishes first. It exits,
and its ``async with`` block closes the ``send_channel``. But wait!
Producer B was still using that ``send_channel``... so the next time B
calls ``send``, it gets a :exc:`ClosedResourceError`.

Sometimes, though, if we're lucky, the two producers might finish at the
same time (or close enough), so they both make their last ``send`` before
either of them closes the ``send_channel``. But, even if that happens,
we're not out of the woods yet! After the producers exit, the two consumers
race to be the first to notice that the ``send_channel`` has closed.
Suppose that X wins the race. It exits its ``async for`` loop, then exits
the ``async with`` block... and closes the ``receive_channel``, while Y is
still using it. Again, this causes a crash.

We could avoid this by using some complicated bookkeeping to make sure that
only the *last* producer and the *last* consumer close their channel
endpoints... but that would be tiresome and fragile. Fortunately, there's a
better way! Here's a fixed version of our program above:

.. literalinclude:: reference-core/channels-mpmc-fixed.py
   :emphasize-lines: 7, 9, 10, 12, 13
This example demonstrates using the `MemorySendChannel.clone` and
`MemoryReceiveChannel.clone` methods. What these do is create copies of our
endpoints that act just like the original – except that they can be closed
independently. And the underlying channel is only closed after *all* the
clones have been closed. So this completely solves our problem with
shutdown, and if you run this program, you'll see it print its six lines of
output and then exit cleanly.

Notice a small trick we use: the code in ``main`` creates clone objects to
pass into all the child tasks, and then closes the original objects using
``async with``. Another option is to pass clones into all-but-one of the
child tasks, and then pass the original object into the last task, like::

   # Also works, but is more finicky:
   send_channel, receive_channel = trio.open_memory_channel(0)
   nursery.start_soon(producer, "A", send_channel.clone())
   nursery.start_soon(producer, "B", send_channel)
   nursery.start_soon(consumer, "X", receive_channel.clone())
   nursery.start_soon(consumer, "Y", receive_channel)

But this is more error-prone, especially if you use a loop to spawn the
producers/consumers.

Just make sure that you don't write::

   # Broken, will cause program to hang:
   send_channel, receive_channel = trio.open_memory_channel(0)
   nursery.start_soon(producer, "A", send_channel.clone())
   nursery.start_soon(producer, "B", send_channel.clone())
   nursery.start_soon(consumer, "X", receive_channel.clone())
   nursery.start_soon(consumer, "Y", receive_channel.clone())

Here we pass clones into the tasks, but never close the original objects.
That means we have 3 send channel objects (the original + two clones), but
we only close 2 of them, so the consumers will hang around forever waiting
for that last one to be closed.

.. _channel-buffering:

Buffering in channels
+++++++++++++++++++++

When you call :func:`open_memory_channel`, you have to specify how many
values can be buffered internally in the channel. If the buffer is full,
then any task that calls :meth:`~trio.abc.SendChannel.send` will stop and
wait for another task to call :meth:`~trio.abc.ReceiveChannel.receive`.
This is useful because it produces *backpressure*: if the channel producers
are running faster than the consumers, then it forces the producers to slow
down.

You can disable buffering entirely, by doing ``open_memory_channel(0)``. In
that case any task that calls :meth:`~trio.abc.SendChannel.send` will wait
until another task calls :meth:`~trio.abc.ReceiveChannel.receive`, and vice
versa. This is similar to how channels work in the classic `Communicating
Sequential Processes model
<https://en.wikipedia.org/wiki/Communicating_sequential_processes>`__, and
is a reasonable default if you aren't sure what size buffer to use. (That's
why we used it in the examples above.)

At the other extreme, you can make the buffer unbounded by using
``open_memory_channel(math.inf)``. In this case,
:meth:`~trio.abc.SendChannel.send` *always* returns immediately. Normally,
this is a bad idea. To see why, consider a program where the producer runs
more quickly than the consumer:

.. literalinclude:: reference-core/channels-backpressure.py

If you run this program, you'll see output like:

.. code-block:: none

   Sent message: 0
   Received message: 0
   Sent message: 1
   Sent message: 2
   Sent message: 3
   Sent message: 4
   Sent message: 5
   Sent message: 6
   Sent message: 7
   Sent message: 8
   Sent message: 9
   Received message: 1
   Sent message: 10
   Sent message: 11
   Sent message: 12
   ...
On average, the producer sends ten messages per second, but the consumer
only calls ``receive`` once per second. That means that each second, the
channel's internal buffer has to grow to hold an extra nine items. After a
minute, the buffer will have ~540 items in it; after an hour, that grows to
~32,400. Eventually, the program will run out of memory. And well before we
run out of memory, our latency on handling individual messages will become
abysmal. For example, at the one minute mark, the producer is sending
message ~600, but the consumer is still processing message ~60. Message 600
will have to sit in the channel for ~9 minutes before the consumer catches
up and processes it.

Now try replacing ``open_memory_channel(math.inf)`` with
``open_memory_channel(0)``, and run it again. We get output like:

.. code-block:: none

   Sent message: 0
   Received message: 0
   Received message: 1
   Sent message: 1
   Received message: 2
   Sent message: 2
   Sent message: 3
   Received message: 3
   ...

Now the ``send`` calls wait for the ``receive`` calls to finish, which
forces the producer to slow down to match the consumer's speed. (It might
look strange that some values are reported as "Received" before they're
reported as "Sent"; this happens because the actual send/receive happen at
the same time, so which line gets printed first is random.)

Now, let's try setting a small but nonzero buffer size, like
``open_memory_channel(3)``. What do you think will happen? I get:

.. code-block:: none

   Sent message: 0
   Received message: 0
   Sent message: 1
   Sent message: 2
   Sent message: 3
   Received message: 1
   Sent message: 4
   Received message: 2
   Sent message: 5
   ...

So you can see that the producer runs ahead by 3 messages, and then stops
to wait: when the consumer reads message 1, it sends message 4, then when
the consumer reads message 2, it sends message 5, and so on. Once it
reaches the steady state, this version acts just like our previous version
where we set the buffer size to 0, except that it uses a bit more memory
and each message sits in the buffer for a bit longer before being processed
(i.e., the message latency is higher).

Of course real producers and consumers are usually more complicated than
this, and in some situations, a modest amount of buffering might improve
throughput. But too much buffering wastes memory and increases latency, so
if you want to tune your application you should experiment to see what
value works best for you.

**Why do we even support unbounded buffers then?** Good question! Despite
everything we saw above, there are times when you actually do need an
unbounded buffer. For example, consider a web crawler that uses a channel
to keep track of all the URLs it still wants to crawl. Each crawler runs a
loop where it takes a URL from the channel, fetches it, checks the HTML for
outgoing links, and then adds the new URLs to the channel. This creates a
*circular flow*, where each consumer is also a producer. In this case, if
your channel buffer gets full, then the crawlers will block when they try
to add new URLs to the channel; and if all the crawlers are blocked, then
they aren't taking any URLs out of the channel, so they're stuck forever in
a deadlock. Using an unbounded channel avoids this, because it means that
:meth:`~trio.abc.SendChannel.send` never blocks.
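A sketch of that circular-flow pattern (``fetch_and_extract_links`` is
hypothetical, and a real crawler would also need deduplication and a way to
detect when it's finished)::

   import math

   async def crawler(receive_channel, send_channel):
       async for url in receive_channel:
           for link in await fetch_and_extract_links(url):
               # Never blocks, because the buffer is unbounded:
               await send_channel.send(link)

   send_channel, receive_channel = trio.open_memory_channel(math.inf)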
Lower-level synchronization primitives
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Personally, I find that events and channels are usually enough to implement
most things I care about, and lead to easier-to-read code than the
lower-level primitives discussed in this section. But if you need them,
they're here. (If you find yourself reaching for these because you're
trying to implement a new higher-level synchronization primitive, then you
might also want to check out the facilities in :mod:`trio.lowlevel` for a
more direct exposure of Trio's underlying synchronization logic. All of the
classes discussed in this section are implemented on top of the public APIs
in :mod:`trio.lowlevel`; they don't have any special access to Trio's
internals.)

.. autoclass:: CapacityLimiter
   :members:

.. autoclass:: Semaphore
   :members:

.. We have to use :inherited-members: here because all the actual lock
   methods are stashed in _LockImpl. Weird side-effect of having both Lock
   and StrictFIFOLock, but wanting both to be marked Final so neither can
   inherit from the other.

.. autoclass:: Lock
   :members:
   :inherited-members:

.. autoclass:: StrictFIFOLock
   :members:

.. autoclass:: Condition
   :members:

.. _async-generators:

Notes on async generators
-------------------------

Python 3.6 added support for *async generators*, which can use ``await``,
``async for``, and ``async with`` in between their ``yield`` statements. As
you might expect, you use ``async for`` to iterate over them. :pep:`525`
has many more details if you want them.

For example, the following is a roundabout way to print the numbers 0
through 9 with a 1-second delay before each one::

   async def range_slowly(*args):
       """Like range(), but adds a 1-second sleep before each value."""
       for value in range(*args):
           await trio.sleep(1)
           yield value

   async def use_it():
       async for value in range_slowly(10):
           print(value)

   trio.run(use_it)

Trio supports async generators, with some caveats described in this
section.

Finalization
~~~~~~~~~~~~

If you iterate over an async generator in its entirety, like the example
above does, then the execution of the async generator will occur completely
in the context of the code that's iterating over it, and there aren't too
many surprises.

If you abandon a partially-completed async generator, though, such as by
``break``\ing out of the iteration, things aren't so simple. The async
generator iterator object is still alive, waiting for you to resume
iterating it so it can produce more values. At some point, Python will
realize that you've dropped all references to the iterator, and will call
on Trio to throw in a `GeneratorExit` exception so that any remaining
cleanup code inside the generator has a chance to run: ``finally`` blocks,
``__aexit__`` handlers, and so on.

So far, so good. Unfortunately, Python provides no guarantees about *when*
this happens. It could be as soon as you break out of the ``async for``
loop, or an arbitrary amount of time later. It could even be after the
entire Trio run has finished! Just about the only guarantee is that it
*won't* happen in the task that was using the generator. That task will
continue on with whatever else it's doing, and the async generator cleanup
will happen "sometime later, somewhere else": potentially with different
context variables, not subject to timeouts, and/or after any nurseries
you're using have been closed.
If you don't like that ambiguity, and you want to ensure that a generator's
``finally`` blocks and ``__aexit__`` handlers execute as soon as you're
done using it, then you'll need to wrap your use of the generator in
something like ``async_generator.aclosing()``::

   # Instead of this:
   async for value in my_generator():
       if value == 42:
           break

   # Do this:
   async with aclosing(my_generator()) as aiter:
       async for value in aiter:
           if value == 42:
               break

This is cumbersome, but Python unfortunately doesn't provide any other
reliable options. If you use ``aclosing()``, then your generator's cleanup
code executes in the same context as the rest of its iterations, so
timeouts, exceptions, and context variables work like you'd expect.

If you don't use ``aclosing()``, then Trio will do its best anyway, but
you'll have to contend with the following semantics:

* The cleanup of the generator occurs in a cancelled context, i.e., all
  blocking calls executed during cleanup will raise `Cancelled`. This is to
  compensate for the fact that any timeouts surrounding the original use of
  the generator have been long since forgotten.

* The cleanup runs without access to any :ref:`context variables
  <task-local-storage>` that may have been present when the generator was
  originally being used.

* If the generator raises an exception during cleanup, then it's printed to
  the ``trio.async_generator_errors`` logger and otherwise ignored.

* If an async generator is still alive at the end of the whole call to
  :func:`trio.run`, then it will be cleaned up after all tasks have exited
  and before :func:`trio.run` returns. Since the "system nursery" has
  already been closed at this point, Trio isn't able to support any new
  calls to :func:`trio.lowlevel.spawn_system_task`.

If you plan to run your code on PyPy to take advantage of its better
performance, you should be aware that PyPy is *far more likely* than
CPython to perform async generator cleanup at a time well after the last
use of the generator. (This is a consequence of the fact that PyPy does not
use reference counting to manage memory.) To help catch issues like this,
Trio will issue a `ResourceWarning` (ignored by default, but enabled when
running under ``python -X dev`` for example) for each async generator that
needs to be handled through the fallback finalization path.

Cancel scopes and nurseries
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. warning:: You may not write a ``yield`` statement that suspends an async
   generator inside a `CancelScope` or `Nursery` that was entered within
   the generator.

That is, this is OK::

   async def some_agen():
       with trio.move_on_after(1):
           await long_operation()
       yield "first"
       async with trio.open_nursery() as nursery:
           nursery.start_soon(task1)
           nursery.start_soon(task2)
       yield "second"
       ...

But this is not::

   async def some_agen():
       with trio.move_on_after(1):
           yield "first"
       async with trio.open_nursery() as nursery:
           yield "second"
       ...

Async generators decorated with ``@asynccontextmanager`` to serve as the
template for an async context manager are *not* subject to this constraint,
because ``@asynccontextmanager`` uses them in a limited way that doesn't
create problems.

Violating the rule described in this section will sometimes get you a
useful error message, but Trio is not able to detect all such cases, so
sometimes you'll get an unhelpful `TrioInternalError`.
(And sometimes it will seem to work, which is probably the worst outcome of
all, since then you might not notice the issue until you perform some minor
refactoring of the generator or the code that's iterating it, or just get
unlucky. There is a proposed Python enhancement that would at least make it
fail consistently.)

The reason for the restriction on cancel scopes has to do with the
difficulty of noticing when a generator gets suspended and resumed. The
cancel scopes inside the generator shouldn't affect code running outside
the generator, but Trio isn't involved in the process of exiting and
reentering the generator, so it would be hard pressed to keep its
cancellation plumbing in the correct state. Nurseries use a cancel scope
internally, so they have all the problems of cancel scopes plus a number of
problems of their own: for example, when the generator is suspended, what
should the background tasks do? There's no good way to suspend them, but if
they keep running and throw an exception, where can that exception be
reraised?

If you have an async generator that wants to ``yield`` from within a
nursery or cancel scope, your best bet is to refactor it to be a separate
task that communicates over memory channels. The ``trio_util`` package
offers a decorator that does this for you transparently.

For more discussion, see Trio issues `264
<https://github.com/python-trio/trio/issues/264>`__ (especially the
discussion in its comment thread) and `638
<https://github.com/python-trio/trio/issues/638>`__.

.. _threads:

Threads (if you must)
---------------------

In a perfect world, all third-party libraries and low-level APIs would be
natively async and integrated into Trio, and all would be happiness and
rainbows.

That world, alas, does not (yet) exist. Until it does, you may find
yourself needing to interact with non-Trio APIs that do rude things like
"blocking".

In acknowledgment of this reality, Trio provides two useful utilities for
working with real, operating-system level, :mod:`threading`\-module-style
threads. First, if you're in Trio but need to push some blocking I/O into a
thread, there's `trio.to_thread.run_sync`. And if you're in a thread and
need to communicate back with Trio, you can use
:func:`trio.from_thread.run` and :func:`trio.from_thread.run_sync`.
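As a quick sketch of the first utility (the full API details follow
below)::

   import time
   import trio

   def blocking_io():
       time.sleep(1)  # stand-in for some blocking call
       return "result"

   async def main():
       result = await trio.to_thread.run_sync(blocking_io)
       print(result)

   trio.run(main)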
.. _worker-thread-limiting:

Trio's philosophy about managing worker threads
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you've used other I/O frameworks, you may have encountered the concept
of a "thread pool", which is most commonly implemented as a fixed size
collection of threads that hang around waiting for jobs to be assigned to
them. These solve two different problems: First, reusing the same threads
over and over is more efficient than starting and stopping a new thread for
every job you need done; basically, the pool acts as a kind of cache for
idle threads. And second, having a fixed size avoids getting into a
situation where 100,000 jobs are submitted simultaneously, and then 100,000
threads are spawned and the system gets overloaded and crashes. Instead,
the N threads start executing the first N jobs, while the other
(100,000 - N) jobs sit in a queue and wait their turn. Which is generally
what you want, and this is how :func:`trio.to_thread.run_sync` works by
default.

The downside of this kind of thread pool is that sometimes, you need more
sophisticated logic for controlling how many threads are run at once. For
example, you might want a policy like "at most 20 threads total, but no
more than 3 of those can be running jobs associated with the same user
account", or you might want a pool whose size is dynamically adjusted over
time in response to system conditions.

It's even possible for a fixed-size policy to cause unexpected deadlocks.
Imagine a situation where we have two different types of blocking jobs that
you want to run in the thread pool, type A and type B. Type A is pretty
simple: it just runs and completes pretty quickly. But type B is more
complicated: it has to stop in the middle and wait for some other work to
finish, and that other work includes running a type A job. Now, suppose you
submit N jobs of type B to the pool. They all start running, and then
eventually end up submitting one or more jobs of type A. But since every
thread in our pool is already busy, the type A jobs don't actually start
running – they just sit in a queue waiting for the type B jobs to finish.
But the type B jobs will never finish, because they're waiting for the type
A jobs. Our system has deadlocked. The ideal solution to this problem is to
avoid having type B jobs in the first place – generally it's better to keep
complex synchronization logic in the main Trio thread. But if you can't do
that, then you need a custom thread allocation policy that tracks separate
limits for different types of jobs, and makes it impossible for type B jobs
to fill up all the slots that type A jobs need to run.

So, we can see that it's important to be able to change the policy
controlling the allocation of threads to jobs. But in many frameworks, this
requires implementing a new thread pool from scratch, which is highly
non-trivial; and if different types of jobs need different policies, then
you may have to create multiple pools, which is inefficient because now you
effectively have two different thread caches that aren't sharing resources.

Trio's solution to this problem is to split worker thread management into
two layers. The lower layer is responsible for taking blocking I/O jobs and
arranging for them to run immediately on some worker thread. It takes care
of solving the tricky concurrency problems involved in managing threads and
is responsible for optimizations like re-using threads, but has no
admission control policy: if you give it 100,000 jobs, it will spawn
100,000 threads. The upper layer is responsible for providing the policy to
make sure that this doesn't happen – but since it *only* has to worry about
policy, it can be much simpler. In fact, all there is to it is the
``limiter=`` argument passed to :func:`trio.to_thread.run_sync`. This
defaults to a global :class:`CapacityLimiter` object, which gives us the
classic fixed-size thread pool behavior. (See
:func:`trio.to_thread.current_default_thread_limiter`.) But if you want to
use "separate pools" for type A jobs and type B jobs, then it's just a
matter of creating two separate :class:`CapacityLimiter` objects and
passing them in when running these jobs.
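A minimal sketch of that idea (the limits here are arbitrary)::

   # Separate admission policies for the two kinds of jobs:
   TYPE_A_LIMITER = trio.CapacityLimiter(10)
   TYPE_B_LIMITER = trio.CapacityLimiter(10)

   async def run_type_a_job(sync_fn, *args):
       return await trio.to_thread.run_sync(
           sync_fn, *args, limiter=TYPE_A_LIMITER
       )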
Or here's an example of defining a custom policy that respects the global thread limit, while making sure that no individual user can use more than 3 threads at a time:: class CombinedLimiter: def __init__(self, first, second): self._first = first self._second = second async def acquire_on_behalf_of(self, borrower): # Acquire both, being careful to clean up properly on error await self._first.acquire_on_behalf_of(borrower) try: await self._second.acquire_on_behalf_of(borrower) except: self._first.release_on_behalf_of(borrower) raise def release_on_behalf_of(self, borrower): # Release both, being careful to clean up properly on error try: self._second.release_on_behalf_of(borrower) finally: self._first.release_on_behalf_of(borrower) # Use a weak value dictionary, so that we don't waste memory holding # limiter objects for users who don't have any worker threads running. USER_LIMITERS = weakref.WeakValueDictionary() MAX_THREADS_PER_USER = 3 def get_user_limiter(user_id): try: return USER_LIMITERS[user_id] except KeyError: per_user_limiter = trio.CapacityLimiter(MAX_THREADS_PER_USER) global_limiter = trio.current_default_thread_limiter() # IMPORTANT: acquire the per_user_limiter before the global_limiter. # If we get 100 jobs for a user at the same time, we want # to only allow 3 of them at a time to even compete for the # global thread slots. combined_limiter = CombinedLimiter(per_user_limiter, global_limiter) USER_LIMITERS[user_id] = combined_limiter return combined_limiter async def run_sync_in_thread_for_user(user_id, sync_fn, *args): combined_limiter = get_user_limiter(user_id) return await trio.to_thread.run_sync(sync_fn, *args, limiter=combined_limiter) .. module:: trio.to_thread .. currentmodule:: trio Putting blocking I/O into worker threads ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autofunction:: trio.to_thread.run_sync .. autofunction:: trio.to_thread.current_default_thread_limiter .. module:: trio.from_thread .. currentmodule:: trio Getting back into the Trio thread from another thread ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autofunction:: trio.from_thread.run .. autofunction:: trio.from_thread.run_sync This will probably be clearer with an example. Here we demonstrate how to spawn a child thread, and then use a :ref:`memory channel ` to send messages between the thread and a Trio task: .. literalinclude:: reference-core/from-thread-example.py Exceptions and warnings ----------------------- .. autoexception:: Cancelled .. autoexception:: TooSlowError .. autoexception:: WouldBlock .. autoexception:: EndOfChannel .. autoexception:: BusyResourceError .. autoexception:: ClosedResourceError .. autoexception:: BrokenResourceError .. autoexception:: RunFinishedError .. autoexception:: TrioInternalError .. autoexception:: TrioDeprecationWarning :show-inheritance: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-io.rst0000644000076500000000000010001200000000000017437 0ustar00quentinwheel.. currentmodule:: trio I/O in Trio =========== .. _abstract-stream-api: The abstract Stream API ----------------------- Trio provides a set of abstract base classes that define a standard interface for unidirectional and bidirectional byte streams. Why is this useful? Because it lets you write generic protocol implementations that can work over arbitrary transports, and easily create complex transport configurations. 
Here's some examples: * :class:`trio.SocketStream` wraps a raw socket (like a TCP connection over the network), and converts it to the standard stream interface. * :class:`trio.SSLStream` is a "stream adapter" that can take any object that implements the :class:`trio.abc.Stream` interface, and convert it into an encrypted stream. In Trio the standard way to speak SSL over the network is to wrap an :class:`~trio.SSLStream` around a :class:`~trio.SocketStream`. * If you spawn a :ref:`subprocess `, you can get a :class:`~trio.abc.SendStream` that lets you write to its stdin, and a :class:`~trio.abc.ReceiveStream` that lets you read from its stdout. If for some reason you wanted to speak SSL to a subprocess, you could use a :class:`StapledStream` to combine its stdin/stdout into a single bidirectional :class:`~trio.abc.Stream`, and then wrap that in an :class:`~trio.SSLStream`:: ssl_context = ssl.create_default_context() ssl_context.check_hostname = False s = SSLStream(StapledStream(process.stdin, process.stdout), ssl_context) * It sometimes happens that you want to connect to an HTTPS server, but you have to go through a web proxy... and the proxy also uses HTTPS. So you end up having to do `SSL-on-top-of-SSL `__. In Trio this is trivial – just wrap your first :class:`~trio.SSLStream` in a second :class:`~trio.SSLStream`:: # Get a raw SocketStream connection to the proxy: s0 = await open_tcp_stream("proxy", 443) # Set up SSL connection to proxy: s1 = SSLStream(s0, proxy_ssl_context, server_hostname="proxy") # Request a connection to the website await s1.send_all(b"CONNECT website:443 / HTTP/1.0\r\n\r\n") await check_CONNECT_response(s1) # Set up SSL connection to the real website. Notice that s1 is # already an SSLStream object, and here we're wrapping a second # SSLStream object around it. s2 = SSLStream(s1, website_ssl_context, server_hostname="website") # Make our request await s2.send_all(b"GET /index.html HTTP/1.0\r\n\r\n") ... * The :mod:`trio.testing` module provides a set of :ref:`flexible in-memory stream object implementations `, so if you have a protocol implementation to test then you can can start two tasks, set up a virtual "socket" connecting them, and then do things like inject random-but-repeatable delays into the connection. Abstract base classes ~~~~~~~~~~~~~~~~~~~~~ .. currentmodule:: trio.abc .. http://docutils.sourceforge.net/docs/ref/rst/directives.html#list-table .. list-table:: Overview: abstract base classes for I/O :widths: auto :header-rows: 1 * - Abstract base class - Inherits from... - Adds these abstract methods... - And these concrete methods. 
- Example implementations * - :class:`AsyncResource` - - :meth:`~AsyncResource.aclose` - ``__aenter__``, ``__aexit__`` - :ref:`async-file-objects` * - :class:`SendStream` - :class:`AsyncResource` - :meth:`~SendStream.send_all`, :meth:`~SendStream.wait_send_all_might_not_block` - - :class:`~trio.testing.MemorySendStream` * - :class:`ReceiveStream` - :class:`AsyncResource` - :meth:`~ReceiveStream.receive_some` - ``__aiter__``, ``__anext__`` - :class:`~trio.testing.MemoryReceiveStream` * - :class:`Stream` - :class:`SendStream`, :class:`ReceiveStream` - - - :class:`~trio.SSLStream` * - :class:`HalfCloseableStream` - :class:`Stream` - :meth:`~HalfCloseableStream.send_eof` - - :class:`~trio.SocketStream`, :class:`~trio.StapledStream` * - :class:`Listener` - :class:`AsyncResource` - :meth:`~Listener.accept` - - :class:`~trio.SocketListener`, :class:`~trio.SSLListener` * - :class:`SendChannel` - :class:`AsyncResource` - :meth:`~SendChannel.send` - - `~trio.MemorySendChannel` * - :class:`ReceiveChannel` - :class:`AsyncResource` - :meth:`~ReceiveChannel.receive` - ``__aiter__``, ``__anext__`` - `~trio.MemoryReceiveChannel` * - `Channel` - `SendChannel`, `ReceiveChannel` - - - .. autoclass:: trio.abc.AsyncResource :members: .. currentmodule:: trio .. autofunction:: aclose_forcefully .. currentmodule:: trio.abc .. autoclass:: trio.abc.SendStream :members: :show-inheritance: .. autoclass:: trio.abc.ReceiveStream :members: :show-inheritance: .. autoclass:: trio.abc.Stream :members: :show-inheritance: .. autoclass:: trio.abc.HalfCloseableStream :members: :show-inheritance: .. currentmodule:: trio.abc .. autoclass:: trio.abc.Listener :members: :show-inheritance: .. autoclass:: trio.abc.SendChannel :members: :show-inheritance: .. autoclass:: trio.abc.ReceiveChannel :members: :show-inheritance: .. autoclass:: trio.abc.Channel :members: :show-inheritance: .. currentmodule:: trio Generic stream tools ~~~~~~~~~~~~~~~~~~~~ Trio currently provides a generic helper for writing servers that listen for connections using one or more :class:`~trio.abc.Listener`\s, and a generic utility class for working with streams. And if you want to test code that's written against the streams interface, you should also check out :ref:`testing-streams` in :mod:`trio.testing`. .. autofunction:: serve_listeners .. autoclass:: StapledStream :members: :show-inheritance: .. _high-level-networking: Sockets and networking ~~~~~~~~~~~~~~~~~~~~~~ The high-level network interface is built on top of our stream abstraction. .. autofunction:: open_tcp_stream .. autofunction:: serve_tcp .. autofunction:: open_ssl_over_tcp_stream .. autofunction:: serve_ssl_over_tcp .. autofunction:: open_unix_socket .. autoclass:: SocketStream :members: :undoc-members: :show-inheritance: .. autoclass:: SocketListener :members: :show-inheritance: .. autofunction:: open_tcp_listeners .. autofunction:: open_ssl_over_tcp_listeners SSL / TLS support ~~~~~~~~~~~~~~~~~ Trio provides SSL/TLS support based on the standard library :mod:`ssl` module. Trio's :class:`SSLStream` and :class:`SSLListener` take their configuration from a :class:`ssl.SSLContext`, which you can create using :func:`ssl.create_default_context` and customize using the other constants and functions in the :mod:`ssl` module. .. warning:: Avoid instantiating :class:`ssl.SSLContext` directly. A newly constructed :class:`~ssl.SSLContext` has less secure defaults than one returned by :func:`ssl.create_default_context`, dramatically so before Python 3.6. 
Instead of using :meth:`ssl.SSLContext.wrap_socket`, you create a :class:`SSLStream`: .. autoclass:: SSLStream :show-inheritance: :members: And if you're implementing a server, you can use :class:`SSLListener`: .. autoclass:: SSLListener :show-inheritance: :members: Some methods on :class:`SSLStream` raise :exc:`NeedHandshakeError` if you call them before the handshake completes: .. autoexception:: NeedHandshakeError .. module:: trio.socket Low-level networking with :mod:`trio.socket` --------------------------------------------- The :mod:`trio.socket` module provides Trio's basic low-level networking API. If you're doing ordinary things with stream-oriented connections over IPv4/IPv6/Unix domain sockets, then you probably want to stick to the high-level API described above. If you want to use UDP, or exotic address families like ``AF_BLUETOOTH``, or otherwise get direct access to all the quirky bits of your system's networking API, then you're in the right place. Top-level exports ~~~~~~~~~~~~~~~~~ Generally, the API exposed by :mod:`trio.socket` mirrors that of the standard library :mod:`socket` module. Most constants (like ``SOL_SOCKET``) and simple utilities (like :func:`~socket.inet_aton`) are simply re-exported unchanged. But there are also some differences, which are described here. First, Trio provides analogues to all the standard library functions that return socket objects; their interface is identical, except that they're modified to return Trio socket objects instead: .. autofunction:: socket .. autofunction:: socketpair .. autofunction:: fromfd .. function:: fromshare(data) Like :func:`socket.fromshare`, but returns a Trio socket object. In addition, there is a new function to directly convert a standard library socket into a Trio socket: .. autofunction:: from_stdlib_socket Unlike :func:`socket.socket`, :func:`trio.socket.socket` is a function, not a class; if you want to check whether an object is a Trio socket, use ``isinstance(obj, trio.socket.SocketType)``. For name lookup, Trio provides the standard functions, but with some changes: .. autofunction:: getaddrinfo .. autofunction:: getnameinfo .. autofunction:: getprotobyname Trio intentionally DOES NOT include some obsolete, redundant, or broken features: * :func:`~socket.gethostbyname`, :func:`~socket.gethostbyname_ex`, :func:`~socket.gethostbyaddr`: obsolete; use :func:`~socket.getaddrinfo` and :func:`~socket.getnameinfo` instead. * :func:`~socket.getservbyport`: obsolete and `buggy `__; instead, do:: _, service_name = await getnameinfo((127.0.0.1, port), NI_NUMERICHOST)) * :func:`~socket.getservbyname`: obsolete and `buggy `__; instead, do:: await getaddrinfo(None, service_name) * :func:`~socket.getfqdn`: obsolete; use :func:`getaddrinfo` with the ``AI_CANONNAME`` flag. * :func:`~socket.getdefaulttimeout`, :func:`~socket.setdefaulttimeout`: instead, use Trio's standard support for :ref:`cancellation`. * On Windows, ``SO_REUSEADDR`` is not exported, because it's a trap: the name is the same as Unix ``SO_REUSEADDR``, but the semantics are `different and extremely broken `__. In the very rare cases where you actually want ``SO_REUSEADDR`` on Windows, then it can still be accessed from the standard library's :mod:`socket` module. Socket objects ~~~~~~~~~~~~~~ .. class:: SocketType .. note:: :class:`trio.socket.SocketType` is an abstract class and cannot be instantiated directly; you get concrete socket objects by calling constructors like :func:`trio.socket.socket`. 
However, you can use it to check if an object is a Trio socket via ``isinstance(obj, trio.socket.SocketType)``. Trio socket objects are overall very similar to the :ref:`standard library socket objects `, with a few important differences: First, and most obviously, everything is made "Trio-style": blocking methods become async methods, and the following attributes are *not* supported: * :meth:`~socket.socket.setblocking`: Trio sockets always act like blocking sockets; if you need to read/write from multiple sockets at once, then create multiple tasks. * :meth:`~socket.socket.settimeout`: see :ref:`cancellation` instead. * :meth:`~socket.socket.makefile`: Python's file-like API is synchronous, so it can't be implemented on top of an async socket. * :meth:`~socket.socket.sendall`: Could be supported, but you're better off using the higher-level :class:`~trio.SocketStream`, and specifically its :meth:`~trio.SocketStream.send_all` method, which also does additional error checking. In addition, the following methods are similar to the equivalents in :func:`socket.socket`, but have some Trio-specific quirks: .. method:: connect :async: Connect the socket to a remote address. Similar to :meth:`socket.socket.connect`, except async. .. warning:: Due to limitations of the underlying operating system APIs, it is not always possible to properly cancel a connection attempt once it has begun. If :meth:`connect` is cancelled, and is unable to abort the connection attempt, then it will: 1. forcibly close the socket to prevent accidental re-use 2. raise :exc:`~trio.Cancelled`. tl;dr: if :meth:`connect` is cancelled then the socket is left in an unknown state – possibly open, and possibly closed. The only reasonable thing to do is to close it. .. method:: is_readable Check whether the socket is readable or not. .. method:: sendfile `Not implemented yet! `__ We also keep track of an extra bit of state, because it turns out to be useful for :class:`trio.SocketStream`: .. attribute:: did_shutdown_SHUT_WR This :class:`bool` attribute is True if you've called ``sock.shutdown(SHUT_WR)`` or ``sock.shutdown(SHUT_RDWR)``, and False otherwise. The following methods are identical to their equivalents in :func:`socket.socket`, except async, and the ones that take address arguments require pre-resolved addresses: * :meth:`~socket.socket.accept` * :meth:`~socket.socket.bind` * :meth:`~socket.socket.recv` * :meth:`~socket.socket.recv_into` * :meth:`~socket.socket.recvfrom` * :meth:`~socket.socket.recvfrom_into` * :meth:`~socket.socket.recvmsg` (if available) * :meth:`~socket.socket.recvmsg_into` (if available) * :meth:`~socket.socket.send` * :meth:`~socket.socket.sendto` * :meth:`~socket.socket.sendmsg` (if available) All methods and attributes *not* mentioned above are identical to their equivalents in :func:`socket.socket`: * :attr:`~socket.socket.family` * :attr:`~socket.socket.type` * :attr:`~socket.socket.proto` * :meth:`~socket.socket.fileno` * :meth:`~socket.socket.listen` * :meth:`~socket.socket.getpeername` * :meth:`~socket.socket.getsockname` * :meth:`~socket.socket.close` * :meth:`~socket.socket.shutdown` * :meth:`~socket.socket.setsockopt` * :meth:`~socket.socket.getsockopt` * :meth:`~socket.socket.dup` * :meth:`~socket.socket.detach` * :meth:`~socket.socket.share` * :meth:`~socket.socket.set_inheritable` * :meth:`~socket.socket.get_inheritable` .. currentmodule:: trio .. 
_async-file-io: Asynchronous filesystem I/O --------------------------- Trio provides built-in facilities for performing asynchronous filesystem operations like reading or renaming a file. Generally, we recommend that you use these instead of Python's normal synchronous file APIs. But the tradeoffs here are somewhat subtle: sometimes people switch to async I/O, and then they're surprised and confused when they find it doesn't speed up their program. The next section explains the theory behind async file I/O, to help you better understand your code's behavior. Or, if you just want to get started, you can :ref:`jump down to the API overview `. Background: Why is async file I/O useful? The answer may surprise you ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Many people expect that switching from synchronous file I/O to async file I/O will always make their program faster. This is not true! If we just look at total throughput, then async file I/O might be faster, slower, or about the same, and it depends in a complicated way on things like your exact patterns of disk access, or how much RAM you have. The main motivation for async file I/O is not to improve throughput, but to **reduce the frequency of latency glitches.** To understand why, you need to know two things. First, right now no mainstream operating system offers a generic, reliable, native API for async file or filesystem operations, so we have to fake it by using threads (specifically, :func:`trio.to_thread.run_sync`). This is cheap but isn't free: on a typical PC, dispatching to a worker thread adds something like ~100 µs of overhead to each operation. ("µs" is pronounced "microseconds", and there are 1,000,000 µs in a second. Note that all the numbers here are going to be rough orders of magnitude to give you a sense of scale; if you need precise numbers for your environment, measure!) .. file.read benchmark is notes-to-self/file-read-latency.py .. Numbers for spinning disks and SSDs are from taking a few random recent reviews from http://www.storagereview.com/best_drives and looking at their "4K Write Latency" test results for "Average MS" and "Max MS": http://www.storagereview.com/samsung_ssd_850_evo_ssd_review http://www.storagereview.com/wd_black_6tb_hdd_review And second, the cost of a disk operation is incredibly bimodal. Sometimes, the data you need is already cached in RAM, and then accessing it is very, very fast – calling :class:`io.FileIO`\'s ``read`` method on a cached file takes on the order of ~1 µs. But when the data isn't cached, then accessing it is much, much slower: the average is ~100 µs for SSDs and ~10,000 µs for spinning disks, and if you look at tail latencies then for both types of storage you'll see cases where occasionally some operation will be 10x or 100x slower than average. And that's assuming your program is the only thing trying to use that disk – if you're on some oversold cloud VM fighting for I/O with other tenants then who knows what will happen. And some operations can require multiple disk accesses. Putting these together: if your data is in RAM then it should be clear that using a thread is a terrible idea – if you add 100 µs of overhead to a 1 µs operation, then that's a 100x slowdown! 
On the other hand, if your data's on a spinning disk, then using a thread is *great* – instead of blocking the main thread and all tasks for 10,000 µs, we only block them for 100 µs and can spend the rest of that time running other tasks to get useful work done, which can effectively be a 100x speedup. But here's the problem: for any individual I/O operation, there's no way to know in advance whether it's going to be one of the fast ones or one of the slow ones, so you can't pick and choose. When you switch to async file I/O, it makes all the fast operations slower, and all the slow operations faster. Is that a win? In terms of overall speed, it's hard to say: it depends what kind of disks you're using and your kernel's disk cache hit rate, which in turn depends on your file access patterns, how much spare RAM you have, the load on your service, ... all kinds of things. If the answer is important to you, then there's no substitute for measuring your code's actual behavior in your actual deployment environment. But what we *can* say is that async disk I/O makes performance much more predictable across a wider range of runtime conditions. **If you're not sure what to do, then we recommend that you use async disk I/O by default,** because it makes your code more robust when conditions are bad, especially with regards to tail latencies; this improves the chances that what your users see matches what you saw in testing. Blocking the main thread stops *all* tasks from running for that time. 10,000 µs is 10 ms, and it doesn't take many 10 ms glitches to start adding up to `real money `__; async disk I/O can help prevent those. Just don't expect it to be magic, and be aware of the tradeoffs. .. _async-file-io-overview: API overview ~~~~~~~~~~~~ If you want to perform general filesystem operations like creating and listing directories, renaming files, or checking file metadata – or if you just want a friendly way to work with filesystem paths – then you want :class:`trio.Path`. It's an asyncified replacement for the standard library's :class:`pathlib.Path`, and provides the same comprehensive set of operations. For reading and writing to files and file-like objects, Trio also provides a mechanism for wrapping any synchronous file-like object into an asynchronous interface. If you have a :class:`trio.Path` object you can get one of these by calling its :meth:`~trio.Path.open` method; or if you know the file's name you can open it directly with :func:`trio.open_file`. Alternatively, if you already have an open file-like object, you can wrap it with :func:`trio.wrap_file` – one case where this is especially useful is to wrap :class:`io.BytesIO` or :class:`io.StringIO` when writing tests. Asynchronous path objects ~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: Path :members: .. _async-file-objects: Asynchronous file objects ~~~~~~~~~~~~~~~~~~~~~~~~~ .. autofunction:: open_file .. autofunction:: wrap_file .. interface:: Asynchronous file interface Trio's asynchronous file objects have an interface that automatically adapts to the object being wrapped. Intuitively, you can mostly treat them like a regular :term:`file object`, except adding an ``await`` in front of any of methods that do I/O. 
The definition of :term:`file object` is a little vague in Python though, so here are the details: * Synchronous attributes/methods: if any of the following attributes or methods are present, then they're re-exported unchanged: ``closed``, ``encoding``, ``errors``, ``fileno``, ``isatty``, ``newlines``, ``readable``, ``seekable``, ``writable``, ``buffer``, ``raw``, ``line_buffering``, ``closefd``, ``name``, ``mode``, ``getvalue``, ``getbuffer``. * Async methods: if any of the following methods are present, then they're re-exported as an async method: ``flush``, ``read``, ``read1``, ``readall``, ``readinto``, ``readline``, ``readlines``, ``seek``, ``tell``, ``truncate``, ``write``, ``writelines``, ``readinto1``, ``peek``, ``detach``. Special notes: * Async file objects implement Trio's :class:`~trio.abc.AsyncResource` interface: you close them by calling :meth:`~trio.abc.AsyncResource.aclose` instead of ``close`` (!!), and they can be used as async context managers. Like all :meth:`~trio.abc.AsyncResource.aclose` methods, the ``aclose`` method on async file objects is guaranteed to close the file before returning, even if it is cancelled or otherwise raises an error. * Using the same async file object from multiple tasks simultaneously: because the async methods on async file objects are implemented using threads, it's only safe to call two of them at the same time from different tasks IF the underlying synchronous file object is thread-safe. You should consult the documentation for the object you're wrapping. For objects returned from :func:`trio.open_file` or :meth:`trio.Path.open`, it depends on whether you open the file in binary mode or text mode: `binary mode files are task-safe/thread-safe, text mode files are not `__. * Async file objects can be used as async iterators to iterate over the lines of the file:: async with await trio.open_file(...) as f: async for line in f: print(line) * The ``detach`` method, if present, returns an async file object. This should include all the attributes exposed by classes in :mod:`io`. But if you're wrapping an object that has other attributes that aren't on the list above, then you can access them via the ``.wrapped`` attribute: .. attribute:: wrapped The underlying synchronous file object. .. _subprocess: Spawning subprocesses --------------------- Trio provides support for spawning other programs as subprocesses, communicating with them via pipes, sending them signals, and waiting for them to exit. The interface for doing so consists of two layers: * :func:`trio.run_process` runs a process from start to finish and returns a :class:`~subprocess.CompletedProcess` object describing its outputs and return value. This is what you should reach for if you want to run a process to completion before continuing, while possibly sending it some input or capturing its output. It is modelled after the standard :func:`subprocess.run` with some additional features and safer defaults. * `trio.open_process` starts a process in the background and returns a `Process` object to let you interact with it. Using it requires a bit more code than `run_process`, but exposes additional capabilities: back-and-forth communication, processing output as soon as it is generated, and so forth. It is modelled after the standard library :class:`subprocess.Popen`. .. 
_subprocess-options: Options for starting subprocesses ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ All of Trio's subprocess APIs accept the numerous keyword arguments used by the standard :mod:`subprocess` module to control the environment in which a process starts and the mechanisms used for communicating with it. These may be passed wherever you see ``**options`` in the documentation below. See the `full list `__ or just the `frequently used ones `__ in the :mod:`subprocess` documentation. (You may need to ``import subprocess`` in order to access constants such as ``PIPE`` or ``DEVNULL``.) Currently, Trio always uses unbuffered byte streams for communicating with a process, so it does not support the ``encoding``, ``errors``, ``universal_newlines`` (alias ``text`` in 3.7+), and ``bufsize`` options. Running a process and waiting for it to finish ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The basic interface for running a subprocess start-to-finish is :func:`trio.run_process`. It always waits for the subprocess to exit before returning, so there's no need to worry about leaving a process running by mistake after you've gone on to do other things. :func:`~trio.run_process` is similar to the standard library :func:`subprocess.run` function, but tries to have safer defaults: with no options, the subprocess's input is empty rather than coming from the user's terminal, and a failure in the subprocess will be propagated as a :exc:`subprocess.CalledProcessError` exception. Of course, these defaults can be changed where necessary. .. autofunction:: trio.run_process Interacting with a process as it runs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you want more control than :func:`~trio.run_process` affords, you can use `trio.open_process` to spawn a subprocess, and then interact with it using the `Process` interface. .. autofunction:: trio.open_process .. autoclass:: trio.Process .. autoattribute:: returncode .. automethod:: aclose .. automethod:: wait .. automethod:: poll .. automethod:: kill .. automethod:: terminate .. automethod:: send_signal .. note:: :meth:`~subprocess.Popen.communicate` is not provided as a method on :class:`~trio.Process` objects; use :func:`~trio.run_process` instead, or write the loop yourself if you have unusual needs. :meth:`~subprocess.Popen.communicate` has quite unusual cancellation behavior in the standard library (on some platforms it spawns a background thread which continues to read from the child process even after the timeout has expired) and we wanted to provide an interface with fewer surprises. .. _subprocess-quoting: Quoting: more than you wanted to know ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The command to run and its arguments usually must be passed to Trio's subprocess APIs as a sequence of strings, where the first element in the sequence specifies the command to run and the remaining elements specify its arguments, one argument per element. This form is used because it avoids potential quoting pitfalls; for example, you can run ``["cp", "-f", source_file, dest_file]`` without worrying about whether ``source_file`` or ``dest_file`` contains spaces. If you only run subprocesses without ``shell=True`` and on UNIX, that's all you need to know about specifying the command. If you use ``shell=True`` or run on Windows, you probably should read the rest of this section to be aware of potential pitfalls. With ``shell=True`` on UNIX, you must specify the command as a single string, which will be passed to the shell as if you'd entered it at an interactive prompt. 
The advantage of this option is that it lets you use shell features like pipes and redirection without writing code to handle them. For example, you can write ``Process("ls | grep some_string", shell=True)``. The disadvantage is that you must account for the shell's quoting rules, generally by wrapping in :func:`shlex.quote` any argument that might contain spaces, quotes, or other shell metacharacters. If you don't do that, your safe-looking ``f"ls | grep {some_string}"`` might end in disaster when invoked with ``some_string = "foo; rm -rf /"``. On Windows, the fundamental API for process spawning (the ``CreateProcess()`` system call) takes a string, not a list, and it's actually up to the child process to decide how it wants to split that string into individual arguments. Since the C language specifies that ``main()`` should take a list of arguments, *most* programs you encounter will follow the rules used by the Microsoft C/C++ runtime. :class:`subprocess.Popen`, and thus also Trio, uses these rules when it converts an argument sequence to a string, and they are `documented `__ alongside the :mod:`subprocess` module. There is no documented Python standard library function that can directly perform that conversion, so even on Windows, you almost always want to pass an argument sequence rather than a string. But if the program you're spawning doesn't split its command line back into individual arguments in the standard way, you might need to pass a string to work around this. (Or you might just be out of luck: as far as I can tell, there's simply no way to pass an argument containing a double-quote to a Windows batch file.) On Windows with ``shell=True``, things get even more chaotic. Now there are two separate sets of quoting rules applied, one by the Windows command shell ``CMD.EXE`` and one by the process being spawned, and they're *different*. (And there's no :func:`shlex.quote` to save you: it uses UNIX-style quoting rules, even on Windows.) Most special characters interpreted by the shell ``&<>()^|`` are not treated as special if the shell thinks they're inside double quotes, but ``%FOO%`` environment variable substitutions still are, and the shell doesn't provide any way to write a double quote inside a double-quoted string. Outside double quotes, any character (including a double quote) can be escaped using a leading ``^``. But since a pipeline is processed by running each command in the pipeline in a subshell, multiple layers of escaping can be needed:: echo ^^^&x | find "x" | find "x" # prints: &x And if you combine pipelines with () grouping, you can need even more levels of escaping:: (echo ^^^^^^^&x | find "x") | find "x" # prints: &x Since process creation takes a single arguments string, ``CMD.EXE``\'s quoting does not influence word splitting, and double quotes are not removed during CMD.EXE's expansion pass. Double quotes are troublesome because CMD.EXE handles them differently from the MSVC runtime rules; in:: prog.exe "foo \"bar\" baz" the program will see one argument ``foo "bar" baz`` but CMD.EXE thinks ``bar\`` is not quoted while ``foo \`` and ``baz`` are. All of this makes it a formidable task to reliably interpolate anything into a ``shell=True`` command line on Windows, and Trio falls back on the :mod:`subprocess` behavior: If you pass a sequence with ``shell=True``, it's quoted in the same way as a sequence with ``shell=False``, and had better not contain any shell metacharacters you weren't planning on. 
Further reading: * https://stackoverflow.com/questions/30620876/how-to-properly-escape-filenames-in-windows-cmd-exe * https://stackoverflow.com/questions/4094699/how-does-the-windows-command-interpreter-cmd-exe-parse-scripts Signals ------- .. currentmodule:: trio .. autofunction:: open_signal_receiver :with: signal_aiter ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/docs/source/reference-lowlevel.rst0000644000076500000000000011151200000000000020670 0ustar00quentinwheel========================================================= Introspecting and extending Trio with ``trio.lowlevel`` ========================================================= .. module:: trio.lowlevel :mod:`trio.lowlevel` contains low-level APIs for introspecting and extending Trio. If you're writing ordinary, everyday code, then you can ignore this module completely. But sometimes you need something a bit lower level. Here are some examples of situations where you should reach for :mod:`trio.lowlevel`: * You want to implement a new :ref:`synchronization primitive ` that Trio doesn't (yet) provide, like a reader-writer lock. * You want to extract low-level metrics to monitor the health of your application. * You want to use a low-level operating system interface that Trio doesn't (yet) provide its own wrappers for, like watching a filesystem directory for changes. * You want to implement an interface for calling between Trio and another event loop within the same process. * You're writing a debugger and want to visualize Trio's task tree. * You need to interoperate with a C library whose API exposes raw file descriptors. You don't need to be scared of :mod:`trio.lowlevel`, as long as you take proper precautions. These are real public APIs, with strictly defined and carefully documented semantics. They're the same tools we use to implement all the nice high-level APIs in the :mod:`trio` namespace. But, be careful. Some of those strict semantics have `nasty big pointy teeth `__. If you make a mistake, Trio may not be able to handle it gracefully; conventions and guarantees that are followed strictly in the rest of Trio do not always apply. When you use this module, it's your job to think about how you're going to handle the tricky cases so you can expose a friendly Trio-style API to your users. Debugging and instrumentation ============================= Trio tries hard to provide useful hooks for debugging and instrumentation. Some are documented above (the nursery introspection attributes, :meth:`trio.Lock.statistics`, etc.). Here are some more. Global statistics ----------------- .. autofunction:: current_statistics The current clock ----------------- .. autofunction:: current_clock .. _instrumentation: Instrument API -------------- The instrument API provides a standard way to add custom instrumentation to the run loop. Want to make a histogram of scheduling latencies, log a stack trace of any task that blocks the run loop for >50 ms, or measure what percentage of your process's running time is spent waiting for I/O? This is the place. The general idea is that at any given moment, :func:`trio.run` maintains a set of "instruments", which are objects that implement the :class:`trio.abc.Instrument` interface. When an interesting event happens, it loops over these instruments and notifies them by calling an appropriate method. The tutorial has :ref:`a simple example of using this for tracing `. Since this hooks into Trio at a rather low level, you do have to be careful. 
The callbacks are run synchronously, and in many cases if they error out then there isn't any plausible way to propagate this exception (for instance, we might be deep in the guts of the exception propagation machinery...). Therefore our `current strategy `__ for handling exceptions raised by instruments is to (a) log an exception to the ``"trio.abc.Instrument"`` logger, which by default prints a stack trace to standard error and (b) disable the offending instrument. You can register an initial list of instruments by passing them to :func:`trio.run`. :func:`add_instrument` and :func:`remove_instrument` let you add and remove instruments at runtime. .. autofunction:: add_instrument .. autofunction:: remove_instrument And here's the interface to implement if you want to build your own :class:`~trio.abc.Instrument`: .. autoclass:: trio.abc.Instrument :members: The tutorial has a :ref:`fully-worked example ` of defining a custom instrument to log Trio's internal scheduling decisions. Low-level I/O primitives ======================== Different environments expose different low-level APIs for performing async I/O. :mod:`trio.lowlevel` exposes these APIs in a relatively direct way, so as to allow maximum power and flexibility for higher level code. However, this means that the exact API provided may vary depending on what system Trio is running on. Universally available API ------------------------- All environments provide the following functions: .. function:: wait_readable(obj) :async: Block until the kernel reports that the given object is readable. On Unix systems, ``obj`` must either be an integer file descriptor, or else an object with a ``.fileno()`` method which returns an integer file descriptor. Any kind of file descriptor can be passed, though the exact semantics will depend on your kernel. For example, this probably won't do anything useful for on-disk files. On Windows systems, ``obj`` must either be an integer ``SOCKET`` handle, or else an object with a ``.fileno()`` method which returns an integer ``SOCKET`` handle. File descriptors aren't supported, and neither are handles that refer to anything besides a ``SOCKET``. :raises trio.BusyResourceError: if another task is already waiting for the given socket to become readable. :raises trio.ClosedResourceError: if another task calls :func:`notify_closing` while this function is still working. .. function:: wait_writable(obj) :async: Block until the kernel reports that the given object is writable. See `wait_readable` for the definition of ``obj``. :raises trio.BusyResourceError: if another task is already waiting for the given socket to become writable. :raises trio.ClosedResourceError: if another task calls :func:`notify_closing` while this function is still working. .. function:: notify_closing(obj) Call this before closing a file descriptor (on Unix) or socket (on Windows). This will cause any `wait_readable` or `wait_writable` calls on the given object to immediately wake up and raise `~trio.ClosedResourceError`. This doesn't actually close the object – you still have to do that yourself afterwards. Also, you want to be careful to make sure no new tasks start waiting on the object in between when you call this and when it's actually closed. So to close something properly, you usually want to do these steps in order: 1. Explicitly mark the object as closed, so that any new attempts to use it will abort before they start. 2. Call `notify_closing` to wake up any already-existing users. 3. Actually close the object. 
It's also possible to do them in a different order if that's more convenient, *but only if* you make sure not to have any checkpoints in between the steps. This way they all happen in a single atomic step, so other tasks won't be able to tell what order they happened in anyway. Unix-specific API ----------------- `FdStream` supports wrapping Unix files (such as a pipe or TTY) as a stream. If you have two different file descriptors for sending and receiving, and want to bundle them together into a single bidirectional `~trio.abc.Stream`, then use `trio.StapledStream`:: bidirectional_stream = trio.StapledStream( trio.lowlevel.FdStream(write_fd), trio.lowlevel.FdStream(read_fd) ) .. autoclass:: FdStream :show-inheritance: :members: Kqueue-specific API ------------------- TODO: these are implemented, but are currently more of a sketch than anything real. See `#26 `__. .. function:: current_kqueue() .. function:: wait_kevent(ident, filter, abort_func) :async: .. function:: monitor_kevent(ident, filter) :with: queue Windows-specific API -------------------- .. function:: WaitForSingleObject(handle) :async: Async and cancellable variant of `WaitForSingleObject `__. Windows only. :arg handle: A Win32 object handle, as a Python integer. :raises OSError: If the handle is invalid, e.g. when it is already closed. TODO: these are implemented, but are currently more of a sketch than anything real. See `#26 `__ and `#52 `__. .. function:: register_with_iocp(handle) .. function:: wait_overlapped(handle, lpOverlapped) :async: .. function:: current_iocp() .. function:: monitor_completion_key() :with: queue Global state: system tasks and run-local variables ================================================== .. autoclass:: RunVar .. autofunction:: spawn_system_task Trio tokens =========== .. autoclass:: TrioToken() :members: .. autofunction:: current_trio_token Spawning threads ================ .. autofunction:: start_thread_soon Safer KeyboardInterrupt handling ================================ Trio's handling of control-C is designed to balance usability and safety. On the one hand, there are sensitive regions (like the core scheduling loop) where it's simply impossible to handle arbitrary :exc:`KeyboardInterrupt` exceptions while maintaining our core correctness invariants. On the other, if the user accidentally writes an infinite loop, we do want to be able to break out of that. Our solution is to install a default signal handler which checks whether it's safe to raise :exc:`KeyboardInterrupt` at the place where the signal is received. If so, then we do; otherwise, we schedule a :exc:`KeyboardInterrupt` to be delivered to the main task at the next available opportunity (similar to how :exc:`~trio.Cancelled` is delivered). So that's great, but – how do we know whether we're in one of the sensitive parts of the program or not? This is determined on a function-by-function basis. By default: - The top-level function in regular user tasks is unprotected. - The top-level function in system tasks is protected. - If a function doesn't specify otherwise, then it inherits the protection state of its caller. This means you only need to override the defaults at places where you transition from protected code to unprotected code or vice-versa. These transitions are accomplished using two function decorators: .. 
function:: disable_ki_protection() :decorator: Decorator that marks the given regular function, generator function, async function, or async generator function as unprotected against :exc:`KeyboardInterrupt`, i.e., the code inside this function *can* be rudely interrupted by :exc:`KeyboardInterrupt` at any moment. If you have multiple decorators on the same function, then this should be at the bottom of the stack (closest to the actual function). An example of where you'd use this is in implementing something like :func:`trio.from_thread.run`, which uses :meth:`TrioToken.run_sync_soon` to get into the Trio thread. :meth:`~TrioToken.run_sync_soon` callbacks are run with :exc:`KeyboardInterrupt` protection enabled, and :func:`trio.from_thread.run` takes advantage of this to safely set up the machinery for sending a response back to the original thread, but then uses :func:`disable_ki_protection` when entering the user-provided function. .. function:: enable_ki_protection() :decorator: Decorator that marks the given regular function, generator function, async function, or async generator function as protected against :exc:`KeyboardInterrupt`, i.e., the code inside this function *won't* be rudely interrupted by :exc:`KeyboardInterrupt`. (Though if it contains any :ref:`checkpoints `, then it can still receive :exc:`KeyboardInterrupt` at those. This is considered a polite interruption.) .. warning:: Be very careful to only use this decorator on functions that you know will either exit in bounded time, or else pass through a checkpoint regularly. (Of course all of your functions should have this property, but if you mess it up here then you won't even be able to use control-C to escape!) If you have multiple decorators on the same function, then this should be at the bottom of the stack (closest to the actual function). An example of where you'd use this is on the ``__exit__`` implementation for something like a :class:`~trio.Lock`, where a poorly-timed :exc:`KeyboardInterrupt` could leave the lock in an inconsistent state and cause a deadlock. .. autofunction:: currently_ki_protected Sleeping and waking =================== Wait queue abstraction ---------------------- .. autoclass:: ParkingLot :members: :undoc-members: Low-level checkpoint functions ------------------------------ .. autofunction:: checkpoint The next two functions are used *together* to make up a checkpoint: .. autofunction:: checkpoint_if_cancelled .. autofunction:: cancel_shielded_checkpoint These are commonly used in cases where you have an operation that might-or-might-not block, and you want to implement Trio's standard checkpoint semantics. Example:: async def operation_that_maybe_blocks(): await checkpoint_if_cancelled() try: ret = attempt_operation() except BlockingIOError: # need to block and then retry, which we do below pass else: # operation succeeded, finish the checkpoint then return await cancel_shielded_checkpoint() return ret while True: await wait_for_operation_to_be_ready() try: return attempt_operation() except BlockingIOError: pass This logic is a bit convoluted, but accomplishes all of the following: * Every successful execution path passes through a checkpoint (assuming that ``wait_for_operation_to_be_ready`` is an unconditional checkpoint) * Our :ref:`cancellation semantics ` say that :exc:`~trio.Cancelled` should only be raised if the operation didn't happen. Using :func:`cancel_shielded_checkpoint` on the early-exit branch accomplishes this. 
* On the path where we do end up blocking, we don't pass through any schedule points before that, which avoids some unnecessary work. * Avoids implicitly chaining the :exc:`BlockingIOError` with any errors raised by ``attempt_operation`` or ``wait_for_operation_to_be_ready``, by keeping the ``while True:`` loop outside of the ``except BlockingIOError:`` block. These functions can also be useful in other situations. For example, when :func:`trio.to_thread.run_sync` schedules some work to run in a worker thread, it blocks until the work is finished (so it's a schedule point), but by default it doesn't allow cancellation. So to make sure that the call always acts as a checkpoint, it calls :func:`checkpoint_if_cancelled` before starting the thread. Low-level blocking ------------------ .. autofunction:: wait_task_rescheduled .. autoclass:: Abort .. autofunction:: reschedule Here's an example lock class implemented using :func:`wait_task_rescheduled` directly. This implementation has a number of flaws, including lack of fairness, O(n) cancellation, missing error checking, failure to insert a checkpoint on the non-blocking path, etc. If you really want to implement your own lock, then you should study the implementation of :class:`trio.Lock` and use :class:`ParkingLot`, which handles some of these issues for you. But this does serve to illustrate the basic structure of the :func:`wait_task_rescheduled` API:: class NotVeryGoodLock: def __init__(self): self._blocked_tasks = collections.deque() self._held = False async def acquire(self): while self._held: task = trio.lowlevel.current_task() self._blocked_tasks.append(task) def abort_fn(_): self._blocked_tasks.remove(task) return trio.lowlevel.Abort.SUCCEEDED await trio.lowlevel.wait_task_rescheduled(abort_fn) self._held = True def release(self): self._held = False if self._blocked_tasks: woken_task = self._blocked_tasks.popleft() trio.lowlevel.reschedule(woken_task) Task API ======== .. autofunction:: current_root_task() .. autofunction:: current_task() .. class:: Task() A :class:`Task` object represents a concurrent "thread" of execution. It has no public constructor; Trio internally creates a :class:`Task` object for each call to ``nursery.start(...)`` or ``nursery.start_soon(...)``. Its public members are mostly useful for introspection and debugging: .. attribute:: name String containing this :class:`Task`\'s name. Usually the name of the function this :class:`Task` is running, but can be overridden by passing ``name=`` to ``start`` or ``start_soon``. .. attribute:: coro This task's coroutine object. Example usage: extracting a stack trace:: import traceback def walk_coro_stack(coro): while coro is not None: if hasattr(coro, "cr_frame"): # A real coroutine yield coro.cr_frame, coro.cr_frame.f_lineno coro = coro.cr_await else: # A generator decorated with @types.coroutine yield coro.gi_frame, coro.gi_frame.f_lineno coro = coro.gi_yieldfrom def print_stack_for_task(task): ss = traceback.StackSummary.extract(walk_coro_stack(task.coro)) print("".join(ss.format())) .. attribute:: context This task's :class:`contextvars.Context` object. .. autoattribute:: parent_nursery .. autoattribute:: eventual_parent_nursery .. autoattribute:: child_nurseries .. attribute:: custom_sleep_data Trio doesn't assign this variable any meaning, except that it sets it to ``None`` whenever a task is rescheduled. It can be used to share data between the different tasks involved in putting a task to sleep and then waking it up again. 
(See :func:`wait_task_rescheduled` for details.) .. _guest-mode: Using "guest mode" to run Trio on top of other event loops ========================================================== What is "guest mode"? --------------------- An event loop acts as a central coordinator to manage all the IO happening in your program. Normally, that means that your application has to pick one event loop, and use it for everything. But what if you like Trio, but also need to use a framework like `Qt `__ or `PyGame `__ that has its own event loop? Then you need some way to run both event loops at once. It is possible to combine event loops, but the standard approaches all have significant downsides: - **Polling:** this is where you use a `busy-loop `__ to manually check for IO on both event loops many times per second. This adds latency, and wastes CPU time and electricity. - **Pluggable IO backends:** this is where you reimplement one of the event loop APIs on top of the other, so you effectively end up with just one event loop. This requires a significant amount of work for each pair of event loops you want to integrate, and different backends inevitably end up with inconsistent behavior, forcing users to program against the least-common-denominator. And if the two event loops expose different feature sets, it may not even be possible to implement one in terms of the other. - **Running the two event loops in separate threads:** This works, but most event loop APIs aren't thread-safe, so in this approach you need to keep careful track of which code runs on which event loop, and remember to use explicit inter-thread messaging whenever you interact with the other loop – or else risk obscure race conditions and data corruption. That's why Trio offers a fourth option: **guest mode**. Guest mode lets you execute `trio.run` on top of some other "host" event loop, like Qt. Its advantages are: - Efficiency: guest mode is event-driven instead of using a busy-loop, so it has low latency and doesn't waste electricity. - No need to think about threads: your Trio code runs in the same thread as the host event loop, so you can freely call sync Trio APIs from the host, and call sync host APIs from Trio. For example, if you're making a GUI app with Qt as the host loop, then making a `cancel button `__ and connecting it to a `trio.CancelScope` is as easy as writing:: # Trio code can create Qt objects without any special ceremony... my_cancel_button = QPushButton("Cancel") # ...and Qt can call back to Trio just as easily my_cancel_button.clicked.connect(my_cancel_scope.cancel) (For async APIs, it's not that simple, but you can use sync APIs to build explicit bridges between the two worlds, e.g. by passing async functions and their results back and forth through queues.) - Consistent behavior: guest mode uses the same code as regular Trio: the same scheduler, same IO code, same everything. So you get the full feature set and everything acts the way you expect. - Simple integration and broad compatibility: pretty much every event loop offers some threadsafe "schedule a callback" operation, and that's all you need to use it as a host loop. Really? How is that possible? ----------------------------- .. note:: You can use guest mode without reading this section. It's included for those who enjoy understanding how things work. All event loops have the same basic structure. They loop through two operations, over and over: 1. 
Wait for the operating system to notify them that something interesting has happened, like data arriving on a socket or a timeout passing. They do this by invoking a platform-specific ``sleep_until_something_happens()`` system call – ``select``, ``epoll``, ``kqueue``, ``GetQueuedCompletionEvents``, etc. 2. Run all the user tasks that care about whatever happened, then go back to step 1. The problem here is step 1. Two different event loops on the same thread can take turns running user tasks in step 2, but when they're idle and nothing is happening, they can't both invoke their own ``sleep_until_something_happens()`` function at the same time. The "polling" and "pluggable backend" strategies solve this by hacking the loops so both step 1s can run at the same time in the same thread. Keeping everything in one thread is great for step 2, but the step 1 hacks create problems. The "separate threads" strategy solves this by moving both steps into separate threads. This makes step 1 work, but the downside is that now the user tasks in step 2 are running separate threads as well, so users are forced to deal with inter-thread coordination. The idea behind guest mode is to combine the best parts of each approach: we move Trio's step 1 into a separate worker thread, while keeping Trio's step 2 in the main host thread. This way, when the application is idle, both event loops do their ``sleep_until_something_happens()`` at the same time in their own threads. But when the app wakes up and your code is actually running, it all happens in a single thread. The threading trickiness is all handled transparently inside Trio. Concretely, we unroll Trio's internal event loop into a chain of callbacks, and as each callback finishes, it schedules the next callback onto the host loop or a worker thread as appropriate. So the only thing the host loop has to provide is a way to schedule a callback onto the main thread from a worker thread. Coordinating between Trio and the host loop does add some overhead. The main cost is switching in and out of the background thread, since this requires cross-thread messaging. This is cheap (on the order of a few microseconds, assuming your host loop is implemented efficiently), but it's not free. But, there's a nice optimization we can make: we only *need* the thread when our ``sleep_until_something_happens()`` call actually sleeps, that is, when the Trio part of your program is idle and has nothing to do. So before we switch into the worker thread, we double-check whether we're idle, and if not, then we skip the worker thread and jump directly to step 2. This means that your app only pays the extra thread-switching penalty at moments when it would otherwise be sleeping, so it should have minimal effect on your app's overall performance. The total overhead will depend on your host loop, your platform, your application, etc. But we expect that in most cases, apps running in guest mode should only be 5-10% slower than the same code using `trio.run`. If you find that's not true for your app, then please let us know and we'll see if we can fix it! .. _guest-run-implementation: Implementing guest mode for your favorite event loop ---------------------------------------------------- Let's walk through what you need to do to integrate Trio's guest mode with your favorite event loop. Treat this section like a checklist. **Getting started:** The first step is to get something basic working. 
.. _guest-run-implementation:

Implementing guest mode for your favorite event loop
----------------------------------------------------

Let's walk through what you need to do to integrate Trio's guest mode with your favorite event loop. Treat this section like a checklist.

**Getting started:** The first step is to get something basic working. Here's a minimal example of running Trio on top of asyncio that you can use as a model::

   import asyncio, trio

   # A tiny Trio program
   async def trio_main():
       for _ in range(5):
           print("Hello from Trio!")
           # This is inside Trio, so we have to use Trio APIs
           await trio.sleep(1)
       return "trio done!"

   # The code to run it as a guest inside asyncio
   async def asyncio_main():
       asyncio_loop = asyncio.get_running_loop()

       def run_sync_soon_threadsafe(fn):
           asyncio_loop.call_soon_threadsafe(fn)

       def done_callback(trio_main_outcome):
           print(f"Trio program ended with: {trio_main_outcome}")

       # This is where the magic happens:
       trio.lowlevel.start_guest_run(
           trio_main,
           run_sync_soon_threadsafe=run_sync_soon_threadsafe,
           done_callback=done_callback,
       )

       # Let the host loop run for a while to give trio_main time to
       # finish. (WARNING: This is a hack. See below for better
       # approaches.)
       #
       # This function is in asyncio, so we have to use asyncio APIs.
       await asyncio.sleep(10)

   asyncio.run(asyncio_main())

You can see we're using asyncio-specific APIs to start up a loop, and then we call `trio.lowlevel.start_guest_run`. This function is very similar to `trio.run`, and takes all the same arguments. But it has two differences:

First, instead of blocking until ``trio_main`` has finished, it schedules ``trio_main`` to start running on top of the host loop, and then returns immediately. So ``trio_main`` is running in the background – that's why we have to sleep and give it time to finish.

And second, it requires two extra keyword arguments: ``run_sync_soon_threadsafe``, and ``done_callback``.

For ``run_sync_soon_threadsafe``, we need a function that takes a synchronous callback, and schedules it to run on your host loop. And this function needs to be "threadsafe" in the sense that you can safely call it from any thread. So you need to figure out how to write a function that does that using your host loop's API. For asyncio, this is easy because `~asyncio.loop.call_soon_threadsafe` does exactly what we need; for your loop, it might be more or less complicated.

For ``done_callback``, you pass in a function that Trio will automatically invoke when the Trio run finishes, so you know it's done and what happened. For this basic starting version, we just print the result; in the next section we'll discuss better alternatives.

At this stage you should be able to run a simple Trio program inside your host loop. Now we'll turn that prototype into something solid.

**Loop lifetimes:** One of the trickiest things in most event loops is shutting down correctly. And having two event loops makes this even harder!

If you can, we recommend following this pattern:

- Start up your host loop
- Immediately call `start_guest_run` to start Trio
- When Trio finishes and your ``done_callback`` is invoked, shut down the host loop
- Make sure that nothing else shuts down your host loop

This way, your two event loops have the same lifetime, and your program automatically exits when your Trio function finishes.

Here's how we'd extend our asyncio example to implement this pattern:
.. code-block:: python3
   :emphasize-lines: 8-11,19-22

   # Improved version, that shuts down properly after Trio finishes
   async def asyncio_main():
       asyncio_loop = asyncio.get_running_loop()

       def run_sync_soon_threadsafe(fn):
           asyncio_loop.call_soon_threadsafe(fn)

       # Revised 'done' callback: set a Future
       done_fut = asyncio_loop.create_future()
       def done_callback(trio_main_outcome):
           done_fut.set_result(trio_main_outcome)

       trio.lowlevel.start_guest_run(
           trio_main,
           run_sync_soon_threadsafe=run_sync_soon_threadsafe,
           done_callback=done_callback,
       )

       # Wait for the guest run to finish
       trio_main_outcome = await done_fut

       # Pass through the return value or exception from the guest run
       return trio_main_outcome.unwrap()

And then you can encapsulate all this machinery in a utility function that exposes a `trio.run`-like API, but runs both loops together::

   def trio_run_with_asyncio(trio_main, *args, **trio_run_kwargs):
       async def asyncio_main():
           # same as above
           ...

       return asyncio.run(asyncio_main())

Technically, it is possible to use other patterns. But there are some important limitations you have to respect:

- **You must let the Trio program run to completion.** Many event loops let you stop the event loop at any point, and any pending callbacks/tasks/etc. just... don't run. Trio follows a more structured system, where you can cancel things, but the code always runs to completion, so ``finally`` blocks run, resources are cleaned up, etc. If you stop your host loop early, before the ``done_callback`` is invoked, then that cuts off the Trio run in the middle without a chance to clean up. This can leave your code in an inconsistent state, and will definitely leave Trio's internals in an inconsistent state, which will cause errors if you try to use Trio again in that thread. Some programs need to be able to quit at any time, for example in response to a GUI window being closed or a user selecting "Quit" from a menu. In these cases, we recommend wrapping your whole program in a `trio.CancelScope`, and cancelling it when you want to quit – see the sketch after this list.

- Each host loop can only have one `start_guest_run` at a time. If you try to start a second one, you'll get an error. If you need to run multiple Trio functions at the same time, then start up a single Trio run, open a nursery, and then start your functions as child tasks in that nursery.

- Unless you or your host loop register a handler for `signal.SIGINT` before starting Trio (this is not common), Trio will take over delivery of `KeyboardInterrupt`\s. And since Trio can't tell which host code is safe to interrupt, it will only deliver `KeyboardInterrupt` into the Trio part of your code. This is fine if your program is set up to exit when the Trio part exits, because the `KeyboardInterrupt` will propagate out of Trio and then trigger the shutdown of your host loop, which is just what you want.

Given these constraints, we think the simplest approach is to always start and stop the two loops together.
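Here's a minimal sketch of that cancel-scope pattern. The names ``install_quit_handler`` and ``run_the_real_program`` are placeholders for whatever your host loop and application actually provide:

.. code-block:: python3

   import trio

   async def trio_main():
       with trio.CancelScope() as quit_scope:
           # Hand the scope's cancel method to the host loop's "Quit"
           # action – e.g. in Qt, something like:
           #   quit_action.triggered.connect(quit_scope.cancel)
           install_quit_handler(quit_scope.cancel)  # placeholder
           await run_the_real_program()             # placeholder
       # Cancellation unwinds every child task, running their ``finally``
       # blocks, before the done_callback fires and the host loop shuts
       # down.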
**Signal management:** `"Signals" `__ are a low-level inter-process communication primitive. When you hit control-C to kill a program, that uses a signal. Signal handling in Python has `a lot of moving parts `__. One of those parts is `signal.set_wakeup_fd`, which event loops use to make sure that they wake up when a signal arrives so they can respond to it. (If you've ever had an event loop ignore you when you hit control-C, it was probably because it wasn't using `signal.set_wakeup_fd` correctly.)

But, only one event loop can use `signal.set_wakeup_fd` at a time. And in guest mode that can cause problems: Trio and the host loop might start fighting over who's using `signal.set_wakeup_fd`.

Some event loops, like asyncio, won't work correctly unless they win this fight. Fortunately, Trio is a little less picky: as long as *someone* makes sure that the program wakes up when a signal arrives, it should work correctly. So if your host loop wants `signal.set_wakeup_fd`, then you should disable Trio's `signal.set_wakeup_fd` support, and then both loops will work correctly.

On the other hand, if your host loop doesn't use `signal.set_wakeup_fd`, then the only way to make everything work correctly is to *enable* Trio's `signal.set_wakeup_fd` support.

By default, Trio assumes that your host loop doesn't use `signal.set_wakeup_fd`. It does try to detect when this creates a conflict with the host loop, and print a warning – but unfortunately, by the time it detects it, the damage has already been done. So if you're getting this warning, then you should disable Trio's `signal.set_wakeup_fd` support by passing ``host_uses_signal_set_wakeup_fd=True`` to `start_guest_run`.

If you aren't seeing any warnings with your initial prototype, you're *probably* fine. But the only way to be certain is to check your host loop's source. For example, asyncio may or may not use `signal.set_wakeup_fd` depending on the Python version and operating system.

**A small optimization:** Finally, consider a small optimization. Some event loops offer two versions of their "call this function soon" API: one that can be used from any thread, and one that can only be used from the event loop thread, with the latter being cheaper. For example, asyncio has both `~asyncio.loop.call_soon_threadsafe` and `~asyncio.loop.call_soon`.

If you have a loop like this, then you can also pass a ``run_sync_soon_not_threadsafe=...`` kwarg to `start_guest_run`, and Trio will automatically use it when appropriate.

If your loop doesn't have a split like this, then don't worry about it; ``run_sync_soon_not_threadsafe=`` is optional. (If it's not passed, then Trio will just use your threadsafe version in all cases.)
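For example, here's how the asyncio wiring from the earlier examples might be extended to take advantage of this – a sketch, assuming the same ``asyncio_loop``, ``trio_main``, and ``done_callback`` as above:

.. code-block:: python3

   def run_sync_soon_threadsafe(fn):
       asyncio_loop.call_soon_threadsafe(fn)

   def run_sync_soon_not_threadsafe(fn):
       # Cheaper, but only legal from the host loop's own thread
       asyncio_loop.call_soon(fn)

   trio.lowlevel.start_guest_run(
       trio_main,
       run_sync_soon_threadsafe=run_sync_soon_threadsafe,
       run_sync_soon_not_threadsafe=run_sync_soon_not_threadsafe,
       done_callback=done_callback,
   )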
**That's it!** If you've followed all these steps, you should now have a cleanly-integrated hybrid event loop. Go make some cool GUIs/games/whatever!

Limitations
-----------

In general, almost all Trio features should work in guest mode. The exception is features which rely on Trio having a complete picture of everything that your program is doing, since obviously, it can't control the host loop or see what it's doing.

Custom clocks can be used in guest mode, but they only affect Trio timeouts, not host loop timeouts. And the :ref:`autojump clock ` and related `trio.testing.wait_all_tasks_blocked` can technically be used in guest mode, but they'll only take Trio tasks into account when deciding whether to jump the clock or whether all tasks are blocked.

Reference
---------

.. autofunction:: start_guest_run

.. _live-coroutine-handoff:

Handing off live coroutine objects between coroutine runners
============================================================

Internally, Python's async/await syntax is built around the idea of "coroutine objects" and "coroutine runners". A coroutine object represents the state of an async callstack. But by itself, this is just a static object that sits there. If you want it to do anything, you need a coroutine runner to push it forward. Every Trio task has an associated coroutine object (see :data:`Task.coro`), and the Trio scheduler acts as their coroutine runner.

But of course, Trio isn't the only coroutine runner in Python – :mod:`asyncio` has one, other event loops have them, you can even define your own.

And in some very, very unusual circumstances, it even makes sense to transfer a single coroutine object back and forth between different coroutine runners. That's what this section is about. This is an *extremely* exotic use case, and assumes a lot of expertise in how Python async/await works internally. For motivating examples, see `trio-asyncio issue #42 `__, and `trio issue #649 `__. For more details on how coroutines work, we recommend André Caron's `A tale of event loops `__, or going straight to `PEP 492 `__ for the full details.

.. autofunction:: permanently_detach_coroutine_object

.. autofunction:: temporarily_detach_coroutine_object

.. autofunction:: reattach_detached_coroutine_object

trio-0.19.0/docs/source/reference-testing/across-realtime.out

Clock where time passes at 100 years per second:

task2: sleeping for 5 years
task1: sleeping for 1 year
task1: woke up; clock says I've slept 1.0365006048232317 years
task1: sleeping for 1 year, 100 times
task2: woke up; clock says I've slept 5.0572111969813704 years
task2: sleeping for 500 years
task1: slept 104.77677842136472 years total
task2: slept 505.25014589075 years total
Total real time elapsed: 5.053582429885864 seconds

Clock where time automatically skips past the boring parts:

task2: sleeping for 5 years
task1: sleeping for 1 year
task1: woke up; clock says I've slept 1.0 years
task1: sleeping for 1 year, 100 times
task2: woke up; clock says I've slept 5.0 years
task2: sleeping for 500 years
task1: slept 101.0 years total
task2: slept 505.0 years total
Total real time elapsed: 0.019298791885375977 seconds

trio-0.19.0/docs/source/reference-testing/across-realtime.py

# across-realtime.py

import time
import trio
import trio.testing

YEAR = 365 * 24 * 60 * 60  # seconds

async def task1():
    start = trio.current_time()
    print("task1: sleeping for 1 year")
    await trio.sleep(YEAR)
    duration = trio.current_time() - start
    print("task1: woke up; clock says I've slept {} years"
          .format(duration / YEAR))
    print("task1: sleeping for 1 year, 100 times")
    for _ in range(100):
        await trio.sleep(YEAR)
    duration = trio.current_time() - start
    print("task1: slept {} years total".format(duration / YEAR))

async def task2():
    start = trio.current_time()
    print("task2: sleeping for 5 years")
    await trio.sleep(5 * YEAR)
    duration = trio.current_time() - start
    print("task2: woke up; clock says I've slept {} years"
          .format(duration / YEAR))
    print("task2: sleeping for 500 years")
    await trio.sleep(500 * YEAR)
    duration = trio.current_time() - start
    print("task2: slept {} years total".format(duration / YEAR))

async def main():
    async with trio.open_nursery() as nursery:
        nursery.start_soon(task1)
        nursery.start_soon(task2)

def run_example(clock):
    real_start = time.perf_counter()
    trio.run(main, clock=clock)
    real_duration = time.perf_counter() - real_start
    print("Total real time elapsed: {} seconds".format(real_duration))

print("Clock where time passes at 100 years per second:\n")
run_example(trio.testing.MockClock(rate=100 * YEAR))

print("\nClock where time automatically skips past the boring parts:\n")
run_example(trio.testing.MockClock(autojump_threshold=0))

trio-0.19.0/docs/source/reference-testing.rst

Testing made easier with ``trio.testing``
=========================================

.. module:: trio.testing

The :mod:`trio.testing` module provides various utilities to make it easier to test Trio code. Unlike the other submodules in the :mod:`trio` namespace, :mod:`trio.testing` is *not* automatically imported when you do ``import trio``; you must ``import trio.testing`` explicitly.

Test harness integration
------------------------

.. decorator:: trio_test

.. _testing-time:

Time and timeouts
-----------------

:class:`trio.testing.MockClock` is a :class:`~trio.abc.Clock` with a few tricks up its sleeve to help you efficiently test code involving timeouts:

* By default, it starts at time 0, and clock time only advances when you explicitly call :meth:`~MockClock.jump`. This provides an extremely controllable clock for testing.

* You can set :attr:`~MockClock.rate` to 1.0 if you want it to start running in real time like a regular clock. You can stop and start the clock within a test. You can set :attr:`~MockClock.rate` to 10.0 to make clock time pass at 10x real speed (so e.g. ``await trio.sleep(10)`` returns after 1 second).

* But even more interestingly, you can set :attr:`~MockClock.autojump_threshold` to zero or a small value, and then it will watch the execution of the run loop, and any time things have settled down and everyone's waiting for a timeout, it jumps the clock forward to that timeout. In many cases this allows natural-looking code involving timeouts to be automatically run at near full CPU utilization with no changes. (Thanks to `fluxcapacitor `__ for this awesome idea.)

* And of course these can be mixed and matched at will.

Regardless of these shenanigans, from "inside" Trio the passage of time still seems normal so long as you restrict yourself to Trio's time functions (see :ref:`time-and-clocks`).

Below is an example demonstrating two different ways of making time pass quickly. Notice how in both cases, the two tasks keep a consistent view of reality and events happen in the expected order, despite being wildly divorced from real time:

.. literalinclude:: reference-testing/across-realtime.py

Output:

.. literalinclude:: reference-testing/across-realtime.out
   :language: none
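If you just want the flavor of the autojump behavior on its own, here's a minimal sketch:

.. code-block:: python3

   import trio
   import trio.testing

   async def main():
       print(trio.current_time())  # 0.0 – MockClock starts at time 0
       await trio.sleep(3600)      # returns almost instantly in real time...
       print(trio.current_time())  # 3600.0 – ...but the clock jumped ahead

   trio.run(main, clock=trio.testing.MockClock(autojump_threshold=0))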
.. autoclass:: MockClock
   :members:

Inter-task ordering
-------------------

.. autoclass:: Sequencer

.. autofunction:: wait_all_tasks_blocked

.. _testing-streams:

Streams
-------

Connecting to an in-process socket server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autofunction:: open_stream_to_socket_listener

.. _virtual-streams:

Virtual, controllable streams
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

One particularly challenging problem when testing network protocols is making sure that your implementation can handle data whose flow gets broken up in weird ways and arrives with weird timings: localhost connections tend to be much better behaved than real networks, so if you only test on localhost then you might get bitten later.

To help you out, Trio provides some fully in-memory implementations of the stream interfaces (see :ref:`abstract-stream-api`), that let you write all kinds of interestingly evil tests.

There are a few pieces here, so here's how they fit together:

:func:`memory_stream_pair` gives you a pair of connected, bidirectional streams. It's like :func:`socket.socketpair`, but without any involvement from that pesky operating system and its networking stack.

To build a bidirectional stream, :func:`memory_stream_pair` uses two unidirectional streams. It gets these by calling :func:`memory_stream_one_way_pair`.

:func:`memory_stream_one_way_pair`, in turn, is implemented using the low-ish level classes :class:`MemorySendStream` and :class:`MemoryReceiveStream`. These are implementations of (you guessed it) :class:`trio.abc.SendStream` and :class:`trio.abc.ReceiveStream` that, on their own, aren't attached to anything – "sending" and "receiving" just put data into and get data out of a private internal buffer that each object owns. They also have some interesting hooks you can set, that let you customize the behavior of their methods. This is where you can insert the evil, if you want it. :func:`memory_stream_one_way_pair` takes advantage of these hooks in a relatively boring way: it just sets it up so that when you call ``send_all``, or when you close the send stream, then it automatically triggers a call to :func:`memory_stream_pump`, which is a convenience function that takes data out of a :class:`MemorySendStream`\'s buffer and puts it into a :class:`MemoryReceiveStream`\'s buffer. But that's just the default – you can replace this with whatever arbitrary behavior you want.

Trio also provides some specialized functions for testing completely **un**\buffered streams: :func:`lockstep_stream_one_way_pair` and :func:`lockstep_stream_pair`. These aren't customizable, but they do exhibit an extreme kind of behavior that's good at catching out edge cases in protocol implementations.
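Here's a small sketch of the happy path, using :func:`memory_stream_pair` to shuttle bytes between the two ends with no network involved:

.. code-block:: python3

   import trio
   import trio.testing

   async def main():
       left, right = trio.testing.memory_stream_pair()
       # send_all on one end automatically pumps data to the other end
       await left.send_all(b"ping")
       assert await right.receive_some() == b"ping"
       await right.send_all(b"pong")
       assert await left.receive_some() == b"pong"

   trio.run(main)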
API details
~~~~~~~~~~~

.. autoclass:: MemorySendStream
   :members:

.. autoclass:: MemoryReceiveStream
   :members:

.. autofunction:: memory_stream_pump

.. autofunction:: memory_stream_one_way_pair

.. autofunction:: memory_stream_pair

.. autofunction:: lockstep_stream_one_way_pair

.. autofunction:: lockstep_stream_pair

.. _testing-custom-streams:

Testing custom stream implementations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Trio also provides some functions to help you test your custom stream implementations:

.. autofunction:: check_one_way_stream

.. autofunction:: check_two_way_stream

.. autofunction:: check_half_closeable_stream

.. _virtual-network-hooks:

Virtual networking for testing
------------------------------

In the previous section you learned how to use virtual in-memory streams to test protocols that are written against Trio's :class:`~trio.abc.Stream` abstraction. But what if you have more complicated networking code – the kind of code that makes connections to multiple hosts, or opens a listening socket, or sends UDP packets?

Trio doesn't itself provide a virtual in-memory network implementation for testing – but the :mod:`trio.socket` module does provide the hooks you need to write your own! And if you're interested in helping implement a reusable virtual network for testing, then `please get in touch `__.

Note that these APIs are actually in :mod:`trio.socket` and :mod:`trio.abc`, but we document them here because they're primarily intended for testing.

.. currentmodule:: trio.socket

.. autofunction:: trio.socket.set_custom_hostname_resolver

.. currentmodule:: trio.abc

.. autoclass:: trio.abc.HostnameResolver
   :members:

.. currentmodule:: trio.socket

.. autofunction:: trio.socket.set_custom_socket_factory

.. currentmodule:: trio.abc

.. autoclass:: trio.abc.SocketFactory
   :members:

.. currentmodule:: trio.testing

Testing checkpoints
-------------------

.. autofunction:: assert_checkpoints
   :with:

.. autofunction:: assert_no_checkpoints
   :with:

trio-0.19.0/docs/source/releasing.rst

.. _releasing:

Preparing a release
-------------------

Things to do for releasing:

* announce intent to release on gitter

* check for open issues / pull requests that really should be in the release

  + come back when these are done

  + … or ignore them and do another release next week

* check for deprecations "long enough ago" (two months or two releases, whichever is longer)

  + remove affected code

* Do the actual release changeset

  + bump version number

    - increment as per Semantic Versioning rules

    - remove ``+dev`` tag from version number

  + Run ``towncrier``

    - review history change

    - ``git rm`` changes

  + commit

* push to your personal repository

* create pull request to ``python-trio/trio``'s "master" branch

* verify that all checks succeeded

* tag with vVERSION, push tag on ``python-trio/trio`` (not on your personal repository)

* push to PyPI::

    git clean -xdf   # maybe run 'git clean -xdn' first to see what it will delete
    python3 setup.py sdist bdist_wheel
    twine upload dist/*

* update version number in the same pull request

  + add ``+dev`` tag to the end

* merge the release pull request

* announce on gitter

trio-0.19.0/docs/source/tutorial/echo-client.py

# echo-client.py

import sys
import trio

# arbitrary, but:
# - must be in between 1024 and 65535
# - can't be in use by some other program on your computer
# - must match what we set in our echo server
PORT = 12345

async def sender(client_stream):
    print("sender: started!")
    while True:
        data = b"async can sometimes be confusing, but I believe in you!"
        print(f"sender: sending {data!r}")
        await client_stream.send_all(data)
        await trio.sleep(1)

async def receiver(client_stream):
    print("receiver: started!")
    async for data in client_stream:
        print(f"receiver: got data {data!r}")
    print("receiver: connection closed")
    sys.exit()

async def parent():
    print(f"parent: connecting to 127.0.0.1:{PORT}")
    client_stream = await trio.open_tcp_stream("127.0.0.1", PORT)
    async with client_stream:
        async with trio.open_nursery() as nursery:
            print("parent: spawning sender...")
            nursery.start_soon(sender, client_stream)
            print("parent: spawning receiver...")
            nursery.start_soon(receiver, client_stream)

trio.run(parent)

trio-0.19.0/docs/source/tutorial/echo-server.py

# echo-server.py

import trio
from itertools import count

# Port is arbitrary, but:
# - must be in between 1024 and 65535
# - can't be in use by some other program on your computer
# - must match what we set in our echo client
PORT = 12345

CONNECTION_COUNTER = count()

async def echo_server(server_stream):
    # Assign each connection a unique number to make our debug prints easier
    # to understand when there are multiple simultaneous connections.
    ident = next(CONNECTION_COUNTER)
    print(f"echo_server {ident}: started")
    try:
        async for data in server_stream:
            print(f"echo_server {ident}: received data {data!r}")
            await server_stream.send_all(data)
        print(f"echo_server {ident}: connection closed")
    # FIXME: add discussion of MultiErrors to the tutorial, and use
    # MultiError.catch here. (Not important in this case, but important if the
    # server code uses nurseries internally.)
    except Exception as exc:
        # Unhandled exceptions will propagate into our parent and take
        # down the whole program. If the exception is KeyboardInterrupt,
        # that's what we want, but otherwise maybe not...
        print(f"echo_server {ident}: crashed: {exc!r}")

async def main():
    await trio.serve_tcp(echo_server, PORT)

# We could also just write 'trio.run(trio.serve_tcp, echo_server, PORT)', but
# real programs almost always end up doing other stuff too and then we'd have
# to go back and factor it out into a separate function anyway. So it's
# simplest to just make it a standalone function from the beginning.
trio.run(main)

trio-0.19.0/docs/source/tutorial/tasks-intro.py

# tasks-intro.py

import trio

async def child1():
    print("  child1: started! sleeping now...")
    await trio.sleep(1)
    print("  child1: exiting!")

async def child2():
    print("  child2: started! sleeping now...")
    await trio.sleep(1)
    print("  child2: exiting!")

async def parent():
    print("parent: started!")
    async with trio.open_nursery() as nursery:
        print("parent: spawning child1...")
        nursery.start_soon(child1)

        print("parent: spawning child2...")
        nursery.start_soon(child2)

        print("parent: waiting for children to finish...")
        # -- we exit the nursery block here --
    print("parent: all done!")

trio.run(parent)

trio-0.19.0/docs/source/tutorial/tasks-with-trace.py

# tasks-with-trace.py

import trio

async def child1():
    print("  child1: started! sleeping now...")
    await trio.sleep(1)
    print("  child1: exiting!")

async def child2():
    print("  child2 started! sleeping now...")
    await trio.sleep(1)
    print("  child2 exiting!")

async def parent():
    print("parent: started!")
    async with trio.open_nursery() as nursery:
        print("parent: spawning child1...")
        nursery.start_soon(child1)

        print("parent: spawning child2...")
        nursery.start_soon(child2)

        print("parent: waiting for children to finish...")
        # -- we exit the nursery block here --
    print("parent: all done!")

class Tracer(trio.abc.Instrument):
    def before_run(self):
        print("!!! run started")

    def _print_with_task(self, msg, task):
        # repr(task) is perhaps more useful than task.name in general,
        # but in context of a tutorial the extra noise is unhelpful.
        print(f"{msg}: {task.name}")

    def task_spawned(self, task):
        self._print_with_task("### new task spawned", task)

    def task_scheduled(self, task):
        self._print_with_task("### task scheduled", task)

    def before_task_step(self, task):
        self._print_with_task(">>> about to run one step of task", task)

    def after_task_step(self, task):
        self._print_with_task("<<< task step finished", task)

    def task_exited(self, task):
        self._print_with_task("### task exited", task)

    def before_io_wait(self, timeout):
        if timeout:
            print(f"### waiting for I/O for up to {timeout} seconds")
        else:
            print("### doing a quick check for I/O")
        self._sleep_time = trio.current_time()

    def after_io_wait(self, timeout):
        duration = trio.current_time() - self._sleep_time
        print(f"### finished I/O check (took {duration} seconds)")

    def after_run(self):
        print("!!! run finished")

trio.run(parent, instruments=[Tracer()])

trio-0.19.0/docs/source/tutorial.rst

Tutorial
========

.. The Trio tutorial

   the spiel about what a concurrent library is

   Traditionally Python is a synchronous language, and we assume you're familiar with that kind of programming, but don't assume any knowledge of concurrent programming. (And even if you are familiar with concurrent programming using another library like Twisted or asyncio, or another language like Go or Erlang, then you should still probably read this, because Trio is different.)

   Trio turns Python into a concurrent language. It takes the core async/await syntax introduced in 3.5, and uses it to add three new pieces of semantics:

   - cancel scopes: a generic system for managing timeouts and cancellation
   - nurseries: which let your program do multiple things at the same time
   - MultiErrors: for when multiple things go wrong at once

   Of course it also provides a complete suite of APIs for doing networking, file I/O, using worker threads,

   We'll go through and explain each of these

   simple cancellation applied to an HTTP request

   with fail_after(5):
       response = await asks.get("https://httpbin.org/delay/1")
       print(response)

   and then again with /delay/10

   (note that asks needs cpython 3.6 though. maybe just for one async generator?)
   value of async/await: show you where the cancellation exceptions can happen -- see pillar re: explicit cancel points (also briefly discuss cancel scopes and cancel() + the query APIs, fail_after vs move_on_after, current_time() and current_effective_deadline())

   simple multi-task concurrency applied to do multiple HTTP requests

   adding a per-request timeout

   adding a timeout on the whole thing -- demonstrating wrapping cancel around a nursery

   pillars: implicit concurrency and exception raising and explicit schedule points

   example: the scheduling trace

   implicit concurrency -> use echo example to introduce networking API, and show how to do explicit concurrency and demonstrate start()

   then point out that you can just use serve_tcp()

   exceptions and MultiError

   example: catch-all logging in our echo server

   review of the three (or four) core language extensions and how they fit together

   and

.. currentmodule:: trio

Welcome to the Trio tutorial! Trio is a modern Python library for writing asynchronous applications – that is, programs that want to do multiple things at the same time with parallelized I/O, like a web spider that fetches lots of pages in parallel, a web server juggling lots of simultaneous downloads... that sort of thing. Here we'll try to give a gentle introduction to asynchronous programming with Trio.

We assume that you're familiar with Python in general, but don't worry – we don't assume you know anything about asynchronous programming or Python's new ``async/await`` feature.

Also, unlike many ``async/await`` tutorials, we assume that your goal is to *use* Trio to write interesting programs, so we won't go into the nitty-gritty details of how ``async/await`` is implemented inside the Python interpreter. The word "coroutine" is never mentioned. The fact is, you really don't *need* to know any of that stuff unless you want to *implement* a library like Trio, so we leave it out (though we'll throw in a few links for those who want to dig deeper).

Okay, ready? Let's get started.

Before you begin
----------------

1. Make sure you're using Python 3.6 or newer.

2. ``python3 -m pip install --upgrade trio`` (or on Windows, maybe ``py -3 -m pip install --upgrade trio`` – `details `__)

3. Can you ``import trio``? If so then you're good to go!

If you get lost or confused...
------------------------------

...then we want to know! We have a friendly `chat channel `__, you can ask questions `using the "python-trio" tag on StackOverflow `__, or just `file a bug `__ (if our documentation is confusing, that's our fault, and we want to fix it!).

Async functions
---------------

Python 3.5 added a major new feature: async functions. Using Trio is all about writing async functions, so let's start there.

An async function is defined like a normal function, except you write ``async def`` instead of ``def``::

   # A regular function
   def regular_double(x):
       return 2 * x

   # An async function
   async def async_double(x):
       return 2 * x

"Async" is short for "asynchronous"; we'll sometimes refer to regular functions like ``regular_double`` as "synchronous functions", to distinguish them from async functions.

From a user's point of view, there are two differences between an async function and a regular function:

1. To call an async function, you have to use the ``await`` keyword. So instead of writing ``regular_double(3)``, you write ``await async_double(3)``.

2. You can't use the ``await`` keyword inside the body of a regular function.
   If you try it, you'll get a syntax error::

      def print_double(x):
          print(await async_double(x))  # <-- SyntaxError here

   But inside an async function, ``await`` is allowed::

      async def print_double(x):
          print(await async_double(x))  # <-- OK!

Now, let's think about the consequences here: if you need ``await`` to call an async function, and only async functions can use ``await``... here's a little table:

======================== ================================== ===================
If a function like this  wants to call a function like this is it gonna happen?
======================== ================================== ===================
sync                     sync                               ✓
sync                     async                              **NOPE**
async                    sync                               ✓
async                    async                              ✓
======================== ================================== ===================

So in summary: As a user, the entire advantage of async functions over regular functions is that async functions have a superpower: they can call other async functions.

This immediately raises two questions: how, and why? Specifically:

When your Python program starts up, it's running regular old sync code. So there's a chicken-and-egg problem: once we're running an async function we can call other async functions, but *how* do we call that first async function?

And, if the only reason to write an async function is that it can call other async functions, *why* on earth would we ever use them in the first place? I mean, as superpowers go this seems a bit pointless. Wouldn't it be simpler to just... not use any async functions at all?

This is where an async library like Trio comes in. It provides two things:

1. A runner function, which is a special *synchronous* function that takes and calls an *asynchronous* function. In Trio, this is ``trio.run``::

      import trio

      async def async_double(x):
          return 2 * x

      trio.run(async_double, 3)  # returns 6

   So that answers the "how" part.

2. A bunch of useful async functions – in particular, functions for doing I/O. So that answers the "why": these functions are async, and they're useful, so if you want to use them, you have to write async code. If you think keeping track of these ``async`` and ``await`` things is annoying, then too bad – you've got no choice in the matter! (Well, OK, you could just not use Trio. That's a legitimate option. But it turns out that the ``async/await`` stuff is actually a good thing, for reasons we'll discuss a little bit later.)

Here's an example function that uses :func:`trio.sleep`. (:func:`trio.sleep` is like :func:`time.sleep`, but with more async.)

.. code-block:: python3

   import trio

   async def double_sleep(x):
       await trio.sleep(2 * x)

   trio.run(double_sleep, 3)  # does nothing for 6 seconds then returns

.. _async-sandwich:

So it turns out our ``async_double`` function is actually a bad example. I mean, it works, it's fine, there's nothing *wrong* with it, but it's pointless: it could just as easily be written as a regular function, and it would be more useful that way. ``double_sleep`` is a much more typical example: we have to make it async, because it calls another async function. The end result is a kind of async sandwich, with Trio on both sides and our code in the middle:

.. code-block:: none

   trio.run -> double_sleep -> trio.sleep

This "sandwich" structure is typical for async code; in general, it looks like:

.. code-block:: none

   trio.run -> [async function] -> ... -> [async function] -> trio.whatever

It's exactly the functions on the path between :func:`trio.run` and ``trio.whatever`` that have to be async. Trio provides the async bread, and then your code makes up the async sandwich's tasty async filling. Other functions (e.g., helpers you call along the way) should generally be regular, non-async functions.
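Here's a small sketch of that division of labor – the helper stays synchronous because nothing inside it needs ``await``:

.. code-block:: python3

   import trio

   def format_greeting(name):      # plain helper: no await inside, stays sync
       return f"Hello, {name}!"

   async def greet(name):          # on the path to trio.sleep, so async
       print(format_greeting(name))
       await trio.sleep(1)

   trio.run(greet, "Trio")         # the sync bread on the outside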
Warning: don't forget that ``await``!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now would be a good time to open up a Python prompt and experiment a little with writing simple async functions and running them with ``trio.run``.

At some point in this process, you'll probably write some code like this, that tries to call an async function but leaves out the ``await``::

   import time
   import trio

   async def broken_double_sleep(x):
       print("*yawn* Going to sleep")
       start_time = time.perf_counter()

       # Whoops, we forgot the 'await'!
       trio.sleep(2 * x)

       sleep_time = time.perf_counter() - start_time
       print(f"Woke up after {sleep_time:.2f} seconds, feeling well rested!")

   trio.run(broken_double_sleep, 3)

You might think that Python would raise an error here, like it does for other kinds of mistakes we sometimes make when calling a function. Like, if we forgot to pass :func:`trio.sleep` its required argument, then we would get a nice :exc:`TypeError` saying so. But unfortunately, if you forget an ``await``, you don't get that. What you actually get is:

.. code-block:: none

   >>> trio.run(broken_double_sleep, 3)
   *yawn* Going to sleep
   Woke up after 0.00 seconds, feeling well rested!
   __main__:4: RuntimeWarning: coroutine 'sleep' was never awaited
   >>>

This is clearly broken – 0.00 seconds is not long enough to feel well rested! Yet the code acts like it succeeded – no exception was raised. The only clue that something went wrong is that it prints ``RuntimeWarning: coroutine 'sleep' was never awaited``. Also, the exact place where the warning is printed might vary, because it depends on the whims of the garbage collector. If you're using PyPy, you might not even get a warning at all until the next GC collection runs:

.. code-block:: none

   # On PyPy:
   >>>> trio.run(broken_double_sleep, 3)
   *yawn* Going to sleep
   Woke up after 0.00 seconds, feeling well rested!
   >>>> # what the ... ?? not even a warning!

   >>>> # but forcing a garbage collection gives us a warning:
   >>>> import gc
   >>>> gc.collect()
   /home/njs/pypy-3.8-nightly/lib-python/3/importlib/_bootstrap.py:191: RuntimeWarning: coroutine 'sleep' was never awaited
     if _module_locks.get(name) is wr:  # XXX PyPy fix?
   0
   >>>>

(If you can't see the warning above, try scrolling right.)

Forgetting an ``await`` like this is an *incredibly common mistake*. You will mess this up. Everyone does. And Python will not help you as much as you'd hope 😞. The key thing to remember is: if you see the magic words ``RuntimeWarning: coroutine '...' was never awaited``, then this *always* means that you made the mistake of leaving out an ``await`` somewhere, and you should ignore all the other error messages you see and go fix that first, because there's a good chance the other stuff is just collateral damage. I'm not even sure what all that other junk in the PyPy output is. Fortunately I don't need to know, I just need to fix my function!

("I thought you said you weren't going to mention coroutines!" Yes, well, *I* didn't mention coroutines, Python did. Take it up with Guido! But seriously, this is unfortunately a place where the internal implementation details do leak out a bit.)

Why does this happen? In Trio, every time we use ``await`` it's to call an async function, and every time we call an async function we use ``await``.
But Python's trying to keep its options open for other libraries that are *ahem* a little less organized about things. So while for our purposes we can think of ``await trio.sleep(...)`` as a single piece of syntax, Python thinks of it as two things: first a function call that returns this weird "coroutine" object::

   >>> trio.sleep(3)

and then that object gets passed to ``await``, which actually runs the function. So if you forget ``await``, then two bad things happen: your function doesn't actually get called, and you get a "coroutine" object where you might have been expecting something else, like a number::

   >>> async_double(3) + 1
   TypeError: unsupported operand type(s) for +: 'coroutine' and 'int'

If you didn't already mess this up naturally, then give it a try on purpose: try writing some code with a missing ``await``, or an extra ``await``, and see what you get. This way you'll be prepared for when it happens to you for real.

And remember: watch out for ``RuntimeWarning: coroutine '...' was never awaited``; it means you need to find and fix your missing ``await``.

Okay, let's see something cool already
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

So now we've started using Trio, but so far all we've learned to do is write functions that print things and sleep for various lengths of time. Interesting enough, but we could just as easily have done that with :func:`time.sleep`. ``async/await`` is useless!

Well, not really. Trio has one more trick up its sleeve, that makes async functions more powerful than regular functions: it can run multiple async functions *at the same time*. Here's an example:

.. _tutorial-example-tasks-intro:

.. literalinclude:: tutorial/tasks-intro.py
   :linenos:

There's a lot going on in here, so we'll take it one step at a time. In the first part, we define two async functions ``child1`` and ``child2``. These should look familiar from the last section:

.. literalinclude:: tutorial/tasks-intro.py
   :linenos:
   :lineno-match:
   :start-at: async def child1
   :end-at: child2: exiting

Next, we define ``parent`` as an async function that's going to call ``child1`` and ``child2`` concurrently:

.. literalinclude:: tutorial/tasks-intro.py
   :linenos:
   :lineno-match:
   :start-at: async def parent
   :end-at: all done!

It does this by using a mysterious ``async with`` statement to create a "nursery", and then "spawns" ``child1`` and ``child2`` into the nursery.

Let's start with this ``async with`` thing. It's actually pretty simple. In regular Python, a statement like ``with someobj: ...`` instructs the interpreter to call ``someobj.__enter__()`` at the beginning of the block, and to call ``someobj.__exit__()`` at the end of the block. We call ``someobj`` a "context manager". An ``async with`` does exactly the same thing, except that where a regular ``with`` statement calls regular methods, an ``async with`` statement calls async methods: at the start of the block it does ``await someobj.__aenter__()`` and at the end of the block it does ``await someobj.__aexit__()``. In this case we call ``someobj`` an "async context manager". So in short: ``with`` blocks are a shorthand for calling some functions, and since with async/await Python now has two kinds of functions, it also needs two kinds of ``with`` blocks. That's all there is to it! If you understand async functions, then you understand ``async with``.
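To make that concrete, here's a sketch of a hand-written async context manager – ``AsyncTimer`` is an illustrative name, not a Trio API:

.. code-block:: python3

   import trio

   class AsyncTimer:
       async def __aenter__(self):
           # called by ``async with`` at the start of the block
           self.start = trio.current_time()

       async def __aexit__(self, exc_type, exc, tb):
           # called by ``async with`` at the end of the block
           print("block took", trio.current_time() - self.start, "seconds")

   async def main():
       async with AsyncTimer():
           await trio.sleep(1)

   trio.run(main)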
.. note:: This example doesn't use them, but while we're here we might as well mention the one other piece of new syntax that async/await added: ``async for``. It's basically the same idea as ``async with`` versus ``with``: An ``async for`` loop is just like a ``for`` loop, except that where a ``for`` loop does ``iterator.__next__()`` to fetch the next item, an ``async for`` does ``await async_iterator.__anext__()``. Now you understand all of async/await. Basically just remember that it involves making sandwiches and sticking the word "async" in front of everything, and you'll do fine.

Now that we understand ``async with``, let's look at ``parent`` again:

.. literalinclude:: tutorial/tasks-intro.py
   :linenos:
   :lineno-match:
   :start-at: async def parent
   :end-at: all done!

There are only 4 lines of code that really do anything here. On line 17, we use :func:`trio.open_nursery` to get a "nursery" object, and then inside the ``async with`` block we call ``nursery.start_soon`` twice, on lines 19 and 22. There are actually two ways to call an async function: the first one is the one we already saw, using ``await async_fn()``; the new one is ``nursery.start_soon(async_fn)``: it asks Trio to start running this async function, *but then returns immediately without waiting for the function to finish*. So after our two calls to ``nursery.start_soon``, ``child1`` and ``child2`` are now running in the background.

And then at line 25, the comment line, we hit the end of the ``async with`` block, and the nursery's ``__aexit__`` function runs. What this does is force ``parent`` to stop here and wait for all the children in the nursery to exit. This is why you have to use ``async with`` to get a nursery: it gives us a way to make sure that the child calls can't run away and get lost. One reason this is important is that if there's a bug or other problem in one of the children, and it raises an exception, then it lets us propagate that exception into the parent; in many other frameworks, exceptions like this are just discarded. Trio never discards exceptions.

Ok! Let's try running it and see what we get:

.. code-block:: none

   parent: started!
   parent: spawning child1...
   parent: spawning child2...
   parent: waiting for children to finish...
     child2: started! sleeping now...
     child1: started! sleeping now...
       [... 1 second passes ...]
     child1: exiting!
     child2: exiting!
   parent: all done!

(Your output might have the order of the "started" and/or "exiting" lines swapped compared to mine.)

Notice that ``child1`` and ``child2`` both start together and then both exit together. And, even though we made two calls to ``trio.sleep(1)``, the program finished in just one second total. So it looks like ``child1`` and ``child2`` really are running at the same time!

Now, if you're familiar with programming using threads, this might look familiar – and that's intentional. But it's important to realize that *there are no threads here*. All of this is happening in a single thread. To remind ourselves of this, we use slightly different terminology: instead of spawning two "threads", we say that we spawned two "tasks". There are two differences between tasks and threads: (1) many tasks can take turns running on a single thread, and (2) with threads, the Python interpreter/operating system can switch which thread is running whenever they feel like it; with tasks, we can only switch at certain designated places we call :ref:`"checkpoints" `. In the next section, we'll dig into what this means.
.. _tutorial-instrument-example:

Task switching illustrated
~~~~~~~~~~~~~~~~~~~~~~~~~~

The big idea behind async/await-based libraries like Trio is to run lots of tasks simultaneously on a single thread by switching between them at appropriate places – so for example, if we're implementing a web server, then one task could be sending an HTTP response at the same time as another task is waiting for new connections. If all you want to do is use Trio, then you don't need to understand all the nitty-gritty detail of how this switching works – but it's very useful to have at least a general intuition about what Trio is doing "under the hood" when your code is executing. To help build that intuition, let's look more closely at how Trio ran our example from the last section.

Fortunately, Trio provides a :ref:`rich set of tools for inspecting and debugging your programs `. Here we want to watch :func:`trio.run` at work, which we can do by writing a class we'll call ``Tracer``, which implements Trio's :class:`~trio.abc.Instrument` interface. Its job is to log various events as they happen:

.. literalinclude:: tutorial/tasks-with-trace.py
   :pyobject: Tracer

Then we re-run our example program from the previous section, but this time we pass :func:`trio.run` a ``Tracer`` object:

.. literalinclude:: tutorial/tasks-with-trace.py
   :start-at: trio.run

This generates a *lot* of output, so we'll go through it one step at a time.

First, there's a bit of chatter while Trio gets ready to run our code. Most of this is irrelevant to us for now, but in the middle you can see that Trio has created a task for the ``__main__.parent`` function, and "scheduled" it (i.e., made a note that it should be run soon):

.. code-block:: none

   $ python3 tutorial/tasks-with-trace.py
   !!! run started
   ### new task spawned:
   ### task scheduled:
   ### doing a quick check for I/O
   ### finished I/O check (took 1.1122087016701698e-05 seconds)
   >>> about to run one step of task:
   ### new task spawned:
   ### task scheduled:
   ### new task spawned: __main__.parent
   ### task scheduled: __main__.parent
   <<< task step finished:
   ### doing a quick check for I/O
   ### finished I/O check (took 6.4980704337358475e-06 seconds)

Once the initial housekeeping is done, Trio starts running the ``parent`` function, and you can see ``parent`` creating the two child tasks. Then it hits the end of the ``async with`` block, and pauses:

.. code-block:: none

   >>> about to run one step of task: __main__.parent
   parent: started!
   parent: spawning child1...
   ### new task spawned: __main__.child1
   ### task scheduled: __main__.child1
   parent: spawning child2...
   ### new task spawned: __main__.child2
   ### task scheduled: __main__.child2
   parent: waiting for children to finish...
   <<< task step finished: __main__.parent

Control then goes back to :func:`trio.run`, which logs a bit more internal chatter:

.. code-block:: none

   >>> about to run one step of task:
   <<< task step finished:
   ### doing a quick check for I/O
   ### finished I/O check (took 5.476875230669975e-06 seconds)

And then gives the two child tasks a chance to run:

.. code-block:: none

   >>> about to run one step of task: __main__.child2
     child2 started! sleeping now...
   <<< task step finished: __main__.child2
   >>> about to run one step of task: __main__.child1
     child1: started! sleeping now...
   <<< task step finished: __main__.child1

Each task runs until it hits the call to :func:`trio.sleep`, and then suddenly we're back in :func:`trio.run` deciding what to run next. How does this happen?
The secret is that :func:`trio.run` and :func:`trio.sleep` work together to make it happen: :func:`trio.sleep` has access to some special magic that lets it pause itself, so it sends a note to :func:`trio.run` requesting to be woken again after 1 second, and then suspends the task. And once the task is suspended, Python gives control back to :func:`trio.run`, which decides what to do next. (If this sounds similar to the way that generators can suspend execution by doing a ``yield``, then that's not a coincidence: inside the Python interpreter, there's a lot of overlap between the implementation of generators and async functions.)

.. note:: You might wonder whether you can mix-and-match primitives from different async libraries. For example, could we use :func:`trio.run` together with :func:`asyncio.sleep`? The answer is no, we can't, and the paragraph above explains why: the two sides of our async sandwich have a private language they use to talk to each other, and different libraries use different languages. So if you try to call :func:`asyncio.sleep` from inside a :func:`trio.run`, then Trio will get very confused indeed and probably blow up in some dramatic way.

Only async functions have access to the special magic for suspending a task, so only async functions can cause the program to switch to a different task. What this means is that if a call *doesn't* have an ``await`` on it, then you know that it *can't* be a place where your task will be suspended. This makes tasks much `easier to reason about `__ than threads, because there are far fewer ways that tasks can be interleaved with each other and stomp on each others' state. (For example, in Trio a statement like ``a += 1`` is always atomic – even if ``a`` is some arbitrarily complicated custom object!) Trio also makes some :ref:`further guarantees beyond that `, but that's the big one.
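Here's a small sketch of that atomicity guarantee in action: two tasks increment a shared counter without any locks, because a task switch can only happen at an ``await``:

.. code-block:: python3

   import trio

   counter = 0

   async def bump(n):
       global counter
       for _ in range(n):
           counter += 1          # no await here, so nothing can interrupt it
           await trio.sleep(0)   # explicit checkpoint; switching happens here

   async def main():
       async with trio.open_nursery() as nursery:
           nursery.start_soon(bump, 1000)
           nursery.start_soon(bump, 1000)
       print(counter)  # always 2000

   trio.run(main)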
And now you also know why ``parent`` had to use an ``async with`` to open the nursery: if we had used a regular ``with`` block, then it wouldn't have been able to pause at the end and wait for the children to finish; we need our cleanup function to be async, which is exactly what ``async with`` gives us.

Now, back to our execution point. To recap: at this point ``parent`` is waiting on ``child1`` and ``child2``, and both children are sleeping. So :func:`trio.run` checks its notes, and sees that there's nothing to be done until those sleeps finish – unless possibly some external I/O event comes in. If that happened, then it might give us something to do. Of course we aren't doing any I/O here so it won't happen, but in other situations it could. So next it calls an operating system primitive to put the whole process to sleep:

.. code-block:: none

   ### waiting for I/O for up to 0.9999009938910604 seconds

And in fact no I/O does arrive, so one second later we wake up again, and Trio checks its notes again. At this point it checks the current time, compares it to the notes that :func:`trio.sleep` sent saying when the two child tasks should be woken up again, and realizes that they've slept for long enough, so it schedules them to run soon:

.. code-block:: none

   ### finished I/O check (took 1.0006483688484877 seconds)
   ### task scheduled: __main__.child1
   ### task scheduled: __main__.child2

And then the children get to run, and this time they run to completion. Remember how ``parent`` is waiting for them to finish? Notice how ``parent`` gets scheduled when the first child exits:

.. code-block:: none

   >>> about to run one step of task: __main__.child1
     child1: exiting!
   ### task scheduled: __main__.parent
   ### task exited: __main__.child1
   <<< task step finished: __main__.child1
   >>> about to run one step of task: __main__.child2
     child2 exiting!
   ### task exited: __main__.child2
   <<< task step finished: __main__.child2

Then, after another check for I/O, ``parent`` wakes up. The nursery cleanup code notices that all its children have exited, and lets the nursery block finish. And then ``parent`` makes a final print and exits:

.. code-block:: none

   ### doing a quick check for I/O
   ### finished I/O check (took 9.045004844665527e-06 seconds)
   >>> about to run one step of task: __main__.parent
   parent: all done!
   ### task scheduled:
   ### task exited: __main__.parent
   <<< task step finished: __main__.parent

And finally, after a bit more internal bookkeeping, :func:`trio.run` exits too:

.. code-block:: none

   ### doing a quick check for I/O
   ### finished I/O check (took 5.996786057949066e-06 seconds)
   >>> about to run one step of task:
   ### task scheduled:
   ### task scheduled:
   <<< task step finished:
   ### doing a quick check for I/O
   ### finished I/O check (took 6.258022040128708e-06 seconds)
   >>> about to run one step of task:
   ### task exited:
   <<< task step finished:
   >>> about to run one step of task:
   ### task exited:
   <<< task step finished:
   !!! run finished

You made it! That was a lot of text, but again, you don't need to understand everything here to use Trio – in fact, Trio goes to great lengths to make each task feel like it executes in a simple, linear way. (Just like your operating system goes to great lengths to make it feel like your single-threaded code executes in a simple linear way, even though under the covers the operating system juggles between different threads and processes in essentially the same way Trio does.) But it is useful to have a rough model in your head of how the code you write is actually executed, and – most importantly – the consequences of that for parallelism.

Alternatively, if this has just whetted your appetite and you want to know more about how ``async/await`` works internally, then `this blog post `__ is a good deep dive, or check out `this great walkthrough `__ to see how to build a simple async I/O framework from the ground up.

A kinder, gentler GIL
---------------------

Speaking of parallelism – let's zoom out for a moment and talk about how async/await compares to other ways of handling concurrency in Python.

As we've already noted, Trio tasks are conceptually rather similar to Python's built-in threads, as provided by the :mod:`threading` module. And in all common Python implementations, threads have a famous limitation: the Global Interpreter Lock, or "GIL" for short. The GIL means that even if you use multiple threads, your code still (mostly) ends up running on a single core. People tend to find this frustrating.

But from Trio's point of view, the problem with the GIL isn't that it restricts parallelism. Of course it would be nice if Python had better options for taking advantage of multiple cores, but that's an extremely difficult problem to solve, and in the meantime there are lots of problems where a single core is totally adequate – or where if it isn't, then process-level or machine-level parallelism works fine.

No, the problem with the GIL is that it's a *lousy deal*: we give up on using multiple cores, and in exchange we get...
A kinder, gentler GIL
---------------------

Speaking of parallelism – let's zoom out for a moment and talk about how async/await compares to other ways of handling concurrency in Python.

As we've already noted, Trio tasks are conceptually rather similar to Python's built-in threads, as provided by the :mod:`threading` module. And in all common Python implementations, threads have a famous limitation: the Global Interpreter Lock, or "GIL" for short. The GIL means that even if you use multiple threads, your code still (mostly) ends up running on a single core. People tend to find this frustrating.

But from Trio's point of view, the problem with the GIL isn't that it restricts parallelism. Of course it would be nice if Python had better options for taking advantage of multiple cores, but that's an extremely difficult problem to solve, and in the meantime there are lots of problems where a single core is totally adequate – or where if it isn't, then process-level or machine-level parallelism works fine.

No, the problem with the GIL is that it's a *lousy deal*: we give up on using multiple cores, and in exchange we get almost all the same challenges and mind-bending bugs that come with real parallel programming, and – to add insult to injury – `pretty poor scalability `__. Threads in Python just aren't that appealing.

Trio doesn't make your code run on multiple cores; in fact, as we saw above, it's baked into Trio's design that when it has multiple tasks, they take turns, so at each moment only one of them is actively running. We're not so much overcoming the GIL as embracing it. But if you're willing to accept that, plus a bit of extra work to put these new ``async`` and ``await`` keywords in the right places, then in exchange you get:

* Excellent scalability: Trio can run 10,000+ tasks simultaneously without breaking a sweat, so long as their total CPU demands don't exceed what a single core can provide. (This is common in, for example, network servers that have lots of clients connected, but only a few active at any given time.)

* Fancy features: most threading systems are implemented in C and restricted to whatever features the operating system provides. In Trio our logic is all in Python, which makes it possible to implement powerful and ergonomic features like :ref:`Trio's cancellation system `.

* Code that's easier to reason about: the ``await`` keyword means that potential task-switching points are explicitly marked within each function. This can make Trio code `dramatically easier to reason about `__ than the equivalent program using threads.

Certainly it's not appropriate for every app... but there are a lot of situations where the trade-offs here look pretty appealing.

There is one downside that's important to keep in mind, though. Making checkpoints explicit gives you more control over how your tasks can be interleaved – but with great power comes great responsibility. With threads, the runtime environment is responsible for making sure that each thread gets its fair share of running time. With Trio, if some task runs off and does stuff for seconds on end without executing a checkpoint, then... all your other tasks will just have to wait.

Here's an example of how this can go wrong. Take our :ref:`example from above `, and replace the calls to :func:`trio.sleep` with calls to :func:`time.sleep`. If we run our modified program, we'll see something like:

.. code-block:: none

   parent: started!
   parent: spawning child1...
   parent: spawning child2...
   parent: waiting for children to finish...
   child2: started! sleeping now...
   [... pauses for 1 second ...]
   child2: exiting!
   child1: started! sleeping now...
   [... pauses for 1 second ...]
   child1: exiting!
   parent: all done!

Notice that the two children no longer interleave: :func:`time.sleep` is an ordinary blocking call with no checkpoint inside, so while ``child2`` sleeps it hogs the run loop, and ``child1`` can't even start until ``child2`` is completely finished. One of the major reasons why Trio has such a rich :ref:`instrumentation API ` is to make it possible to write debugging tools to catch issues like this.
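
If you're stuck with a blocking call you can't replace, the usual escape hatch is to push it into a worker thread so the run loop stays responsive. Here's a sketch of that fix for the program above, using ``trio.to_thread.run_sync`` (which the ``trio`` package exports; the tutorial itself doesn't cover threads)::

    import time

    import trio

    async def child(name):
        print(f"{name}: started! sleeping now...")
        # time.sleep would freeze the whole run loop; running it in a
        # worker thread lets the other task keep going in the meantime.
        await trio.to_thread.run_sync(time.sleep, 1)
        print(f"{name}: exiting!")

    async def parent():
        async with trio.open_nursery() as nursery:
            nursery.start_soon(child, "child1")
            nursery.start_soon(child, "child2")

    trio.run(parent)

With this change the two children sleep concurrently again, and the whole program takes about one second instead of two.
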
Networking with Trio
--------------------

Now let's take what we've learned and use it to do some I/O, which is where async/await really shines.

The traditional toy application for demonstrating network APIs is an "echo server": a program that accepts arbitrary data from remote clients, and then sends that same data right back. (Probably a more relevant example these days would be an application that does lots of concurrent HTTP requests, but for that `you need an HTTP library `__ such as `asks `__, so we'll stick with the echo server tradition.)

In this tutorial, we present both ends of the pipe: the client and the server. The client periodically sends data to the server and displays its answers; the server waits for connections, and when a client connects, it echoes whatever data it receives back down the same connection.

An echo client
~~~~~~~~~~~~~~

To start with, here's an example echo *client*, i.e., the program that will send some data to our echo server and get responses back:

.. _tutorial-echo-client-example:

.. literalinclude:: tutorial/echo-client.py
   :linenos:

Note that this code will not work without a TCP server such as the one we'll implement below.

The overall structure here should be familiar, because it's just like our :ref:`last example `: we have a parent task, which spawns two child tasks to do the actual work, and then at the end of the ``async with`` block it switches into full-time parenting mode while waiting for them to finish. But now instead of just calling :func:`trio.sleep`, the children use some of Trio's networking APIs.

Let's look at the parent first:

.. literalinclude:: tutorial/echo-client.py
   :linenos:
   :lineno-match:
   :pyobject: parent

First we call :func:`trio.open_tcp_stream` to make a TCP connection to the server. ``127.0.0.1`` is a magic `IP address `__ meaning "the computer I'm running on", so this connects us to whatever program on the local computer is using ``PORT`` as its contact point. This function returns an object implementing Trio's :class:`~trio.abc.Stream` interface, which gives us methods to send and receive bytes, and to close the connection when we're done. We use an ``async with`` block to make sure that we do close the connection – not a big deal in a toy example like this, but it's a good habit to get into, and Trio is designed to make ``with`` and ``async with`` blocks easy to use.

Finally, we start up two child tasks, and pass each of them a reference to the stream. (This is also a good example of how ``nursery.start_soon`` lets you pass positional arguments to the spawned function.)

Our first task's job is to send data to the server:

.. literalinclude:: tutorial/echo-client.py
   :linenos:
   :lineno-match:
   :pyobject: sender

It uses a loop that alternates between calling ``await client_stream.send_all(...)`` to send some data (this is the method you use for sending data on any kind of Trio stream) and sleeping for a second to avoid making the output scroll by too fast on your terminal.

And the second task's job is to process the data the server sends back:

.. literalinclude:: tutorial/echo-client.py
   :linenos:
   :lineno-match:
   :pyobject: receiver

It uses an ``async for`` loop to fetch data from the server. Alternatively, it could call `~trio.abc.ReceiveStream.receive_some` directly – the receiving counterpart of `~trio.abc.SendStream.send_all` – but using ``async for`` saves some boilerplate.
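
Since the ``literalinclude`` directives pull their code from ``tutorial/echo-client.py`` rather than showing it inline, here's a sketch of a client along the lines just described – the ``PORT`` value and the message bytes are illustrative placeholders, but the structure (one ``parent``, plus ``sender`` and ``receiver`` children sharing the stream) is the one the text walks through::

    import sys

    import trio

    PORT = 12345  # arbitrary choice; must match the server

    async def sender(client_stream):
        print("sender: started!")
        while True:
            data = b"async can sometimes be confusing, but I believe in you!"
            print(f"sender: sending {data!r}")
            await client_stream.send_all(data)
            await trio.sleep(1)

    async def receiver(client_stream):
        print("receiver: started!")
        # Each pass through the loop gets one arbitrary-sized chunk of bytes.
        async for data in client_stream:
            print(f"receiver: got data {data!r}")
        print("receiver: connection closed")
        sys.exit()

    async def parent():
        print(f"parent: connecting to 127.0.0.1:{PORT}")
        client_stream = await trio.open_tcp_stream("127.0.0.1", PORT)
        async with client_stream:
            async with trio.open_nursery() as nursery:
                nursery.start_soon(sender, client_stream)
                nursery.start_soon(receiver, client_stream)

    trio.run(parent)
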
And now we're ready to look at the server.

.. _tutorial-echo-server-example:

An echo server
~~~~~~~~~~~~~~

As usual, let's look at the whole thing first, and then we'll discuss the pieces:

.. literalinclude:: tutorial/echo-server.py
   :linenos:

Let's start with ``main``, which is just one line long:

.. literalinclude:: tutorial/echo-server.py
   :linenos:
   :lineno-match:
   :pyobject: main

What this does is call :func:`serve_tcp`, a convenience function Trio provides that runs forever (or at least until you hit control-C or otherwise cancel it). This function does several helpful things:

* It creates a nursery internally, so that our server will be able to handle multiple connections at the same time.

* It listens for incoming TCP connections on the specified ``PORT``.

* Whenever a connection arrives, it starts a new task running the function we pass (in this example it's ``echo_server``), and passes it a stream representing that connection.

* When each task exits, it makes sure to close the corresponding connection. (That's why you don't see any ``async with server_stream`` in the server – :func:`serve_tcp` takes care of this for us.)

So :func:`serve_tcp` is pretty handy! This part works pretty much the same for any server, whether it's an echo server, HTTP server, SSH server, or whatever, so it makes sense to bundle it all up together in a helper function like this.

Now let's look at ``echo_server``, which handles each client connection – so if there are multiple clients, there might be multiple calls to ``echo_server`` running at the same time. This is where we implement our server's "echo" behavior. It should be pretty straightforward to understand, because it uses the same stream functions we saw in the last section:

.. literalinclude:: tutorial/echo-server.py
   :linenos:
   :lineno-match:
   :pyobject: echo_server

The argument ``server_stream`` is provided by :func:`serve_tcp`, and is the other end of the connection we made in the client: the data that the client passes to ``send_all`` will come out here. Then we have a ``try`` block, discussed below, and finally the server loop, which alternates between reading some data from the socket and sending it back out again (unless the socket was closed, in which case we quit).

So what's that ``try`` block for? Remember that in Trio, as in Python generally, exceptions keep propagating until they're caught. Here we think it's plausible that unexpected exceptions will happen, and we want to confine the damage: if one connection handler crashes, only that task should die, not the whole program. For example, if the client closes the connection at the wrong moment then it's possible this code will end up calling ``send_all`` on a closed connection and get a :exc:`BrokenResourceError`; that's unfortunate, and in a more serious program we might want to handle it more explicitly, but it doesn't indicate a problem for any *other* connections. On the other hand, if the exception is something like a :exc:`KeyboardInterrupt`, we *do* want that to propagate out into the parent task and cause the whole program to exit. To express this, we use a ``try`` block with an ``except Exception:`` handler. Trio leaves it up to you to decide whether and how you want to handle exceptions, just like Python in general.
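
As with the client, the server's listing lives in ``tutorial/echo-server.py``; here's a sketch matching the description above. The per-connection counter is an illustrative detail, handy for telling simultaneous handlers apart in the output::

    from itertools import count

    import trio

    PORT = 12345  # must match the client
    CONNECTION_COUNTER = count()

    async def echo_server(server_stream):
        # Assign each incoming connection a number, so the prints from
        # concurrent handlers can be told apart.
        ident = next(CONNECTION_COUNTER)
        print(f"echo_server {ident}: started")
        try:
            async for data in server_stream:
                print(f"echo_server {ident}: received data {data!r}")
                await server_stream.send_all(data)
            print(f"echo_server {ident}: connection closed")
        except Exception as exc:
            # Unexpected errors crash this handler only, not the whole server.
            print(f"echo_server {ident}: crashed: {exc!r}")

    async def main():
        await trio.serve_tcp(echo_server, PORT)

    trio.run(main)
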
Try it out
~~~~~~~~~~

Open a few terminals, run ``echo-server.py`` in one, run ``echo-client.py`` in another, and watch the messages scroll by! When you get bored, you can exit by hitting control-C.

Some things to try:

* Open several terminals, and run multiple clients at the same time, all talking to the same server.

* See how the server reacts when you hit control-C on the client.

* See how the client reacts when you hit control-C on the server.

Flow control in our echo client and server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here's a question you might be wondering about: why does our client use two separate tasks for sending and receiving, instead of a single task that alternates between them – like the server has? For example, our client could use a single task like::

    # Can you spot the two problems with this code?
    async def send_and_receive(client_stream):
        while True:
            data = ...
            await client_stream.send_all(data)
            received = await client_stream.receive_some()
            if not received:
                sys.exit()
            await trio.sleep(1)

It turns out there are two problems with this – one minor and one major. Both relate to flow control.

The minor problem is that when we call ``receive_some`` here we're not waiting for *all* the data to be available; ``receive_some`` returns as soon as *any* data is available. If ``data`` is small, then our operating systems / network / server will *probably* keep it all together in a single chunk, but there's no guarantee. If the server sends ``hello`` then we might get ``hello``, or ``hel`` ``lo``, or ``h`` ``e`` ``l`` ``l`` ``o``, or ... bottom line, any time we're expecting more than one byte of data, we have to be prepared to call ``receive_some`` multiple times.

And where this would go especially wrong is if we find ourselves in the situation where ``data`` is big enough that it passes some internal threshold, and the operating system or network decides to always break it up into multiple pieces. Now on each pass through the loop, we send ``len(data)`` bytes, but read less than that. The result is something like a memory leak: we'll end up with more and more data backed up in the network, until eventually something breaks.

.. note:: If you're curious *how* things break, then you can use `~trio.abc.ReceiveStream.receive_some`\'s optional argument to put a limit on how many bytes you read each time, and see what happens.

We could fix this by keeping track of how much data we're expecting at each moment, and then keep calling ``receive_some`` until we get it all::

    expected = len(data)
    while expected > 0:
        received = await client_stream.receive_some(expected)
        if not received:
            sys.exit(1)
        expected -= len(received)

This is a bit cumbersome, but it would solve this problem.

There's another problem, though, that's deeper. We're still alternating between sending and receiving. Notice that when we send data, we use ``await``: this means that sending can potentially *block*. Why does this happen? Any data that we send goes first into an operating system buffer, and from there onto the network, and then into another operating system buffer on the receiving computer, before the receiving program finally calls ``receive_some`` to take the data out of these buffers. If we call ``send_all`` with a small amount of data, then it goes into these buffers and ``send_all`` returns immediately. But if we send enough data fast enough, eventually the buffers fill up, and ``send_all`` will block until the remote side calls ``receive_some`` and frees up some space.

Now let's think about this from the server's point of view. Each time it calls ``receive_some``, it gets some data that it needs to send back. And until it sends it back, the data sits around taking up memory. Computers have finite amounts of RAM, so if our server is well behaved then at some point it needs to stop calling ``receive_some`` until it gets rid of some of the old data by doing its own call to ``send_all``. So for the server, really the only viable option is to alternate between receiving and sending.

But we need to remember that it's not just the client's call to ``send_all`` that might block: the server's call to ``send_all`` can also get into a situation where it blocks until the client calls ``receive_some``. So if the server is waiting for ``send_all`` to finish before it calls ``receive_some``, and our client also waits for ``send_all`` to finish before it calls ``receive_some``... we have a problem! The client won't call ``receive_some`` until its ``send_all`` has finished, which requires the server to call ``receive_some``; and the server won't call ``receive_some`` until *its* ``send_all`` has finished, which requires the client to call ``receive_some``. If our client is written to alternate between sending and receiving, and the chunk of data it's trying to send is large enough (e.g. 10 megabytes will probably do it in most configurations), then the two processes will `deadlock `__.
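
If you'd like to watch this happen, here's a sketch of the broken single-task client from above, filled in just enough to run against our echo server. The 10-megabyte chunk follows the rough estimate in the previous paragraph; exactly when the buffers fill up depends on your operating system (see also the Windows note below)::

    import trio

    PORT = 12345

    async def send_and_receive(client_stream):
        # 10 MB: enough to fill the OS send buffer on our side *and* the
        # receive buffer on the server's side, so send_all blocks...
        data = b"x" * (10 * 1024 * 1024)
        while True:
            await client_stream.send_all(data)
            # ...and we never get here, because the server is also stuck
            # in its own send_all, waiting for us to call receive_some.
            received = await client_stream.receive_some()
            if not received:
                break

    async def main():
        client_stream = await trio.open_tcp_stream("127.0.0.1", PORT)
        async with client_stream:
            await send_and_receive(client_stream)

    trio.run(main)
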
Moral: Trio gives you powerful tools to manage sequential and concurrent execution. In this example we saw that the server needs ``send_all`` and ``receive_some`` to alternate in sequence, while the client needs them to run concurrently, and both were straightforward to implement. But when you're implementing network code like this, it's important to think carefully about flow control and buffering, because it's up to you to choose the right execution mode!

Other popular async libraries like `Twisted `__ and :mod:`asyncio` tend to paper over these kinds of issues by throwing in unbounded buffers everywhere. This can avoid deadlocks, but can introduce its own problems, and in particular can make it difficult to keep `memory usage and latency under control `__. While both approaches have their advantages, Trio takes the position that it's better to expose the underlying problem as directly as possible and provide good tools to confront it head-on.

.. note:: If you want to try and make the deadlock happen on purpose to see for yourself, and you're using Windows, then you might need to split the ``send_all`` call up into two calls that each send half of the data. This is because Windows has a `somewhat unusual way of handling buffering `__.

When things go wrong: timeouts, cancellation and exceptions in concurrent tasks
-------------------------------------------------------------------------------

TODO: give an example using :func:`fail_after`

TODO: explain :exc:`Cancelled`

TODO: explain how cancellation is also used when one child raises an exception

TODO: show an example :exc:`MultiError` traceback and walk through its structure

TODO: maybe a brief discussion of :exc:`KeyboardInterrupt` handling?

..
   Timeouts
   --------

   XX todo

   timeout example::

      async def counter():
          for i in range(100000):
              print(i)
              await trio.sleep(1)

      async def main():
          with trio.fail_after(10):
              await counter()

   you can stick anything inside a timeout block, even child tasks
   [show something like the first example but with a timeout – they both
   get cancelled, the cancelleds get packed into a multierror, and then
   the timeout block catches the cancelled]

   brief discussion of KI?

   tasks-with-trace.py + control-C is pretty interesting

   or maybe leave it for a blog post?
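
In the meantime, here's what that timeout sketch from the draft notes looks like as a complete program – our rendering, with the :exc:`trio.TooSlowError` that :func:`trio.fail_after` raises on expiry caught explicitly::

    import trio

    async def counter():
        for i in range(100000):
            print(i)
            await trio.sleep(1)

    async def main():
        try:
            with trio.fail_after(10):
                await counter()
        except trio.TooSlowError:
            # fail_after cancels whatever is inside the block when the
            # deadline expires, then raises TooSlowError at the block.
            print("counter() timed out!")

    trio.run(main)
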
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/pyproject.toml0000644000076500000000000000264400000000000015042 0ustar00quentinwheel[tool.black] target-version = ['py36'] [tool.towncrier] # Usage: # - PRs should drop a file like "issuenumber.feature" in newsfragments # (or "bugfix", "doc", "removal", "misc"; misc gets no text, we can # customize this) # - At release time after bumping version number, run: towncrier # (or towncrier --draft) package = "trio" filename = "docs/source/history.rst" directory = "newsfragments" underlines = ["-", "~", "^"] issue_format = "`#{issue} `__" [[tool.towncrier.type]] directory = "headline" name = "Headline features" showcontent = true [[tool.towncrier.type]] directory = "breaking" name = "Breaking changes" showcontent = true [[tool.towncrier.type]] directory = "feature" name = "Features" showcontent = true [[tool.towncrier.type]] directory = "bugfix" name = "Bugfixes" showcontent = true [[tool.towncrier.type]] directory = "doc" name = "Improved documentation" showcontent = true [[tool.towncrier.type]] directory = "deprecated" name = "Deprecations and removals" showcontent = true [[tool.towncrier.type]] directory = "misc" name = "Miscellaneous internal changes" showcontent = true [tool.pytest.ini_options] addopts = ["--strict-markers", "--strict-config"] xfail_strict = true faulthandler_timeout = 60 markers = ["redistributors_should_skip: tests that should be skipped by downstream redistributors"] junit_family = "xunit2" filterwarnings = ["error"] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1623751952.8761904 trio-0.19.0/setup.cfg0000644000076500000000000000004600000000000013741 0ustar00quentinwheel[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/setup.py0000644000076500000000000001173700000000000013643 0ustar00quentinwheelfrom setuptools import setup, find_packages exec(open("trio/_version.py", encoding="utf-8").read()) LONG_DESC = """\ .. image:: https://raw.githubusercontent.com/python-trio/trio/9b0bec646a31e0d0f67b8b6ecc6939726faf3e17/logo/logo-with-background.svg :width: 200px :align: right The Trio project's goal is to produce a production-quality, `permissively licensed `__, async/await-native I/O library for Python. Like all async libraries, its main purpose is to help you write programs that do **multiple things at the same time** with **parallelized I/O**. A web spider that wants to fetch lots of pages in parallel, a web server that needs to juggle lots of downloads and websocket connections at the same time, a process supervisor monitoring multiple subprocesses... that sort of thing. Compared to other libraries, Trio attempts to distinguish itself with an obsessive focus on **usability** and **correctness**. Concurrency is complicated; we try to make it *easy* to get things *right*. Trio was built from the ground up to take advantage of the `latest Python features `__, and draws inspiration from `many sources `__, in particular Dave Beazley's `Curio `__. The resulting design is radically simpler than older competitors like `asyncio `__ and `Twisted `__, yet just as capable. Trio is the Python I/O library I always wanted; I find it makes building I/O-oriented programs easier, less error-prone, and just plain more fun. `Perhaps you'll find the same `__. 
This project is young and still somewhat experimental: the overall design is solid and the existing features are fully tested and documented, but you may encounter missing functionality or rough edges. We *do* encourage you to use it, but you should `read and subscribe to issue #1 `__ to get warning and a chance to give feedback about any compatibility-breaking changes. Vital statistics: * Supported environments: Linux, macOS, or Windows running some kind of Python 3.6-or-better (either CPython or PyPy3 is fine). \\*BSD and illumos likely work too, but are not tested. * Install: ``python3 -m pip install -U trio`` (or on Windows, maybe ``py -3 -m pip install -U trio``). No compiler needed. * Tutorial and reference manual: https://trio.readthedocs.io * Bug tracker and source code: https://github.com/python-trio/trio * Real-time chat: https://gitter.im/python-trio/general * Discussion forum: https://trio.discourse.group * License: MIT or Apache 2, your choice * Contributor guide: https://trio.readthedocs.io/en/latest/contributing.html * Code of conduct: Contributors are requested to follow our `code of conduct `_ in all project spaces. """ setup( name="trio", version=__version__, description="A friendly Python library for async concurrency and I/O", long_description=LONG_DESC, author="Nathaniel J. Smith", author_email="njs@pobox.com", url="https://github.com/python-trio/trio", license="MIT -or- Apache License 2.0", packages=find_packages(), install_requires=[ "attrs >= 19.2.0", # for eq "sortedcontainers", "async_generator >= 1.9", "idna", "outcome", "sniffio", # cffi 1.12 adds from_buffer(require_writable=True) and ffi.release() # cffi 1.14 fixes memory leak inside ffi.getwinerror() # cffi is required on Windows, except on PyPy where it is built-in "cffi>=1.14; os_name == 'nt' and implementation_name != 'pypy'", "contextvars>=2.1; python_version < '3.7'", ], # This means, just install *everything* you see under trio/, even if it # doesn't look like a source file, so long as it appears in MANIFEST.in: include_package_data=True, python_requires=">=3.6", keywords=["async", "io", "networking", "trio"], classifiers=[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "License :: OSI Approved :: Apache Software License", "Operating System :: POSIX :: Linux", "Operating System :: MacOS :: MacOS X", "Operating System :: POSIX :: BSD", "Operating System :: Microsoft :: Windows", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Topic :: System :: Networking", "Framework :: Trio", ], ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/test-requirements.txt0000644000076500000000000000546400000000000016362 0ustar00quentinwheel# # This file is autogenerated by pip-compile # To update, run: # # pip-compile --output-file test-requirements.txt test-requirements.in # appdirs==1.4.4 # via black astor==0.8.1 # via -r test-requirements.in astroid==2.5.6 # via pylint async-generator==1.10 # via -r test-requirements.in attrs==20.3.0 # via # -r test-requirements.in # outcome # pytest backcall==0.2.0 # via ipython black==21.6b0 ; implementation_name == "cpython" # via -r test-requirements.in cffi==1.14.5 # via
cryptography click==8.0.1 # via black coverage==5.5 # via pytest-cov cryptography==3.4.7 # via # pyopenssl # trustme decorator==5.0.9 # via ipython flake8==3.9.2 # via -r test-requirements.in idna==2.10 # via # -r test-requirements.in # trustme immutables==0.15 # via -r test-requirements.in iniconfig==1.1.1 # via pytest ipython-genutils==0.2.0 # via traitlets ipython==7.24.1 # via -r test-requirements.in isort==5.8.0 # via pylint jedi==0.18.0 # via # -r test-requirements.in # ipython lazy-object-proxy==1.6.0 # via astroid matplotlib-inline==0.1.2 # via ipython mccabe==0.6.1 # via # flake8 # pylint mypy-extensions==0.4.3 ; implementation_name == "cpython" # via # -r test-requirements.in # black # mypy mypy==0.902 ; implementation_name == "cpython" # via -r test-requirements.in outcome==1.1.0 # via -r test-requirements.in packaging==20.9 # via pytest parso==0.8.2 # via jedi pathspec==0.8.1 # via black pexpect==4.8.0 # via ipython pickleshare==0.7.5 # via ipython pluggy==0.13.1 # via pytest prompt-toolkit==3.0.18 # via ipython ptyprocess==0.7.0 # via pexpect py==1.10.0 # via pytest pycodestyle==2.7.0 # via flake8 pycparser==2.20 # via cffi pyflakes==2.3.1 # via flake8 pygments==2.7.4 # via ipython pylint==2.8.3 # via -r test-requirements.in pyopenssl==20.0.1 # via -r test-requirements.in pyparsing==2.4.7 # via packaging pytest-cov==2.12.1 # via -r test-requirements.in pytest==6.2.4 # via # -r test-requirements.in # pytest-cov regex==2021.4.4 # via black six==1.16.0 # via pyopenssl sniffio==1.2.0 # via -r test-requirements.in sortedcontainers==2.4.0 # via -r test-requirements.in toml==0.10.2 # via # black # mypy # pylint # pytest # pytest-cov traitlets==5.0.5 # via # ipython # matplotlib-inline trustme==0.8.0 # via -r test-requirements.in typed-ast==1.4.3 ; implementation_name == "cpython" # via -r test-requirements.in typing-extensions==3.10.0.0 ; implementation_name == "cpython" # via # -r test-requirements.in # mypy wcwidth==0.2.5 # via prompt-toolkit wrapt==1.12.1 # via astroid ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1623751952.6925182 trio-0.19.0/trio/0000755000076500000000000000000000000000000013075 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/__init__.py0000644000076500000000000000600100000000000015203 0ustar00quentinwheel"""Trio - A friendly Python library for async concurrency and I/O """ # General layout: # # trio/_core/... is the self-contained core library. It does various # shenanigans to export a consistent "core API", but parts of the core API are # too low-level to be recommended for regular use. # # trio/*.py define a set of more usable tools on top of this. They import from # trio._core and from each other. # # This file pulls together the friendly public API, by re-exporting the more # innocuous bits of the _core API + the higher-level tools from trio/*.py. 
from ._version import __version__ from ._core import ( TrioInternalError, RunFinishedError, WouldBlock, Cancelled, BusyResourceError, ClosedResourceError, MultiError, run, open_nursery, CancelScope, current_effective_deadline, TASK_STATUS_IGNORED, current_time, BrokenResourceError, EndOfChannel, Nursery, ) from ._timeouts import ( move_on_at, move_on_after, sleep_forever, sleep_until, sleep, fail_at, fail_after, TooSlowError, ) from ._sync import ( Event, CapacityLimiter, Semaphore, Lock, StrictFIFOLock, Condition, ) from ._highlevel_generic import aclose_forcefully, StapledStream from ._channel import ( open_memory_channel, MemorySendChannel, MemoryReceiveChannel, ) from ._signals import open_signal_receiver from ._highlevel_socket import SocketStream, SocketListener from ._file_io import open_file, wrap_file from ._path import Path from ._subprocess import Process, open_process, run_process from ._ssl import SSLStream, SSLListener, NeedHandshakeError from ._highlevel_serve_listeners import serve_listeners from ._highlevel_open_tcp_stream import open_tcp_stream from ._highlevel_open_tcp_listeners import open_tcp_listeners, serve_tcp from ._highlevel_open_unix_stream import open_unix_socket from ._highlevel_ssl_helpers import ( open_ssl_over_tcp_stream, open_ssl_over_tcp_listeners, serve_ssl_over_tcp, ) from ._deprecate import TrioDeprecationWarning # Submodules imported by default from . import lowlevel from . import socket from . import abc from . import from_thread from . import to_thread # Not imported by default, but mentioned here so static analysis tools like # pylint will know that it exists. if False: from . import testing from . import _deprecate _deprecate.enable_attribute_deprecations(__name__) # Having the public path in .__module__ attributes is important for: # - exception names in printed tracebacks # - sphinx :show-inheritance: # - deprecation warnings # - pickle # - probably other stuff from ._util import fixup_module_metadata fixup_module_metadata(__name__, globals()) fixup_module_metadata(lowlevel.__name__, lowlevel.__dict__) fixup_module_metadata(socket.__name__, socket.__dict__) fixup_module_metadata(abc.__name__, abc.__dict__) fixup_module_metadata(from_thread.__name__, from_thread.__dict__) fixup_module_metadata(to_thread.__name__, to_thread.__dict__) del fixup_module_metadata ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_abc.py0000644000076500000000000005643300000000000014346 0ustar00quentinwheel# coding: utf-8 from abc import ABCMeta, abstractmethod from typing import Generic, TypeVar import trio # We use ABCMeta instead of ABC, plus set __slots__=(), so as not to force a # __dict__ onto subclasses. class Clock(metaclass=ABCMeta): """The interface for custom run loop clocks.""" __slots__ = () @abstractmethod def start_clock(self): """Do any setup this clock might need. Called at the beginning of the run. """ @abstractmethod def current_time(self): """Return the current time, according to this clock. This is used to implement functions like :func:`trio.current_time` and :func:`trio.move_on_after`. Returns: float: The current time. """ @abstractmethod def deadline_to_sleep_time(self, deadline): """Compute the real time until the given deadline. This is called before we enter a system-specific wait function like :func:`select.select`, to get the timeout to pass. 
For a clock using wall-time, this should be something like:: return deadline - self.current_time() but of course it may be different if you're implementing some kind of virtual clock. Args: deadline (float): The absolute time of the next deadline, according to this clock. Returns: float: The number of real seconds to sleep until the given deadline. May be :data:`math.inf`. """ class Instrument(metaclass=ABCMeta): """The interface for run loop instrumentation. Instruments don't have to inherit from this abstract base class, and all of these methods are optional. This class serves mostly as documentation. """ __slots__ = () def before_run(self): """Called at the beginning of :func:`trio.run`.""" def after_run(self): """Called just before :func:`trio.run` returns.""" def task_spawned(self, task): """Called when the given task is created. Args: task (trio.lowlevel.Task): The new task. """ def task_scheduled(self, task): """Called when the given task becomes runnable. It may still be some time before it actually runs, if there are other runnable tasks ahead of it. Args: task (trio.lowlevel.Task): The task that became runnable. """ def before_task_step(self, task): """Called immediately before we resume running the given task. Args: task (trio.lowlevel.Task): The task that is about to run. """ def after_task_step(self, task): """Called when we return to the main run loop after a task has yielded. Args: task (trio.lowlevel.Task): The task that just ran. """ def task_exited(self, task): """Called when the given task exits. Args: task (trio.lowlevel.Task): The finished task. """ def before_io_wait(self, timeout): """Called before blocking to wait for I/O readiness. Args: timeout (float): The number of seconds we are willing to wait. """ def after_io_wait(self, timeout): """Called after handling pending I/O. Args: timeout (float): The number of seconds we were willing to wait. This much time may or may not have elapsed, depending on whether any I/O was ready. """ class HostnameResolver(metaclass=ABCMeta): """If you have a custom hostname resolver, then implementing :class:`HostnameResolver` allows you to register this to be used by Trio. See :func:`trio.socket.set_custom_hostname_resolver`. """ __slots__ = () @abstractmethod async def getaddrinfo(self, host, port, family=0, type=0, proto=0, flags=0): """A custom implementation of :func:`~trio.socket.getaddrinfo`. Called by :func:`trio.socket.getaddrinfo`. If ``host`` is given as a numeric IP address, then :func:`~trio.socket.getaddrinfo` may handle the request itself rather than calling this method. Any required IDNA encoding is handled before calling this function; your implementation can assume that it will never see U-labels like ``"café.com"``, and only needs to handle A-labels like ``b"xn--caf-dma.com"``. """ @abstractmethod async def getnameinfo(self, sockaddr, flags): """A custom implementation of :func:`~trio.socket.getnameinfo`. Called by :func:`trio.socket.getnameinfo`. """ class SocketFactory(metaclass=ABCMeta): """If you write a custom class implementing the Trio socket interface, then you can use a :class:`SocketFactory` to get Trio to use it. See :func:`trio.socket.set_custom_socket_factory`. """ @abstractmethod def socket(self, family=None, type=None, proto=None): """Create and return a socket object. Your socket object must inherit from :class:`trio.socket.SocketType`, which is an empty class whose only purpose is to "mark" which classes should be considered valid Trio sockets. Called by :func:`trio.socket.socket`. 
Note that unlike :func:`trio.socket.socket`, this does not take a ``fileno=`` argument. If a ``fileno=`` is specified, then :func:`trio.socket.socket` returns a regular Trio socket object instead of calling this method. """ class AsyncResource(metaclass=ABCMeta): """A standard interface for resources that need to be cleaned up, and where that cleanup may require blocking operations. This class distinguishes between "graceful" closes, which may perform I/O and thus block, and a "forceful" close, which cannot. For example, cleanly shutting down a TLS-encrypted connection requires sending a "goodbye" message; but if a peer has become non-responsive, then sending this message might block forever, so we may want to just drop the connection instead. Therefore the :meth:`aclose` method is unusual in that it should always close the connection (or at least make its best attempt) *even if it fails*; failure indicates a failure to achieve grace, not a failure to close the connection. Objects that implement this interface can be used as async context managers, i.e., you can write:: async with create_resource() as some_async_resource: ... Entering the context manager is synchronous (not a checkpoint); exiting it calls :meth:`aclose`. The default implementations of ``__aenter__`` and ``__aexit__`` should be adequate for all subclasses. """ __slots__ = () @abstractmethod async def aclose(self): """Close this resource, possibly blocking. IMPORTANT: This method may block in order to perform a "graceful" shutdown. But, if this fails, then it still *must* close any underlying resources before returning. An error from this method indicates a failure to achieve grace, *not* a failure to close the connection. For example, suppose we call :meth:`aclose` on a TLS-encrypted connection. This requires sending a "goodbye" message; but if the peer has become non-responsive, then our attempt to send this message might block forever, and eventually time out and be cancelled. In this case the :meth:`aclose` method on :class:`~trio.SSLStream` will immediately close the underlying transport stream using :func:`trio.aclose_forcefully` before raising :exc:`~trio.Cancelled`. If the resource is already closed, then this method should silently succeed. Once this method completes, any other pending or future operations on this resource should generally raise :exc:`~trio.ClosedResourceError`, unless there's a good reason to do otherwise. See also: :func:`trio.aclose_forcefully`. """ async def __aenter__(self): return self async def __aexit__(self, *args): await self.aclose() class SendStream(AsyncResource): """A standard interface for sending data on a byte stream. The underlying stream may be unidirectional, or bidirectional. If it's bidirectional, then you probably want to also implement :class:`ReceiveStream`, which makes your object a :class:`Stream`. :class:`SendStream` objects also implement the :class:`AsyncResource` interface, so they can be closed by calling :meth:`~AsyncResource.aclose` or using an ``async with`` block. If you want to send Python objects rather than raw bytes, see :class:`SendChannel`. """ __slots__ = () @abstractmethod async def send_all(self, data): """Sends the given data through the stream, blocking if necessary. Args: data (bytes, bytearray, or memoryview): The data to send. Raises: trio.BusyResourceError: if another task is already executing a :meth:`send_all`, :meth:`wait_send_all_might_not_block`, or :meth:`HalfCloseableStream.send_eof` on this stream.
trio.BrokenResourceError: if something has gone wrong, and the stream is broken. trio.ClosedResourceError: if you previously closed this stream object, or if another task closes this stream object while :meth:`send_all` is running. Most low-level operations in Trio provide a guarantee: if they raise :exc:`trio.Cancelled`, this means that they had no effect, so the system remains in a known state. This is **not true** for :meth:`send_all`. If this operation raises :exc:`trio.Cancelled` (or any other exception for that matter), then it may have sent some, all, or none of the requested data, and there is no way to know which. """ @abstractmethod async def wait_send_all_might_not_block(self): """Block until it's possible that :meth:`send_all` might not block. This method may return early: it's possible that after it returns, :meth:`send_all` will still block. (In the worst case, if no better implementation is available, then it might always return immediately without blocking. It's nice to do better than that when possible, though.) This method **must not** return *late*: if it's possible for :meth:`send_all` to complete without blocking, then it must return. When implementing it, err on the side of returning early. Raises: trio.BusyResourceError: if another task is already executing a :meth:`send_all`, :meth:`wait_send_all_might_not_block`, or :meth:`HalfCloseableStream.send_eof` on this stream. trio.BrokenResourceError: if something has gone wrong, and the stream is broken. trio.ClosedResourceError: if you previously closed this stream object, or if another task closes this stream object while :meth:`wait_send_all_might_not_block` is running. Note: This method is intended to aid in implementing protocols that want to delay choosing which data to send until the last moment. E.g., suppose you're working on an implementation of a remote display server like `VNC `__, and the network connection is currently backed up so that if you call :meth:`send_all` now then it will sit for 0.5 seconds before actually sending anything. In this case it doesn't make sense to take a screenshot, then wait 0.5 seconds, and then send it, because the screen will keep changing while you wait; it's better to wait 0.5 seconds, then take the screenshot, and then send it, because this way the data you deliver will be more up-to-date. Using :meth:`wait_send_all_might_not_block` makes it possible to implement the better strategy. If you use this method, you might also want to read up on ``TCP_NOTSENT_LOWAT``. Further reading: * `Prioritization Only Works When There's Pending Data to Prioritize `__ * WWDC 2015: Your App and Next Generation Networks: `slides `__, `video and transcript `__ """ class ReceiveStream(AsyncResource): """A standard interface for receiving data on a byte stream. The underlying stream may be unidirectional, or bidirectional. If it's bidirectional, then you probably want to also implement :class:`SendStream`, which makes your object a :class:`Stream`. :class:`ReceiveStream` objects also implement the :class:`AsyncResource` interface, so they can be closed by calling :meth:`~AsyncResource.aclose` or using an ``async with`` block. If you want to receive Python objects rather than raw bytes, see :class:`ReceiveChannel`. `ReceiveStream` objects can be used in ``async for`` loops. Each iteration will produce an arbitrary sized chunk of bytes, like calling `receive_some` with no arguments. Every chunk will contain at least one byte, and the loop automatically exits when reaching end-of-file. 
""" __slots__ = () @abstractmethod async def receive_some(self, max_bytes=None): """Wait until there is data available on this stream, and then return some of it. A return value of ``b""`` (an empty bytestring) indicates that the stream has reached end-of-file. Implementations should be careful that they return ``b""`` if, and only if, the stream has reached end-of-file! Args: max_bytes (int): The maximum number of bytes to return. Must be greater than zero. Optional; if omitted, then the stream object is free to pick a reasonable default. Returns: bytes or bytearray: The data received. Raises: trio.BusyResourceError: if two tasks attempt to call :meth:`receive_some` on the same stream at the same time. trio.BrokenResourceError: if something has gone wrong, and the stream is broken. trio.ClosedResourceError: if you previously closed this stream object, or if another task closes this stream object while :meth:`receive_some` is running. """ def __aiter__(self): return self async def __anext__(self): data = await self.receive_some() if not data: raise StopAsyncIteration return data class Stream(SendStream, ReceiveStream): """A standard interface for interacting with bidirectional byte streams. A :class:`Stream` is an object that implements both the :class:`SendStream` and :class:`ReceiveStream` interfaces. If implementing this interface, you should consider whether you can go one step further and implement :class:`HalfCloseableStream`. """ __slots__ = () class HalfCloseableStream(Stream): """This interface extends :class:`Stream` to also allow closing the send part of the stream without closing the receive part. """ __slots__ = () @abstractmethod async def send_eof(self): """Send an end-of-file indication on this stream, if possible. The difference between :meth:`send_eof` and :meth:`~AsyncResource.aclose` is that :meth:`send_eof` is a *unidirectional* end-of-file indication. After you call this method, you shouldn't try sending any more data on this stream, and your remote peer should receive an end-of-file indication (eventually, after receiving all the data you sent before that). But, they may continue to send data to you, and you can continue to receive it by calling :meth:`~ReceiveStream.receive_some`. You can think of it as calling :meth:`~AsyncResource.aclose` on just the :class:`SendStream` "half" of the stream object (and in fact that's literally how :class:`trio.StapledStream` implements it). Examples: * On a socket, this corresponds to ``shutdown(..., SHUT_WR)`` (`man page `__). * The SSH protocol provides the ability to multiplex bidirectional "channels" on top of a single encrypted connection. A Trio implementation of SSH could expose these channels as :class:`HalfCloseableStream` objects, and calling :meth:`send_eof` would send an ``SSH_MSG_CHANNEL_EOF`` request (see `RFC 4254 §5.3 `__). * On an SSL/TLS-encrypted connection, the protocol doesn't provide any way to do a unidirectional shutdown without closing the connection entirely, so :class:`~trio.SSLStream` implements :class:`Stream`, not :class:`HalfCloseableStream`. If an EOF has already been sent, then this method should silently succeed. Raises: trio.BusyResourceError: if another task is already executing a :meth:`~SendStream.send_all`, :meth:`~SendStream.wait_send_all_might_not_block`, or :meth:`send_eof` on this stream. trio.BrokenResourceError: if something has gone wrong, and the stream is broken. 
trio.ClosedResourceError: if you previously closed this stream object, or if another task closes this stream object while :meth:`send_eof` is running. """ # A regular invariant generic type T = TypeVar("T") # The type of object produced by a ReceiveChannel (covariant because # ReceiveChannel[Derived] can be passed to someone expecting # ReceiveChannel[Base]) ReceiveType = TypeVar("ReceiveType", covariant=True) # The type of object accepted by a SendChannel (contravariant because # SendChannel[Base] can be passed to someone expecting # SendChannel[Derived]) SendType = TypeVar("SendType", contravariant=True) # The type of object produced by a Listener (covariant plus must be # an AsyncResource) T_resource = TypeVar("T_resource", bound=AsyncResource, covariant=True) class Listener(AsyncResource, Generic[T_resource]): """A standard interface for listening for incoming connections. :class:`Listener` objects also implement the :class:`AsyncResource` interface, so they can be closed by calling :meth:`~AsyncResource.aclose` or using an ``async with`` block. """ __slots__ = () @abstractmethod async def accept(self): """Wait until an incoming connection arrives, and then return it. Returns: AsyncResource: An object representing the incoming connection. In practice this is generally some kind of :class:`Stream`, but in principle you could also define a :class:`Listener` that returned, say, channel objects. Raises: trio.BusyResourceError: if two tasks attempt to call :meth:`accept` on the same listener at the same time. trio.ClosedResourceError: if you previously closed this listener object, or if another task closes this listener object while :meth:`accept` is running. Listeners don't generally raise :exc:`~trio.BrokenResourceError`, because for listeners there is no general condition of "the network/remote peer broke the connection" that can be handled in a generic way, like there is for streams. Other errors *can* occur and be raised from :meth:`accept` – for example, if you run out of file descriptors then you might get an :class:`OSError` with its errno set to ``EMFILE``. """ class SendChannel(AsyncResource, Generic[SendType]): """A standard interface for sending Python objects to some receiver. `SendChannel` objects also implement the `AsyncResource` interface, so they can be closed by calling `~AsyncResource.aclose` or using an ``async with`` block. If you want to send raw bytes rather than Python objects, see `SendStream`. """ __slots__ = () @abstractmethod async def send(self, value: SendType) -> None: """Attempt to send an object through the channel, blocking if necessary. Args: value (object): The object to send. Raises: trio.BrokenResourceError: if something has gone wrong, and the channel is broken. For example, you may get this if the receiver has already been closed. trio.ClosedResourceError: if you previously closed this :class:`SendChannel` object, or if another task closes it while :meth:`send` is running. trio.BusyResourceError: some channels allow multiple tasks to call `send` at the same time, but others don't. If you try to call `send` simultaneously from multiple tasks on a channel that doesn't support it, then you can get `~trio.BusyResourceError`. """ class ReceiveChannel(AsyncResource, Generic[ReceiveType]): """A standard interface for receiving Python objects from some sender. You can iterate over a :class:`ReceiveChannel` using an ``async for`` loop:: async for value in receive_channel: ... This is equivalent to calling :meth:`receive` repeatedly. 
The loop exits without error when `receive` raises `~trio.EndOfChannel`. `ReceiveChannel` objects also implement the `AsyncResource` interface, so they can be closed by calling `~AsyncResource.aclose` or using an ``async with`` block. If you want to receive raw bytes rather than Python objects, see `ReceiveStream`. """ __slots__ = () @abstractmethod async def receive(self) -> ReceiveType: """Attempt to receive an incoming object, blocking if necessary. Returns: object: Whatever object was received. Raises: trio.EndOfChannel: if the sender has been closed cleanly, and no more objects are coming. This is not an error condition. trio.ClosedResourceError: if you previously closed this :class:`ReceiveChannel` object. trio.BrokenResourceError: if something has gone wrong, and the channel is broken. trio.BusyResourceError: some channels allow multiple tasks to call `receive` at the same time, but others don't. If you try to call `receive` simultaneously from multiple tasks on a channel that doesn't support it, then you can get `~trio.BusyResourceError`. """ def __aiter__(self): return self async def __anext__(self) -> ReceiveType: try: return await self.receive() except trio.EndOfChannel: raise StopAsyncIteration class Channel(SendChannel[T], ReceiveChannel[T]): """A standard interface for interacting with bidirectional channels. A `Channel` is an object that implements both the `SendChannel` and `ReceiveChannel` interfaces, so you can both send and receive objects. """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_channel.py0000644000076500000000000003402000000000000015215 0ustar00quentinwheelfrom collections import deque, OrderedDict from math import inf import attr from outcome import Error, Value from .abc import SendChannel, ReceiveChannel, Channel from ._util import generic_function, NoPublicConstructor import trio from ._core import enable_ki_protection @generic_function def open_memory_channel(max_buffer_size): """Open a channel for passing objects between tasks within a process. Memory channels are lightweight, cheap to allocate, and entirely in-memory. They don't involve any operating-system resources, or any kind of serialization. They just pass Python objects directly between tasks (with a possible stop in an internal buffer along the way). Channel objects can be closed by calling `~trio.abc.AsyncResource.aclose` or using ``async with``. They are *not* automatically closed when garbage collected. Closing memory channels isn't mandatory, but it is generally a good idea, because it helps avoid situations where tasks get stuck waiting on a channel when there's no-one on the other side. See :ref:`channel-shutdown` for details. Memory channel operations are all atomic with respect to cancellation: either `~trio.abc.ReceiveChannel.receive` will successfully return an object, or it will raise :exc:`Cancelled` while leaving the channel unchanged. Args: max_buffer_size (int or math.inf): The maximum number of items that can be buffered in the channel before :meth:`~trio.abc.SendChannel.send` blocks. Choosing a sensible value here is important to ensure that backpressure is communicated promptly and avoid unnecessary latency; see :ref:`channel-buffering` for more details. If in doubt, use 0. Returns: A pair ``(send_channel, receive_channel)``. If you have trouble remembering which order these go in, remember: data flows from left → right.
In addition to the standard channel methods, all memory channel objects provide a ``statistics()`` method, which returns an object with the following fields: * ``current_buffer_used``: The number of items currently stored in the channel buffer. * ``max_buffer_size``: The maximum number of items allowed in the buffer, as passed to :func:`open_memory_channel`. * ``open_send_channels``: The number of open :class:`MemorySendChannel` endpoints pointing to this channel. Initially 1, but can be increased by :meth:`MemorySendChannel.clone`. * ``open_receive_channels``: Likewise, but for open :class:`MemoryReceiveChannel` endpoints. * ``tasks_waiting_send``: The number of tasks blocked in ``send`` on this channel (summing over all clones). * ``tasks_waiting_receive``: The number of tasks blocked in ``receive`` on this channel (summing over all clones). """ if max_buffer_size != inf and not isinstance(max_buffer_size, int): raise TypeError("max_buffer_size must be an integer or math.inf") if max_buffer_size < 0: raise ValueError("max_buffer_size must be >= 0") state = MemoryChannelState(max_buffer_size) return ( MemorySendChannel._create(state), MemoryReceiveChannel._create(state), ) @attr.s(frozen=True, slots=True) class MemoryChannelStats: current_buffer_used = attr.ib() max_buffer_size = attr.ib() open_send_channels = attr.ib() open_receive_channels = attr.ib() tasks_waiting_send = attr.ib() tasks_waiting_receive = attr.ib() @attr.s(slots=True) class MemoryChannelState: max_buffer_size = attr.ib() data = attr.ib(factory=deque) # Counts of open endpoints using this state open_send_channels = attr.ib(default=0) open_receive_channels = attr.ib(default=0) # {task: value} send_tasks = attr.ib(factory=OrderedDict) # {task: None} receive_tasks = attr.ib(factory=OrderedDict) def statistics(self): return MemoryChannelStats( current_buffer_used=len(self.data), max_buffer_size=self.max_buffer_size, open_send_channels=self.open_send_channels, open_receive_channels=self.open_receive_channels, tasks_waiting_send=len(self.send_tasks), tasks_waiting_receive=len(self.receive_tasks), ) @attr.s(eq=False, repr=False) class MemorySendChannel(SendChannel, metaclass=NoPublicConstructor): _state = attr.ib() _closed = attr.ib(default=False) # This is just the tasks waiting on *this* object. As compared to # self._state.send_tasks, which includes tasks from this object and # all clones. _tasks = attr.ib(factory=set) def __attrs_post_init__(self): self._state.open_send_channels += 1 def __repr__(self): return "".format( id(self), id(self._state) ) def statistics(self): # XX should we also report statistics specific to this object? return self._state.statistics() @enable_ki_protection def send_nowait(self, value): """Like `~trio.abc.SendChannel.send`, but if the channel's buffer is full, raises `WouldBlock` instead of blocking. """ if self._closed: raise trio.ClosedResourceError if self._state.open_receive_channels == 0: raise trio.BrokenResourceError if self._state.receive_tasks: assert not self._state.data task, _ = self._state.receive_tasks.popitem(last=False) task.custom_sleep_data._tasks.remove(task) trio.lowlevel.reschedule(task, Value(value)) elif len(self._state.data) < self._state.max_buffer_size: self._state.data.append(value) else: raise trio.WouldBlock @enable_ki_protection async def send(self, value): """See `SendChannel.send `. Memory channels allow multiple tasks to call `send` at the same time. 
""" await trio.lowlevel.checkpoint_if_cancelled() try: self.send_nowait(value) except trio.WouldBlock: pass else: await trio.lowlevel.cancel_shielded_checkpoint() return task = trio.lowlevel.current_task() self._tasks.add(task) self._state.send_tasks[task] = value task.custom_sleep_data = self def abort_fn(_): self._tasks.remove(task) del self._state.send_tasks[task] return trio.lowlevel.Abort.SUCCEEDED await trio.lowlevel.wait_task_rescheduled(abort_fn) @enable_ki_protection def clone(self): """Clone this send channel object. This returns a new `MemorySendChannel` object, which acts as a duplicate of the original: sending on the new object does exactly the same thing as sending on the old object. (If you're familiar with `os.dup`, then this is a similar idea.) However, closing one of the objects does not close the other, and receivers don't get `EndOfChannel` until *all* clones have been closed. This is useful for communication patterns that involve multiple producers all sending objects to the same destination. If you give each producer its own clone of the `MemorySendChannel`, and then make sure to close each `MemorySendChannel` when it's finished, receivers will automatically get notified when all producers are finished. See :ref:`channel-mpmc` for examples. Raises: trio.ClosedResourceError: if you already closed this `MemorySendChannel` object. """ if self._closed: raise trio.ClosedResourceError return MemorySendChannel._create(self._state) def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() @enable_ki_protection def close(self): """Close this send channel object synchronously. All channel objects have an asynchronous `~.AsyncResource.aclose` method. Memory channels can also be closed synchronously. This has the same effect on the channel and other tasks using it, but `close` is not a trio checkpoint. This simplifies cleaning up in cancelled tasks. Using ``with send_channel:`` will close the channel object on leaving the with block. """ if self._closed: return self._closed = True for task in self._tasks: trio.lowlevel.reschedule(task, Error(trio.ClosedResourceError())) del self._state.send_tasks[task] self._tasks.clear() self._state.open_send_channels -= 1 if self._state.open_send_channels == 0: assert not self._state.send_tasks for task in self._state.receive_tasks: task.custom_sleep_data._tasks.remove(task) trio.lowlevel.reschedule(task, Error(trio.EndOfChannel())) self._state.receive_tasks.clear() @enable_ki_protection async def aclose(self): self.close() await trio.lowlevel.checkpoint() @attr.s(eq=False, repr=False) class MemoryReceiveChannel(ReceiveChannel, metaclass=NoPublicConstructor): _state = attr.ib() _closed = attr.ib(default=False) _tasks = attr.ib(factory=set) def __attrs_post_init__(self): self._state.open_receive_channels += 1 def statistics(self): return self._state.statistics() def __repr__(self): return "".format( id(self), id(self._state) ) @enable_ki_protection def receive_nowait(self): """Like `~trio.abc.ReceiveChannel.receive`, but if there's nothing ready to receive, raises `WouldBlock` instead of blocking. 
""" if self._closed: raise trio.ClosedResourceError if self._state.send_tasks: task, value = self._state.send_tasks.popitem(last=False) task.custom_sleep_data._tasks.remove(task) trio.lowlevel.reschedule(task) self._state.data.append(value) # Fall through if self._state.data: return self._state.data.popleft() if not self._state.open_send_channels: raise trio.EndOfChannel raise trio.WouldBlock @enable_ki_protection async def receive(self): """See `ReceiveChannel.receive `. Memory channels allow multiple tasks to call `receive` at the same time. The first task will get the first item sent, the second task will get the second item sent, and so on. """ await trio.lowlevel.checkpoint_if_cancelled() try: value = self.receive_nowait() except trio.WouldBlock: pass else: await trio.lowlevel.cancel_shielded_checkpoint() return value task = trio.lowlevel.current_task() self._tasks.add(task) self._state.receive_tasks[task] = None task.custom_sleep_data = self def abort_fn(_): self._tasks.remove(task) del self._state.receive_tasks[task] return trio.lowlevel.Abort.SUCCEEDED return await trio.lowlevel.wait_task_rescheduled(abort_fn) @enable_ki_protection def clone(self): """Clone this receive channel object. This returns a new `MemoryReceiveChannel` object, which acts as a duplicate of the original: receiving on the new object does exactly the same thing as receiving on the old object. However, closing one of the objects does not close the other, and the underlying channel is not closed until all clones are closed. (If you're familiar with `os.dup`, then this is a similar idea.) This is useful for communication patterns that involve multiple consumers all receiving objects from the same underlying channel. See :ref:`channel-mpmc` for examples. .. warning:: The clones all share the same underlying channel. Whenever a clone :meth:`receive`\\s a value, it is removed from the channel and the other clones do *not* receive that value. If you want to send multiple copies of the same stream of values to multiple destinations, like :func:`itertools.tee`, then you need to find some other solution; this method does *not* do that. Raises: trio.ClosedResourceError: if you already closed this `MemoryReceiveChannel` object. """ if self._closed: raise trio.ClosedResourceError return MemoryReceiveChannel._create(self._state) def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() @enable_ki_protection def close(self): """Close this receive channel object synchronously. All channel objects have an asynchronous `~.AsyncResource.aclose` method. Memory channels can also be closed synchronously. This has the same effect on the channel and other tasks using it, but `close` is not a trio checkpoint. This simplifies cleaning up in cancelled tasks. Using ``with receive_channel:`` will close the channel object on leaving the with block. 
""" if self._closed: return self._closed = True for task in self._tasks: trio.lowlevel.reschedule(task, Error(trio.ClosedResourceError())) del self._state.receive_tasks[task] self._tasks.clear() self._state.open_receive_channels -= 1 if self._state.open_receive_channels == 0: assert not self._state.receive_tasks for task in self._state.send_tasks: task.custom_sleep_data._tasks.remove(task) trio.lowlevel.reschedule(task, Error(trio.BrokenResourceError())) self._state.send_tasks.clear() self._state.data.clear() @enable_ki_protection async def aclose(self): self.close() await trio.lowlevel.checkpoint() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1623751952.7851634 trio-0.19.0/trio/_core/0000755000076500000000000000000000000000000014164 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/__init__.py0000644000076500000000000000403500000000000016277 0ustar00quentinwheel""" This namespace represents the core functionality that has to be built-in and deal with private internal data structures. Things in this namespace are publicly available in either trio, trio.lowlevel, or trio.testing. """ import sys from ._exceptions import ( TrioInternalError, RunFinishedError, WouldBlock, Cancelled, BusyResourceError, ClosedResourceError, BrokenResourceError, EndOfChannel, ) from ._multierror import MultiError from ._ki import ( enable_ki_protection, disable_ki_protection, currently_ki_protected, ) # Imports that always exist from ._run import ( Task, CancelScope, run, open_nursery, checkpoint, current_task, current_effective_deadline, checkpoint_if_cancelled, TASK_STATUS_IGNORED, current_statistics, current_trio_token, reschedule, remove_instrument, add_instrument, current_clock, current_root_task, spawn_system_task, current_time, wait_all_tasks_blocked, wait_readable, wait_writable, notify_closing, Nursery, start_guest_run, ) # Has to come after _run to resolve a circular import from ._traps import ( cancel_shielded_checkpoint, Abort, wait_task_rescheduled, temporarily_detach_coroutine_object, permanently_detach_coroutine_object, reattach_detached_coroutine_object, ) from ._entry_queue import TrioToken from ._parking_lot import ParkingLot from ._unbounded_queue import UnboundedQueue from ._local import RunVar from ._thread_cache import start_thread_soon from ._mock_clock import MockClock # Windows imports if sys.platform == "win32": from ._run import ( monitor_completion_key, current_iocp, register_with_iocp, wait_overlapped, write_overlapped, readinto_overlapped, ) # Kqueue imports elif sys.platform != "linux" and sys.platform != "win32": from ._run import current_kqueue, monitor_kevent, wait_kevent del sys # It would be better to import sys as _sys, but mypy does not understand it ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_asyncgens.py0000644000076500000000000002142000000000000016666 0ustar00quentinwheelimport attr import logging import sys import warnings import weakref from .._util import name_asyncgen from . import _run from .. import _core # Used to log exceptions in async generator finalizers ASYNCGEN_LOGGER = logging.getLogger("trio.async_generator_errors") @attr.s(eq=False, slots=True) class AsyncGenerators: # Async generators are added to this set when first iterated. Any # left after the main task exits will be closed before trio.run() # returns. During most of the run, this is a WeakSet so GC works. 
# During shutdown, when we're finalizing all the remaining # asyncgens after the system nursery has been closed, it's a # regular set so we don't have to deal with GC firing at # unexpected times. alive = attr.ib(factory=weakref.WeakSet) # This collects async generators that get garbage collected during # the one-tick window between the system nursery closing and the # init task starting end-of-run asyncgen finalization. trailing_needs_finalize = attr.ib(factory=set) prev_hooks = attr.ib(init=False) def install_hooks(self, runner): def firstiter(agen): if hasattr(_run.GLOBAL_RUN_CONTEXT, "task"): self.alive.add(agen) else: # An async generator first iterated outside of a Trio # task doesn't belong to Trio. Probably we're in guest # mode and the async generator belongs to our host. # The locals dictionary is the only good place to # remember this fact, at least until # https://bugs.python.org/issue40916 is implemented. agen.ag_frame.f_locals["@trio_foreign_asyncgen"] = True if self.prev_hooks.firstiter is not None: self.prev_hooks.firstiter(agen) def finalize_in_trio_context(agen, agen_name): try: runner.spawn_system_task( self._finalize_one, agen, agen_name, name=f"close asyncgen {agen_name} (abandoned)", ) except RuntimeError: # There is a one-tick window where the system nursery # is closed but the init task hasn't yet made # self.asyncgens a strong set to disable GC. We seem to # have hit it. self.trailing_needs_finalize.add(agen) def finalizer(agen): agen_name = name_asyncgen(agen) try: is_ours = not agen.ag_frame.f_locals.get("@trio_foreign_asyncgen") except AttributeError: # pragma: no cover is_ours = True if is_ours: runner.entry_queue.run_sync_soon( finalize_in_trio_context, agen, agen_name ) # Do this last, because it might raise an exception # depending on the user's warnings filter. (That # exception will be printed to the terminal and # ignored, since we're running in GC context.) warnings.warn( f"Async generator {agen_name!r} was garbage collected before it " f"had been exhausted. Surround its use in 'async with " f"aclosing(...):' to ensure that it gets cleaned up as soon as " f"you're done using it.", ResourceWarning, stacklevel=2, source=agen, ) else: # Not ours -> forward to the host loop's async generator finalizer if self.prev_hooks.finalizer is not None: self.prev_hooks.finalizer(agen) else: # Host has no finalizer. Reimplement the default # Python behavior with no hooks installed: throw in # GeneratorExit, step once, raise RuntimeError if # it doesn't exit. closer = agen.aclose() try: # If the next thing is a yield, this will raise RuntimeError # which we allow to propagate closer.send(None) except StopIteration: pass else: # If the next thing is an await, we get here. Give a nicer # error than the default "async generator ignored GeneratorExit" raise RuntimeError( f"Non-Trio async generator {agen_name!r} awaited something " f"during finalization; install a finalization hook to " f"support this, or wrap it in 'async with aclosing(...):'" ) self.prev_hooks = sys.get_asyncgen_hooks() sys.set_asyncgen_hooks(firstiter=firstiter, finalizer=finalizer) async def finalize_remaining(self, runner): # This is called from init after shutting down the system nursery. # The only tasks running at this point are init and # the run_sync_soon task, and since the system nursery is closed, # there's no way for user code to spawn more. 
assert _core.current_task() is runner.init_task assert len(runner.tasks) == 2 # To make async generator finalization easier to reason # about, we'll shut down asyncgen garbage collection by turning # the alive WeakSet into a regular set. self.alive = set(self.alive) # Process all pending run_sync_soon callbacks, in case one of # them was an asyncgen finalizer that snuck in under the wire. runner.entry_queue.run_sync_soon(runner.reschedule, runner.init_task) await _core.wait_task_rescheduled( lambda _: _core.Abort.FAILED # pragma: no cover ) self.alive.update(self.trailing_needs_finalize) self.trailing_needs_finalize.clear() # None of the still-living tasks use async generators, so # every async generator must be suspended at a yield point -- # there's no one to be doing the iteration. That's good, # because aclose() only works on an asyncgen that's suspended # at a yield point. (If it's suspended at an event loop trap, # because someone is in the middle of iterating it, then you # get a RuntimeError on 3.8+, and a nasty surprise on earlier # versions due to https://bugs.python.org/issue32526.) # # However, once we start aclose() of one async generator, it # might start fetching the next value from another, thus # preventing us from closing that other (at least until # aclose() of the first one is complete). This constraint # effectively requires us to finalize the remaining asyncgens # in arbitrary order, rather than doing all of them at the # same time. On 3.8+ we could defer any generator with # ag_running=True to a later batch, but that only catches # the case where our aclose() starts after the user's # asend()/etc. If our aclose() starts first, then the # user's asend()/etc will raise RuntimeError, since they're # probably not checking ag_running. # # It might be possible to allow some parallelized cleanup if # we can determine that a certain set of asyncgens have no # interdependencies, using gc.get_referents() and such. # But just doing one at a time will typically work well enough # (since each aclose() executes in a cancelled scope) and # is much easier to reason about. # It's possible that that cleanup code will itself create # more async generators, so we iterate repeatedly until # all are gone. while self.alive: batch = self.alive self.alive = set() for agen in batch: await self._finalize_one(agen, name_asyncgen(agen)) def close(self): sys.set_asyncgen_hooks(*self.prev_hooks) async def _finalize_one(self, agen, name): try: # This shield ensures that finalize_asyncgen never exits # with an exception, not even a Cancelled. The inside # is cancelled so there's no deadlock risk. with _core.CancelScope(shield=True) as cancel_scope: cancel_scope.cancel() await agen.aclose() except BaseException: ASYNCGEN_LOGGER.exception( "Exception ignored during finalization of async generator %r -- " "surround your use of the generator in 'async with aclosing(...):' " "to raise exceptions like this in the context where they're generated", name, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_entry_queue.py0000644000076500000000000002040500000000000017243 0ustar00quentinwheelfrom collections import deque import threading import attr from .. import _core from .._util import NoPublicConstructor from ._wakeup_socketpair import WakeupSocketpair @attr.s(slots=True) class EntryQueue: # This used to use a queue.Queue. 
but that was broken, because Queues are # implemented in Python, and not reentrant -- so it was thread-safe, but # not signal-safe. deque is implemented in C, so each operation is atomic # WRT threads (and this is guaranteed in the docs), AND each operation is # atomic WRT signal delivery (signal handlers can run on either side, but # not *during* a deque operation). dict makes similar guarantees - and on # CPython 3.6 and PyPy, it's even ordered! queue = attr.ib(factory=deque) idempotent_queue = attr.ib(factory=dict) wakeup = attr.ib(factory=WakeupSocketpair) done = attr.ib(default=False) # Must be a reentrant lock, because it's acquired from signal handlers. # RLock is signal-safe as of cpython 3.2. NB that this does mean that the # lock is effectively *disabled* when we enter from signal context. The # way we use the lock this is OK though, because when # run_sync_soon is called from a signal it's atomic WRT the # main thread -- it just might happen at some inconvenient place. But if # you look at the one place where the main thread holds the lock, it's # just to make 1 assignment, so that's atomic WRT a signal anyway. lock = attr.ib(factory=threading.RLock) async def task(self): assert _core.currently_ki_protected() # RLock has two implementations: a signal-safe version in _thread, and # and signal-UNsafe version in threading. We need the signal safe # version. Python 3.2 and later should always use this anyway, but, # since the symptoms if this goes wrong are just "weird rare # deadlocks", then let's make a little check. # See: # https://bugs.python.org/issue13697#msg237140 assert self.lock.__class__.__module__ == "_thread" def run_cb(job): # We run this with KI protection enabled; it's the callback's # job to disable it if it wants it disabled. Exceptions are # treated like system task exceptions (i.e., converted into # TrioInternalError and cause everything to shut down). sync_fn, args = job try: sync_fn(*args) except BaseException as exc: async def kill_everything(exc): raise exc try: _core.spawn_system_task(kill_everything, exc) except RuntimeError: # We're quite late in the shutdown process and the # system nursery is already closed. # TODO(2020-06): this is a gross hack and should # be fixed soon when we address #1607. _core.current_task().parent_nursery.start_soon(kill_everything, exc) return True # This has to be carefully written to be safe in the face of new items # being queued while we iterate, and to do a bounded amount of work on # each pass: def run_all_bounded(): for _ in range(len(self.queue)): run_cb(self.queue.popleft()) for job in list(self.idempotent_queue): del self.idempotent_queue[job] run_cb(job) try: while True: run_all_bounded() if not self.queue and not self.idempotent_queue: await self.wakeup.wait_woken() else: await _core.checkpoint() except _core.Cancelled: # Keep the work done with this lock held as minimal as possible, # because it doesn't protect us against concurrent signal delivery # (see the comment above). Notice that this code would still be # correct if written like: # self.done = True # with self.lock: # pass # because all we want is to force run_sync_soon # to either be completely before or completely after the write to # done. That's why we don't need the lock to protect # against signal handlers. 
with self.lock: self.done = True # No more jobs will be submitted, so just clear out any residual # ones: run_all_bounded() assert not self.queue assert not self.idempotent_queue def close(self): self.wakeup.close() def size(self): return len(self.queue) + len(self.idempotent_queue) def run_sync_soon(self, sync_fn, *args, idempotent=False): with self.lock: if self.done: raise _core.RunFinishedError("run() has exited") # We have to hold the lock all the way through here, because # otherwise the main thread might exit *while* we're doing these # calls, and then our queue item might not be processed, or the # wakeup call might trigger an OSError b/c the IO manager has # already been shut down. if idempotent: self.idempotent_queue[(sync_fn, args)] = None else: self.queue.append((sync_fn, args)) self.wakeup.wakeup_thread_and_signal_safe() @attr.s(eq=False, hash=False, slots=True) class TrioToken(metaclass=NoPublicConstructor): """An opaque object representing a single call to :func:`trio.run`. It has no public constructor; instead, see :func:`current_trio_token`. This object has two uses: 1. It lets you re-enter the Trio run loop from external threads or signal handlers. This is the low-level primitive that :func:`trio.to_thread` and `trio.from_thread` use to communicate with worker threads, that `trio.open_signal_receiver` uses to receive notifications about signals, and so forth. 2. Each call to :func:`trio.run` has exactly one associated :class:`TrioToken` object, so you can use it to identify a particular call. """ _reentry_queue = attr.ib() def run_sync_soon(self, sync_fn, *args, idempotent=False): """Schedule a call to ``sync_fn(*args)`` to occur in the context of a Trio task. This is safe to call from the main thread, from other threads, and from signal handlers. This is the fundamental primitive used to re-enter the Trio run loop from outside of it. The call will happen "soon", but there's no guarantee about exactly when, and no mechanism provided for finding out when it's happened. If you need this, you'll have to build your own. The call is effectively run as part of a system task (see :func:`~trio.lowlevel.spawn_system_task`). In particular this means that: * :exc:`KeyboardInterrupt` protection is *enabled* by default; if you want ``sync_fn`` to be interruptible by control-C, then you need to use :func:`~trio.lowlevel.disable_ki_protection` explicitly. * If ``sync_fn`` raises an exception, then it's converted into a :exc:`~trio.TrioInternalError` and *all* tasks are cancelled. You should be careful that ``sync_fn`` doesn't crash. All calls with ``idempotent=False`` are processed in strict first-in first-out order. If ``idempotent=True``, then ``sync_fn`` and ``args`` must be hashable, and Trio will make a best-effort attempt to discard any call submission which is equal to an already-pending call. Trio will process these in first-in first-out order. Any ordering guarantees apply separately to ``idempotent=False`` and ``idempotent=True`` calls; there's no rule for how calls in the different categories are ordered with respect to each other. :raises trio.RunFinishedError: if the associated call to :func:`trio.run` has already exited. (Any call that *doesn't* raise this error is guaranteed to be fully processed before :func:`trio.run` exits.) 
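
        For example, here's a minimal sketch (``background`` is an
        illustrative name) of waking up a Trio task from a plain thread::

            import threading
            import trio

            def background(token, event):
                # Runs outside Trio; Event.set isn't safe to call from
                # another thread directly, so ask the run loop to do it.
                token.run_sync_soon(event.set)

            async def main():
                token = trio.lowlevel.current_trio_token()
                event = trio.Event()
                threading.Thread(target=background, args=(token, event)).start()
                await event.wait()

            trio.run(main)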
""" self._reentry_queue.run_sync_soon(sync_fn, *args, idempotent=idempotent) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_exceptions.py0000644000076500000000000001010500000000000017053 0ustar00quentinwheelimport attr from trio._util import NoPublicConstructor class TrioInternalError(Exception): """Raised by :func:`run` if we encounter a bug in Trio, or (possibly) a misuse of one of the low-level :mod:`trio.lowlevel` APIs. This should never happen! If you get this error, please file a bug. Unfortunately, if you get this error it also means that all bets are off – Trio doesn't know what is going on and its normal invariants may be void. (For example, we might have "lost track" of a task. Or lost track of all tasks.) Again, though, this shouldn't happen. """ class RunFinishedError(RuntimeError): """Raised by `trio.from_thread.run` and similar functions if the corresponding call to :func:`trio.run` has already finished. """ class WouldBlock(Exception): """Raised by ``X_nowait`` functions if ``X`` would block.""" class Cancelled(BaseException, metaclass=NoPublicConstructor): """Raised by blocking calls if the surrounding scope has been cancelled. You should let this exception propagate, to be caught by the relevant cancel scope. To remind you of this, it inherits from :exc:`BaseException` instead of :exc:`Exception`, just like :exc:`KeyboardInterrupt` and :exc:`SystemExit` do. This means that if you write something like:: try: ... except Exception: ... then this *won't* catch a :exc:`Cancelled` exception. You cannot raise :exc:`Cancelled` yourself. Attempting to do so will produce a :exc:`TypeError`. Use :meth:`cancel_scope.cancel() ` instead. .. note:: In the US it's also common to see this word spelled "canceled", with only one "l". This is a `recent `__ and `US-specific `__ innovation, and even in the US both forms are still commonly used. So for consistency with the rest of the world and with "cancellation" (which always has two "l"s), Trio uses the two "l" spelling everywhere. """ def __str__(self): return "Cancelled" class BusyResourceError(Exception): """Raised when a task attempts to use a resource that some other task is already using, and this would lead to bugs and nonsense. For example, if two tasks try to send data through the same socket at the same time, Trio will raise :class:`BusyResourceError` instead of letting the data get scrambled. """ class ClosedResourceError(Exception): """Raised when attempting to use a resource after it has been closed. Note that "closed" here means that *your* code closed the resource, generally by calling a method with a name like ``close`` or ``aclose``, or by exiting a context manager. If a problem arises elsewhere – for example, because of a network failure, or because a remote peer closed their end of a connection – then that should be indicated by a different exception class, like :exc:`BrokenResourceError` or an :exc:`OSError` subclass. """ class BrokenResourceError(Exception): """Raised when an attempt to use a resource fails due to external circumstances. For example, you might get this if you try to send data on a stream where the remote side has already closed the connection. You *don't* get this error if *you* closed the resource – in that case you get :class:`ClosedResourceError`. This exception's ``__cause__`` attribute will often contain more information about the underlying error. 
""" class EndOfChannel(Exception): """Raised when trying to receive from a :class:`trio.abc.ReceiveChannel` that has no more data to receive. This is analogous to an "end-of-file" condition, but for channels. """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_generated_instrumentation.py0000644000076500000000000000312300000000000022155 0ustar00quentinwheel# *********************************************************** # ******* WARNING: AUTOGENERATED! ALL EDITS WILL BE LOST ****** # ************************************************************* from ._run import GLOBAL_RUN_CONTEXT, _NO_SEND from ._ki import LOCALS_KEY_KI_PROTECTION_ENABLED from ._instrumentation import Instrument # fmt: off def add_instrument(instrument: Instrument) ->None: """Start instrumenting the current run loop with the given instrument. Args: instrument (trio.abc.Instrument): The instrument to activate. If ``instrument`` is already active, does nothing. """ locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.instruments.add_instrument(instrument) except AttributeError: raise RuntimeError("must be called from async context") def remove_instrument(instrument: Instrument) ->None: """Stop instrumenting the current run loop with the given instrument. Args: instrument (trio.abc.Instrument): The instrument to de-activate. Raises: KeyError: if the instrument is not currently active. This could occur either because you never added it, or because you added it and then it raised an unhandled exception and was automatically deactivated. """ locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.instruments.remove_instrument(instrument) except AttributeError: raise RuntimeError("must be called from async context") # fmt: on ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_generated_io_epoll.py0000644000076500000000000000214500000000000020517 0ustar00quentinwheel# *********************************************************** # ******* WARNING: AUTOGENERATED! ALL EDITS WILL BE LOST ****** # ************************************************************* from ._run import GLOBAL_RUN_CONTEXT, _NO_SEND from ._ki import LOCALS_KEY_KI_PROTECTION_ENABLED from ._instrumentation import Instrument # fmt: off async def wait_readable(fd): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.wait_readable(fd) except AttributeError: raise RuntimeError("must be called from async context") async def wait_writable(fd): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.wait_writable(fd) except AttributeError: raise RuntimeError("must be called from async context") def notify_closing(fd): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.io_manager.notify_closing(fd) except AttributeError: raise RuntimeError("must be called from async context") # fmt: on ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_generated_io_kqueue.py0000644000076500000000000000363400000000000020707 0ustar00quentinwheel# *********************************************************** # ******* WARNING: AUTOGENERATED! 
ALL EDITS WILL BE LOST ****** # ************************************************************* from ._run import GLOBAL_RUN_CONTEXT, _NO_SEND from ._ki import LOCALS_KEY_KI_PROTECTION_ENABLED from ._instrumentation import Instrument # fmt: off def current_kqueue(): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.io_manager.current_kqueue() except AttributeError: raise RuntimeError("must be called from async context") def monitor_kevent(ident, filter): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.io_manager.monitor_kevent(ident, filter) except AttributeError: raise RuntimeError("must be called from async context") async def wait_kevent(ident, filter, abort_func): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.wait_kevent(ident, filter, abort_func) except AttributeError: raise RuntimeError("must be called from async context") async def wait_readable(fd): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.wait_readable(fd) except AttributeError: raise RuntimeError("must be called from async context") async def wait_writable(fd): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.wait_writable(fd) except AttributeError: raise RuntimeError("must be called from async context") def notify_closing(fd): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.io_manager.notify_closing(fd) except AttributeError: raise RuntimeError("must be called from async context") # fmt: on ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_generated_io_windows.py0000644000076500000000000000544700000000000021106 0ustar00quentinwheel# *********************************************************** # ******* WARNING: AUTOGENERATED! 
ALL EDITS WILL BE LOST ****** # ************************************************************* from ._run import GLOBAL_RUN_CONTEXT, _NO_SEND from ._ki import LOCALS_KEY_KI_PROTECTION_ENABLED from ._instrumentation import Instrument # fmt: off async def wait_readable(sock): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.wait_readable(sock) except AttributeError: raise RuntimeError("must be called from async context") async def wait_writable(sock): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.wait_writable(sock) except AttributeError: raise RuntimeError("must be called from async context") def notify_closing(handle): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.io_manager.notify_closing(handle) except AttributeError: raise RuntimeError("must be called from async context") def register_with_iocp(handle): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.io_manager.register_with_iocp(handle) except AttributeError: raise RuntimeError("must be called from async context") async def wait_overlapped(handle, lpOverlapped): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.wait_overlapped(handle, lpOverlapped) except AttributeError: raise RuntimeError("must be called from async context") async def write_overlapped(handle, data, file_offset=0): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.write_overlapped(handle, data, file_offset) except AttributeError: raise RuntimeError("must be called from async context") async def readinto_overlapped(handle, buffer, file_offset=0): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.io_manager.readinto_overlapped(handle, buffer, file_offset) except AttributeError: raise RuntimeError("must be called from async context") def current_iocp(): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.io_manager.current_iocp() except AttributeError: raise RuntimeError("must be called from async context") def monitor_completion_key(): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.io_manager.monitor_completion_key() except AttributeError: raise RuntimeError("must be called from async context") # fmt: on ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_generated_run.py0000644000076500000000000002276000000000000017526 0ustar00quentinwheel# *********************************************************** # ******* WARNING: AUTOGENERATED! ALL EDITS WILL BE LOST ****** # ************************************************************* from ._run import GLOBAL_RUN_CONTEXT, _NO_SEND from ._ki import LOCALS_KEY_KI_PROTECTION_ENABLED from ._instrumentation import Instrument # fmt: off def current_statistics(): """Returns an object containing run-loop-level debugging information. Currently the following fields are defined: * ``tasks_living`` (int): The number of tasks that have been spawned and not yet exited. * ``tasks_runnable`` (int): The number of tasks that are currently queued on the run queue (as opposed to blocked waiting for something to happen). * ``seconds_to_next_deadline`` (float): The time until the next pending cancel scope deadline. 
May be negative if the deadline has expired but we haven't yet processed cancellations. May be :data:`~math.inf` if there are no pending deadlines. * ``run_sync_soon_queue_size`` (int): The number of unprocessed callbacks queued via :meth:`trio.lowlevel.TrioToken.run_sync_soon`. * ``io_statistics`` (object): Some statistics from Trio's I/O backend. This always has an attribute ``backend`` which is a string naming which operating-system-specific I/O backend is in use; the other attributes vary between backends. """ locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.current_statistics() except AttributeError: raise RuntimeError("must be called from async context") def current_time(): """Returns the current time according to Trio's internal clock. Returns: float: The current time. Raises: RuntimeError: if not inside a call to :func:`trio.run`. """ locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.current_time() except AttributeError: raise RuntimeError("must be called from async context") def current_clock(): """Returns the current :class:`~trio.abc.Clock`.""" locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.current_clock() except AttributeError: raise RuntimeError("must be called from async context") def current_root_task(): """Returns the current root :class:`Task`. This is the task that is the ultimate parent of all other tasks. """ locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.current_root_task() except AttributeError: raise RuntimeError("must be called from async context") def reschedule(task, next_send=_NO_SEND): """Reschedule the given task with the given :class:`outcome.Outcome`. See :func:`wait_task_rescheduled` for the gory details. There must be exactly one call to :func:`reschedule` for every call to :func:`wait_task_rescheduled`. (And when counting, keep in mind that returning :data:`Abort.SUCCEEDED` from an abort callback is equivalent to calling :func:`reschedule` once.) Args: task (trio.lowlevel.Task): the task to be rescheduled. Must be blocked in a call to :func:`wait_task_rescheduled`. next_send (outcome.Outcome): the value (or error) to return (or raise) from :func:`wait_task_rescheduled`. """ locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.reschedule(task, next_send) except AttributeError: raise RuntimeError("must be called from async context") def spawn_system_task(async_fn, *args, name=None): """Spawn a "system" task. System tasks have a few differences from regular tasks: * They don't need an explicit nursery; instead they go into the internal "system nursery". * If a system task raises an exception, then it's converted into a :exc:`~trio.TrioInternalError` and *all* tasks are cancelled. If you write a system task, you should be careful to make sure it doesn't crash. * System tasks are automatically cancelled when the main task exits. * By default, system tasks have :exc:`KeyboardInterrupt` protection *enabled*. If you want your task to be interruptible by control-C, then you need to use :func:`disable_ki_protection` explicitly (and come up with some plan for what to do with a :exc:`KeyboardInterrupt`, given that system tasks aren't allowed to raise exceptions). * System tasks do not inherit context variables from their creator. Towards the end of a call to :meth:`trio.run`, after the main task and all system tasks have exited, the system nursery becomes closed. 
At this point, new calls to :func:`spawn_system_task` will raise ``RuntimeError("Nursery is closed to new arrivals")`` instead of creating a system task. It's possible to encounter this state either in a ``finally`` block in an async generator, or in a callback passed to :meth:`TrioToken.run_sync_soon` at the right moment. Args: async_fn: An async callable. args: Positional arguments for ``async_fn``. If you want to pass keyword arguments, use :func:`functools.partial`. name: The name for this task. Only used for debugging/introspection (e.g. ``repr(task_obj)``). If this isn't a string, :func:`spawn_system_task` will try to make it one. A common use case is if you're wrapping a function before spawning a new task, you might pass the original function as the ``name=`` to make debugging easier. Returns: Task: the newly spawned task """ locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.spawn_system_task(async_fn, *args, name=name) except AttributeError: raise RuntimeError("must be called from async context") def current_trio_token(): """Retrieve the :class:`TrioToken` for the current call to :func:`trio.run`. """ locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return GLOBAL_RUN_CONTEXT.runner.current_trio_token() except AttributeError: raise RuntimeError("must be called from async context") async def wait_all_tasks_blocked(cushion=0.0): """Block until there are no runnable tasks. This is useful in testing code when you want to give other tasks a chance to "settle down". The calling task is blocked, and doesn't wake up until all other tasks are also blocked for at least ``cushion`` seconds. (Setting a non-zero ``cushion`` is intended to handle cases like two tasks talking to each other over a local socket, where we want to ignore the potential brief moment between a send and receive when all tasks are blocked.) Note that ``cushion`` is measured in *real* time, not the Trio clock time. If there are multiple tasks blocked in :func:`wait_all_tasks_blocked`, then the one with the shortest ``cushion`` is the one woken (and this task becoming unblocked resets the timers for the remaining tasks). If there are multiple tasks that have exactly the same ``cushion``, then all are woken. You should also consider :class:`trio.testing.Sequencer`, which provides a more explicit way to control execution ordering within a test, and will often produce more readable tests. 
Example: Here's an example of one way to test that Trio's locks are fair: we take the lock in the parent, start a child, wait for the child to be blocked waiting for the lock (!), and then check that we can't release and immediately re-acquire the lock:: async def lock_taker(lock): await lock.acquire() lock.release() async def test_lock_fairness(): lock = trio.Lock() await lock.acquire() async with trio.open_nursery() as nursery: nursery.start_soon(lock_taker, lock) # child hasn't run yet, we have the lock assert lock.locked() assert lock._owner is trio.lowlevel.current_task() await trio.testing.wait_all_tasks_blocked() # now the child has run and is blocked on lock.acquire(), we # still have the lock assert lock.locked() assert lock._owner is trio.lowlevel.current_task() lock.release() try: # The child has a prior claim, so we can't have it lock.acquire_nowait() except trio.WouldBlock: assert lock._owner is not trio.lowlevel.current_task() print("PASS") else: print("FAIL") """ locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return await GLOBAL_RUN_CONTEXT.runner.wait_all_tasks_blocked(cushion) except AttributeError: raise RuntimeError("must be called from async context") # fmt: on ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_instrumentation.py0000644000076500000000000000731400000000000020145 0ustar00quentinwheelimport logging import types import attr from typing import Any, Callable, Dict, List, Sequence, Iterator, TypeVar from .._abc import Instrument # Used to log exceptions in instruments INSTRUMENT_LOGGER = logging.getLogger("trio.abc.Instrument") F = TypeVar("F", bound=Callable[..., Any]) # Decorator to mark methods public. This does nothing by itself, but # trio/_tools/gen_exports.py looks for it. def _public(fn: F) -> F: return fn class Instruments(Dict[str, Dict[Instrument, None]]): """A collection of `trio.abc.Instrument` organized by hook. Instrumentation calls are rather expensive, and we don't want a rarely-used instrument (like before_run()) to slow down hot operations (like before_task_step()). Thus, we cache the set of instruments to be called for each hook, and skip the instrumentation call if there's nothing currently installed for that hook. """ __slots__ = () def __init__(self, incoming: Sequence[Instrument]): self["_all"] = {} for instrument in incoming: self.add_instrument(instrument) @_public def add_instrument(self, instrument: Instrument) -> None: """Start instrumenting the current run loop with the given instrument. Args: instrument (trio.abc.Instrument): The instrument to activate. If ``instrument`` is already active, does nothing. """ if instrument in self["_all"]: return self["_all"][instrument] = None try: for name in dir(instrument): if name.startswith("_"): continue try: prototype = getattr(Instrument, name) except AttributeError: continue impl = getattr(instrument, name) if isinstance(impl, types.MethodType) and impl.__func__ is prototype: # Inherited unchanged from _abc.Instrument continue self.setdefault(name, {})[instrument] = None except: self.remove_instrument(instrument) raise @_public def remove_instrument(self, instrument: Instrument) -> None: """Stop instrumenting the current run loop with the given instrument. Args: instrument (trio.abc.Instrument): The instrument to de-activate. Raises: KeyError: if the instrument is not currently active. 
This could occur either because you never added it, or because you added it and then it raised an unhandled exception and was automatically deactivated. """ # If instrument isn't present, the KeyError propagates out self["_all"].pop(instrument) for hookname, instruments in list(self.items()): if instrument in instruments: del instruments[instrument] if not instruments: del self[hookname] def call(self, hookname: str, *args: Any) -> None: """Call hookname(*args) on each applicable instrument. You must first check whether there are any instruments installed for that hook, e.g.:: if "before_task_step" in instruments: instruments.call("before_task_step", task) """ for instrument in list(self[hookname]): try: getattr(instrument, hookname)(*args) except: self.remove_instrument(instrument) INSTRUMENT_LOGGER.exception( "Exception raised when calling %r on instrument %r. " "Instrument has been disabled.", hookname, instrument, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_io_common.py0000644000076500000000000000117500000000000016660 0ustar00quentinwheelimport copy import outcome from .. import _core # Utility function shared between _io_epoll and _io_windows def wake_all(waiters, exc): try: current_task = _core.current_task() except RuntimeError: current_task = None raise_at_end = False for attr_name in ["read_task", "write_task"]: task = getattr(waiters, attr_name) if task is not None: if task is current_task: raise_at_end = True else: _core.reschedule(task, outcome.Error(copy.copy(exc))) setattr(waiters, attr_name, None) if raise_at_end: raise exc ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_io_epoll.py0000644000076500000000000003423300000000000016504 0ustar00quentinwheelimport select import sys import attr from collections import defaultdict from typing import Dict, TYPE_CHECKING from .. import _core from ._run import _public from ._io_common import wake_all from ._wakeup_socketpair import WakeupSocketpair assert not TYPE_CHECKING or sys.platform == "linux" @attr.s(slots=True, eq=False, frozen=True) class _EpollStatistics: tasks_waiting_read = attr.ib() tasks_waiting_write = attr.ib() backend = attr.ib(default="epoll") # Some facts about epoll # ---------------------- # # Internally, an epoll object is sort of like a WeakKeyDictionary where the # keys are tuples of (fd number, file object). When you call epoll_ctl, you # pass in an fd; that gets converted to an (fd number, file object) tuple by # looking up the fd in the process's fd table at the time of the call. When an # event happens on the file object, epoll_wait drops the file object part, and # just returns the fd number in its event. So from the outside it looks like # it's keeping a table of fds, but really it's a bit more complicated. This # has some subtle consequences. # # In general, file objects inside the kernel are reference counted. Each entry # in a process's fd table holds a strong reference to the corresponding file # object, and most operations that use file objects take a temporary strong # reference while they're working. So when you call close() on an fd, that # might or might not cause the file object to be deallocated -- it depends on # whether there are any other references to that file object. Some common ways # this can happen: # # - after calling dup(), you have two fds in the same process referring to the # same file object. 
Even if you close one fd (= remove that entry from the # fd table), the file object will be kept alive by the other fd. # - when calling fork(), the child inherits a copy of the parent's fd table, # so all the file objects get another reference. (But if the fork() is # followed by exec(), then all of the child's fds that have the CLOEXEC flag # set will be closed at that point.) # - most syscalls that work on fds take a strong reference to the underlying # file object while they're using it. So there's one thread blocked in # read(fd), and then another thread calls close() on the last fd referring # to that object, the underlying file won't actually be closed until # after read() returns. # # However, epoll does *not* take a reference to any of the file objects in its # interest set (that's what makes it similar to a WeakKeyDictionary). File # objects inside an epoll interest set will be deallocated if all *other* # references to them are closed. And when that happens, the epoll object will # automatically deregister that file object and stop reporting events on it. # So that's quite handy. # # But, what happens if we do this? # # fd1 = open(...) # epoll_ctl(EPOLL_CTL_ADD, fd1, ...) # fd2 = dup(fd1) # close(fd1) # # In this case, the dup() keeps the underlying file object alive, so it # remains registered in the epoll object's interest set, as the tuple (fd1, # file object). But, fd1 no longer refers to this file object! You might think # there was some magic to handle this, but unfortunately no; the consequences # are totally predictable from what I said above: # # If any events occur on the file object, then epoll will report them as # happening on fd1, even though that doesn't make sense. # # Perhaps we would like to deregister fd1 to stop getting nonsensical events. # But how? When we call epoll_ctl, we have to pass an fd number, which will # get expanded to an (fd number, file object) tuple. We can't pass fd1, # because when epoll_ctl tries to look it up, it won't find our file object. # And we can't pass fd2, because that will get expanded to (fd2, file object), # which is a different lookup key. In fact, it's *impossible* to de-register # this fd! # # We could even have fd1 get assigned to another file object, and then we can # have multiple keys registered simultaneously using the same fd number, like: # (fd1, file object 1), (fd1, file object 2). And if events happen on either # file object, then epoll will happily report that something happened to # "fd1". # # Now here's what makes this especially nasty: suppose the old file object # becomes, say, readable. That means that every time we call epoll_wait, it # will return immediately to tell us that "fd1" is readable. Normally, we # would handle this by de-registering fd1, waking up the corresponding call to # wait_readable, then the user will call read() or recv() or something, and # we're fine. But if this happens on a stale fd where we can't remove the # registration, then we might get stuck in a state where epoll_wait *always* # returns immediately, so our event loop becomes unable to sleep, and now our # program is burning 100% of the CPU doing nothing, with no way out. # # # What does this mean for Trio? # ----------------------------- # # Since we don't control the user's code, we have no way to guarantee that we # don't get stuck with stale fd's in our epoll interest set. For example, a # user could call wait_readable(fd) in one task, and then while that's # running, they might close(fd) from another task. 
In this situation, they're # *supposed* to call notify_closing(fd) to let us know what's happening, so we # can interrupt the wait_readable() call and avoid getting into this mess. And # that's the only thing that can possibly work correctly in all cases. But # sometimes user code has bugs. So if this does happen, we'd like to degrade # gracefully, and survive without corrupting Trio's internal state or # otherwise causing the whole program to explode messily. # # Our solution: we always use EPOLLONESHOT. This way, we might get *one* # spurious event on a stale fd, but then epoll will automatically silence it # until we explicitly say that we want more events... and if we have a stale # fd, then we actually can't re-enable it! So we can't get stuck in an # infinite busy-loop. If there's a stale fd hanging around, then it might # cause a spurious `BusyResourceError`, or cause one wait_* call to return # before it should have... but in general, the wait_* functions are allowed to # have some spurious wakeups; the user code will just attempt the operation, # get EWOULDBLOCK, and call wait_* again. And the program as a whole will # survive, any exceptions will propagate, etc. # # As a bonus, EPOLLONESHOT also saves us having to explicitly deregister fds # on the normal wakeup path, so it's a bit more efficient in general. # # However, EPOLLONESHOT has a few trade-offs to consider: # # First, you can't combine EPOLLONESHOT with EPOLLEXCLUSIVE. This is a bit sad # in one somewhat rare case: if you have a multi-process server where a group # of processes all share the same listening socket, then EPOLLEXCLUSIVE can be # used to avoid "thundering herd" problems when a new connection comes in. But # this isn't too bad. It's not clear if EPOLLEXCLUSIVE even works for us # anyway: # # https://stackoverflow.com/questions/41582560/how-does-epolls-epollexclusive-mode-interact-with-level-triggering # # And it's not clear that EPOLLEXCLUSIVE is a great approach either: # # https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/ # # And if we do need to support this, we could always add support through some # more-specialized API in the future. So this isn't a blocker to using # EPOLLONESHOT. # # Second, EPOLLONESHOT does not actually *deregister* the fd after delivering # an event (EPOLL_CTL_DEL). Instead, it keeps the fd registered, but # effectively does an EPOLL_CTL_MOD to set the fd's interest flags to # all-zeros. So we could still end up with an fd hanging around in the # interest set for a long time, even if we're not using it. # # Fortunately, this isn't a problem, because it's only a weak reference – if # we have a stale fd that's been silenced by EPOLLONESHOT, then it wastes a # tiny bit of kernel memory remembering this fd that can never be revived, but # when the underlying file object is eventually closed, that memory will be # reclaimed. So that's OK. # # The other issue is that when someone calls wait_*, using EPOLLONESHOT means # that if we have ever waited for this fd before, we have to use EPOLL_CTL_MOD # to re-enable it; but if it's a new fd, we have to use EPOLL_CTL_ADD. How do # we know which one to use? There's no reasonable way to track which fds are # currently registered -- remember, we're assuming the user might have gone # and rearranged their fds without telling us! # # Fortunately, this also has a simple solution: if we wait on a socket or # other fd once, then we'll probably wait on it lots of times. 
And the epoll # object itself knows which fds it already has registered. So when an fd comes # in, we optimistically assume that it's been waited on before, and try doing # EPOLL_CTL_MOD. And if that fails with an ENOENT error, then we try again # with EPOLL_CTL_ADD. # # So that's why this code is the way it is. And now you know more than you # wanted to about how epoll works. @attr.s(slots=True, eq=False) class EpollWaiters: read_task = attr.ib(default=None) write_task = attr.ib(default=None) current_flags = attr.ib(default=0) @attr.s(slots=True, eq=False, hash=False) class EpollIOManager: _epoll = attr.ib(factory=select.epoll) # {fd: EpollWaiters} _registered = attr.ib( factory=lambda: defaultdict(EpollWaiters), type=Dict[int, EpollWaiters] ) _force_wakeup = attr.ib(factory=WakeupSocketpair) _force_wakeup_fd = attr.ib(default=None) def __attrs_post_init__(self): self._epoll.register(self._force_wakeup.wakeup_sock, select.EPOLLIN) self._force_wakeup_fd = self._force_wakeup.wakeup_sock.fileno() def statistics(self): tasks_waiting_read = 0 tasks_waiting_write = 0 for waiter in self._registered.values(): if waiter.read_task is not None: tasks_waiting_read += 1 if waiter.write_task is not None: tasks_waiting_write += 1 return _EpollStatistics( tasks_waiting_read=tasks_waiting_read, tasks_waiting_write=tasks_waiting_write, ) def close(self): self._epoll.close() self._force_wakeup.close() def force_wakeup(self): self._force_wakeup.wakeup_thread_and_signal_safe() # Return value must be False-y IFF the timeout expired, NOT if any I/O # happened or force_wakeup was called. Otherwise it can be anything; gets # passed straight through to process_events. def get_events(self, timeout): # max_events must be > 0 or epoll gets cranky # accessing self._registered from a thread looks dangerous, but it's # OK because it doesn't matter if our value is a little bit off. max_events = max(1, len(self._registered)) return self._epoll.poll(timeout, max_events) def process_events(self, events): for fd, flags in events: if fd == self._force_wakeup_fd: self._force_wakeup.drain() continue waiters = self._registered[fd] # EPOLLONESHOT always clears the flags when an event is delivered waiters.current_flags = 0 # Clever hack stolen from selectors.EpollSelector: an event # with EPOLLHUP or EPOLLERR flags wakes both readers and # writers. if flags & ~select.EPOLLIN and waiters.write_task is not None: _core.reschedule(waiters.write_task) waiters.write_task = None if flags & ~select.EPOLLOUT and waiters.read_task is not None: _core.reschedule(waiters.read_task) waiters.read_task = None self._update_registrations(fd) def _update_registrations(self, fd): waiters = self._registered[fd] wanted_flags = 0 if waiters.read_task is not None: wanted_flags |= select.EPOLLIN if waiters.write_task is not None: wanted_flags |= select.EPOLLOUT if wanted_flags != waiters.current_flags: try: try: # First try EPOLL_CTL_MOD self._epoll.modify(fd, wanted_flags | select.EPOLLONESHOT) except OSError: # If that fails, it might be a new fd; try EPOLL_CTL_ADD self._epoll.register(fd, wanted_flags | select.EPOLLONESHOT) waiters.current_flags = wanted_flags except OSError as exc: # If everything fails, probably it's a bad fd, e.g. because # the fd was closed behind our back. In this case we don't # want to try to unregister the fd, because that will probably # fail too. Just clear our state and wake everyone up. del self._registered[fd] # This could raise (in case we're calling this inside one of # the to-be-woken tasks), so we have to do it last. 
wake_all(waiters, exc) return if not wanted_flags: del self._registered[fd] async def _epoll_wait(self, fd, attr_name): if not isinstance(fd, int): fd = fd.fileno() waiters = self._registered[fd] if getattr(waiters, attr_name) is not None: raise _core.BusyResourceError( "another task is already reading / writing this fd" ) setattr(waiters, attr_name, _core.current_task()) self._update_registrations(fd) def abort(_): setattr(waiters, attr_name, None) self._update_registrations(fd) return _core.Abort.SUCCEEDED await _core.wait_task_rescheduled(abort) @_public async def wait_readable(self, fd): await self._epoll_wait(fd, "read_task") @_public async def wait_writable(self, fd): await self._epoll_wait(fd, "write_task") @_public def notify_closing(self, fd): if not isinstance(fd, int): fd = fd.fileno() wake_all( self._registered[fd], _core.ClosedResourceError("another task closed this fd"), ) del self._registered[fd] try: self._epoll.unregister(fd) except (OSError, ValueError): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_io_kqueue.py0000644000076500000000000001600100000000000016661 0ustar00quentinwheelimport select import sys from typing import TYPE_CHECKING import outcome from contextlib import contextmanager import attr import errno from .. import _core from ._run import _public from ._wakeup_socketpair import WakeupSocketpair assert not TYPE_CHECKING or (sys.platform != "linux" and sys.platform != "win32") @attr.s(slots=True, eq=False, frozen=True) class _KqueueStatistics: tasks_waiting = attr.ib() monitors = attr.ib() backend = attr.ib(default="kqueue") @attr.s(slots=True, eq=False) class KqueueIOManager: _kqueue = attr.ib(factory=select.kqueue) # {(ident, filter): Task or UnboundedQueue} _registered = attr.ib(factory=dict) _force_wakeup = attr.ib(factory=WakeupSocketpair) _force_wakeup_fd = attr.ib(default=None) def __attrs_post_init__(self): force_wakeup_event = select.kevent( self._force_wakeup.wakeup_sock, select.KQ_FILTER_READ, select.KQ_EV_ADD ) self._kqueue.control([force_wakeup_event], 0) self._force_wakeup_fd = self._force_wakeup.wakeup_sock.fileno() def statistics(self): tasks_waiting = 0 monitors = 0 for receiver in self._registered.values(): if type(receiver) is _core.Task: tasks_waiting += 1 else: monitors += 1 return _KqueueStatistics(tasks_waiting=tasks_waiting, monitors=monitors) def close(self): self._kqueue.close() self._force_wakeup.close() def force_wakeup(self): self._force_wakeup.wakeup_thread_and_signal_safe() def get_events(self, timeout): # max_events must be > 0 or kqueue gets cranky # and we generally want this to be strictly larger than the actual # number of events we get, so that we can tell that we've gotten # all the events in just 1 call. max_events = len(self._registered) + 1 events = [] while True: batch = self._kqueue.control([], max_events, timeout) events += batch if len(batch) < max_events: break else: timeout = 0 # and loop back to the start return events def process_events(self, events): for event in events: key = (event.ident, event.filter) if event.ident == self._force_wakeup_fd: self._force_wakeup.drain() continue receiver = self._registered[key] if event.flags & select.KQ_EV_ONESHOT: del self._registered[key] if type(receiver) is _core.Task: _core.reschedule(receiver, outcome.Value(event)) else: receiver.put_nowait(event) # kevent registration is complicated -- e.g. 
aio submission can # implicitly perform a EV_ADD, and EVFILT_PROC with NOTE_TRACK will # automatically register filters for child processes. So our lowlevel # API is *very* low-level: we expose the kqueue itself for adding # events or sticking into AIO submission structs, and split waiting # off into separate methods. It's your responsibility to make sure # that handle_io never receives an event without a corresponding # registration! This may be challenging if you want to be careful # about e.g. KeyboardInterrupt. Possibly this API could be improved to # be more ergonomic... @_public def current_kqueue(self): return self._kqueue @contextmanager @_public def monitor_kevent(self, ident, filter): key = (ident, filter) if key in self._registered: raise _core.BusyResourceError( "attempt to register multiple listeners for same ident/filter pair" ) q = _core.UnboundedQueue() self._registered[key] = q try: yield q finally: del self._registered[key] @_public async def wait_kevent(self, ident, filter, abort_func): key = (ident, filter) if key in self._registered: raise _core.BusyResourceError( "attempt to register multiple listeners for same ident/filter pair" ) self._registered[key] = _core.current_task() def abort(raise_cancel): r = abort_func(raise_cancel) if r is _core.Abort.SUCCEEDED: del self._registered[key] return r return await _core.wait_task_rescheduled(abort) async def _wait_common(self, fd, filter): if not isinstance(fd, int): fd = fd.fileno() flags = select.KQ_EV_ADD | select.KQ_EV_ONESHOT event = select.kevent(fd, filter, flags) self._kqueue.control([event], 0) def abort(_): event = select.kevent(fd, filter, select.KQ_EV_DELETE) try: self._kqueue.control([event], 0) except OSError as exc: # kqueue tracks individual fds (*not* the underlying file # object, see _io_epoll.py for a long discussion of why this # distinction matters), and automatically deregisters an event # if the fd is closed. So if kqueue.control says that it # doesn't know about this event, then probably it's because # the fd was closed behind our backs. (Too bad we can't ask it # to wake us up when this happens, versus discovering it after # the fact... oh well, you can't have everything.) # # FreeBSD reports this using EBADF. macOS uses ENOENT. if exc.errno in (errno.EBADF, errno.ENOENT): # pragma: no branch pass else: # pragma: no cover # As far as we know, this branch can't happen. raise return _core.Abort.SUCCEEDED await self.wait_kevent(fd, filter, abort) @_public async def wait_readable(self, fd): await self._wait_common(fd, select.KQ_FILTER_READ) @_public async def wait_writable(self, fd): await self._wait_common(fd, select.KQ_FILTER_WRITE) @_public def notify_closing(self, fd): if not isinstance(fd, int): fd = fd.fileno() for filter in [select.KQ_FILTER_READ, select.KQ_FILTER_WRITE]: key = (fd, filter) receiver = self._registered.get(key) if receiver is None: continue if type(receiver) is _core.Task: event = select.kevent(fd, filter, select.KQ_EV_DELETE) self._kqueue.control([event], 0) exc = _core.ClosedResourceError("another task closed this fd") _core.reschedule(receiver, outcome.Error(exc)) del self._registered[key] else: # XX this is an interesting example of a case where being able # to close a queue would be useful... 
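# (A parked Task can simply be rescheduled with an error, as in the
# branch above, but an UnboundedQueue gives us no way to deliver an
# exception to whoever is draining it -- hence the NotImplementedError
# below.)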
raise NotImplementedError( "can't close an fd that monitor_kevent is using" ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_io_windows.py0000644000076500000000000011015400000000000017060 0ustar00quentinwheelimport itertools from contextlib import contextmanager import enum import socket import sys from typing import TYPE_CHECKING import attr from outcome import Value from .. import _core from ._run import _public from ._io_common import wake_all from ._windows_cffi import ( ffi, kernel32, ntdll, ws2_32, INVALID_HANDLE_VALUE, raise_winerror, _handle, ErrorCodes, FileFlags, AFDPollFlags, WSAIoctls, CompletionModes, IoControlCodes, ) assert not TYPE_CHECKING or sys.platform == "win32" # There's a lot to be said about the overall design of a Windows event # loop. See # # https://github.com/python-trio/trio/issues/52 # # for discussion. This now just has some lower-level notes: # # How IOCP fits together: # # The general model is that you call some function like ReadFile or WriteFile # to tell the kernel that you want it to perform some operation, and the # kernel goes off and does that in the background, then at some point later it # sends you a notification that the operation is complete. There are some more # exotic APIs that don't quite fit this pattern, but most APIs do. # # Each background operation is tracked using an OVERLAPPED struct, that # uniquely identifies that particular operation. # # An "IOCP" (or "I/O completion port") is an object that lets the kernel send # us these notifications -- basically it's just a kernel->userspace queue. # # Each IOCP notification is represented by an OVERLAPPED_ENTRY struct, which # contains 3 fields: # - The "completion key". This is an opaque integer that we pick, and use # however is convenient. # - pointer to the OVERLAPPED struct for the completed operation. # - dwNumberOfBytesTransferred (an integer). # # And in addition, for regular I/O, the OVERLAPPED structure gets filled in # with: # - result code (named "Internal") # - number of bytes transferred (named "InternalHigh"); usually redundant # with dwNumberOfBytesTransferred. # # There are also some other entries in OVERLAPPED which only matter on input: # - Offset and OffsetHigh which are inputs to {Read,Write}File and # otherwise always zero # - hEvent which is for if you aren't using IOCP; we always set it to zero. # # That describes the usual pattern for operations and the usual meaning of # these struct fields, but really these are just some arbitrary chunks of # bytes that get passed back and forth, so some operations like to overload # them to mean something else. # # You can also directly queue an OVERLAPPED_ENTRY object to an IOCP by calling # PostQueuedCompletionStatus. When you use this you get to set all the # OVERLAPPED_ENTRY fields to arbitrary values. # # You can request to cancel any operation if you know which handle it was # issued on + the OVERLAPPED struct that identifies it (via CancelIoEx). This # request might fail because the operation has already completed, or it might # be queued to happen in the background, so you only find out whether it # succeeded or failed later, when we get back the notification for the # operation being complete. # # There are three types of operations that we support: # # == Regular I/O operations on handles (e.g. files or named pipes) == # # Implemented by: register_with_iocp, wait_overlapped # # To use these, you have to register the handle with your IOCP first. 
Once # it's registered, any operations on that handle will automatically send # completion events to that IOCP, with a completion key that you specify *when # the handle is registered* (so you can't use different completion keys for # different operations). # # We give these two dedicated completion keys: CKeys.WAIT_OVERLAPPED for # regular operations, and CKeys.LATE_CANCEL that's used to make # wait_overlapped cancellable even if the user forgot to call # register_with_iocp. The problem here is that after we request the cancel, # wait_overlapped keeps blocking until it sees the completion notification... # but if the user forgot to register_with_iocp, then the completion will never # come, so the cancellation will never resolve. To avoid this, whenever we try # to cancel an I/O operation and the cancellation fails, we use # PostQueuedCompletionStatus to send a CKeys.LATE_CANCEL notification. If this # arrives before the real completion, we assume the user forgot to call # register_with_iocp on their handle, and raise an error accordingly. # # == Socket state notifications == # # Implemented by: wait_readable, wait_writable # # The public APIs that windows provides for this are all really awkward and # don't integrate with IOCP. So we drop down to a lower level, and talk # directly to the socket device driver in the kernel, which is called "AFD". # Unfortunately, this is a totally undocumented internal API. Fortunately # libuv also does this, so we can be pretty confident that MS won't break it # on us, and there is a *little* bit of information out there if you go # digging. # # Basically: we open a magic file that refers to the AFD driver, register the # magic file with our IOCP, and then we can issue regular overlapped I/O # operations on that handle. Specifically, the operation we use is called # IOCTL_AFD_POLL, which lets us pass in a buffer describing which events we're # interested in on a given socket (readable, writable, etc.). Later, when the # operation completes, the kernel rewrites the buffer we passed in to record # which events happened, and uses IOCP as normal to notify us that this # operation has completed. # # Unfortunately, the Windows kernel seems to have bugs if you try to issue # multiple simultaneous IOCTL_AFD_POLL operations on the same socket (see # notes-to-self/afd-lab.py). So if a user calls wait_readable and # wait_writable at the same time, we have to combine those into a single # IOCTL_AFD_POLL. This means we can't just use the wait_overlapped machinery. # Instead we have some dedicated code to handle these operations, and a # dedicated completion key CKeys.AFD_POLL. # # Sources of information: # - https://github.com/python-trio/trio/issues/52 # - Wepoll: https://github.com/piscisaureus/wepoll/ # - libuv: https://github.com/libuv/libuv/ # - ReactOS: https://github.com/reactos/reactos/ # - Ancient leaked copies of the Windows NT and Winsock source code: # https://github.com/pustladi/Windows-2000/blob/661d000d50637ed6fab2329d30e31775046588a9/private/net/sockets/winsock2/wsp/msafd/select.c#L59-L655 # https://github.com/metoo10987/WinNT4/blob/f5c14e6b42c8f45c20fe88d14c61f9d6e0386b8e/private/ntos/afd/poll.c#L68-L707 # - The WSAEventSelect docs (this exposes a finer-grained set of events than # select(), so if you squint you can treat it as a source of information on # the fine-grained AFD poll types) # # # == Everything else == # # There are also some weirder APIs for interacting with IOCP. 
For example, the # "Job" API lets you specify an IOCP handle and "completion key", and then in # the future whenever certain events happen it sends uses IOCP to send a # notification. These notifications don't correspond to any particular # operation; they're just spontaneous messages you get. The # "dwNumberOfBytesTransferred" field gets repurposed to carry an identifier # for the message type (e.g. JOB_OBJECT_MSG_EXIT_PROCESS), and the # "lpOverlapped" field gets repurposed to carry some arbitrary data that # depends on the message type (e.g. the pid of the process that exited). # # To handle these, we have monitor_completion_key, where we hand out an # unassigned completion key, let users set it up however they want, and then # get any events that arrive on that key. # # (Note: monitor_completion_key is not documented or fully baked; expect it to # change in the future.) # Our completion keys class CKeys(enum.IntEnum): AFD_POLL = 0 WAIT_OVERLAPPED = 1 LATE_CANCEL = 2 FORCE_WAKEUP = 3 USER_DEFINED = 4 # and above def _check(success): if not success: raise_winerror() return success def _get_underlying_socket(sock, *, which=WSAIoctls.SIO_BASE_HANDLE): if hasattr(sock, "fileno"): sock = sock.fileno() base_ptr = ffi.new("HANDLE *") out_size = ffi.new("DWORD *") failed = ws2_32.WSAIoctl( ffi.cast("SOCKET", sock), which, ffi.NULL, 0, base_ptr, ffi.sizeof("HANDLE"), out_size, ffi.NULL, ffi.NULL, ) if failed: code = ws2_32.WSAGetLastError() raise_winerror(code) return base_ptr[0] def _get_base_socket(sock): # There is a development kit for LSPs called Komodia Redirector. # It does some unusual (some might say evil) things like intercepting # SIO_BASE_HANDLE (fails) and SIO_BSP_HANDLE_SELECT (returns the same # socket) in a misguided attempt to prevent bypassing it. It's been used # in malware including the infamous Lenovo Superfish incident from 2015, # but unfortunately is also used in some legitimate products such as # parental control tools and Astrill VPN. Komodia happens to not # block SIO_BSP_HANDLE_POLL, so we'll try SIO_BASE_HANDLE and fall back # to SIO_BSP_HANDLE_POLL if it doesn't work. # References: # - https://github.com/piscisaureus/wepoll/blob/0598a791bf9cbbf480793d778930fc635b044980/wepoll.c#L2223 # - https://github.com/tokio-rs/mio/issues/1314 while True: try: # If this is not a Komodia-intercepted socket, we can just use # SIO_BASE_HANDLE. return _get_underlying_socket(sock) except OSError as ex: if ex.winerror == ErrorCodes.ERROR_NOT_SOCKET: # SIO_BASE_HANDLE might fail even without LSP intervention, # if we get something that's not a socket. raise if hasattr(sock, "fileno"): sock = sock.fileno() sock = _handle(sock) next_sock = _get_underlying_socket( sock, which=WSAIoctls.SIO_BSP_HANDLE_POLL ) if next_sock == sock: # If BSP_HANDLE_POLL returns the same socket we already had, # then there's no layering going on and we need to fail # to prevent an infinite loop. raise RuntimeError( "Unexpected network configuration detected: " "SIO_BASE_HANDLE failed and SIO_BSP_HANDLE_POLL didn't " "return a different socket. Please file a bug at " "https://github.com/python-trio/trio/issues/new, " "and include the output of running: " "netsh winsock show catalog" ) # Otherwise we've gotten at least one layer deeper, so # loop back around to keep digging. sock = next_sock def _afd_helper_handle(): # The "AFD" driver is exposed at the NT path "\Device\Afd". We're using # the Win32 CreateFile, though, so we have to pass a Win32 path. 
\\.\ is # how Win32 refers to the NT \GLOBAL??\ directory, and GLOBALROOT is a # symlink inside that directory that points to the root of the NT path # system. So by sticking that in front of the NT path, we get a Win32 # path. Alternatively, we could use NtCreateFile directly, since it takes # an NT path. But we already wrap CreateFileW so this was easier. # References: # https://blogs.msdn.microsoft.com/jeremykuhne/2016/05/02/dos-to-nt-a-paths-journey/ # https://stackoverflow.com/a/21704022 # # I'm actually not sure what the \Trio part at the end of the path does. # Wepoll uses \Device\Afd\Wepoll, so I just copied them. (I'm guessing it # might be visible in some debug tools, and is otherwise arbitrary?) rawname = r"\\.\GLOBALROOT\Device\Afd\Trio".encode("utf-16le") + b"\0\0" rawname_buf = ffi.from_buffer(rawname) handle = kernel32.CreateFileW( ffi.cast("LPCWSTR", rawname_buf), FileFlags.SYNCHRONIZE, FileFlags.FILE_SHARE_READ | FileFlags.FILE_SHARE_WRITE, ffi.NULL, # no security attributes FileFlags.OPEN_EXISTING, FileFlags.FILE_FLAG_OVERLAPPED, ffi.NULL, # no template file ) if handle == INVALID_HANDLE_VALUE: # pragma: no cover raise_winerror() return handle # AFD_POLL has a finer-grained set of events than other APIs. We collapse them # down into Unix-style "readable" and "writable". # # Note: AFD_POLL_LOCAL_CLOSE isn't a reliable substitute for notify_closing(), # because even if the user closes the socket *handle*, the socket *object* # could still remain open, e.g. if the socket was dup'ed (possibly into # another process). Explicitly calling notify_closing() guarantees that # everyone waiting on the *handle* wakes up, which is what you'd expect. # # However, we can't avoid getting LOCAL_CLOSE notifications -- the kernel # delivers them whether we ask for them or not -- so better to include them # here for documentation, and so that when we check (delivered & requested) we # get a match. READABLE_FLAGS = ( AFDPollFlags.AFD_POLL_RECEIVE | AFDPollFlags.AFD_POLL_ACCEPT | AFDPollFlags.AFD_POLL_DISCONNECT # other side sent an EOF | AFDPollFlags.AFD_POLL_ABORT | AFDPollFlags.AFD_POLL_LOCAL_CLOSE ) WRITABLE_FLAGS = ( AFDPollFlags.AFD_POLL_SEND | AFDPollFlags.AFD_POLL_CONNECT_FAIL | AFDPollFlags.AFD_POLL_ABORT | AFDPollFlags.AFD_POLL_LOCAL_CLOSE ) # Annoyingly, while the API makes it *seem* like you can happily issue as many # independent AFD_POLL operations as you want without them interfering with # each other, in fact if you issue two AFD_POLL operations for the same socket # at the same time with notification going to the same IOCP port, then Windows # gets super confused. For example, if we issue one operation from # wait_readable, and another independent operation from wait_writable, then # Windows may complete the wait_writable operation when the socket becomes # readable. # # To avoid this, we have to coalesce all the operations on a single socket # into one, and when the set of waiters changes we have to throw away the old # operation and start a new one. @attr.s(slots=True, eq=False) class AFDWaiters: read_task = attr.ib(default=None) write_task = attr.ib(default=None) current_op = attr.ib(default=None) # We also need to bundle up all the info for a single op into a standalone # object, because we need to keep all these objects alive until the operation # finishes, even if we're throwing it away. 
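# (Concretely: as long as an AFDPollOp stays reachable via self._afd_ops,
# the lpOverlapped and AFD_POLL_INFO cffi buffers it holds can't be
# garbage-collected while the kernel may still write into them.)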
@attr.s(slots=True, eq=False, frozen=True) class AFDPollOp: lpOverlapped = attr.ib() poll_info = attr.ib() waiters = attr.ib() afd_group = attr.ib() # The Windows kernel has a weird issue when using AFD handles. If you have N # instances of wait_readable/wait_writable registered with a single AFD handle, # then cancelling any one of them takes something like O(N**2) time. So if we # used just a single AFD handle, then cancellation would quickly become very # expensive, e.g. a program with N active sockets would take something like # O(N**3) time to unwind after control-C. The solution is to spread our sockets # out over multiple AFD handles, so that N doesn't grow too large for any # individual handle. MAX_AFD_GROUP_SIZE = 500 # at 1000, the cubic scaling is just starting to bite @attr.s(slots=True, eq=False) class AFDGroup: size = attr.ib() handle = attr.ib() @attr.s(slots=True, eq=False, frozen=True) class _WindowsStatistics: tasks_waiting_read = attr.ib() tasks_waiting_write = attr.ib() tasks_waiting_overlapped = attr.ib() completion_key_monitors = attr.ib() backend = attr.ib(default="windows") # Maximum number of events to dequeue from the completion port on each pass # through the run loop. Somewhat arbitrary. Should be large enough to collect # a good set of tasks on each loop, but not so large to waste tons of memory. # (Each WindowsIOManager holds a buffer whose size is ~32x this number.) MAX_EVENTS = 1000 @attr.s(frozen=True) class CompletionKeyEventInfo: lpOverlapped = attr.ib() dwNumberOfBytesTransferred = attr.ib() class WindowsIOManager: def __init__(self): # If this method raises an exception, then __del__ could run on a # half-initialized object. So we initialize everything that __del__ # touches to safe values up front, before we do anything that can # fail. self._iocp = None self._all_afd_handles = [] self._iocp = _check( kernel32.CreateIoCompletionPort(INVALID_HANDLE_VALUE, ffi.NULL, 0, 0) ) self._events = ffi.new("OVERLAPPED_ENTRY[]", MAX_EVENTS) self._vacant_afd_groups = set() # {lpOverlapped: AFDPollOp} self._afd_ops = {} # {socket handle: AFDWaiters} self._afd_waiters = {} # {lpOverlapped: task} self._overlapped_waiters = {} self._posted_too_late_to_cancel = set() self._completion_key_queues = {} self._completion_key_counter = itertools.count(CKeys.USER_DEFINED) with socket.socket() as s: # We assume we're not working with any LSP that changes # how select() is supposed to work. Validate this by # ensuring that the result of SIO_BSP_HANDLE_SELECT (the # LSP-hookable mechanism for "what should I use for # select()?") matches that of SIO_BASE_HANDLE ("what is # the real non-hooked underlying socket here?"). # # This doesn't work for Komodia-based LSPs; see the comments # in _get_base_socket() for details. But we have special # logic for those, so we just skip this check if # SIO_BASE_HANDLE fails. # LSPs can in theory override this, but we believe that it never # actually happens in the wild (except Komodia) select_handle = _get_underlying_socket( s, which=WSAIoctls.SIO_BSP_HANDLE_SELECT ) try: # LSPs shouldn't override this... base_handle = _get_underlying_socket(s, which=WSAIoctls.SIO_BASE_HANDLE) except OSError: # But Komodia-based LSPs do anyway, in a way that causes # a failure with WSAEFAULT. We have special handling for # them in _get_base_socket(). Make sure it works. _get_base_socket(s) else: if base_handle != select_handle: raise RuntimeError( "Unexpected network configuration detected: " "SIO_BASE_HANDLE and SIO_BSP_HANDLE_SELECT differ. 
" "Please file a bug at " "https://github.com/python-trio/trio/issues/new, " "and include the output of running: " "netsh winsock show catalog" ) def close(self): try: if self._iocp is not None: iocp = self._iocp self._iocp = None _check(kernel32.CloseHandle(iocp)) finally: while self._all_afd_handles: afd_handle = self._all_afd_handles.pop() _check(kernel32.CloseHandle(afd_handle)) def __del__(self): self.close() def statistics(self): tasks_waiting_read = 0 tasks_waiting_write = 0 for waiter in self._afd_waiters.values(): if waiter.read_task is not None: tasks_waiting_read += 1 if waiter.write_task is not None: tasks_waiting_write += 1 return _WindowsStatistics( tasks_waiting_read=tasks_waiting_read, tasks_waiting_write=tasks_waiting_write, tasks_waiting_overlapped=len(self._overlapped_waiters), completion_key_monitors=len(self._completion_key_queues), ) def force_wakeup(self): _check( kernel32.PostQueuedCompletionStatus( self._iocp, 0, CKeys.FORCE_WAKEUP, ffi.NULL ) ) def get_events(self, timeout): received = ffi.new("PULONG") milliseconds = round(1000 * timeout) if timeout > 0 and milliseconds == 0: milliseconds = 1 try: _check( kernel32.GetQueuedCompletionStatusEx( self._iocp, self._events, MAX_EVENTS, received, milliseconds, 0 ) ) except OSError as exc: if exc.winerror != ErrorCodes.WAIT_TIMEOUT: # pragma: no cover raise return 0 return received[0] def process_events(self, received): for i in range(received): entry = self._events[i] if entry.lpCompletionKey == CKeys.AFD_POLL: lpo = entry.lpOverlapped op = self._afd_ops.pop(lpo) waiters = op.waiters if waiters.current_op is not op: # Stale op, nothing to do pass else: waiters.current_op = None # I don't think this can happen, so if it does let's crash # and get a debug trace. if lpo.Internal != 0: # pragma: no cover code = ntdll.RtlNtStatusToDosError(lpo.Internal) raise_winerror(code) flags = op.poll_info.Handles[0].Events if waiters.read_task and flags & READABLE_FLAGS: _core.reschedule(waiters.read_task) waiters.read_task = None if waiters.write_task and flags & WRITABLE_FLAGS: _core.reschedule(waiters.write_task) waiters.write_task = None self._refresh_afd(op.poll_info.Handles[0].Handle) elif entry.lpCompletionKey == CKeys.WAIT_OVERLAPPED: # Regular I/O event, dispatch on lpOverlapped waiter = self._overlapped_waiters.pop(entry.lpOverlapped) overlapped = entry.lpOverlapped transferred = entry.dwNumberOfBytesTransferred info = CompletionKeyEventInfo( lpOverlapped=overlapped, dwNumberOfBytesTransferred=transferred ) _core.reschedule(waiter, Value(info)) elif entry.lpCompletionKey == CKeys.LATE_CANCEL: # Post made by a regular I/O event's abort_fn # after it failed to cancel the I/O. If we still # have a waiter with this lpOverlapped, we didn't # get the regular I/O completion and almost # certainly the user forgot to call # register_with_iocp. self._posted_too_late_to_cancel.remove(entry.lpOverlapped) try: waiter = self._overlapped_waiters.pop(entry.lpOverlapped) except KeyError: # Looks like the actual completion got here before this # fallback post did -- we're in the "expected" case of # too-late-to-cancel, where the user did nothing wrong. # Nothing more to do. pass else: exc = _core.TrioInternalError( "Failed to cancel overlapped I/O in {} and didn't " "receive the completion either. 
Did you forget to " "call register_with_iocp()?".format(waiter.name) ) # Raising this out of handle_io ensures that # the user will see our message even if some # other task is in an uncancellable wait due # to the same underlying forgot-to-register # issue (if their CancelIoEx succeeds, we # have no way of noticing that their completion # won't arrive). Unfortunately it loses the # task traceback. If you're debugging this # error and can't tell where it's coming from, # try changing this line to # _core.reschedule(waiter, outcome.Error(exc)) raise exc elif entry.lpCompletionKey == CKeys.FORCE_WAKEUP: pass else: # dispatch on lpCompletionKey queue = self._completion_key_queues[entry.lpCompletionKey] overlapped = int(ffi.cast("uintptr_t", entry.lpOverlapped)) transferred = entry.dwNumberOfBytesTransferred info = CompletionKeyEventInfo( lpOverlapped=overlapped, dwNumberOfBytesTransferred=transferred ) queue.put_nowait(info) def _register_with_iocp(self, handle, completion_key): handle = _handle(handle) _check(kernel32.CreateIoCompletionPort(handle, self._iocp, completion_key, 0)) # Supposedly this makes things slightly faster, by disabling the # ability to do WaitForSingleObject(handle). We would never want to do # that anyway, so might as well get the extra speed (if any). # Ref: http://www.lenholgate.com/blog/2009/09/interesting-blog-posts-on-high-performance-servers.html _check( kernel32.SetFileCompletionNotificationModes( handle, CompletionModes.FILE_SKIP_SET_EVENT_ON_HANDLE ) ) ################################################################ # AFD stuff ################################################################ def _refresh_afd(self, base_handle): waiters = self._afd_waiters[base_handle] if waiters.current_op is not None: afd_group = waiters.current_op.afd_group try: _check( kernel32.CancelIoEx( afd_group.handle, waiters.current_op.lpOverlapped ) ) except OSError as exc: if exc.winerror != ErrorCodes.ERROR_NOT_FOUND: # I don't think this is possible, so if it happens let's # crash noisily. raise # pragma: no cover waiters.current_op = None afd_group.size -= 1 self._vacant_afd_groups.add(afd_group) flags = 0 if waiters.read_task is not None: flags |= READABLE_FLAGS if waiters.write_task is not None: flags |= WRITABLE_FLAGS if not flags: del self._afd_waiters[base_handle] else: try: afd_group = self._vacant_afd_groups.pop() except KeyError: afd_group = AFDGroup(0, _afd_helper_handle()) self._register_with_iocp(afd_group.handle, CKeys.AFD_POLL) self._all_afd_handles.append(afd_group.handle) self._vacant_afd_groups.add(afd_group) lpOverlapped = ffi.new("LPOVERLAPPED") poll_info = ffi.new("AFD_POLL_INFO *") poll_info.Timeout = 2 ** 63 - 1 # INT64_MAX poll_info.NumberOfHandles = 1 poll_info.Exclusive = 0 poll_info.Handles[0].Handle = base_handle poll_info.Handles[0].Status = 0 poll_info.Handles[0].Events = flags try: _check( kernel32.DeviceIoControl( afd_group.handle, IoControlCodes.IOCTL_AFD_POLL, poll_info, ffi.sizeof("AFD_POLL_INFO"), poll_info, ffi.sizeof("AFD_POLL_INFO"), ffi.NULL, lpOverlapped, ) ) except OSError as exc: if exc.winerror != ErrorCodes.ERROR_IO_PENDING: # This could happen if the socket handle got closed behind # our back while a wait_* call was pending, and we tried # to re-issue the call. Clear our state and wake up any # pending calls. del self._afd_waiters[base_handle] # Do this last, because it could raise. 
wake_all(waiters, exc) return op = AFDPollOp(lpOverlapped, poll_info, waiters, afd_group) waiters.current_op = op self._afd_ops[lpOverlapped] = op afd_group.size += 1 if afd_group.size >= MAX_AFD_GROUP_SIZE: self._vacant_afd_groups.remove(afd_group) async def _afd_poll(self, sock, mode): base_handle = _get_base_socket(sock) waiters = self._afd_waiters.get(base_handle) if waiters is None: waiters = AFDWaiters() self._afd_waiters[base_handle] = waiters if getattr(waiters, mode) is not None: raise _core.BusyResourceError setattr(waiters, mode, _core.current_task()) # Could potentially raise if the handle is somehow invalid; that's OK, # we let it escape. self._refresh_afd(base_handle) def abort_fn(_): setattr(waiters, mode, None) self._refresh_afd(base_handle) return _core.Abort.SUCCEEDED await _core.wait_task_rescheduled(abort_fn) @_public async def wait_readable(self, sock): await self._afd_poll(sock, "read_task") @_public async def wait_writable(self, sock): await self._afd_poll(sock, "write_task") @_public def notify_closing(self, handle): handle = _get_base_socket(handle) waiters = self._afd_waiters.get(handle) if waiters is not None: wake_all(waiters, _core.ClosedResourceError()) self._refresh_afd(handle) ################################################################ # Regular overlapped operations ################################################################ @_public def register_with_iocp(self, handle): self._register_with_iocp(handle, CKeys.WAIT_OVERLAPPED) @_public async def wait_overlapped(self, handle, lpOverlapped): handle = _handle(handle) if isinstance(lpOverlapped, int): lpOverlapped = ffi.cast("LPOVERLAPPED", lpOverlapped) if lpOverlapped in self._overlapped_waiters: raise _core.BusyResourceError( "another task is already waiting on that lpOverlapped" ) task = _core.current_task() self._overlapped_waiters[lpOverlapped] = task raise_cancel = None def abort(raise_cancel_): nonlocal raise_cancel raise_cancel = raise_cancel_ try: _check(kernel32.CancelIoEx(handle, lpOverlapped)) except OSError as exc: if exc.winerror == ErrorCodes.ERROR_NOT_FOUND: # Too late to cancel. If this happens because the # operation is already completed, we don't need to do # anything; we'll get a notification of that completion # soon. But another possibility is that the operation was # performed on a handle that wasn't registered with our # IOCP (ie, the user forgot to call register_with_iocp), # in which case we're just never going to see the # completion. To avoid an uncancellable infinite sleep in # the latter case, we'll PostQueuedCompletionStatus here, # and if our post arrives before the original completion # does, we'll assume the handle wasn't registered. _check( kernel32.PostQueuedCompletionStatus( self._iocp, 0, CKeys.LATE_CANCEL, lpOverlapped ) ) # Keep the lpOverlapped referenced so its address # doesn't get reused until our posted completion # status has been processed. Otherwise, we can # get confused about which completion goes with # which I/O. 
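# (process_events discards the entry again when the CKeys.LATE_CANCEL
# completion is dequeued, so this set only holds lpOverlappeds whose
# fallback post is still in flight.)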
self._posted_too_late_to_cancel.add(lpOverlapped) else: # pragma: no cover raise _core.TrioInternalError( "CancelIoEx failed with unexpected error" ) from exc return _core.Abort.FAILED info = await _core.wait_task_rescheduled(abort) if lpOverlapped.Internal != 0: # the lpOverlapped reports the error as an NT status code, # which we must convert back to a Win32 error code before # it will produce the right sorts of exceptions code = ntdll.RtlNtStatusToDosError(lpOverlapped.Internal) if code == ErrorCodes.ERROR_OPERATION_ABORTED: if raise_cancel is not None: raise_cancel() else: # We didn't request this cancellation, so assume # it happened due to the underlying handle being # closed before the operation could complete. raise _core.ClosedResourceError("another task closed this resource") else: raise_winerror(code) return info async def _perform_overlapped(self, handle, submit_fn): # submit_fn(lpOverlapped) submits some I/O # it may raise an OSError with ERROR_IO_PENDING # the handle must already be registered using # register_with_iocp(handle) # This always does a schedule point, but it's possible that the # operation will not be cancellable, depending on how Windows is # feeling today. So we need to check for cancellation manually. await _core.checkpoint_if_cancelled() lpOverlapped = ffi.new("LPOVERLAPPED") try: submit_fn(lpOverlapped) except OSError as exc: if exc.winerror != ErrorCodes.ERROR_IO_PENDING: raise await self.wait_overlapped(handle, lpOverlapped) return lpOverlapped @_public async def write_overlapped(self, handle, data, file_offset=0): with ffi.from_buffer(data) as cbuf: def submit_write(lpOverlapped): # yes, these are the real documented names offset_fields = lpOverlapped.DUMMYUNIONNAME.DUMMYSTRUCTNAME offset_fields.Offset = file_offset & 0xFFFFFFFF offset_fields.OffsetHigh = file_offset >> 32 _check( kernel32.WriteFile( _handle(handle), ffi.cast("LPCVOID", cbuf), len(cbuf), ffi.NULL, lpOverlapped, ) ) lpOverlapped = await self._perform_overlapped(handle, submit_write) # this is "number of bytes transferred" return lpOverlapped.InternalHigh @_public async def readinto_overlapped(self, handle, buffer, file_offset=0): with ffi.from_buffer(buffer, require_writable=True) as cbuf: def submit_read(lpOverlapped): offset_fields = lpOverlapped.DUMMYUNIONNAME.DUMMYSTRUCTNAME offset_fields.Offset = file_offset & 0xFFFFFFFF offset_fields.OffsetHigh = file_offset >> 32 _check( kernel32.ReadFile( _handle(handle), ffi.cast("LPVOID", cbuf), len(cbuf), ffi.NULL, lpOverlapped, ) ) lpOverlapped = await self._perform_overlapped(handle, submit_read) return lpOverlapped.InternalHigh ################################################################ # Raw IOCP operations ################################################################ @_public def current_iocp(self): return int(ffi.cast("uintptr_t", self._iocp)) @contextmanager @_public def monitor_completion_key(self): key = next(self._completion_key_counter) queue = _core.UnboundedQueue() self._completion_key_queues[key] = queue try: yield (key, queue) finally: del self._completion_key_queues[key] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_ki.py0000644000076500000000000001740500000000000015307 0ustar00quentinwheelimport inspect import signal import sys from functools import wraps import attr import async_generator from .._util import is_main_thread if False: from typing import Any, TypeVar, Callable F = TypeVar("F", bound=Callable[..., Any]) # In ordinary single-threaded 
Python code, when you hit control-C, it raises # an exception and automatically does all the regular unwinding stuff. # # In Trio code, we would like hitting control-C to raise an exception and # automatically do all the regular unwinding stuff. In particular, we would # like to maintain our invariant that all tasks always run to completion (one # way or another), by unwinding all of them. # # But it's basically impossible to write the core task running code in such a # way that it can maintain this invariant in the face of KeyboardInterrupt # exceptions arising at arbitrary bytecode positions. Similarly, if a # KeyboardInterrupt happened at the wrong moment inside pretty much any of our # inter-task synchronization or I/O primitives, then the system state could # get corrupted and prevent our being able to clean up properly. # # So, we need a way to defer KeyboardInterrupt processing from these critical # sections. # # Things that don't work: # # - Listen for SIGINT and process it in a system task: works fine for # well-behaved programs that regularly pass through the event loop, but if # user-code goes into an infinite loop then it can't be interrupted. Which # is unfortunate, since dealing with infinite loops is what # KeyboardInterrupt is for! # # - Use pthread_sigmask to disable signal delivery during critical section: # (a) windows has no pthread_sigmask, (b) python threads start with all # signals unblocked, so if there are any threads around they'll receive the # signal and then tell the main thread to run the handler, even if the main # thread has that signal blocked. # # - Install a signal handler which checks a global variable to decide whether # to raise the exception immediately (if we're in a non-critical section), # or to schedule it on the event loop (if we're in a critical section). The # problem here is that it's impossible to transition safely out of user code: # # with keyboard_interrupt_enabled: # msg = coro.send(value) # # If this raises a KeyboardInterrupt, it might be because the coroutine got # interrupted and has unwound... or it might be the KeyboardInterrupt # arrived just *after* 'send' returned, so the coroutine is still running # but we just lost the message it sent. (And worse, in our actual task # runner, the send is hidden inside a utility function etc.) # # Solution: # # Mark *stack frames* as being interrupt-safe or interrupt-unsafe, and from # the signal handler check which kind of frame we're currently in when # deciding whether to raise or schedule the exception. # # There are still some cases where this can fail, like if someone hits # control-C while the process is in the event loop, and then it immediately # enters an infinite loop in user code. In this case the user has to hit # control-C a second time. And of course if the user code is written so that # it doesn't actually exit after a task crashes and everything gets cancelled, # then there's not much to be done. (Hitting control-C repeatedly might help, # but in general the solution is to kill the process some other way, just like # for any Python program that's written to catch and ignore # KeyboardInterrupt.) # We use this special string as a unique key into the frame locals dictionary. # The @ ensures it is not a valid identifier and can't clash with any possible # real local name. 
See: https://github.com/python-trio/trio/issues/469 LOCALS_KEY_KI_PROTECTION_ENABLED = "@TRIO_KI_PROTECTION_ENABLED" # NB: according to the signal.signal docs, 'frame' can be None on entry to # this function: def ki_protection_enabled(frame): while frame is not None: if LOCALS_KEY_KI_PROTECTION_ENABLED in frame.f_locals: return frame.f_locals[LOCALS_KEY_KI_PROTECTION_ENABLED] if frame.f_code.co_name == "__del__": return True frame = frame.f_back return True def currently_ki_protected(): r"""Check whether the calling code has :exc:`KeyboardInterrupt` protection enabled. It's surprisingly easy to think that one's :exc:`KeyboardInterrupt` protection is enabled when it isn't, or vice-versa. This function tells you what Trio thinks of the matter, which makes it useful for ``assert``\s and unit tests. Returns: bool: True if protection is enabled, and False otherwise. """ return ki_protection_enabled(sys._getframe()) def _ki_protection_decorator(enabled): def decorator(fn): # In some version of Python, isgeneratorfunction returns true for # coroutine functions, so we have to check for coroutine functions # first. if inspect.iscoroutinefunction(fn): @wraps(fn) def wrapper(*args, **kwargs): # See the comment for regular generators below coro = fn(*args, **kwargs) coro.cr_frame.f_locals[LOCALS_KEY_KI_PROTECTION_ENABLED] = enabled return coro return wrapper elif inspect.isgeneratorfunction(fn): @wraps(fn) def wrapper(*args, **kwargs): # It's important that we inject this directly into the # generator's locals, as opposed to setting it here and then # doing 'yield from'. The reason is, if a generator is # throw()n into, then it may magically pop to the top of the # stack. And @contextmanager generators in particular are a # case where we often want KI protection, and which are often # thrown into! 
See: # https://bugs.python.org/issue29590 gen = fn(*args, **kwargs) gen.gi_frame.f_locals[LOCALS_KEY_KI_PROTECTION_ENABLED] = enabled return gen return wrapper elif async_generator.isasyncgenfunction(fn): @wraps(fn) def wrapper(*args, **kwargs): # See the comment for regular generators above agen = fn(*args, **kwargs) agen.ag_frame.f_locals[LOCALS_KEY_KI_PROTECTION_ENABLED] = enabled return agen return wrapper else: @wraps(fn) def wrapper(*args, **kwargs): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = enabled return fn(*args, **kwargs) return wrapper return decorator enable_ki_protection = _ki_protection_decorator(True) # type: Callable[[F], F] enable_ki_protection.__name__ = "enable_ki_protection" disable_ki_protection = _ki_protection_decorator(False) # type: Callable[[F], F] disable_ki_protection.__name__ = "disable_ki_protection" @attr.s class KIManager: handler = attr.ib(default=None) def install(self, deliver_cb, restrict_keyboard_interrupt_to_checkpoints): assert self.handler is None if ( not is_main_thread() or signal.getsignal(signal.SIGINT) != signal.default_int_handler ): return def handler(signum, frame): assert signum == signal.SIGINT protection_enabled = ki_protection_enabled(frame) if protection_enabled or restrict_keyboard_interrupt_to_checkpoints: deliver_cb() else: raise KeyboardInterrupt self.handler = handler signal.signal(signal.SIGINT, handler) def close(self): if self.handler is not None: if signal.getsignal(signal.SIGINT) is self.handler: signal.signal(signal.SIGINT, signal.default_int_handler) self.handler = None ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_local.py0000644000076500000000000000527500000000000016000 0ustar00quentinwheel# Runvar implementations import attr from . import _run from .._util import Final @attr.s(eq=False, hash=False, slots=True) class _RunVarToken: _no_value = object() _var = attr.ib() previous_value = attr.ib(default=_no_value) redeemed = attr.ib(default=False, init=False) @classmethod def empty(cls, var): return cls(var) @attr.s(eq=False, hash=False, slots=True) class RunVar(metaclass=Final): """The run-local variant of a context variable. :class:`RunVar` objects are similar to context variable objects, except that they are shared across a single call to :func:`trio.run` rather than a single task. """ _NO_DEFAULT = object() _name = attr.ib() _default = attr.ib(default=_NO_DEFAULT) def get(self, default=_NO_DEFAULT): """Gets the value of this :class:`RunVar` for the current run call.""" try: return _run.GLOBAL_RUN_CONTEXT.runner._locals[self] except AttributeError: raise RuntimeError("Cannot be used outside of a run context") from None except KeyError: # contextvars consistency if default is not self._NO_DEFAULT: return default if self._default is not self._NO_DEFAULT: return self._default raise LookupError(self) from None def set(self, value): """Sets the value of this :class:`RunVar` for this current run call. """ try: old_value = self.get() except LookupError: token = _RunVarToken.empty(self) else: token = _RunVarToken(self, old_value) # This can't fail, because if we weren't in Trio context then the # get() above would have failed. _run.GLOBAL_RUN_CONTEXT.runner._locals[self] = value return token def reset(self, token): """Resets the value of this :class:`RunVar` to what it was previously specified by the token. 
""" if token is None: raise TypeError("token must not be none") if token.redeemed: raise ValueError("token has already been used") if token._var is not self: raise ValueError("token is not for us") previous = token.previous_value try: if previous is _RunVarToken._no_value: _run.GLOBAL_RUN_CONTEXT.runner._locals.pop(self) else: _run.GLOBAL_RUN_CONTEXT.runner._locals[self] = previous except AttributeError: raise RuntimeError("Cannot be used outside of a run context") token.redeemed = True def __repr__(self): return "".format(self._name) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_mock_clock.py0000644000076500000000000001406400000000000017006 0ustar00quentinwheelimport time from math import inf from .. import _core from ._run import GLOBAL_RUN_CONTEXT from .._abc import Clock from .._util import Final ################################################################ # The glorious MockClock ################################################################ # Prior art: # https://twistedmatrix.com/documents/current/api/twisted.internet.task.Clock.html # https://github.com/ztellman/manifold/issues/57 class MockClock(Clock, metaclass=Final): """A user-controllable clock suitable for writing tests. Args: rate (float): the initial :attr:`rate`. autojump_threshold (float): the initial :attr:`autojump_threshold`. .. attribute:: rate How many seconds of clock time pass per second of real time. Default is 0.0, i.e. the clock only advances through manuals calls to :meth:`jump` or when the :attr:`autojump_threshold` is triggered. You can assign to this attribute to change it. .. attribute:: autojump_threshold The clock keeps an eye on the run loop, and if at any point it detects that all tasks have been blocked for this many real seconds (i.e., according to the actual clock, not this clock), then the clock automatically jumps ahead to the run loop's next scheduled timeout. Default is :data:`math.inf`, i.e., to never autojump. You can assign to this attribute to change it. Basically the idea is that if you have code or tests that use sleeps and timeouts, you can use this to make it run much faster, totally automatically. (At least, as long as those sleeps/timeouts are happening inside Trio; if your test involves talking to external service and waiting for it to timeout then obviously we can't help you there.) You should set this to the smallest value that lets you reliably avoid "false alarms" where some I/O is in flight (e.g. between two halves of a socketpair) but the threshold gets triggered and time gets advanced anyway. This will depend on the details of your tests and test environment. If you aren't doing any I/O (like in our sleeping example above) then just set it to zero, and the clock will jump whenever all tasks are blocked. .. note:: If you use ``autojump_threshold`` and `wait_all_tasks_blocked` at the same time, then you might wonder how they interact, since they both cause things to happen after the run loop goes idle for some time. The answer is: `wait_all_tasks_blocked` takes priority. If there's a task blocked in `wait_all_tasks_blocked`, then the autojump feature treats that as active task and does *not* jump the clock. """ def __init__(self, rate=0.0, autojump_threshold=inf): # when the real clock said 'real_base', the virtual time was # 'virtual_base', and since then it's advanced at 'rate' virtual # seconds per real second. 
        self._real_base = 0.0
        self._virtual_base = 0.0
        self._rate = 0.0
        self._autojump_threshold = 0.0
        # kept as an attribute so that our tests can monkeypatch it
        self._real_clock = time.perf_counter

        # use the property update logic to set initial values
        self.rate = rate
        self.autojump_threshold = autojump_threshold

    def __repr__(self):
        return "<MockClock, time={:.7f}, rate={} @ {:#x}>".format(
            self.current_time(), self._rate, id(self)
        )

    @property
    def rate(self):
        return self._rate

    @rate.setter
    def rate(self, new_rate):
        if new_rate < 0:
            raise ValueError("rate must be >= 0")
        else:
            real = self._real_clock()
            virtual = self._real_to_virtual(real)
            self._virtual_base = virtual
            self._real_base = real
            self._rate = float(new_rate)

    @property
    def autojump_threshold(self):
        return self._autojump_threshold

    @autojump_threshold.setter
    def autojump_threshold(self, new_autojump_threshold):
        self._autojump_threshold = float(new_autojump_threshold)
        self._try_resync_autojump_threshold()

    # runner.clock_autojump_threshold is an internal API that isn't easily
    # usable by custom third-party Clock objects. If you need access to this
    # functionality, let us know, and we'll figure out how to make a public
    # API. Discussion:
    #
    #     https://github.com/python-trio/trio/issues/1587
    def _try_resync_autojump_threshold(self):
        try:
            runner = GLOBAL_RUN_CONTEXT.runner
            if runner.is_guest:
                runner.force_guest_tick_asap()
        except AttributeError:
            pass
        else:
            runner.clock_autojump_threshold = self._autojump_threshold

    # Invoked by the run loop when runner.clock_autojump_threshold is
    # exceeded.
    def _autojump(self):
        statistics = _core.current_statistics()
        jump = statistics.seconds_to_next_deadline
        if 0 < jump < inf:
            self.jump(jump)

    def _real_to_virtual(self, real):
        real_offset = real - self._real_base
        virtual_offset = self._rate * real_offset
        return self._virtual_base + virtual_offset

    def start_clock(self):
        self._try_resync_autojump_threshold()

    def current_time(self):
        return self._real_to_virtual(self._real_clock())

    def deadline_to_sleep_time(self, deadline):
        virtual_timeout = deadline - self.current_time()
        if virtual_timeout <= 0:
            return 0
        elif self._rate > 0:
            return virtual_timeout / self._rate
        else:
            return 999999999

    def jump(self, seconds):
        """Manually advance the clock by the given number of seconds.

        Args:
          seconds (float): the number of seconds to jump the clock forward.

        Raises:
          ValueError: if you try to pass a negative value for ``seconds``.

        """
        if seconds < 0:
            raise ValueError("time can't go backwards")
        self._virtual_base += seconds

trio-0.19.0/trio/_core/_multierror.py

import sys
import traceback
import textwrap
import warnings

import attr

# python traceback.TracebackException < 3.6.4 does not support unhashable exceptions
# see https://github.com/python/cpython/pull/4014 for details
if sys.version_info < (3, 6, 4):
    exc_key = lambda exc: exc
else:
    exc_key = id

################################################################
# MultiError
################################################################

def _filter_impl(handler, root_exc):
    # We have a tree of MultiError's, like:
    #
    #     MultiError([
    #         ValueError,
    #         MultiError([
    #             KeyError,
    #             ValueError,
    #         ]),
    #     ])
    #
    # or similar.
    #
    # We want to
    # 1) apply the filter to each of the leaf exceptions -- each leaf
    #    might stay the same, be replaced (with the original exception
    #    potentially sticking around as __context__ or __cause__), or
    #    disappear altogether.
# 2) simplify the resulting tree -- remove empty nodes, and replace # singleton MultiError's with their contents, e.g.: # MultiError([KeyError]) -> KeyError # (This can happen recursively, e.g. if the two ValueErrors above # get caught then we'll just be left with a bare KeyError.) # 3) preserve sensible tracebacks # # It's the tracebacks that are most confusing. As a MultiError # propagates through the stack, it accumulates traceback frames, but # the exceptions inside it don't. Semantically, the traceback for a # leaf exception is the concatenation the tracebacks of all the # exceptions you see when traversing the exception tree from the root # to that leaf. Our correctness invariant is that this concatenated # traceback should be the same before and after. # # The easy way to do that would be to, at the beginning of this # function, "push" all tracebacks down to the leafs, so all the # MultiErrors have __traceback__=None, and all the leafs have complete # tracebacks. But whenever possible, we'd actually prefer to keep # tracebacks as high up in the tree as possible, because this lets us # keep only a single copy of the common parts of these exception's # tracebacks. This is cheaper (in memory + time -- tracebacks are # unpleasantly quadratic-ish to work with, and this might matter if # you have thousands of exceptions, which can happen e.g. after # cancelling a large task pool, and no-one will ever look at their # tracebacks!), and more importantly, factoring out redundant parts of # the tracebacks makes them more readable if/when users do see them. # # So instead our strategy is: # - first go through and construct the new tree, preserving any # unchanged subtrees # - then go through the original tree (!) and push tracebacks down # until either we hit a leaf, or we hit a subtree which was # preserved in the new tree. # This used to also support async handler functions. But that runs into: # https://bugs.python.org/issue29600 # which is difficult to fix on our end. # Filters a subtree, ignoring tracebacks, while keeping a record of # which MultiErrors were preserved unchanged def filter_tree(exc, preserved): if isinstance(exc, MultiError): new_exceptions = [] changed = False for child_exc in exc.exceptions: new_child_exc = filter_tree(child_exc, preserved) if new_child_exc is not child_exc: changed = True if new_child_exc is not None: new_exceptions.append(new_child_exc) if not new_exceptions: return None elif changed: return MultiError(new_exceptions) else: preserved.add(id(exc)) return exc else: new_exc = handler(exc) # Our version of implicit exception chaining if new_exc is not None and new_exc is not exc: new_exc.__context__ = exc return new_exc def push_tb_down(tb, exc, preserved): if id(exc) in preserved: return new_tb = concat_tb(tb, exc.__traceback__) if isinstance(exc, MultiError): for child_exc in exc.exceptions: push_tb_down(new_tb, child_exc, preserved) exc.__traceback__ = None else: exc.__traceback__ = new_tb preserved = set() new_root_exc = filter_tree(root_exc, preserved) push_tb_down(None, root_exc, preserved) # Delete the local functions to avoid a reference cycle (see # test_simple_cancel_scope_usage_doesnt_create_cyclic_garbage) del filter_tree, push_tb_down return new_root_exc # Normally I'm a big fan of (a)contextmanager, but in this case I found it # easier to use the raw context manager protocol, because it makes it a lot # easier to reason about how we're mutating the traceback as we go. 
# (End
# result: if the exception gets modified, then the 'raise' here makes this
# frame show up in the traceback; otherwise, we leave no trace.)
@attr.s(frozen=True)
class MultiErrorCatcher:
    _handler = attr.ib()

    def __enter__(self):
        pass

    def __exit__(self, etype, exc, tb):
        if exc is not None:
            filtered_exc = MultiError.filter(self._handler, exc)
            if filtered_exc is exc:
                # Let the interpreter re-raise it
                return False
            if filtered_exc is None:
                # Swallow the exception
                return True
            # When we raise filtered_exc, Python will unconditionally blow
            # away its __context__ attribute and replace it with the original
            # exc we caught. So after we raise it, we have to pause it while
            # it's in flight to put the correct __context__ back.
            old_context = filtered_exc.__context__
            try:
                raise filtered_exc
            finally:
                _, value, _ = sys.exc_info()
                assert value is filtered_exc
                value.__context__ = old_context

class MultiError(BaseException):
    """An exception that contains other exceptions; also known as an
    "inception".

    Its main use is to represent the situation when multiple child tasks all
    raise errors "in parallel".

    Args:
      exceptions (list): The exceptions

    Returns:
      If ``len(exceptions) == 1``, returns that exception. This means that a
      call to ``MultiError(...)`` is not guaranteed to return a
      :exc:`MultiError` object!

      Otherwise, returns a new :exc:`MultiError` object.

    Raises:
      TypeError: if any of the passed in objects are not instances of
        :exc:`BaseException`.

    """

    def __init__(self, exceptions):
        # Avoid recursion when exceptions[0] returned by __new__() happens
        # to be a MultiError and subsequently __init__() is called.
        if hasattr(self, "exceptions"):
            # __init__ was already called on this object
            assert len(exceptions) == 1 and exceptions[0] is self
            return
        self.exceptions = exceptions

    def __new__(cls, exceptions):
        exceptions = list(exceptions)
        for exc in exceptions:
            if not isinstance(exc, BaseException):
                raise TypeError("Expected an exception object, not {!r}".format(exc))
        if len(exceptions) == 1:
            # If this lone object happens to itself be a MultiError, then
            # Python will implicitly call our __init__ on it again. See
            # special handling in __init__.
            return exceptions[0]
        else:
            # The base class __new__() implicitly invokes our __init__, which
            # is what we want.
            #
            # In an earlier version of the code, we didn't define __init__ and
            # simply set the `exceptions` attribute directly on the new object.
            # However, linters expect attributes to be initialized in __init__.
            return BaseException.__new__(cls, exceptions)

    def __str__(self):
        return ", ".join(repr(exc) for exc in self.exceptions)

    def __repr__(self):
        return "<MultiError: {}>".format(self)

    @classmethod
    def filter(cls, handler, root_exc):
        """Apply the given ``handler`` to all the exceptions in ``root_exc``.

        Args:
          handler: A callable that takes an atomic (non-MultiError) exception
            as input, and returns either a new exception object or None.
          root_exc: An exception, often (though not necessarily) a
            :exc:`MultiError`.

        Returns:
          A new exception object in which each component exception ``exc`` has
          been replaced by the result of running ``handler(exc)`` – or, if
          ``handler`` returned None for all the inputs, returns None.

        """
        return _filter_impl(handler, root_exc)

    @classmethod
    def catch(cls, handler):
        """Return a context manager that catches and re-throws
        exceptions after running :meth:`filter` on them.
Args: handler: as for :meth:`filter` """ return MultiErrorCatcher(handler) # Clean up exception printing: MultiError.__module__ = "trio" ################################################################ # concat_tb ################################################################ # We need to compute a new traceback that is the concatenation of two existing # tracebacks. This requires copying the entries in 'head' and then pointing # the final tb_next to 'tail'. # # NB: 'tail' might be None, which requires some special handling in the ctypes # version. # # The complication here is that Python doesn't actually support copying or # modifying traceback objects, so we have to get creative... # # On CPython, we use ctypes. On PyPy, we use "transparent proxies". # # Jinja2 is a useful source of inspiration: # https://github.com/pallets/jinja/blob/master/jinja2/debug.py try: import tputil except ImportError: have_tproxy = False else: have_tproxy = True if have_tproxy: # http://doc.pypy.org/en/latest/objspace-proxies.html def copy_tb(base_tb, tb_next): def controller(operation): # Rationale for pragma: I looked fairly carefully and tried a few # things, and AFAICT it's not actually possible to get any # 'opname' that isn't __getattr__ or __getattribute__. So there's # no missing test we could add, and no value in coverage nagging # us about adding one. if operation.opname in [ "__getattribute__", "__getattr__", ]: # pragma: no cover if operation.args[0] == "tb_next": return tb_next return operation.delegate() return tputil.make_proxy(controller, type(base_tb), base_tb) else: # ctypes it is import ctypes # How to handle refcounting? I don't want to use ctypes.py_object because # I don't understand or trust it, and I don't want to use # ctypes.pythonapi.Py_{Inc,Dec}Ref because we might clash with user code # that also tries to use them but with different types. So private _ctypes # APIs it is! import _ctypes class CTraceback(ctypes.Structure): _fields_ = [ ("PyObject_HEAD", ctypes.c_byte * object().__sizeof__()), ("tb_next", ctypes.c_void_p), ("tb_frame", ctypes.c_void_p), ("tb_lasti", ctypes.c_int), ("tb_lineno", ctypes.c_int), ] def copy_tb(base_tb, tb_next): # TracebackType has no public constructor, so allocate one the hard way try: raise ValueError except ValueError as exc: new_tb = exc.__traceback__ c_new_tb = CTraceback.from_address(id(new_tb)) # At the C level, tb_next either pointer to the next traceback or is # NULL. c_void_p and the .tb_next accessor both convert NULL to None, # but we shouldn't DECREF None just because we assigned to a NULL # pointer! Here we know that our new traceback has only 1 frame in it, # so we can assume the tb_next field is NULL. assert c_new_tb.tb_next is None # If tb_next is None, then we want to set c_new_tb.tb_next to NULL, # which it already is, so we're done. Otherwise, we have to actually # do some work: if tb_next is not None: _ctypes.Py_INCREF(tb_next) c_new_tb.tb_next = id(tb_next) assert c_new_tb.tb_frame is not None _ctypes.Py_INCREF(base_tb.tb_frame) old_tb_frame = new_tb.tb_frame c_new_tb.tb_frame = id(base_tb.tb_frame) _ctypes.Py_DECREF(old_tb_frame) c_new_tb.tb_lasti = base_tb.tb_lasti c_new_tb.tb_lineno = base_tb.tb_lineno return new_tb def concat_tb(head, tail): # We have to use an iterative algorithm here, because in the worst case # this might be a RecursionError stack that is by definition too deep to # process by recursion! 
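    # E.g. if 'head' is the chain A -> B and 'tail' is C -> D, we walk down
    # 'head' collecting entries, then build copies from the back: first
    # B' (tb_next=C), then A' (tb_next=B'), and return A' -> B' -> C -> D,
    # sharing the unmodified tail.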

def concat_tb(head, tail):
    # We have to use an iterative algorithm here, because in the worst case
    # this might be a RecursionError stack that is by definition too deep to
    # process by recursion!
    head_tbs = []
    pointer = head
    while pointer is not None:
        head_tbs.append(pointer)
        pointer = pointer.tb_next
    current_head = tail
    for head_tb in reversed(head_tbs):
        current_head = copy_tb(head_tb, tb_next=current_head)
    return current_head


################################################################
# MultiError traceback formatting
#
# What follows is terrible, terrible monkey patching of
# traceback.TracebackException to add support for handling
# MultiErrors
################################################################

traceback_exception_original_init = traceback.TracebackException.__init__


def traceback_exception_init(
    self,
    exc_type,
    exc_value,
    exc_traceback,
    *,
    limit=None,
    lookup_lines=True,
    capture_locals=False,
    compact=False,
    _seen=None,
):
    if sys.version_info >= (3, 10):
        kwargs = {"compact": compact}
    else:
        kwargs = {}

    # Capture the original exception and its cause and context as TracebackExceptions
    traceback_exception_original_init(
        self,
        exc_type,
        exc_value,
        exc_traceback,
        limit=limit,
        lookup_lines=lookup_lines,
        capture_locals=capture_locals,
        _seen=_seen,
        **kwargs,
    )

    seen_was_none = _seen is None

    if _seen is None:
        _seen = set()

    # Capture each of the exceptions in the MultiError along with each of their causes and contexts
    if isinstance(exc_value, MultiError):
        embedded = []
        for exc in exc_value.exceptions:
            if exc_key(exc) not in _seen:
                embedded.append(
                    traceback.TracebackException.from_exception(
                        exc,
                        limit=limit,
                        lookup_lines=lookup_lines,
                        capture_locals=capture_locals,
                        # copy the set of _seen exceptions so that duplicates
                        # shared between sub-exceptions are not omitted
                        _seen=None if seen_was_none else set(_seen),
                    )
                )
        self.embedded = embedded
    else:
        self.embedded = []


traceback.TracebackException.__init__ = traceback_exception_init  # type: ignore
traceback_exception_original_format = traceback.TracebackException.format


def traceback_exception_format(self, *, chain=True):
    yield from traceback_exception_original_format(self, chain=chain)

    for i, exc in enumerate(self.embedded):
        yield "\nDetails of embedded exception {}:\n\n".format(i + 1)
        yield from (textwrap.indent(line, " " * 2) for line in exc.format(chain=chain))


traceback.TracebackException.format = traceback_exception_format  # type: ignore


def trio_excepthook(etype, value, tb):
    for chunk in traceback.format_exception(etype, value, tb):
        sys.stderr.write(chunk)


monkeypatched_or_warned = False

if "IPython" in sys.modules:
    import IPython

    ip = IPython.get_ipython()
    if ip is not None:
        if ip.custom_exceptions != ():
            warnings.warn(
                "IPython detected, but you already have a custom exception "
                "handler installed. I'll skip installing Trio's custom "
                "handler, but this means MultiErrors will not show full "
                "tracebacks.",
                category=RuntimeWarning,
            )
            monkeypatched_or_warned = True
        else:

            def trio_show_traceback(self, etype, value, tb, tb_offset=None):
                # XX it would be better to integrate with IPython's fancy
                # exception formatting stuff (and not ignore tb_offset)
                trio_excepthook(etype, value, tb)

            ip.set_custom_exc((MultiError,), trio_show_traceback)
            monkeypatched_or_warned = True

if sys.excepthook is sys.__excepthook__:
    sys.excepthook = trio_excepthook
    monkeypatched_or_warned = True
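
# A quick sketch of what the patched TracebackException machinery above
# produces: formatting a MultiError now appends an indented "Details of
# embedded exception N" section for each child exception. The demo name
# below is illustrative only.
def _demo_multierror_formatting():
    try:
        raise MultiError([ValueError("a"), KeyError("b")])
    except MultiError as exc:
        rendered = "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)
        )
    # One numbered section per embedded exception, counting from 1:
    assert "Details of embedded exception 1" in rendered
    assert "Details of embedded exception 2" in rendered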

# Ubuntu's system Python has a sitecustomize.py file that imports
# apport_python_hook and replaces sys.excepthook.
#
# The custom hook captures the error for crash reporting, and then calls
# sys.__excepthook__ to actually print the error.
#
# We don't mind it capturing the error for crash reporting, but we want to
# take over printing the error. So we monkeypatch the apport_python_hook
# module so that instead of calling sys.__excepthook__, it calls our custom
# hook.
#
# More details: https://github.com/python-trio/trio/issues/1065
if getattr(sys.excepthook, "__name__", None) == "apport_excepthook":
    import apport_python_hook

    assert sys.excepthook is apport_python_hook.apport_excepthook

    # Give it a descriptive name as a hint for anyone who's stuck trying to
    # debug this mess later.
    class TrioFakeSysModuleForApport:
        pass

    fake_sys = TrioFakeSysModuleForApport()
    fake_sys.__dict__.update(sys.__dict__)
    fake_sys.__excepthook__ = trio_excepthook  # type: ignore
    apport_python_hook.sys = fake_sys

    monkeypatched_or_warned = True

if not monkeypatched_or_warned:
    warnings.warn(
        "You seem to already have a custom sys.excepthook handler "
        "installed. I'll skip installing Trio's custom handler, but this "
        "means MultiErrors will not show full tracebacks.",
        category=RuntimeWarning,
    )
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0
trio-0.19.0/trio/_core/_parking_lot.py0000644000076500000000000002044500000000000017213 0ustar00quentinwheel
# ParkingLot provides an abstraction for a fair waitqueue with cancellation
# and requeueing support. Inspiration:
#
#    https://webkit.org/blog/6161/locking-in-webkit/
#    https://amanieu.github.io/parking_lot/
#
# which were in turn heavily influenced by
#
#    http://gee.cs.oswego.edu/dl/papers/aqs.pdf
#
# Compared to these, our use of cooperative scheduling allows some
# simplifications (no need for internal locking). On the other hand, the need
# to support Trio's strong cancellation semantics adds some complications
# (tasks need to know where they're queued so they can cancel). Also, in the
# above work, the ParkingLot is a global structure that holds a collection of
# waitqueues keyed by lock address, and which are opportunistically allocated
# and destroyed as contention arises; this allows the worst-case memory usage
# for all waitqueues to be O(#tasks). Here we allocate a separate wait queue
# for each synchronization object, so we're O(#objects + #tasks). This isn't
# *so* bad, since our synchronization objects are heavier than theirs and our
# tasks are lighter, so for us #objects is smaller and #tasks is larger.
#
# This is in the core for two reasons. First, it's used by UnboundedQueue,
# and UnboundedQueue is used for a number of things in the core. And second,
# it's responsible for providing fairness to all of our high-level
# synchronization primitives (locks, queues, etc.). For now with our FIFO
# scheduler this is relatively trivial (it's just a FIFO waitqueue), but if
# in the future we ever start supporting task priorities or fair scheduling
#
#    https://github.com/python-trio/trio/issues/32
#
# then all we'll have to do is update this. (Well, full-fledged task
# priorities might also require priority inheritance, which would require
# more work.)
#
# For discussion of data structures to use here, see:
#
#     https://github.com/dabeaz/curio/issues/136
#
# (and also the articles above). Currently we use a SortedDict ordered by a
# global monotonic counter that ensures FIFO ordering. The main advantage of
# this is that it's easy to implement :-). An intrusive doubly-linked list
# would also be a natural approach, so long as we only handle FIFO ordering.
#
# XX: should we switch to the shared global ParkingLot approach?
#
# XX: we should probably add support for "parking tokens" to allow for
# task-fair RWlock (basically: when parking a task needs to be able to mark
# itself as a reader or a writer, and then a task-fair wakeup policy is, wake
# the next task, and if it's a reader then keep waking tasks so long as they
# are readers). Without this I think you can implement write-biased or
# read-biased RWlocks (by using two parking lots and drawing from whichever
# is preferred), but not task-fair -- and task-fair plays much more nicely
# with WFQ. (Consider what happens in the two-lot implementation if you're
# write-biased but all the pending writers are blocked at the scheduler level
# by the WFQ logic...) ...alternatively, "phase-fair" RWlocks are pretty
# interesting:
#
#    http://www.cs.unc.edu/~anderson/papers/ecrts09b.pdf
#
# Useful summary:
#
#    https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReadWriteLock.html
#
# XX: if we do add WFQ, then we might have to drop the current feature where
# unpark returns the tasks that were unparked. Rationale: suppose that at the
# time we call unpark, the next task is deprioritized... and then, before it
# becomes runnable, a new task parks which *is* runnable. Ideally we should
# immediately wake the new task, and leave the old task on the queue for
# later. But this means we can't commit to which task we are unparking when
# unpark is called.
#
# See: https://github.com/python-trio/trio/issues/53
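
# A minimal sketch (not part of this module) of the kind of higher-level
# primitive ParkingLot exists to support: a FIFO-fair lock. Waiters park, and
# release hands ownership to the longest-parked task. The name _SketchLock is
# illustrative only, and the sketch ignores cancellation edge cases that a
# real primitive (like trio.Lock) has to handle.
class _SketchLock:
    def __init__(self):
        self._held = False
        self._lot = ParkingLot()

    async def acquire(self):
        if self._held:
            # Parked tasks are woken strictly in FIFO order, which is where
            # the fairness guarantee for the high-level primitives comes from.
            await self._lot.park()
        self._held = True

    def release(self):
        if self._lot:
            # Wake exactly one waiter; it takes over ownership directly.
            self._lot.unpark(count=1)
        else:
            self._held = False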

from itertools import count

import attr
from collections import OrderedDict

from .. import _core
from .._util import Final

_counter = count()


@attr.s(frozen=True, slots=True)
class _ParkingLotStatistics:
    tasks_waiting = attr.ib()


@attr.s(eq=False, hash=False, slots=True)
class ParkingLot(metaclass=Final):
    """A fair wait queue with cancellation and requeueing.

    This class encapsulates the tricky parts of implementing a wait
    queue. It's useful for implementing higher-level synchronization
    primitives like queues and locks.

    In addition to the methods below, you can use ``len(parking_lot)`` to get
    the number of parked tasks, and ``if parking_lot: ...`` to check whether
    there are any parked tasks.

    """

    # {task: None}, we just want a deque where we can quickly delete random
    # items
    _parked = attr.ib(factory=OrderedDict, init=False)

    def __len__(self):
        """Returns the number of parked tasks."""
        return len(self._parked)

    def __bool__(self):
        """True if there are parked tasks, False otherwise."""
        return bool(self._parked)

    # XX this currently returns None
    # if we ever add the ability to repark while preserving one's place in
    # line (for false wakeups), then we could have it return a ticket that
    # abstracts the "place in line" concept.
    @_core.enable_ki_protection
    async def park(self):
        """Park the current task until woken by a call to :meth:`unpark` or
        :meth:`unpark_all`.

        """
        task = _core.current_task()
        self._parked[task] = None
        task.custom_sleep_data = self

        def abort_fn(_):
            del task.custom_sleep_data._parked[task]
            return _core.Abort.SUCCEEDED

        await _core.wait_task_rescheduled(abort_fn)

    def _pop_several(self, count):
        for _ in range(min(count, len(self._parked))):
            task, _ = self._parked.popitem(last=False)
            yield task

    @_core.enable_ki_protection
    def unpark(self, *, count=1):
        """Unpark one or more tasks.

        This wakes up ``count`` tasks that are blocked in :meth:`park`. If
        there are fewer than ``count`` tasks parked, then wakes as many tasks
        as are available and then returns successfully.

        Args:
          count (int): the number of tasks to unpark.
""" tasks = list(self._pop_several(count)) for task in tasks: _core.reschedule(task) return tasks def unpark_all(self): """Unpark all parked tasks.""" return self.unpark(count=len(self)) @_core.enable_ki_protection def repark(self, new_lot, *, count=1): """Move parked tasks from one :class:`ParkingLot` object to another. This dequeues ``count`` tasks from one lot, and requeues them on another, preserving order. For example:: async def parker(lot): print("sleeping") await lot.park() print("woken") async def main(): lot1 = trio.lowlevel.ParkingLot() lot2 = trio.lowlevel.ParkingLot() async with trio.open_nursery() as nursery: nursery.start_soon(parker, lot1) await trio.testing.wait_all_tasks_blocked() assert len(lot1) == 1 assert len(lot2) == 0 lot1.repark(lot2) assert len(lot1) == 0 assert len(lot2) == 1 # This wakes up the task that was originally parked in lot1 lot2.unpark() If there are fewer than ``count`` tasks parked, then reparks as many tasks as are available and then returns successfully. Args: new_lot (ParkingLot): the parking lot to move tasks to. count (int): the number of tasks to move. """ if not isinstance(new_lot, ParkingLot): raise TypeError("new_lot must be a ParkingLot") for task in self._pop_several(count): new_lot._parked[task] = None task.custom_sleep_data = new_lot def repark_all(self, new_lot): """Move all parked tasks from one :class:`ParkingLot` object to another. See :meth:`repark` for details. """ return self.repark(new_lot, count=len(self)) def statistics(self): """Return an object containing debugging information. Currently the following fields are defined: * ``tasks_waiting``: The number of tasks blocked on this lot's :meth:`park` method. """ return _ParkingLotStatistics(tasks_waiting=len(self._parked)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_run.py0000644000076500000000000027361200000000000015514 0ustar00quentinwheel# coding: utf-8 import functools import itertools import logging import os import random import select import sys import threading from collections import deque import collections.abc from contextlib import contextmanager import warnings import weakref import enum from contextvars import copy_context from math import inf from time import perf_counter from typing import Callable, TYPE_CHECKING from sniffio import current_async_library_cvar import attr from heapq import heapify, heappop, heappush from sortedcontainers import SortedDict from outcome import Error, Outcome, Value, capture from ._entry_queue import EntryQueue, TrioToken from ._exceptions import TrioInternalError, RunFinishedError, Cancelled from ._ki import ( LOCALS_KEY_KI_PROTECTION_ENABLED, KIManager, enable_ki_protection, ) from ._multierror import MultiError from ._traps import ( Abort, wait_task_rescheduled, cancel_shielded_checkpoint, CancelShieldedCheckpoint, PermanentlyDetachCoroutineObject, WaitTaskRescheduled, ) from ._asyncgens import AsyncGenerators from ._thread_cache import start_thread_soon from ._instrumentation import Instruments from .. import _core from .._deprecate import warn_deprecated from .._util import Final, NoPublicConstructor, coroutine_or_error DEADLINE_HEAP_MIN_PRUNE_THRESHOLD = 1000 _NO_SEND = object() # Decorator to mark methods public. This does nothing by itself, but # trio/_tools/gen_exports.py looks for it. def _public(fn): return fn # When running under Hypothesis, we want examples to be reproducible and # shrinkable. 
pytest-trio's Hypothesis integration monkeypatches this # variable to True, and registers the Random instance _r for Hypothesis # to manage for each test case, which together should make Trio's task # scheduling loop deterministic. We have a test for that, of course. _ALLOW_DETERMINISTIC_SCHEDULING = False _r = random.Random() # On 3.7+, Context.run() is implemented in C and doesn't show up in # tracebacks. On 3.6, we use the contextvars backport, which is # currently implemented in Python and adds 1 frame to tracebacks. So this # function is a super-overkill version of "0 if sys.version_info >= (3, 7) # else 1". But if Context.run ever changes, we'll be ready! # # This can all be removed once we drop support for 3.6. def _count_context_run_tb_frames(): def function_with_unique_name_xyzzy(): 1 / 0 ctx = copy_context() try: ctx.run(function_with_unique_name_xyzzy) except ZeroDivisionError as exc: tb = exc.__traceback__ # Skip the frame where we caught it tb = tb.tb_next count = 0 while tb.tb_frame.f_code.co_name != "function_with_unique_name_xyzzy": tb = tb.tb_next count += 1 return count CONTEXT_RUN_TB_FRAMES = _count_context_run_tb_frames() @attr.s(frozen=True, slots=True) class SystemClock: # Add a large random offset to our clock to ensure that if people # accidentally call time.perf_counter() directly or start comparing clocks # between different runs, then they'll notice the bug quickly: offset = attr.ib(factory=lambda: _r.uniform(10000, 200000)) def start_clock(self): pass # In cPython 3, on every platform except Windows, perf_counter is # exactly the same as time.monotonic; and on Windows, it uses # QueryPerformanceCounter instead of GetTickCount64. def current_time(self): return self.offset + perf_counter() def deadline_to_sleep_time(self, deadline): return deadline - self.current_time() class IdlePrimedTypes(enum.Enum): WAITING_FOR_IDLE = 1 AUTOJUMP_CLOCK = 2 ################################################################ # CancelScope and friends ################################################################ @attr.s(eq=False, slots=True) class Deadlines: """A container of deadlined cancel scopes. Only contains scopes with non-infinite deadlines that are currently attached to at least one task. """ # Heap of (deadline, id(CancelScope), CancelScope) _heap = attr.ib(factory=list) # Count of active deadlines (those that haven't been changed) _active = attr.ib(default=0) def add(self, deadline, cancel_scope): heappush(self._heap, (deadline, id(cancel_scope), cancel_scope)) self._active += 1 def remove(self, deadline, cancel_scope): self._active -= 1 def next_deadline(self): while self._heap: deadline, _, cancel_scope = self._heap[0] if deadline == cancel_scope._registered_deadline: return deadline else: # This entry is stale; discard it and try again heappop(self._heap) return inf def _prune(self): # In principle, it's possible for a cancel scope to toggle back and # forth repeatedly between the same two deadlines, and end up with # lots of stale entries that *look* like they're still active, because # their deadline is correct, but in fact are redundant. So when # pruning we have to eliminate entries with the wrong deadline, *and* # eliminate duplicates. 
        seen = set()
        pruned_heap = []
        for deadline, tiebreaker, cancel_scope in self._heap:
            if deadline == cancel_scope._registered_deadline:
                if cancel_scope in seen:
                    continue
                seen.add(cancel_scope)
                pruned_heap.append((deadline, tiebreaker, cancel_scope))
        # See test_cancel_scope_deadline_duplicates for a test that exercises
        # this assert:
        assert len(pruned_heap) == self._active
        heapify(pruned_heap)
        self._heap = pruned_heap

    def expire(self, now):
        did_something = False
        while self._heap and self._heap[0][0] <= now:
            deadline, _, cancel_scope = heappop(self._heap)
            if deadline == cancel_scope._registered_deadline:
                did_something = True
                # This implicitly calls self.remove(), so we don't need to
                # decrement _active here
                cancel_scope.cancel()
        # If we've accumulated too many stale entries, then prune the heap to
        # keep it under control. (We only do this occasionally in a batch, to
        # keep the amortized cost down)
        if len(self._heap) > self._active * 2 + DEADLINE_HEAP_MIN_PRUNE_THRESHOLD:
            self._prune()
        return did_something


@attr.s(eq=False, slots=True)
class CancelStatus:
    """Tracks the cancellation status for a contiguous extent of code that
    will become cancelled, or not, as a unit.

    Each task has at all times a single "active" CancelStatus whose
    cancellation state determines whether checkpoints executed in that task
    raise Cancelled. Each 'with CancelScope(...)' context is associated with
    a particular CancelStatus. When a task enters such a context, a
    CancelStatus is created which becomes the active CancelStatus for that
    task; when the 'with' block is exited, the active CancelStatus for that
    task goes back to whatever it was before.

    CancelStatus objects are arranged in a tree whose structure mirrors the
    lexical nesting of the cancel scope contexts. When a CancelStatus becomes
    cancelled, it notifies all of its direct children, who become cancelled
    in turn (and continue propagating the cancellation down the tree) unless
    they are shielded. (There will be at most one such child except in the
    case of a CancelStatus that immediately encloses a nursery.) At the
    leaves of this tree are the tasks themselves, which get woken up to
    deliver an abort when their direct parent CancelStatus becomes cancelled.

    You can think of CancelStatus as being responsible for the "plumbing" of
    cancellations as opposed to CancelScope which is responsible for the
    origination of them.

    """

    # Our associated cancel scope. Can be any object with attributes
    # `deadline`, `shield`, and `cancel_called`, but in current usage is
    # always a CancelScope object. Must not be None.
    _scope = attr.ib()

    # True iff the tasks in self._tasks should receive cancellations when
    # they checkpoint. Always True when scope.cancel_called is True; may also
    # be True due to a cancellation propagated from our parent. Unlike
    # scope.cancel_called, this does not necessarily stay true once it
    # becomes true. For example, we might become effectively cancelled due to
    # the cancel scope two levels out becoming cancelled, but then the cancel
    # scope one level out becomes shielded so we're not effectively cancelled
    # anymore.
    effectively_cancelled = attr.ib(default=False)

    # The CancelStatus whose cancellations can propagate to us; we become
    # effectively cancelled when they do, unless scope.shield is True. May be
    # None (for the outermost CancelStatus in a call to trio.run(), briefly
    # during TaskStatus.started(), or during recovery from mis-nesting of
    # cancel scopes).
_parent = attr.ib(default=None, repr=False) # All of the CancelStatuses that have this CancelStatus as their parent. _children = attr.ib(factory=set, init=False, repr=False) # Tasks whose cancellation state is currently tied directly to # the cancellation state of this CancelStatus object. Don't modify # this directly; instead, use Task._activate_cancel_status(). # Invariant: all(task._cancel_status is self for task in self._tasks) _tasks = attr.ib(factory=set, init=False, repr=False) # Set to True on still-active cancel statuses that are children # of a cancel status that's been closed. This is used to permit # recovery from mis-nested cancel scopes (well, at least enough # recovery to show a useful traceback). abandoned_by_misnesting = attr.ib(default=False, init=False, repr=False) def __attrs_post_init__(self): if self._parent is not None: self._parent._children.add(self) self.recalculate() # parent/children/tasks accessors are used by TaskStatus.started() @property def parent(self): return self._parent @parent.setter def parent(self, parent): if self._parent is not None: self._parent._children.remove(self) self._parent = parent if self._parent is not None: self._parent._children.add(self) self.recalculate() @property def children(self): return frozenset(self._children) @property def tasks(self): return frozenset(self._tasks) def encloses(self, other): """Returns true if this cancel status is a direct or indirect parent of cancel status *other*, or if *other* is *self*. """ while other is not None: if other is self: return True other = other.parent return False def close(self): self.parent = None # now we're not a child of self.parent anymore if self._tasks or self._children: # Cancel scopes weren't exited in opposite order of being # entered. CancelScope._close() deals with raising an error # if appropriate; our job is to leave things in a reasonable # state for unwinding our dangling children. We choose to leave # this part of the CancelStatus tree unlinked from everyone # else, cancelled, and marked so that exiting a CancelScope # within the abandoned subtree doesn't affect the active # CancelStatus. Note that it's possible for us to get here # without CancelScope._close() raising an error, if a # nursery's cancel scope is closed within the nursery's # nested child and no other cancel scopes are involved, # but in that case task_exited() will deal with raising # the error. self._mark_abandoned() # Since our CancelScope is about to forget about us, and we # have no parent anymore, there's nothing left to call # recalculate(). So, we can stay cancelled by setting # effectively_cancelled and updating our children. self.effectively_cancelled = True for task in self._tasks: task._attempt_delivery_of_any_pending_cancel() for child in self._children: child.recalculate() @property def parent_cancellation_is_visible_to_us(self): return ( self._parent is not None and not self._scope.shield and self._parent.effectively_cancelled ) def recalculate(self): # This does a depth-first traversal over this and descendent cancel # statuses, to ensure their state is up-to-date. It's basically a # recursive algorithm, but we use an explicit stack to avoid any # issues with stack overflow. 
        todo = [self]
        while todo:
            current = todo.pop()
            new_state = (
                current._scope.cancel_called
                or current.parent_cancellation_is_visible_to_us
            )
            if new_state != current.effectively_cancelled:
                current.effectively_cancelled = new_state
                if new_state:
                    for task in current._tasks:
                        task._attempt_delivery_of_any_pending_cancel()
                todo.extend(current._children)

    def _mark_abandoned(self):
        self.abandoned_by_misnesting = True
        for child in self._children:
            child._mark_abandoned()

    def effective_deadline(self):
        if self.effectively_cancelled:
            return -inf
        if self._parent is None or self._scope.shield:
            return self._scope.deadline
        return min(self._scope.deadline, self._parent.effective_deadline())


MISNESTING_ADVICE = """
This is probably a bug in your code, that has caused Trio's internal state to
become corrupted. We'll do our best to recover, but from now on there are no
guarantees.

Typically this is caused by one of the following:
  - yielding within a generator or async generator that's opened a cancel
    scope or nursery (unless the generator is a @contextmanager or
    @asynccontextmanager); see https://github.com/python-trio/trio/issues/638
  - manually calling __enter__ or __exit__ on a trio.CancelScope, or
    __aenter__ or __aexit__ on the object returned by trio.open_nursery();
    doing so correctly is difficult and you should use @[async]contextmanager
    instead, or maybe [Async]ExitStack
  - using [Async]ExitStack to interleave the entries/exits of cancel scopes
    and/or nurseries in a way that couldn't be achieved by some nesting of
    'with' and 'async with' blocks
  - using the low-level coroutine object protocol to execute some parts of
    an async function in a different cancel scope/nursery context than other
    parts
If you don't believe you're doing any of these things, please file a bug:
https://github.com/python-trio/trio/issues/new
"""


@attr.s(eq=False, repr=False, slots=True)
class CancelScope(metaclass=Final):
    """A *cancellation scope*: the link between a unit of cancellable
    work and Trio's cancellation system.

    A :class:`CancelScope` becomes associated with some cancellable work
    when it is used as a context manager surrounding that work::

        cancel_scope = trio.CancelScope()
        ...
        with cancel_scope:
            await long_running_operation()

    Inside the ``with`` block, a cancellation of ``cancel_scope`` (via a call
    to its :meth:`cancel` method or via the expiry of its :attr:`deadline`)
    will immediately interrupt the ``long_running_operation()`` by raising
    :exc:`Cancelled` at its next :ref:`checkpoint <checkpoints>`.

    The context manager ``__enter__`` returns the :class:`CancelScope`
    object itself, so you can also write
    ``with trio.CancelScope() as cancel_scope:``.

    If a cancel scope becomes cancelled before entering its ``with`` block,
    the :exc:`Cancelled` exception will be raised at the first checkpoint
    inside the ``with`` block. This allows a :class:`CancelScope` to be
    created in one :ref:`task <tasks>` and passed to another, so that the
    first task can later cancel some work inside the second.

    Cancel scopes are not reusable or reentrant; that is, each cancel scope
    can be used for at most one ``with`` block. (You'll get a
    :exc:`RuntimeError` if you violate this rule.)

    The :class:`CancelScope` constructor takes initial values for the cancel
    scope's :attr:`deadline` and :attr:`shield` attributes; these may be
    freely modified after construction, whether or not the scope has been
    entered yet, and changes take immediate effect.
""" _cancel_status = attr.ib(default=None, init=False) _has_been_entered = attr.ib(default=False, init=False) _registered_deadline = attr.ib(default=inf, init=False) _cancel_called = attr.ib(default=False, init=False) cancelled_caught = attr.ib(default=False, init=False) # Constructor arguments: _deadline = attr.ib(default=inf, kw_only=True) _shield = attr.ib(default=False, kw_only=True) @enable_ki_protection def __enter__(self): task = _core.current_task() if self._has_been_entered: raise RuntimeError( "Each CancelScope may only be used for a single 'with' block" ) self._has_been_entered = True if current_time() >= self._deadline: self.cancel() with self._might_change_registered_deadline(): self._cancel_status = CancelStatus(scope=self, parent=task._cancel_status) task._activate_cancel_status(self._cancel_status) return self def _exc_filter(self, exc): if isinstance(exc, Cancelled): self.cancelled_caught = True return None return exc def _close(self, exc): if self._cancel_status is None: new_exc = RuntimeError( "Cancel scope stack corrupted: attempted to exit {!r} " "which had already been exited".format(self) ) new_exc.__context__ = exc return new_exc scope_task = current_task() if scope_task._cancel_status is not self._cancel_status: # Cancel scope mis-nesting: this cancel scope isn't the most # recently opened by this task (that's still open). That is, # our assumptions about context managers forming a stack # have been violated. Try and make the best of it. if self._cancel_status.abandoned_by_misnesting: # We are an inner cancel scope that was still active when # some outer scope was closed. The closure of that outer # scope threw an error, so we don't need to throw another # one; it would just confuse the traceback. pass elif not self._cancel_status.encloses(scope_task._cancel_status): # This task isn't even indirectly contained within the # cancel scope it's trying to close. Raise an error # without changing any state. new_exc = RuntimeError( "Cancel scope stack corrupted: attempted to exit {!r} " "from unrelated {!r}\n{}".format( self, scope_task, MISNESTING_ADVICE ) ) new_exc.__context__ = exc return new_exc else: # Otherwise, there's some inner cancel scope(s) that # we're abandoning by closing this outer one. # CancelStatus.close() will take care of the plumbing; # we just need to make sure we don't let the error # pass silently. new_exc = RuntimeError( "Cancel scope stack corrupted: attempted to exit {!r} " "in {!r} that's still within its child {!r}\n{}".format( self, scope_task, scope_task._cancel_status._scope, MISNESTING_ADVICE, ) ) new_exc.__context__ = exc exc = new_exc scope_task._activate_cancel_status(self._cancel_status.parent) else: scope_task._activate_cancel_status(self._cancel_status.parent) if ( exc is not None and self._cancel_status.effectively_cancelled and not self._cancel_status.parent_cancellation_is_visible_to_us ): exc = MultiError.filter(self._exc_filter, exc) self._cancel_status.close() with self._might_change_registered_deadline(): self._cancel_status = None return exc @enable_ki_protection def __exit__(self, etype, exc, tb): # NB: NurseryManager calls _close() directly rather than __exit__(), # so __exit__() must be just _close() plus this logic for adapting # the exception-filtering result to the context manager API. # Tracebacks show the 'raise' line below out of context, so let's give # this variable a name that makes sense out of context. 
        remaining_error_after_cancel_scope = self._close(exc)
        if remaining_error_after_cancel_scope is None:
            return True
        elif remaining_error_after_cancel_scope is exc:
            return False
        else:
            # Copied verbatim from MultiErrorCatcher. Python doesn't
            # allow us to encapsulate this __context__ fixup.
            old_context = remaining_error_after_cancel_scope.__context__
            try:
                raise remaining_error_after_cancel_scope
            finally:
                _, value, _ = sys.exc_info()
                assert value is remaining_error_after_cancel_scope
                value.__context__ = old_context

    def __repr__(self):
        if self._cancel_status is not None:
            binding = "active"
        elif self._has_been_entered:
            binding = "exited"
        else:
            binding = "unbound"

        if self._cancel_called:
            state = ", cancelled"
        elif self._deadline == inf:
            state = ""
        else:
            try:
                now = current_time()
            except RuntimeError:  # must be called from async context
                state = ""
            else:
                state = ", deadline is {:.2f} seconds {}".format(
                    abs(self._deadline - now),
                    "from now" if self._deadline >= now else "ago",
                )

        return "<trio.CancelScope at {:#x}, {}{}>".format(id(self), binding, state)

    @contextmanager
    @enable_ki_protection
    def _might_change_registered_deadline(self):
        try:
            yield
        finally:
            old = self._registered_deadline
            if self._cancel_status is None or self._cancel_called:
                new = inf
            else:
                new = self._deadline
            if old != new:
                self._registered_deadline = new
                runner = GLOBAL_RUN_CONTEXT.runner
                if runner.is_guest:
                    old_next_deadline = runner.deadlines.next_deadline()
                if old != inf:
                    runner.deadlines.remove(old, self)
                if new != inf:
                    runner.deadlines.add(new, self)
                if runner.is_guest:
                    new_next_deadline = runner.deadlines.next_deadline()
                    if old_next_deadline != new_next_deadline:
                        runner.force_guest_tick_asap()

    @property
    def deadline(self):
        """Read-write, :class:`float`. An absolute time on the current
        run's clock at which this scope will automatically become
        cancelled. You can adjust the deadline by modifying this
        attribute, e.g.::

           # I need a little more time!
           cancel_scope.deadline += 30

        Note that for efficiency, the core run loop only checks for
        expired deadlines every once in a while. This means that in
        certain cases there may be a short delay between when the clock
        says the deadline should have expired, and when checkpoints
        start raising :exc:`~trio.Cancelled`. This is a very obscure
        corner case that you're unlikely to notice, but we document it
        for completeness. (If this *does* cause problems for you, of
        course, then `we want to know!
        <https://github.com/python-trio/trio/issues>`__)

        Defaults to :data:`math.inf`, which means "no deadline", though
        this can be overridden by the ``deadline=`` argument to
        the :class:`~trio.CancelScope` constructor.
        """
        return self._deadline

    @deadline.setter
    def deadline(self, new_deadline):
        with self._might_change_registered_deadline():
            self._deadline = float(new_deadline)

    @property
    def shield(self):
        """Read-write, :class:`bool`, default :data:`False`. So long as
        this is set to :data:`True`, then the code inside this scope
        will not receive :exc:`~trio.Cancelled` exceptions from scopes
        that are outside this scope. They can still receive
        :exc:`~trio.Cancelled` exceptions from (1) this scope, or (2)
        scopes inside this scope. You can modify this attribute::

           with trio.CancelScope() as cancel_scope:
               cancel_scope.shield = True
               # This cannot be interrupted by any means short of
               # killing the process:
               await sleep(10)

               cancel_scope.shield = False
               # Now this can be cancelled normally:
               await sleep(10)

        Defaults to :data:`False`, though this can be overridden by the
        ``shield=`` argument to the :class:`~trio.CancelScope` constructor.
        """
        return self._shield

    @shield.setter  # type: ignore  # "decorated property not supported"
    @enable_ki_protection
    def shield(self, new_value):
        if not isinstance(new_value, bool):
            raise TypeError("shield must be a bool")
        self._shield = new_value
        if self._cancel_status is not None:
            self._cancel_status.recalculate()

    @enable_ki_protection
    def cancel(self):
        """Cancels this scope immediately.

        This method is idempotent, i.e., if the scope was already
        cancelled then this method silently does nothing.
        """
        if self._cancel_called:
            return
        with self._might_change_registered_deadline():
            self._cancel_called = True
        if self._cancel_status is not None:
            self._cancel_status.recalculate()

    @property
    def cancel_called(self):
        """Readonly :class:`bool`. Records whether cancellation has been
        requested for this scope, either by an explicit call to
        :meth:`cancel` or by the deadline expiring.

        This attribute being True does *not* necessarily mean that the
        code within the scope has been, or will be, affected by the
        cancellation. For example, if :meth:`cancel` was called after
        the last checkpoint in the ``with`` block, when it's too late to
        deliver a :exc:`~trio.Cancelled` exception, then this attribute
        will still be True.

        This attribute is mostly useful for debugging and introspection.
        If you want to know whether or not a chunk of code was actually
        cancelled, then :attr:`cancelled_caught` is usually more
        appropriate.
        """
        if self._cancel_status is not None or not self._has_been_entered:
            # Scope is active or not yet entered: make sure cancel_called
            # is true if the deadline has passed. This shouldn't
            # be able to actually change behavior, since we check for
            # deadline expiry on scope entry and at every checkpoint,
            # but it makes the value returned by cancel_called more
            # closely match expectations.
            if not self._cancel_called and current_time() >= self._deadline:
                self.cancel()
        return self._cancel_called
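
# A short sketch of the public API defined above, roughly what
# trio.move_on_after() does under the hood. The function and its name are
# illustrative only, not a trio API.
async def _demo_cancel_scope():
    import trio

    with trio.CancelScope(deadline=trio.current_time() + 5) as cancel_scope:
        # Shield briefly so cancellations from *outer* scopes can't
        # interrupt this part (the scope's own deadline still applies):
        cancel_scope.shield = True
        await trio.sleep(1)
        cancel_scope.shield = False
        # This sleep outlives the 5-second deadline, so the scope expires
        # and Cancelled is raised at this checkpoint, then absorbed by the
        # scope on exit.
        await trio.sleep(10)
    # cancelled_caught reports whether the scope actually swallowed a
    # Cancelled exception on the way out.
    assert cancel_scope.cancelled_caught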
""" return self._shield @shield.setter # type: ignore # "decorated property not supported" @enable_ki_protection def shield(self, new_value): if not isinstance(new_value, bool): raise TypeError("shield must be a bool") self._shield = new_value if self._cancel_status is not None: self._cancel_status.recalculate() @enable_ki_protection def cancel(self): """Cancels this scope immediately. This method is idempotent, i.e., if the scope was already cancelled then this method silently does nothing. """ if self._cancel_called: return with self._might_change_registered_deadline(): self._cancel_called = True if self._cancel_status is not None: self._cancel_status.recalculate() @property def cancel_called(self): """Readonly :class:`bool`. Records whether cancellation has been requested for this scope, either by an explicit call to :meth:`cancel` or by the deadline expiring. This attribute being True does *not* necessarily mean that the code within the scope has been, or will be, affected by the cancellation. For example, if :meth:`cancel` was called after the last checkpoint in the ``with`` block, when it's too late to deliver a :exc:`~trio.Cancelled` exception, then this attribute will still be True. This attribute is mostly useful for debugging and introspection. If you want to know whether or not a chunk of code was actually cancelled, then :attr:`cancelled_caught` is usually more appropriate. """ if self._cancel_status is not None or not self._has_been_entered: # Scope is active or not yet entered: make sure cancel_called # is true if the deadline has passed. This shouldn't # be able to actually change behavior, since we check for # deadline expiry on scope entry and at every checkpoint, # but it makes the value returned by cancel_called more # closely match expectations. if not self._cancel_called and current_time() >= self._deadline: self.cancel() return self._cancel_called ################################################################ # Nursery and friends ################################################################ # This code needs to be read alongside the code from Nursery.start to make # sense. @attr.s(eq=False, hash=False, repr=False) class _TaskStatus: _old_nursery = attr.ib() _new_nursery = attr.ib() _called_started = attr.ib(default=False) _value = attr.ib(default=None) def __repr__(self): return "".format(id(self)) def started(self, value=None): if self._called_started: raise RuntimeError("called 'started' twice on the same task status") self._called_started = True self._value = value # If the old nursery is cancelled, then quietly quit now; the child # will eventually exit on its own, and we don't want to risk moving # children that might have propagating Cancelled exceptions into # a place with no cancelled cancel scopes to catch them. if self._old_nursery._cancel_status.effectively_cancelled: return # Can't be closed, b/c we checked in start() and then _pending_starts # should keep it open. assert not self._new_nursery._closed # Move tasks from the old nursery to the new tasks = self._old_nursery._children self._old_nursery._children = set() for task in tasks: task._parent_nursery = self._new_nursery task._eventual_parent_nursery = None self._new_nursery._children.add(task) # Move all children of the old nursery's cancel status object # to be underneath the new nursery instead. This includes both # tasks and child cancel status objects. 
# NB: If the new nursery is cancelled, reparenting a cancel # status to be underneath it can invoke an abort_fn, which might # do something evil like cancel the old nursery. We thus break # everything off from the old nursery before we start attaching # anything to the new. cancel_status_children = self._old_nursery._cancel_status.children cancel_status_tasks = set(self._old_nursery._cancel_status.tasks) cancel_status_tasks.discard(self._old_nursery._parent_task) for cancel_status in cancel_status_children: cancel_status.parent = None for task in cancel_status_tasks: task._activate_cancel_status(None) for cancel_status in cancel_status_children: cancel_status.parent = self._new_nursery._cancel_status for task in cancel_status_tasks: task._activate_cancel_status(self._new_nursery._cancel_status) # That should have removed all the children from the old nursery assert not self._old_nursery._children # And finally, poke the old nursery so it notices that all its # children have disappeared and can exit. self._old_nursery._check_nursery_closed() class NurseryManager: """Nursery context manager. Note we explicitly avoid @asynccontextmanager and @async_generator since they add a lot of extraneous stack frames to exceptions, as well as cause problematic behavior with handling of StopIteration and StopAsyncIteration. """ @enable_ki_protection async def __aenter__(self): self._scope = CancelScope() self._scope.__enter__() self._nursery = Nursery._create(current_task(), self._scope) return self._nursery @enable_ki_protection async def __aexit__(self, etype, exc, tb): new_exc = await self._nursery._nested_child_finished(exc) # Tracebacks show the 'raise' line below out of context, so let's give # this variable a name that makes sense out of context. combined_error_from_nursery = self._scope._close(new_exc) if combined_error_from_nursery is None: return True elif combined_error_from_nursery is exc: return False else: # Copied verbatim from MultiErrorCatcher. Python doesn't # allow us to encapsulate this __context__ fixup. old_context = combined_error_from_nursery.__context__ try: raise combined_error_from_nursery finally: _, value, _ = sys.exc_info() assert value is combined_error_from_nursery value.__context__ = old_context def __enter__(self): raise RuntimeError( "use 'async with open_nursery(...)', not 'with open_nursery(...)'" ) def __exit__(self): # pragma: no cover assert False, """Never called, but should be defined""" def open_nursery(): """Returns an async context manager which must be used to create a new `Nursery`. It does not block on entry; on exit it blocks until all child tasks have exited. """ return NurseryManager() class Nursery(metaclass=NoPublicConstructor): """A context which may be used to spawn (or cancel) child tasks. Not constructed directly, use `open_nursery` instead. The nursery will remain open until all child tasks have completed, or until it is cancelled, at which point it will cancel all its remaining child tasks and close. Nurseries ensure the absence of orphaned Tasks, since all running tasks will belong to an open Nursery. Attributes: cancel_scope: Creating a nursery also implicitly creates a cancellation scope, which is exposed as the :attr:`cancel_scope` attribute. This is used internally to implement the logic where if an error occurs then ``__aexit__`` cancels all children, but you can use it for other things, e.g. if you want to explicitly cancel all children in response to some external event. 
""" def __init__(self, parent_task, cancel_scope): self._parent_task = parent_task parent_task._child_nurseries.append(self) # the cancel status that children inherit - we take a snapshot, so it # won't be affected by any changes in the parent. self._cancel_status = parent_task._cancel_status # the cancel scope that directly surrounds us; used for cancelling all # children. self.cancel_scope = cancel_scope assert self.cancel_scope._cancel_status is self._cancel_status self._children = set() self._pending_excs = [] # The "nested child" is how this code refers to the contents of the # nursery's 'async with' block, which acts like a child Task in all # the ways we can make it. self._nested_child_running = True self._parent_waiting_in_aexit = False self._pending_starts = 0 self._closed = False @property def child_tasks(self): """(`frozenset`): Contains all the child :class:`~trio.lowlevel.Task` objects which are still running.""" return frozenset(self._children) @property def parent_task(self): "(`~trio.lowlevel.Task`): The Task that opened this nursery." return self._parent_task def _add_exc(self, exc): self._pending_excs.append(exc) self.cancel_scope.cancel() def _check_nursery_closed(self): if not any([self._nested_child_running, self._children, self._pending_starts]): self._closed = True if self._parent_waiting_in_aexit: self._parent_waiting_in_aexit = False GLOBAL_RUN_CONTEXT.runner.reschedule(self._parent_task) def _child_finished(self, task, outcome): self._children.remove(task) if isinstance(outcome, Error): self._add_exc(outcome.error) self._check_nursery_closed() async def _nested_child_finished(self, nested_child_exc): """Returns MultiError instance if there are pending exceptions.""" if nested_child_exc is not None: self._add_exc(nested_child_exc) self._nested_child_running = False self._check_nursery_closed() if not self._closed: # If we get cancelled (or have an exception injected, like # KeyboardInterrupt), then save that, but still wait until our # children finish. def aborted(raise_cancel): self._add_exc(capture(raise_cancel).error) return Abort.FAILED self._parent_waiting_in_aexit = True await wait_task_rescheduled(aborted) else: # Nothing to wait for, so just execute a checkpoint -- but we # still need to mix any exception (e.g. from an external # cancellation) in with the rest of our exceptions. try: await checkpoint() except BaseException as exc: self._add_exc(exc) popped = self._parent_task._child_nurseries.pop() assert popped is self if self._pending_excs: try: return MultiError(self._pending_excs) finally: # avoid a garbage cycle # (see test_nursery_cancel_doesnt_create_cyclic_garbage) del self._pending_excs def start_soon(self, async_fn, *args, name=None): """Creates a child task, scheduling ``await async_fn(*args)``. This and :meth:`start` are the two fundamental methods for creating concurrent tasks in Trio. Note that this is *not* an async function and you don't use await when calling it. It sets up the new task, but then returns immediately, *before* it has a chance to run. The new task won’t actually get a chance to do anything until some later point when you execute a checkpoint and the scheduler decides to run it. If you want to run a function and immediately wait for its result, then you don't need a nursery; just use ``await async_fn(*args)``. If you want to wait for the task to initialize itself before continuing, see :meth:`start`. 
It's possible to pass a nursery object into another task, which allows that task to start new child tasks in the first task's nursery. The child task inherits its parent nursery's cancel scopes. Args: async_fn: An async callable. args: Positional arguments for ``async_fn``. If you want to pass keyword arguments, use :func:`functools.partial`. name: The name for this task. Only used for debugging/introspection (e.g. ``repr(task_obj)``). If this isn't a string, :meth:`start_soon` will try to make it one. A common use case is if you're wrapping a function before spawning a new task, you might pass the original function as the ``name=`` to make debugging easier. Raises: RuntimeError: If this nursery is no longer open (i.e. its ``async with`` block has exited). """ GLOBAL_RUN_CONTEXT.runner.spawn_impl(async_fn, args, self, name) async def start(self, async_fn, *args, name=None): r"""Creates and initializes a child task. Like :meth:`start_soon`, but blocks until the new task has finished initializing itself, and optionally returns some information from it. The ``async_fn`` must accept a ``task_status`` keyword argument, and it must make sure that it (or someone) eventually calls ``task_status.started()``. The conventional way to define ``async_fn`` is like:: async def async_fn(arg1, arg2, *, task_status=trio.TASK_STATUS_IGNORED): ... task_status.started() ... :attr:`trio.TASK_STATUS_IGNORED` is a special global object with a do-nothing ``started`` method. This way your function supports being called either like ``await nursery.start(async_fn, arg1, arg2)`` or directly like ``await async_fn(arg1, arg2)``, and either way it can call ``task_status.started()`` without worrying about which mode it's in. Defining your function like this will make it obvious to readers that it supports being used in both modes. Before the child calls ``task_status.started()``, it's effectively run underneath the call to :meth:`start`: if it raises an exception then that exception is reported by :meth:`start`, and does *not* propagate out of the nursery. If :meth:`start` is cancelled, then the child task is also cancelled. When the child calls ``task_status.started()``, it's moved out from underneath :meth:`start` and into the given nursery. If the child task passes a value to ``task_status.started(value)``, then :meth:`start` returns this value. Otherwise it returns ``None``. """ if self._closed: raise RuntimeError("Nursery is closed to new arrivals") try: self._pending_starts += 1 async with open_nursery() as old_nursery: task_status = _TaskStatus(old_nursery, self) thunk = functools.partial(async_fn, task_status=task_status) task = GLOBAL_RUN_CONTEXT.runner.spawn_impl( thunk, args, old_nursery, name ) task._eventual_parent_nursery = self # Wait for either _TaskStatus.started or an exception to # cancel this nursery: # If we get here, then the child either got reparented or exited # normally. The complicated logic is all in _TaskStatus.started(). # (Any exceptions propagate directly out of the above.) 
            if not task_status._called_started:
                raise RuntimeError(
                    "child exited without calling task_status.started()"
                )
            return task_status._value
        finally:
            self._pending_starts -= 1
            self._check_nursery_closed()

    def __del__(self):
        assert not self._children


################################################################
# Task and friends
################################################################


@attr.s(eq=False, hash=False, repr=False, slots=True)
class Task(metaclass=NoPublicConstructor):
    _parent_nursery = attr.ib()
    coro = attr.ib()
    _runner = attr.ib()
    name = attr.ib()
    # PEP 567 contextvars context
    context = attr.ib()
    _counter = attr.ib(init=False, factory=itertools.count().__next__)

    # Invariant:
    # - for unscheduled tasks, _next_send_fn and _next_send are both None
    # - for scheduled tasks, _next_send_fn(_next_send) resumes the task;
    #   usually _next_send_fn is self.coro.send and _next_send is an
    #   Outcome. When recovering from a foreign await, _next_send_fn is
    #   self.coro.throw and _next_send is an exception. _next_send_fn
    #   will effectively be at the top of every task's call stack, so
    #   it should be written in C if you don't want to pollute Trio
    #   tracebacks with extraneous frames.
    # - for scheduled tasks, custom_sleep_data is None
    # Tasks start out unscheduled.
    _next_send_fn = attr.ib(default=None)
    _next_send = attr.ib(default=None)
    _abort_func = attr.ib(default=None)
    custom_sleep_data = attr.ib(default=None)

    # For introspection and nursery.start()
    _child_nurseries = attr.ib(factory=list)
    _eventual_parent_nursery = attr.ib(default=None)

    # these are counts of how many cancel/schedule points this task has
    # executed, for assert{_no,}_checkpoints
    # XX maybe these should be exposed as part of a statistics() method?
    _cancel_points = attr.ib(default=0)
    _schedule_points = attr.ib(default=0)

    def __repr__(self):
        return "<Task {!r} at {:#x}>".format(self.name, id(self))

    @property
    def parent_nursery(self):
        """The nursery this task is inside (or None if this is the "init"
        task).

        Example use case: drawing a visualization of the task tree in a
        debugger.

        """
        return self._parent_nursery

    @property
    def eventual_parent_nursery(self):
        """The nursery this task will be inside after it calls
        ``task_status.started()``.

        If this task has already called ``started()``, or if it was not
        spawned using `nursery.start() <trio.Nursery.start>`, then its
        `eventual_parent_nursery` is ``None``.

        """
        return self._eventual_parent_nursery

    @property
    def child_nurseries(self):
        """The nurseries this task contains.

        This is a list, with outer nurseries before inner nurseries.

        """
        return list(self._child_nurseries)

    ################
    # Cancellation
    ################

    # The CancelStatus object that is currently active for this task.
    # Don't change this directly; instead, use _activate_cancel_status().
    _cancel_status = attr.ib(default=None, repr=False)

    def _activate_cancel_status(self, cancel_status):
        if self._cancel_status is not None:
            self._cancel_status._tasks.remove(self)
        self._cancel_status = cancel_status
        if self._cancel_status is not None:
            self._cancel_status._tasks.add(self)
            if self._cancel_status.effectively_cancelled:
                self._attempt_delivery_of_any_pending_cancel()

    def _attempt_abort(self, raise_cancel):
        # Either the abort succeeds, in which case we will reschedule the
        # task, or else it fails, in which case it will worry about
        # rescheduling itself (hopefully eventually calling reraise to raise
        # the given exception, but not necessarily).
success = self._abort_func(raise_cancel) if type(success) is not Abort: raise TrioInternalError("abort function must return Abort enum") # We only attempt to abort once per blocking call, regardless of # whether we succeeded or failed. self._abort_func = None if success is Abort.SUCCEEDED: self._runner.reschedule(self, capture(raise_cancel)) def _attempt_delivery_of_any_pending_cancel(self): if self._abort_func is None: return if not self._cancel_status.effectively_cancelled: return def raise_cancel(): raise Cancelled._create() self._attempt_abort(raise_cancel) def _attempt_delivery_of_pending_ki(self): assert self._runner.ki_pending if self._abort_func is None: return def raise_cancel(): self._runner.ki_pending = False raise KeyboardInterrupt self._attempt_abort(raise_cancel) ################################################################ # The central Runner object ################################################################ class RunContext(threading.local): runner: "Runner" task: Task GLOBAL_RUN_CONTEXT = RunContext() @attr.s(frozen=True) class _RunStatistics: tasks_living = attr.ib() tasks_runnable = attr.ib() seconds_to_next_deadline = attr.ib() io_statistics = attr.ib() run_sync_soon_queue_size = attr.ib() # This holds all the state that gets trampolined back and forth between # callbacks when we're running in guest mode. # # It has to be a separate object from Runner, and Runner *cannot* hold # references to it (directly or indirectly)! # # The idea is that we want a chance to detect if our host loop quits and stops # driving us forward. We detect that by unrolled_run_gen being garbage # collected, and hitting its 'except GeneratorExit:' block. So this only # happens if unrolled_run_gen is GCed. # # The Runner state is referenced from the global GLOBAL_RUN_CONTEXT. The only # way it gets *un*referenced is by unrolled_run_gen completing, e.g. by being # GCed. But if Runner has a direct or indirect reference to it, and the host # loop has abandoned it, then this will never happen! # # So this object can reference Runner, but Runner can't reference it. The only # references to it are the "in flight" callback chain on the host loop / # worker thread. 
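
# A rough sketch of the host-loop side of the guest-mode arrangement
# described above, using the public trio.lowlevel.start_guest_run() entry
# point with an asyncio host; the callback wiring mirrors what GuestState
# (below) expects. The function names here are illustrative only.
def _demo_guest_mode():
    import asyncio
    import trio

    async def trio_main():
        await trio.sleep(1)
        return "trio done"

    async def asyncio_main():
        loop = asyncio.get_running_loop()
        done_fut = loop.create_future()

        trio.lowlevel.start_guest_run(
            trio_main,
            # Trio schedules its next tick through the host loop:
            run_sync_soon_threadsafe=loop.call_soon_threadsafe,
            run_sync_soon_not_threadsafe=loop.call_soon,
            # Called exactly once, with the run's final outcome.Outcome:
            done_callback=done_fut.set_result,
        )
        return (await done_fut).unwrap()

    assert asyncio.run(asyncio_main()) == "trio done"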
@attr.s(eq=False, hash=False, slots=True) class GuestState: runner = attr.ib() run_sync_soon_threadsafe = attr.ib() run_sync_soon_not_threadsafe = attr.ib() done_callback = attr.ib() unrolled_run_gen = attr.ib() _value_factory: Callable[[], Value] = lambda: Value(None) unrolled_run_next_send = attr.ib(factory=_value_factory, type=Outcome) def guest_tick(self): try: timeout = self.unrolled_run_next_send.send(self.unrolled_run_gen) except StopIteration: self.done_callback(self.runner.main_task_outcome) return except TrioInternalError as exc: self.done_callback(Error(exc)) return # Optimization: try to skip going into the thread if we can avoid it events_outcome = capture(self.runner.io_manager.get_events, 0) if timeout <= 0 or isinstance(events_outcome, Error) or events_outcome.value: # No need to go into the thread self.unrolled_run_next_send = events_outcome self.runner.guest_tick_scheduled = True self.run_sync_soon_not_threadsafe(self.guest_tick) else: # Need to go into the thread and call get_events() there self.runner.guest_tick_scheduled = False def get_events(): return self.runner.io_manager.get_events(timeout) def deliver(events_outcome): def in_main_thread(): self.unrolled_run_next_send = events_outcome self.runner.guest_tick_scheduled = True self.guest_tick() self.run_sync_soon_threadsafe(in_main_thread) start_thread_soon(get_events, deliver) @attr.s(eq=False, hash=False, slots=True) class Runner: clock = attr.ib() instruments: Instruments = attr.ib() io_manager = attr.ib() ki_manager = attr.ib() # Run-local values, see _local.py _locals = attr.ib(factory=dict) runq = attr.ib(factory=deque) tasks = attr.ib(factory=set) deadlines = attr.ib(factory=Deadlines) init_task = attr.ib(default=None) system_nursery = attr.ib(default=None) system_context = attr.ib(default=None) main_task = attr.ib(default=None) main_task_outcome = attr.ib(default=None) entry_queue = attr.ib(factory=EntryQueue) trio_token = attr.ib(default=None) asyncgens = attr.ib(factory=AsyncGenerators) # If everything goes idle for this long, we call clock._autojump() clock_autojump_threshold = attr.ib(default=inf) # Guest mode stuff is_guest = attr.ib(default=False) guest_tick_scheduled = attr.ib(default=False) def force_guest_tick_asap(self): if self.guest_tick_scheduled: return self.guest_tick_scheduled = True self.io_manager.force_wakeup() def close(self): self.io_manager.close() self.entry_queue.close() self.asyncgens.close() if "after_run" in self.instruments: self.instruments.call("after_run") # This is where KI protection gets disabled, so we do it last self.ki_manager.close() @_public def current_statistics(self): """Returns an object containing run-loop-level debugging information. Currently the following fields are defined: * ``tasks_living`` (int): The number of tasks that have been spawned and not yet exited. * ``tasks_runnable`` (int): The number of tasks that are currently queued on the run queue (as opposed to blocked waiting for something to happen). * ``seconds_to_next_deadline`` (float): The time until the next pending cancel scope deadline. May be negative if the deadline has expired but we haven't yet processed cancellations. May be :data:`~math.inf` if there are no pending deadlines. * ``run_sync_soon_queue_size`` (int): The number of unprocessed callbacks queued via :meth:`trio.lowlevel.TrioToken.run_sync_soon`. * ``io_statistics`` (object): Some statistics from Trio's I/O backend. 
This always has an attribute ``backend`` which is a string naming which operating-system-specific I/O backend is in use; the other attributes vary between backends. """ seconds_to_next_deadline = self.deadlines.next_deadline() - self.current_time() return _RunStatistics( tasks_living=len(self.tasks), tasks_runnable=len(self.runq), seconds_to_next_deadline=seconds_to_next_deadline, io_statistics=self.io_manager.statistics(), run_sync_soon_queue_size=self.entry_queue.size(), ) @_public def current_time(self): """Returns the current time according to Trio's internal clock. Returns: float: The current time. Raises: RuntimeError: if not inside a call to :func:`trio.run`. """ return self.clock.current_time() @_public def current_clock(self): """Returns the current :class:`~trio.abc.Clock`.""" return self.clock @_public def current_root_task(self): """Returns the current root :class:`Task`. This is the task that is the ultimate parent of all other tasks. """ return self.init_task ################ # Core task handling primitives ################ @_public def reschedule(self, task, next_send=_NO_SEND): """Reschedule the given task with the given :class:`outcome.Outcome`. See :func:`wait_task_rescheduled` for the gory details. There must be exactly one call to :func:`reschedule` for every call to :func:`wait_task_rescheduled`. (And when counting, keep in mind that returning :data:`Abort.SUCCEEDED` from an abort callback is equivalent to calling :func:`reschedule` once.) Args: task (trio.lowlevel.Task): the task to be rescheduled. Must be blocked in a call to :func:`wait_task_rescheduled`. next_send (outcome.Outcome): the value (or error) to return (or raise) from :func:`wait_task_rescheduled`. """ if next_send is _NO_SEND: next_send = Value(None) assert task._runner is self assert task._next_send_fn is None task._next_send_fn = task.coro.send task._next_send = next_send task._abort_func = None task.custom_sleep_data = None if not self.runq and self.is_guest: self.force_guest_tick_asap() self.runq.append(task) if "task_scheduled" in self.instruments: self.instruments.call("task_scheduled", task) def spawn_impl(self, async_fn, args, nursery, name, *, system_task=False): ###### # Make sure the nursery is in working order ###### # This sorta feels like it should be a method on nursery, except it # has to handle nursery=None for init. And it touches the internals of # all kinds of objects. if nursery is not None and nursery._closed: raise RuntimeError("Nursery is closed to new arrivals") if nursery is None: assert self.init_task is None ###### # Call the function and get the coroutine object, while giving helpful # errors for common mistakes. 
###### coro = coroutine_or_error(async_fn, *args) if name is None: name = async_fn if isinstance(name, functools.partial): name = name.func if not isinstance(name, str): try: name = "{}.{}".format(name.__module__, name.__qualname__) except AttributeError: name = repr(name) if system_task: context = self.system_context.copy() else: context = copy_context() if not hasattr(coro, "cr_frame"): # This async function is implemented in C or Cython async def python_wrapper(orig_coro): return await orig_coro coro = python_wrapper(coro) coro.cr_frame.f_locals.setdefault(LOCALS_KEY_KI_PROTECTION_ENABLED, system_task) ###### # Set up the Task object ###### task = Task._create( coro=coro, parent_nursery=nursery, runner=self, name=name, context=context ) self.tasks.add(task) if nursery is not None: nursery._children.add(task) task._activate_cancel_status(nursery._cancel_status) if "task_spawned" in self.instruments: self.instruments.call("task_spawned", task) # Special case: normally next_send should be an Outcome, but for the # very first send we have to send a literal unboxed None. self.reschedule(task, None) return task def task_exited(self, task, outcome): if ( task._cancel_status is not None and task._cancel_status.abandoned_by_misnesting and task._cancel_status.parent is None ): # The cancel scope surrounding this task's nursery was closed # before the task exited. Force the task to exit with an error, # since the error might not have been caught elsewhere. See the # comments in CancelStatus.close(). try: # Raise this, rather than just constructing it, to get a # traceback frame included raise RuntimeError( "Cancel scope stack corrupted: cancel scope surrounding " "{!r} was closed before the task exited\n{}".format( task, MISNESTING_ADVICE ) ) except RuntimeError as new_exc: if isinstance(outcome, Error): new_exc.__context__ = outcome.error outcome = Error(new_exc) task._activate_cancel_status(None) self.tasks.remove(task) if task is self.init_task: # If the init task crashed, then something is very wrong and we # let the error propagate. (It'll eventually be wrapped in a # TrioInternalError.) outcome.unwrap() # the init task should be the last task to exit. If not, then # something is very wrong. if self.tasks: # pragma: no cover raise TrioInternalError else: if task is self.main_task: self.main_task_outcome = outcome outcome = Value(None) task._parent_nursery._child_finished(task, outcome) if "task_exited" in self.instruments: self.instruments.call("task_exited", task) ################ # System tasks and init ################ @_public def spawn_system_task(self, async_fn, *args, name=None): """Spawn a "system" task. System tasks have a few differences from regular tasks: * They don't need an explicit nursery; instead they go into the internal "system nursery". * If a system task raises an exception, then it's converted into a :exc:`~trio.TrioInternalError` and *all* tasks are cancelled. If you write a system task, you should be careful to make sure it doesn't crash. * System tasks are automatically cancelled when the main task exits. * By default, system tasks have :exc:`KeyboardInterrupt` protection *enabled*. If you want your task to be interruptible by control-C, then you need to use :func:`disable_ki_protection` explicitly (and come up with some plan for what to do with a :exc:`KeyboardInterrupt`, given that system tasks aren't allowed to raise exceptions). * System tasks do not inherit context variables from their creator. 
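As a minimal illustration (not from Trio's own docs; ``my_monitor`` is a hypothetical async function), a library might start a background task like this from inside a run::

    async def my_monitor():
        while True:
            await trio.sleep(60)
            ...  # periodically check on things

    trio.lowlevel.spawn_system_task(my_monitor, name="my-monitor")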
Towards the end of a call to :meth:`trio.run`, after the main task and all system tasks have exited, the system nursery becomes closed. At this point, new calls to :func:`spawn_system_task` will raise ``RuntimeError("Nursery is closed to new arrivals")`` instead of creating a system task. It's possible to encounter this state either in a ``finally`` block in an async generator, or in a callback passed to :meth:`TrioToken.run_sync_soon` at the right moment. Args: async_fn: An async callable. args: Positional arguments for ``async_fn``. If you want to pass keyword arguments, use :func:`functools.partial`. name: The name for this task. Only used for debugging/introspection (e.g. ``repr(task_obj)``). If this isn't a string, :func:`spawn_system_task` will try to make it one. A common use case is if you're wrapping a function before spawning a new task, you might pass the original function as the ``name=`` to make debugging easier. Returns: Task: the newly spawned task """ return self.spawn_impl( async_fn, args, self.system_nursery, name, system_task=True ) async def init(self, async_fn, args): # run_sync_soon task runs here: async with open_nursery() as run_sync_soon_nursery: # All other system tasks run here: async with open_nursery() as self.system_nursery: # Only the main task runs here: async with open_nursery() as main_task_nursery: try: self.main_task = self.spawn_impl( async_fn, args, main_task_nursery, None ) except BaseException as exc: self.main_task_outcome = Error(exc) return self.spawn_impl( self.entry_queue.task, (), run_sync_soon_nursery, "", system_task=True, ) # Main task is done; start shutting down system tasks self.system_nursery.cancel_scope.cancel() # System nursery is closed; finalize remaining async generators await self.asyncgens.finalize_remaining(self) # There are no more asyncgens, which means no more user-provided # code except possibly run_sync_soon callbacks. It's finally safe # to stop the run_sync_soon task and exit run(). run_sync_soon_nursery.cancel_scope.cancel() ################ # Outside context problems ################ @_public def current_trio_token(self): """Retrieve the :class:`TrioToken` for the current call to :func:`trio.run`. """ if self.trio_token is None: self.trio_token = TrioToken._create(self.entry_queue) return self.trio_token ################ # KI handling ################ ki_pending = attr.ib(default=False) # deliver_ki is broke. Maybe move all the actual logic and state into # RunToken, and we'll only have one instance per runner? But then we can't # have a public constructor. Eh, but current_run_token() returning a # unique object per run feels pretty nice. Maybe let's just go for it. And # keep the class public so people can isinstance() it if they want. # This gets called from signal context def deliver_ki(self): self.ki_pending = True try: self.entry_queue.run_sync_soon(self._deliver_ki_cb) except RunFinishedError: pass def _deliver_ki_cb(self): if not self.ki_pending: return # Can't happen because main_task and run_sync_soon_task are created at # the same time -- so even if KI arrives before main_task is created, # we won't get here until afterwards. assert self.main_task is not None if self.main_task_outcome is not None: # We're already in the process of exiting -- leave ki_pending set # and we'll check it again on our way out of run(). 
return self.main_task._attempt_delivery_of_pending_ki() ################ # Quiescing ################ waiting_for_idle = attr.ib(factory=SortedDict) @_public async def wait_all_tasks_blocked(self, cushion=0.0): """Block until there are no runnable tasks. This is useful in testing code when you want to give other tasks a chance to "settle down". The calling task is blocked, and doesn't wake up until all other tasks are also blocked for at least ``cushion`` seconds. (Setting a non-zero ``cushion`` is intended to handle cases like two tasks talking to each other over a local socket, where we want to ignore the potential brief moment between a send and receive when all tasks are blocked.) Note that ``cushion`` is measured in *real* time, not the Trio clock time. If there are multiple tasks blocked in :func:`wait_all_tasks_blocked`, then the one with the shortest ``cushion`` is the one woken (and this task becoming unblocked resets the timers for the remaining tasks). If there are multiple tasks that have exactly the same ``cushion``, then all are woken. You should also consider :class:`trio.testing.Sequencer`, which provides a more explicit way to control execution ordering within a test, and will often produce more readable tests. Example: Here's an example of one way to test that Trio's locks are fair: we take the lock in the parent, start a child, wait for the child to be blocked waiting for the lock (!), and then check that we can't release and immediately re-acquire the lock:: async def lock_taker(lock): await lock.acquire() lock.release() async def test_lock_fairness(): lock = trio.Lock() await lock.acquire() async with trio.open_nursery() as nursery: nursery.start_soon(lock_taker, lock) # child hasn't run yet, we have the lock assert lock.locked() assert lock._owner is trio.lowlevel.current_task() await trio.testing.wait_all_tasks_blocked() # now the child has run and is blocked on lock.acquire(), we # still have the lock assert lock.locked() assert lock._owner is trio.lowlevel.current_task() lock.release() try: # The child has a prior claim, so we can't have it lock.acquire_nowait() except trio.WouldBlock: assert lock._owner is not trio.lowlevel.current_task() print("PASS") else: print("FAIL") """ task = current_task() key = (cushion, id(task)) self.waiting_for_idle[key] = task def abort(_): del self.waiting_for_idle[key] return Abort.SUCCEEDED await wait_task_rescheduled(abort) ################################################################ # run ################################################################ # # Trio's core task scheduler and coroutine runner is in 'unrolled_run'. It's # called that because it has an unusual feature: it's actually a generator. # Whenever it needs to fetch IO events from the OS, it yields, and waits for # its caller to send the IO events back in. So the loop is "unrolled" into a # sequence of generator send() calls. # # The reason for this unusual design is to support two different modes of # operation, where the IO is handled differently. 
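# (A toy sketch of the same "unrolled generator" idea -- not Trio's actual
# protocol, just the Python mechanics it relies on. The loop yields how long
# it's willing to wait, and the driver sends back whatever events occurred:
#
#     def toy_loop():
#         while True:
#             events = yield 1.0              # "wake me within 1 second"
#             print("woke up with:", events)
#
#     gen = toy_loop()
#     timeout = gen.send(None)                # prime the generator
#     timeout = gen.send(["fake event"])      # deliver events, get next timeout
#
# unrolled_run below follows this shape, with io_manager.get_events() acting
# as the source of events.)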
# # In normal mode using trio.run, the scheduler and IO run in the same thread: # # Main thread: # # +---------------------------+ # | Run tasks | # | (unrolled_run) | # +---------------------------+ # | Block waiting for I/O | # | (io_manager.get_events) | # +---------------------------+ # | Run tasks | # | (unrolled_run) | # +---------------------------+ # | Block waiting for I/O | # | (io_manager.get_events) | # +---------------------------+ # : # # # In guest mode using trio.lowlevel.start_guest_run, the scheduler runs on the # main thread as a host loop callback, but blocking for IO gets pushed into a # worker thread: # # Main thread executing host loop: Trio I/O thread: # # +---------------------------+ # | Run Trio tasks | # | (unrolled_run) | # +---------------------------+ --------------+ # v # +---------------------------+ +----------------------------+ # | Host loop does whatever | | Block waiting for Trio I/O | # | it wants | | (io_manager.get_events) | # +---------------------------+ +----------------------------+ # | # +---------------------------+ <-------------+ # | Run Trio tasks | # | (unrolled_run) | # +---------------------------+ --------------+ # v # +---------------------------+ +----------------------------+ # | Host loop does whatever | | Block waiting for Trio I/O | # | it wants | | (io_manager.get_events) | # +---------------------------+ +----------------------------+ # : : # # Most of Trio's internals don't need to care about this difference. The main # complication it creates is that in guest mode, we might need to wake up not # just due to OS-reported IO events, but also because of code running on the # host loop calling reschedule() or changing task deadlines. Search for # 'is_guest' to see the special cases we need to handle this. def setup_runner(clock, instruments, restrict_keyboard_interrupt_to_checkpoints): """Create a Runner object and install it as the GLOBAL_RUN_CONTEXT.""" # It wouldn't be *hard* to support nested calls to run(), but I can't # think of a single good reason for it, so let's be conservative for # now: if hasattr(GLOBAL_RUN_CONTEXT, "runner"): raise RuntimeError("Attempted to call run() from inside a run()") if clock is None: clock = SystemClock() instruments = Instruments(instruments) io_manager = TheIOManager() system_context = copy_context() system_context.run(current_async_library_cvar.set, "trio") ki_manager = KIManager() runner = Runner( clock=clock, instruments=instruments, io_manager=io_manager, system_context=system_context, ki_manager=ki_manager, ) runner.asyncgens.install_hooks(runner) # This is where KI protection gets enabled, so we want to do it early - in # particular before we start modifying global state like GLOBAL_RUN_CONTEXT ki_manager.install(runner.deliver_ki, restrict_keyboard_interrupt_to_checkpoints) GLOBAL_RUN_CONTEXT.runner = runner return runner def run( async_fn, *args, clock=None, instruments=(), restrict_keyboard_interrupt_to_checkpoints=False, ): """Run a Trio-flavored async function, and return the result. Calling:: run(async_fn, *args) is the equivalent of:: await async_fn(*args) except that :func:`run` can (and must) be called from a synchronous context. This is Trio's main entry point. Almost every other function in Trio requires that you be inside a call to :func:`run`. Args: async_fn: An async function. args: Positional arguments to be passed to *async_fn*. If you need to pass keyword arguments, then use :func:`functools.partial`. 
clock: ``None`` to use the default system-specific monotonic clock; otherwise, an object implementing the :class:`trio.abc.Clock` interface, like (for example) a :class:`trio.testing.MockClock` instance. instruments (list of :class:`trio.abc.Instrument` objects): Any instrumentation you want to apply to this run. This can also be modified during the run; see :ref:`instrumentation`. restrict_keyboard_interrupt_to_checkpoints (bool): What happens if the user hits control-C while :func:`run` is running? If this argument is False (the default), then you get the standard Python behavior: a :exc:`KeyboardInterrupt` exception will immediately interrupt whatever task is running (or if no task is running, then Trio will wake up a task to be interrupted). Alternatively, if you set this argument to True, then :exc:`KeyboardInterrupt` delivery will be delayed: it will only be raised at :ref:`checkpoints `, like a :exc:`Cancelled` exception. The default behavior is nice because it means that even if you accidentally write an infinite loop that never executes any checkpoints, then you can still break out of it using control-C. The alternative behavior is nice if you're paranoid about a :exc:`KeyboardInterrupt` at just the wrong place leaving your program in an inconsistent state, because it means that you only have to worry about :exc:`KeyboardInterrupt` at the exact same places where you already have to worry about :exc:`Cancelled`. This setting has no effect if your program has registered a custom SIGINT handler, or if :func:`run` is called from anywhere but the main thread (this is a Python limitation), or if you use :func:`open_signal_receiver` to catch SIGINT. Returns: Whatever ``async_fn`` returns. Raises: TrioInternalError: if an unexpected error is encountered inside Trio's internal machinery. This is a bug and you should `let us know `__. Anything else: if ``async_fn`` raises an exception, then :func:`run` propagates it. """ __tracebackhide__ = True runner = setup_runner( clock, instruments, restrict_keyboard_interrupt_to_checkpoints ) gen = unrolled_run(runner, async_fn, args) next_send = None while True: try: timeout = gen.send(next_send) except StopIteration: break next_send = runner.io_manager.get_events(timeout) # Inlined copy of runner.main_task_outcome.unwrap() to avoid # cluttering every single Trio traceback with an extra frame. if isinstance(runner.main_task_outcome, Value): return runner.main_task_outcome.value else: raise runner.main_task_outcome.error def start_guest_run( async_fn, *args, run_sync_soon_threadsafe, done_callback, run_sync_soon_not_threadsafe=None, host_uses_signal_set_wakeup_fd=False, clock=None, instruments=(), restrict_keyboard_interrupt_to_checkpoints=False, ): """Start a "guest" run of Trio on top of some other "host" event loop. Each host loop can only have one guest run at a time. You should always let the Trio run finish before stopping the host loop; if not, it may leave Trio's internal data structures in an inconsistent state. You might be able to get away with it if you immediately exit the program, but it's safest not to go there in the first place. Generally, the best way to do this is to wrap this in a function that starts the host loop and then immediately starts the guest run, and then shuts down the host when the guest run completes. Args: run_sync_soon_threadsafe: An arbitrary callable, which will be passed a function as its sole argument:: def my_run_sync_soon_threadsafe(fn): ...
This callable should schedule ``fn()`` to be run by the host on its next pass through its loop. **Must support being called from arbitrary threads.** done_callback: An arbitrary callable:: def my_done_callback(run_outcome): ... When the Trio run has finished, Trio will invoke this callback to let you know. The argument is an `outcome.Outcome`, reporting what would have been returned or raised by `trio.run`. This function can do anything you want, but commonly you'll want it to shut down the host loop, unwrap the outcome, etc. run_sync_soon_not_threadsafe: Like ``run_sync_soon_threadsafe``, but will only be called from inside the host loop's main thread. Optional, but if your host loop allows you to implement this more efficiently than ``run_sync_soon_threadsafe`` then passing it will make things a bit faster. host_uses_signal_set_wakeup_fd (bool): Pass `True` if your host loop uses `signal.set_wakeup_fd`, and `False` otherwise. For more details, see :ref:`guest-run-implementation`. For the meaning of other arguments, see `trio.run`. """ runner = setup_runner( clock, instruments, restrict_keyboard_interrupt_to_checkpoints ) runner.is_guest = True runner.guest_tick_scheduled = True if run_sync_soon_not_threadsafe is None: run_sync_soon_not_threadsafe = run_sync_soon_threadsafe guest_state = GuestState( runner=runner, run_sync_soon_threadsafe=run_sync_soon_threadsafe, run_sync_soon_not_threadsafe=run_sync_soon_not_threadsafe, done_callback=done_callback, unrolled_run_gen=unrolled_run( runner, async_fn, args, host_uses_signal_set_wakeup_fd=host_uses_signal_set_wakeup_fd, ), ) run_sync_soon_not_threadsafe(guest_state.guest_tick) # 24 hours is arbitrary, but it avoids issues like people setting timeouts of # 10**20 and then getting integer overflows in the underlying system calls. _MAX_TIMEOUT = 24 * 60 * 60 # Weird quirk: this is written as a generator in order to support "guest # mode", where our core event loop gets unrolled into a series of callbacks on # the host loop. If you're doing a regular trio.run then this gets run # straight through. def unrolled_run(runner, async_fn, args, host_uses_signal_set_wakeup_fd=False): locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True __tracebackhide__ = True try: if not host_uses_signal_set_wakeup_fd: runner.entry_queue.wakeup.wakeup_on_signals() if "before_run" in runner.instruments: runner.instruments.call("before_run") runner.clock.start_clock() runner.init_task = runner.spawn_impl( runner.init, (async_fn, args), None, "", system_task=True ) # You know how people talk about "event loops"? This 'while' loop right # here is our event loop: while runner.tasks: if runner.runq: timeout = 0 else: deadline = runner.deadlines.next_deadline() timeout = runner.clock.deadline_to_sleep_time(deadline) timeout = min(max(0, timeout), _MAX_TIMEOUT) idle_primed = None if runner.waiting_for_idle: cushion, _ = runner.waiting_for_idle.keys()[0] if cushion < timeout: timeout = cushion idle_primed = IdlePrimedTypes.WAITING_FOR_IDLE # We use 'elif' here because if there are tasks in # wait_all_tasks_blocked, then those tasks will wake up without # jumping the clock, so we don't need to autojump. 
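# (Illustrative aside, using public API rather than run-loop internals: the
# autojump threshold is what makes the following test-style program finish
# immediately in wall-clock time --
#
#     import trio
#     import trio.testing
#
#     async def main():
#         await trio.sleep(3600)   # the clock jumps instead of really sleeping
#
#     trio.run(main, clock=trio.testing.MockClock(autojump_threshold=0))
#
# -- because whenever everything is idle for autojump_threshold seconds, the
# clock jumps straight to the next deadline.)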
elif runner.clock_autojump_threshold < timeout: timeout = runner.clock_autojump_threshold idle_primed = IdlePrimedTypes.AUTOJUMP_CLOCK if "before_io_wait" in runner.instruments: runner.instruments.call("before_io_wait", timeout) # Driver will call io_manager.get_events(timeout) and pass it back # in through the yield events = yield timeout runner.io_manager.process_events(events) if "after_io_wait" in runner.instruments: runner.instruments.call("after_io_wait", timeout) # Process cancellations due to deadline expiry now = runner.clock.current_time() if runner.deadlines.expire(now): idle_primed = None # idle_primed != None means: if the IO wait hit the timeout, and # still nothing is happening, then we should start waking up # wait_all_tasks_blocked tasks or autojump the clock. But there # are some subtleties in defining "nothing is happening". # # 'not runner.runq' means that no tasks are currently runnable. # 'not events' means that the last IO wait call hit its full # timeout. These are very similar, and if idle_primed != None and # we're running in regular mode then they always go together. But, # in *guest* mode, they can happen independently, even when # idle_primed=True: # # - runner.runq=empty and events=True: the host loop adjusted a # deadline and that forced an IO wakeup before the timeout expired, # even though no actual tasks were scheduled. # # - runner.runq=nonempty and events=False: the IO wait hit its # timeout, but then some code in the host thread rescheduled a task # before we got here. # # So we need to check both. if idle_primed is not None and not runner.runq and not events: if idle_primed is IdlePrimedTypes.WAITING_FOR_IDLE: while runner.waiting_for_idle: key, task = runner.waiting_for_idle.peekitem(0) if key[0] == cushion: del runner.waiting_for_idle[key] runner.reschedule(task) else: break else: assert idle_primed is IdlePrimedTypes.AUTOJUMP_CLOCK runner.clock._autojump() # Process all runnable tasks, but only the ones that are already # runnable now. Anything that becomes runnable during this cycle # needs to wait until the next pass. This avoids various # starvation issues by ensuring that there's never an unbounded # delay between successive checks for I/O. # # Also, we randomize the order of each batch to avoid assumptions # about scheduling order sneaking in. In the long run, I suspect # we'll either (a) use strict FIFO ordering and document that for # predictability/determinism, or (b) implement a more # sophisticated scheduler (e.g. some variant of fair queueing), # for better behavior under load. For now, this is the worst of # both worlds - but it keeps our options open. (If we do decide to # go all in on deterministic scheduling, then there are other # things that will probably need to change too, like the deadlines # tie-breaker and the non-deterministic ordering of # task._notify_queues.) batch = list(runner.runq) runner.runq.clear() if _ALLOW_DETERMINISTIC_SCHEDULING: # We're running under Hypothesis, and pytest-trio has patched # this in to make the scheduler deterministic and avoid flaky # tests. It's not worth the (small) performance cost in normal # operation, since we'll shuffle the list and _r is only # seeded for tests. batch.sort(key=lambda t: t._counter) _r.shuffle(batch) else: # 50% chance of reversing the batch, this way each task # can appear before/after any other task. 
if _r.random() < 0.5: batch.reverse() while batch: task = batch.pop() GLOBAL_RUN_CONTEXT.task = task if "before_task_step" in runner.instruments: runner.instruments.call("before_task_step", task) next_send_fn = task._next_send_fn next_send = task._next_send task._next_send_fn = task._next_send = None final_outcome = None try: # We used to unwrap the Outcome object here and send/throw # its contents in directly, but it turns out that .throw() # is buggy, at least on CPython 3.6: # https://bugs.python.org/issue29587 # https://bugs.python.org/issue29590 # So now we send in the Outcome object and unwrap it on the # other side. msg = task.context.run(next_send_fn, next_send) except StopIteration as stop_iteration: final_outcome = Value(stop_iteration.value) except BaseException as task_exc: # Store for later, removing uninteresting top frames: 1 # frame we always remove, because it's this function # catching it, and then in addition we remove however many # more Context.run adds. tb = task_exc.__traceback__.tb_next for _ in range(CONTEXT_RUN_TB_FRAMES): tb = tb.tb_next final_outcome = Error(task_exc.with_traceback(tb)) # Remove local refs so that e.g. cancelled coroutine locals # are not kept alive by this frame until another exception # comes along. del tb if final_outcome is not None: # We can't call this directly inside the except: blocks # above, because then the exceptions end up attaching # themselves to other exceptions as __context__ in # unwanted ways. runner.task_exited(task, final_outcome) # final_outcome may contain a traceback ref. It's not as # crucial compared to the above, but this will allow more # prompt release of resources in coroutine locals. final_outcome = None else: task._schedule_points += 1 if msg is CancelShieldedCheckpoint: runner.reschedule(task) elif type(msg) is WaitTaskRescheduled: task._cancel_points += 1 task._abort_func = msg.abort_func # KI is "outside" all cancel scopes, so check for it # before checking for regular cancellation: if runner.ki_pending and task is runner.main_task: task._attempt_delivery_of_pending_ki() task._attempt_delivery_of_any_pending_cancel() elif type(msg) is PermanentlyDetachCoroutineObject: # Pretend the task just exited with the given outcome runner.task_exited(task, msg.final_outcome) else: exc = TypeError( "trio.run received unrecognized yield message {!r}. " "Are you trying to use a library written for some " "other framework like asyncio? That won't work " "without some kind of compatibility shim.".format(msg) ) # The foreign library probably doesn't adhere to our # protocol of unwrapping whatever outcome gets sent in. # Instead, we'll arrange to throw `exc` in directly, # which works for at least asyncio and curio. runner.reschedule(task, exc) task._next_send_fn = task.coro.throw # prevent long-lived reference # TODO: develop test for this deletion del msg if "after_task_step" in runner.instruments: runner.instruments.call("after_task_step", task) del GLOBAL_RUN_CONTEXT.task # prevent long-lived references # TODO: develop test for these deletions del task, next_send, next_send_fn except GeneratorExit: # The run-loop generator has been garbage collected without finishing warnings.warn( RuntimeWarning( "Trio guest run got abandoned without properly finishing... 
" "weird stuff might happen" ) ) except TrioInternalError: raise except BaseException as exc: raise TrioInternalError("internal error in Trio - please file a bug!") from exc finally: GLOBAL_RUN_CONTEXT.__dict__.clear() runner.close() # Have to do this after runner.close() has disabled KI protection, # because otherwise there's a race where ki_pending could get set # after we check it. if runner.ki_pending: ki = KeyboardInterrupt() if isinstance(runner.main_task_outcome, Error): ki.__context__ = runner.main_task_outcome.error runner.main_task_outcome = Error(ki) ################################################################ # Other public API functions ################################################################ class _TaskStatusIgnored: def __repr__(self): return "TASK_STATUS_IGNORED" def started(self, value=None): pass TASK_STATUS_IGNORED = _TaskStatusIgnored() def current_task(): """Return the :class:`Task` object representing the current task. Returns: Task: the :class:`Task` that called :func:`current_task`. """ try: return GLOBAL_RUN_CONTEXT.task except AttributeError: raise RuntimeError("must be called from async context") from None def current_effective_deadline(): """Returns the current effective deadline for the current task. This function examines all the cancellation scopes that are currently in effect (taking into account shielding), and returns the deadline that will expire first. One example of where this might be is useful is if your code is trying to decide whether to begin an expensive operation like an RPC call, but wants to skip it if it knows that it can't possibly complete in the available time. Another example would be if you're using a protocol like gRPC that `propagates timeout information to the remote peer `__; this function gives a way to fetch that information so you can send it along. If this is called in a context where a cancellation is currently active (i.e., a blocking call will immediately raise :exc:`Cancelled`), then returned deadline is ``-inf``. If it is called in a context where no scopes have a deadline set, it returns ``inf``. Returns: float: the effective deadline, as an absolute time. """ return current_task()._cancel_status.effective_deadline() async def checkpoint(): """A pure :ref:`checkpoint `. This checks for cancellation and allows other tasks to be scheduled, without otherwise blocking. Note that the scheduler has the option of ignoring this and continuing to run the current task if it decides this is appropriate (e.g. for increased efficiency). Equivalent to ``await trio.sleep(0)`` (which is implemented by calling :func:`checkpoint`.) """ # The scheduler is what checks timeouts and converts them into # cancellations. So by doing the schedule point first, we ensure that the # cancel point has the most up-to-date info. await cancel_shielded_checkpoint() task = current_task() task._cancel_points += 1 if task._cancel_status.effectively_cancelled or ( task is task._runner.main_task and task._runner.ki_pending ): with CancelScope(deadline=-inf): await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED) async def checkpoint_if_cancelled(): """Issue a :ref:`checkpoint ` if the calling context has been cancelled. Equivalent to (but potentially more efficient than):: if trio.current_deadline() == -inf: await trio.lowlevel.checkpoint() This is either a no-op, or else it allow other tasks to be scheduled and then raises :exc:`trio.Cancelled`. Typically used together with :func:`cancel_shielded_checkpoint`. 
""" task = current_task() if task._cancel_status.effectively_cancelled or ( task is task._runner.main_task and task._runner.ki_pending ): await _core.checkpoint() assert False # pragma: no cover task._cancel_points += 1 if sys.platform == "win32": from ._io_windows import WindowsIOManager as TheIOManager from ._generated_io_windows import * elif sys.platform == "linux" or (not TYPE_CHECKING and hasattr(select, "epoll")): from ._io_epoll import EpollIOManager as TheIOManager from ._generated_io_epoll import * elif TYPE_CHECKING or hasattr(select, "kqueue"): from ._io_kqueue import KqueueIOManager as TheIOManager from ._generated_io_kqueue import * else: # pragma: no cover raise NotImplementedError("unsupported platform") from ._generated_run import * from ._generated_instrumentation import * ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_thread_cache.py0000644000076500000000000001522700000000000017276 0ustar00quentinwheelfrom threading import Thread, Lock import outcome from itertools import count # The "thread cache" is a simple unbounded thread pool, i.e., it automatically # spawns as many threads as needed to handle all the requests its given. Its # only purpose is to cache worker threads so that they don't have to be # started from scratch every time we want to delegate some work to a thread. # It's expected that some higher-level code will track how many threads are in # use to avoid overwhelming the system (e.g. the limiter= argument to # trio.to_thread.run_sync). # # To maximize sharing, there's only one thread cache per process, even if you # have multiple calls to trio.run. # # Guarantees: # # It's safe to call start_thread_soon simultaneously from # multiple threads. # # Idle threads are chosen in LIFO order, i.e. we *don't* spread work evenly # over all threads. Instead we try to let some threads do most of the work # while others sit idle as much as possible. Compared to FIFO, this has better # memory cache behavior, and it makes it easier to detect when we have too # many threads, so idle ones can exit. # # This code assumes that 'dict' has the following properties: # # - __setitem__, __delitem__, and popitem are all thread-safe and atomic with # respect to each other. This is guaranteed by the GIL. # # - popitem returns the most-recently-added item (i.e., __setitem__ + popitem # give you a LIFO queue). This relies on dicts being insertion-ordered, like # they are in py36+. # How long a thread will idle waiting for new work before gives up and exits. # This value is pretty arbitrary; I don't think it matters too much. IDLE_TIMEOUT = 10 # seconds name_counter = count() class WorkerThread: def __init__(self, thread_cache): self._job = None self._thread_cache = thread_cache # This Lock is used in an unconventional way. # # "Unlocked" means we have a pending job that's been assigned to us; # "locked" means that we don't. # # Initially we have no job, so it starts out in locked state. self._worker_lock = Lock() self._worker_lock.acquire() thread = Thread(target=self._work, daemon=True) thread.name = f"Trio worker thread {next(name_counter)}" thread.start() def _handle_job(self): # Handle job in a separate method to ensure user-created # objects are cleaned up in a consistent manner. fn, deliver = self._job self._job = None result = outcome.capture(fn) # Tell the cache that we're available to be assigned a new # job. 
We do this *before* calling 'deliver', so that if # 'deliver' triggers a new job, it can be assigned to us # instead of spawning a new thread. self._thread_cache._idle_workers[self] = None deliver(result) def _work(self): while True: if self._worker_lock.acquire(timeout=IDLE_TIMEOUT): # We got a job self._handle_job() else: # Timeout acquiring lock, so we can probably exit. But, # there's a race condition: we might be assigned a job *just* # as we're about to exit. So we have to check. try: del self._thread_cache._idle_workers[self] except KeyError: # Someone else removed us from the idle worker queue, so # they must be in the process of assigning us a job - loop # around and wait for it. continue else: # We successfully removed ourselves from the idle # worker queue, so no more jobs are incoming; it's safe to # exit. return class ThreadCache: def __init__(self): self._idle_workers = {} def start_thread_soon(self, fn, deliver): try: worker, _ = self._idle_workers.popitem() except KeyError: worker = WorkerThread(self) worker._job = (fn, deliver) worker._worker_lock.release() THREAD_CACHE = ThreadCache() def start_thread_soon(fn, deliver): """Runs ``deliver(outcome.capture(fn))`` in a worker thread. Generally ``fn`` does some blocking work, and ``deliver`` delivers the result back to whoever is interested. This is a low-level, no-frills interface, very similar to using `threading.Thread` to spawn a thread directly. The main difference is that this function tries to re-use threads when possible, so it can be a bit faster than `threading.Thread`. Worker threads have the `~threading.Thread.daemon` flag set, which means that if your main thread exits, worker threads will automatically be killed. If you want to make sure that your ``fn`` runs to completion, then you should make sure that the main thread remains alive until ``deliver`` is called. It is safe to call this function simultaneously from multiple threads. Args: fn (sync function): Performs arbitrary blocking work. deliver (sync function): Takes the `outcome.Outcome` of ``fn``, and delivers it. *Must not block.* Because worker threads are cached and reused for multiple calls, neither function should mutate thread-level state, like `threading.local` objects – or if they do, they should be careful to revert their changes before returning. Note: The split between ``fn`` and ``deliver`` serves two purposes. First, it's convenient, since most callers need something like this anyway. Second, it avoids a small race condition that could cause too many threads to be spawned. Consider a program that wants to run several jobs sequentially on a thread, so the main thread submits a job, waits for it to finish, submits another job, etc. In theory, this program should only need one worker thread. But what could happen is: 1. Worker thread: First job finishes, and calls ``deliver``. 2. Main thread: receives notification that the job finished, and calls ``start_thread_soon``. 3. Main thread: sees that no worker threads are marked idle, so spawns a second worker thread. 4. Original worker thread: marks itself as idle. To avoid this, threads mark themselves as idle *before* calling ``deliver``. Is this potential extra thread a major problem? Maybe not, but it's easy enough to avoid, and we figure that if the user is trying to limit how many threads they're using then it's polite to respect that. 
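A minimal usage sketch (``blocking_work`` is made up for illustration; note that ``deliver`` runs in the worker thread)::

    import trio

    def blocking_work():
        return sum(range(10**6))    # stand-in for blocking I/O or computation

    def deliver(result):
        # result is an outcome.Outcome wrapping blocking_work's return
        # value or exception
        print("job finished:", result.unwrap())

    trio.lowlevel.start_thread_soon(blocking_work, deliver)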
""" THREAD_CACHE.start_thread_soon(fn, deliver) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_traps.py0000644000076500000000000002573400000000000016041 0ustar00quentinwheel# These are the only functions that ever yield back to the task runner. import types import enum import attr import outcome from . import _run # Helper for the bottommost 'yield'. You can't use 'yield' inside an async # function, but you can inside a generator, and if you decorate your generator # with @types.coroutine, then it's even awaitable. However, it's still not a # real async function: in particular, it isn't recognized by # inspect.iscoroutinefunction, and it doesn't trigger the unawaited coroutine # tracking machinery. Since our traps are public APIs, we make them real async # functions, and then this helper takes care of the actual yield: @types.coroutine def _async_yield(obj): return (yield obj) # This class object is used as a singleton. # Not exported in the trio._core namespace, but imported directly by _run. class CancelShieldedCheckpoint: pass async def cancel_shielded_checkpoint(): """Introduce a schedule point, but not a cancel point. This is *not* a :ref:`checkpoint `, but it is half of a checkpoint, and when combined with :func:`checkpoint_if_cancelled` it can make a full checkpoint. Equivalent to (but potentially more efficient than):: with trio.CancelScope(shield=True): await trio.lowlevel.checkpoint() """ return (await _async_yield(CancelShieldedCheckpoint)).unwrap() # Return values for abort functions class Abort(enum.Enum): """:class:`enum.Enum` used as the return value from abort functions. See :func:`wait_task_rescheduled` for details. .. data:: SUCCEEDED FAILED """ SUCCEEDED = 1 FAILED = 2 # Not exported in the trio._core namespace, but imported directly by _run. @attr.s(frozen=True) class WaitTaskRescheduled: abort_func = attr.ib() async def wait_task_rescheduled(abort_func): """Put the current task to sleep, with cancellation support. This is the lowest-level API for blocking in Trio. Every time a :class:`~trio.lowlevel.Task` blocks, it does so by calling this function (usually indirectly via some higher-level API). This is a tricky interface with no guard rails. If you can use :class:`ParkingLot` or the built-in I/O wait functions instead, then you should. Generally the way it works is that before calling this function, you make arrangements for "someone" to call :func:`reschedule` on the current task at some later point. Then you call :func:`wait_task_rescheduled`, passing in ``abort_func``, an "abort callback". (Terminology: in Trio, "aborting" is the process of attempting to interrupt a blocked task to deliver a cancellation.) There are two possibilities for what happens next: 1. "Someone" calls :func:`reschedule` on the current task, and :func:`wait_task_rescheduled` returns or raises whatever value or error was passed to :func:`reschedule`. 2. The call's context transitions to a cancelled state (e.g. due to a timeout expiring). When this happens, the ``abort_func`` is called. Its interface looks like:: def abort_func(raise_cancel): ... return trio.lowlevel.Abort.SUCCEEDED # or FAILED It should attempt to clean up any state associated with this call, and in particular, arrange that :func:`reschedule` will *not* be called later. If (and only if!) it is successful, then it should return :data:`Abort.SUCCEEDED`, in which case the task will automatically be rescheduled with an appropriate :exc:`~trio.Cancelled` error. 
Otherwise, it should return :data:`Abort.FAILED`. This means that the task can't be cancelled at this time, and still has to make sure that "someone" eventually calls :func:`reschedule`. At that point there are again two possibilities. You can simply ignore the cancellation altogether: wait for the operation to complete and then reschedule and continue as normal. (For example, this is what :func:`trio.to_thread.run_sync` does if cancellation is disabled.) The other possibility is that the ``abort_func`` does succeed in cancelling the operation, but for some reason isn't able to report that right away. (Example: on Windows, it's possible to request that an async ("overlapped") I/O operation be cancelled, but this request is *also* asynchronous – you don't find out until later whether the operation was actually cancelled or not.) To report a delayed cancellation, then you should reschedule the task yourself, and call the ``raise_cancel`` callback passed to ``abort_func`` to raise a :exc:`~trio.Cancelled` (or possibly :exc:`KeyboardInterrupt`) exception into this task. Either of the approaches sketched below can work:: # Option 1: # Catch the exception from raise_cancel and inject it into the task. # (This is what Trio does automatically for you if you return # Abort.SUCCEEDED.) trio.lowlevel.reschedule(task, outcome.capture(raise_cancel)) # Option 2: # wait to be woken by "someone", and then decide whether to raise # the error from inside the task. outer_raise_cancel = None def abort(inner_raise_cancel): nonlocal outer_raise_cancel outer_raise_cancel = inner_raise_cancel TRY_TO_CANCEL_OPERATION() return trio.lowlevel.Abort.FAILED await wait_task_rescheduled(abort) if OPERATION_WAS_SUCCESSFULLY_CANCELLED: # raises the error outer_raise_cancel() In any case it's guaranteed that we only call the ``abort_func`` at most once per call to :func:`wait_task_rescheduled`. Sometimes, it's useful to be able to share some mutable sleep-related data between the sleeping task, the abort function, and the waking task. You can use the sleeping task's :data:`~Task.custom_sleep_data` attribute to store this data, and Trio won't touch it, except to make sure that it gets cleared when the task is rescheduled. .. warning:: If your ``abort_func`` raises an error, or returns any value other than :data:`Abort.SUCCEEDED` or :data:`Abort.FAILED`, then Trio will crash violently. Be careful! Similarly, it is entirely possible to deadlock a Trio program by failing to reschedule a blocked task, or cause havoc by calling :func:`reschedule` too many times. Remember what we said up above about how you should use a higher-level API if at all possible? """ return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap() # Not exported in the trio._core namespace, but imported directly by _run. @attr.s(frozen=True) class PermanentlyDetachCoroutineObject: final_outcome = attr.ib() async def permanently_detach_coroutine_object(final_outcome): """Permanently detach the current task from the Trio scheduler. Normally, a Trio task doesn't exit until its coroutine object exits. When you call this function, Trio acts like the coroutine object just exited and the task terminates with the given outcome. This is useful if you want to permanently switch the coroutine object over to a different coroutine runner. When the calling coroutine enters this function it's running under Trio, and when the function returns it's running under the foreign coroutine runner. 
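For instance, a coroutine that wants to end its life under some hypothetical foreign runner might finish with something like::

    import outcome

    async def handoff():
        ...  # finish all Trio work, close any nurseries first
        # From Trio's point of view the task now exits with the value None;
        # whatever the foreign runner sends in next is returned here.
        msg = await trio.lowlevel.permanently_detach_coroutine_object(
            outcome.Value(None)
        )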
You should make sure that the coroutine object has released any Trio-specific resources it has acquired (e.g. nurseries). Args: final_outcome (outcome.Outcome): Trio acts as if the current task exited with the given return value or exception. Returns or raises whatever value or exception the new coroutine runner uses to resume the coroutine. """ if _run.current_task().child_nurseries: raise RuntimeError( "can't permanently detach a coroutine object with open nurseries" ) return await _async_yield(PermanentlyDetachCoroutineObject(final_outcome)) async def temporarily_detach_coroutine_object(abort_func): """Temporarily detach the current coroutine object from the Trio scheduler. When the calling coroutine enters this function it's running under Trio, and when the function returns it's running under the foreign coroutine runner. The Trio :class:`Task` will continue to exist, but will be suspended until you use :func:`reattach_detached_coroutine_object` to resume it. In the mean time, you can use another coroutine runner to schedule the coroutine object. In fact, you have to – the function doesn't return until the coroutine is advanced from outside. Note that you'll need to save the current :class:`Task` object to later resume; you can retrieve it with :func:`current_task`. You can also use this :class:`Task` object to retrieve the coroutine object – see :data:`Task.coro`. Args: abort_func: Same as for :func:`wait_task_rescheduled`, except that it must return :data:`Abort.FAILED`. (If it returned :data:`Abort.SUCCEEDED`, then Trio would attempt to reschedule the detached task directly without going through :func:`reattach_detached_coroutine_object`, which would be bad.) Your ``abort_func`` should still arrange for whatever the coroutine object is doing to be cancelled, and then reattach to Trio and call the ``raise_cancel`` callback, if possible. Returns or raises whatever value or exception the new coroutine runner uses to resume the coroutine. """ return await _async_yield(WaitTaskRescheduled(abort_func)) async def reattach_detached_coroutine_object(task, yield_value): """Reattach a coroutine object that was detached using :func:`temporarily_detach_coroutine_object`. When the calling coroutine enters this function it's running under the foreign coroutine runner, and when the function returns it's running under Trio. This must be called from inside the coroutine being resumed, and yields whatever value you pass in. (Presumably you'll pass a value that will cause the current coroutine runner to stop scheduling this task.) Then the coroutine is resumed by the Trio scheduler at the next opportunity. Args: task (Task): The Trio task object that the current coroutine was detached from. yield_value (object): The object to yield to the current coroutine runner. """ # This is a kind of crude check – in particular, it can fail if the # passed-in task is where the coroutine *runner* is running. But this is # an experts-only interface, and there's no easy way to do a more accurate # check, so I guess that's OK. if not task.coro.cr_running: raise RuntimeError("given task does not match calling coroutine") _run.reschedule(task, outcome.Value("reattaching")) value = await _async_yield(yield_value) assert value == outcome.Value("reattaching") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_unbounded_queue.py0000644000076500000000000001102200000000000020060 0ustar00quentinwheelimport attr from .. 
import _core from .._deprecate import deprecated from .._util import Final @attr.s(frozen=True) class _UnboundedQueueStats: qsize = attr.ib() tasks_waiting = attr.ib() class UnboundedQueue(metaclass=Final): """An unbounded queue suitable for certain unusual forms of inter-task communication. This class is designed for use as a queue in cases where the producer for some reason cannot be subjected to back-pressure, i.e., :meth:`put_nowait` has to always succeed. In order to prevent the queue backlog from actually growing without bound, the consumer API is modified to dequeue items in "batches". If a consumer task processes each batch without yielding, then this helps achieve (but does not guarantee) an effective bound on the queue's memory use, at the cost of potentially increasing system latencies in general. You should generally prefer to use a memory channel instead if you can. Currently each batch completely empties the queue, but `this may change in the future `__. A :class:`UnboundedQueue` object can be used as an asynchronous iterator, where each iteration returns a new batch of items. I.e., these two loops are equivalent:: async for batch in queue: ... while True: obj = await queue.get_batch() ... """ @deprecated( "0.9.0", issue=497, thing="trio.lowlevel.UnboundedQueue", instead="trio.open_memory_channel(math.inf)", ) def __init__(self): self._lot = _core.ParkingLot() self._data = [] # used to allow handoff from put to the first task in the lot self._can_get = False def __repr__(self): return "<UnboundedQueue holding {} items>".format(len(self._data)) def qsize(self): """Returns the number of items currently in the queue.""" return len(self._data) def empty(self): """Returns True if the queue is empty, False otherwise. There is some subtlety to interpreting this method's return value: see `issue #63 `__. """ return not self._data @_core.enable_ki_protection def put_nowait(self, obj): """Put an object into the queue, without blocking. This always succeeds, because the queue is unbounded. We don't provide a blocking ``put`` method, because it would never need to block. Args: obj (object): The object to enqueue. """ if not self._data: assert not self._can_get if self._lot: self._lot.unpark(count=1) else: self._can_get = True self._data.append(obj) def _get_batch_protected(self): data = self._data.copy() self._data.clear() self._can_get = False return data def get_batch_nowait(self): """Attempt to get the next batch from the queue, without blocking. Returns: list: A list of dequeued items, in order. On a successful call this list is always non-empty; if it would be empty we raise :exc:`~trio.WouldBlock` instead. Raises: ~trio.WouldBlock: if the queue is empty. """ if not self._can_get: raise _core.WouldBlock return self._get_batch_protected() async def get_batch(self): """Get the next batch from the queue, blocking as necessary. Returns: list: A list of dequeued items, in order. This list is always non-empty. """ await _core.checkpoint_if_cancelled() if not self._can_get: await self._lot.park() return self._get_batch_protected() else: try: return self._get_batch_protected() finally: await _core.cancel_shielded_checkpoint() def statistics(self): """Return an object containing debugging information. Currently the following fields are defined: * ``qsize``: The number of items currently in the queue. * ``tasks_waiting``: The number of tasks blocked on this queue's :meth:`get_batch` method.
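For example, a quick sketch of reading them::

    stats = queue.statistics()
    print(stats.qsize, stats.tasks_waiting)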
""" return _UnboundedQueueStats( qsize=len(self._data), tasks_waiting=self._lot.statistics().tasks_waiting ) def __aiter__(self): return self async def __anext__(self): return await self.get_batch() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_wakeup_socketpair.py0000644000076500000000000000665100000000000020425 0ustar00quentinwheelimport socket import sys import signal import warnings from .. import _core from .._util import is_main_thread def _has_warn_on_full_buffer(): if sys.version_info < (3, 7): return False if "__pypy__" not in sys.builtin_module_names: # CPython has warn_on_full_buffer. Don't need to inspect. # Also, CPython doesn't support inspecting built-in functions. return True import inspect args_spec = inspect.getfullargspec(signal.set_wakeup_fd) return "warn_on_full_buffer" in args_spec.kwonlyargs HAVE_WARN_ON_FULL_BUFFER = _has_warn_on_full_buffer() class WakeupSocketpair: def __init__(self): self.wakeup_sock, self.write_sock = socket.socketpair() self.wakeup_sock.setblocking(False) self.write_sock.setblocking(False) # This somewhat reduces the amount of memory wasted queueing up data # for wakeups. With these settings, maximum number of 1-byte sends # before getting BlockingIOError: # Linux 4.8: 6 # macOS (darwin 15.5): 1 # Windows 10: 525347 # Windows you're weird. (And on Windows setting SNDBUF to 0 makes send # blocking, even on non-blocking sockets, so don't do that.) # # But, if we're on an old Python and can't control the signal module's # warn-on-full-buffer behavior, then we need to leave things alone, so # the signal module won't spam the console with spurious warnings. if HAVE_WARN_ON_FULL_BUFFER: self.wakeup_sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1) self.write_sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1) # On Windows this is a TCP socket so this might matter. On other # platforms this fails b/c AF_UNIX sockets aren't actually TCP. try: self.write_sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) except OSError: pass self.old_wakeup_fd = None def wakeup_thread_and_signal_safe(self): try: self.write_sock.send(b"\x00") except BlockingIOError: pass async def wait_woken(self): await _core.wait_readable(self.wakeup_sock) self.drain() def drain(self): try: while True: self.wakeup_sock.recv(2 ** 16) except BlockingIOError: pass def wakeup_on_signals(self): assert self.old_wakeup_fd is None if not is_main_thread(): return fd = self.write_sock.fileno() if HAVE_WARN_ON_FULL_BUFFER: self.old_wakeup_fd = signal.set_wakeup_fd(fd, warn_on_full_buffer=False) else: self.old_wakeup_fd = signal.set_wakeup_fd(fd) if self.old_wakeup_fd != -1: warnings.warn( RuntimeWarning( "It looks like Trio's signal handling code might have " "collided with another library you're using. If you're " "running Trio in guest mode, then this might mean you " "should set host_uses_signal_set_wakeup_fd=True. " "Otherwise, file a bug on Trio and we'll help you figure " "out what's going on." 
) ) def close(self): self.wakeup_sock.close() self.write_sock.close() if self.old_wakeup_fd is not None: signal.set_wakeup_fd(self.old_wakeup_fd) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/_windows_cffi.py0000644000076500000000000002123700000000000017363 0ustar00quentinwheelimport cffi import re import enum ################################################################ # Functions and types ################################################################ LIB = """ // https://msdn.microsoft.com/en-us/library/windows/desktop/aa383751(v=vs.85).aspx typedef int BOOL; typedef unsigned char BYTE; typedef BYTE BOOLEAN; typedef void* PVOID; typedef PVOID HANDLE; typedef unsigned long DWORD; typedef unsigned long ULONG; typedef unsigned int NTSTATUS; typedef unsigned long u_long; typedef ULONG *PULONG; typedef const void *LPCVOID; typedef void *LPVOID; typedef const wchar_t *LPCWSTR; typedef uintptr_t ULONG_PTR; typedef uintptr_t UINT_PTR; typedef UINT_PTR SOCKET; typedef struct _OVERLAPPED { ULONG_PTR Internal; ULONG_PTR InternalHigh; union { struct { DWORD Offset; DWORD OffsetHigh; } DUMMYSTRUCTNAME; PVOID Pointer; } DUMMYUNIONNAME; HANDLE hEvent; } OVERLAPPED, *LPOVERLAPPED; typedef OVERLAPPED WSAOVERLAPPED; typedef LPOVERLAPPED LPWSAOVERLAPPED; typedef PVOID LPSECURITY_ATTRIBUTES; typedef PVOID LPCSTR; typedef struct _OVERLAPPED_ENTRY { ULONG_PTR lpCompletionKey; LPOVERLAPPED lpOverlapped; ULONG_PTR Internal; DWORD dwNumberOfBytesTransferred; } OVERLAPPED_ENTRY, *LPOVERLAPPED_ENTRY; // kernel32.dll HANDLE WINAPI CreateIoCompletionPort( _In_ HANDLE FileHandle, _In_opt_ HANDLE ExistingCompletionPort, _In_ ULONG_PTR CompletionKey, _In_ DWORD NumberOfConcurrentThreads ); BOOL SetFileCompletionNotificationModes( HANDLE FileHandle, UCHAR Flags ); HANDLE CreateFileW( LPCWSTR lpFileName, DWORD dwDesiredAccess, DWORD dwShareMode, LPSECURITY_ATTRIBUTES lpSecurityAttributes, DWORD dwCreationDisposition, DWORD dwFlagsAndAttributes, HANDLE hTemplateFile ); BOOL WINAPI CloseHandle( _In_ HANDLE hObject ); BOOL WINAPI PostQueuedCompletionStatus( _In_ HANDLE CompletionPort, _In_ DWORD dwNumberOfBytesTransferred, _In_ ULONG_PTR dwCompletionKey, _In_opt_ LPOVERLAPPED lpOverlapped ); BOOL WINAPI GetQueuedCompletionStatusEx( _In_ HANDLE CompletionPort, _Out_ LPOVERLAPPED_ENTRY lpCompletionPortEntries, _In_ ULONG ulCount, _Out_ PULONG ulNumEntriesRemoved, _In_ DWORD dwMilliseconds, _In_ BOOL fAlertable ); BOOL WINAPI CancelIoEx( _In_ HANDLE hFile, _In_opt_ LPOVERLAPPED lpOverlapped ); BOOL WriteFile( HANDLE hFile, LPCVOID lpBuffer, DWORD nNumberOfBytesToWrite, LPDWORD lpNumberOfBytesWritten, LPOVERLAPPED lpOverlapped ); BOOL ReadFile( HANDLE hFile, LPVOID lpBuffer, DWORD nNumberOfBytesToRead, LPDWORD lpNumberOfBytesRead, LPOVERLAPPED lpOverlapped ); BOOL WINAPI SetConsoleCtrlHandler( _In_opt_ void* HandlerRoutine, _In_ BOOL Add ); HANDLE CreateEventA( LPSECURITY_ATTRIBUTES lpEventAttributes, BOOL bManualReset, BOOL bInitialState, LPCSTR lpName ); BOOL SetEvent( HANDLE hEvent ); BOOL ResetEvent( HANDLE hEvent ); DWORD WaitForSingleObject( HANDLE hHandle, DWORD dwMilliseconds ); DWORD WaitForMultipleObjects( DWORD nCount, HANDLE *lpHandles, BOOL bWaitAll, DWORD dwMilliseconds ); ULONG RtlNtStatusToDosError( NTSTATUS Status ); int WSAIoctl( SOCKET s, DWORD dwIoControlCode, LPVOID lpvInBuffer, DWORD cbInBuffer, LPVOID lpvOutBuffer, DWORD cbOutBuffer, LPDWORD lpcbBytesReturned, LPWSAOVERLAPPED lpOverlapped, // actually 
LPWSAOVERLAPPED_COMPLETION_ROUTINE void* lpCompletionRoutine ); int WSAGetLastError(); BOOL DeviceIoControl( HANDLE hDevice, DWORD dwIoControlCode, LPVOID lpInBuffer, DWORD nInBufferSize, LPVOID lpOutBuffer, DWORD nOutBufferSize, LPDWORD lpBytesReturned, LPOVERLAPPED lpOverlapped ); // From https://github.com/piscisaureus/wepoll/blob/master/src/afd.h typedef struct _AFD_POLL_HANDLE_INFO { HANDLE Handle; ULONG Events; NTSTATUS Status; } AFD_POLL_HANDLE_INFO, *PAFD_POLL_HANDLE_INFO; // This is really defined as a messy union to allow stuff like // i.DUMMYSTRUCTNAME.LowPart, but we don't need those complications. // Under all that it's just an int64. typedef int64_t LARGE_INTEGER; typedef struct _AFD_POLL_INFO { LARGE_INTEGER Timeout; ULONG NumberOfHandles; ULONG Exclusive; AFD_POLL_HANDLE_INFO Handles[1]; } AFD_POLL_INFO, *PAFD_POLL_INFO; """ # cribbed from pywincffi # programmatically strips out those annotations MSDN likes, like _In_ REGEX_SAL_ANNOTATION = re.compile( r"\b(_In_|_Inout_|_Out_|_Outptr_|_Reserved_)(opt_)?\b" ) LIB = REGEX_SAL_ANNOTATION.sub(" ", LIB) # Other fixups: # - get rid of FAR, cffi doesn't like it LIB = re.sub(r"\bFAR\b", " ", LIB) # - PASCAL is apparently an alias for __stdcall (on modern compilers - modern # being _MSC_VER >= 800) LIB = re.sub(r"\bPASCAL\b", "__stdcall", LIB) ffi = cffi.FFI() ffi.cdef(LIB) kernel32 = ffi.dlopen("kernel32.dll") ntdll = ffi.dlopen("ntdll.dll") ws2_32 = ffi.dlopen("ws2_32.dll") ################################################################ # Magic numbers ################################################################ # Here's a great resource for looking these up: # https://www.magnumdb.com # (Tip: check the box to see "Hex value") INVALID_HANDLE_VALUE = ffi.cast("HANDLE", -1) class ErrorCodes(enum.IntEnum): STATUS_TIMEOUT = 0x102 WAIT_TIMEOUT = 0x102 WAIT_ABANDONED = 0x80 WAIT_OBJECT_0 = 0x00 # object is signaled WAIT_FAILED = 0xFFFFFFFF ERROR_IO_PENDING = 997 ERROR_OPERATION_ABORTED = 995 ERROR_ABANDONED_WAIT_0 = 735 ERROR_INVALID_HANDLE = 6 ERROR_INVALID_PARMETER = 87 ERROR_NOT_FOUND = 1168 ERROR_NOT_SOCKET = 10038 class FileFlags(enum.IntEnum): GENERIC_READ = 0x80000000 SYNCHRONIZE = 0x00100000 FILE_FLAG_OVERLAPPED = 0x40000000 FILE_SHARE_READ = 1 FILE_SHARE_WRITE = 2 FILE_SHARE_DELETE = 4 CREATE_NEW = 1 CREATE_ALWAYS = 2 OPEN_EXISTING = 3 OPEN_ALWAYS = 4 TRUNCATE_EXISTING = 5 class AFDPollFlags(enum.IntFlag): # These are drawn from a combination of: # https://github.com/piscisaureus/wepoll/blob/master/src/afd.h # https://github.com/reactos/reactos/blob/master/sdk/include/reactos/drivers/afd/shared.h AFD_POLL_RECEIVE = 0x0001 AFD_POLL_RECEIVE_EXPEDITED = 0x0002 # OOB/urgent data AFD_POLL_SEND = 0x0004 AFD_POLL_DISCONNECT = 0x0008 # received EOF (FIN) AFD_POLL_ABORT = 0x0010 # received RST AFD_POLL_LOCAL_CLOSE = 0x0020 # local socket object closed AFD_POLL_CONNECT = 0x0040 # socket is successfully connected AFD_POLL_ACCEPT = 0x0080 # you can call accept on this socket AFD_POLL_CONNECT_FAIL = 0x0100 # connect() terminated unsuccessfully # See WSAEventSelect docs for more details on these four: AFD_POLL_QOS = 0x0200 AFD_POLL_GROUP_QOS = 0x0400 AFD_POLL_ROUTING_INTERFACE_CHANGE = 0x0800 AFD_POLL_EVENT_ADDRESS_LIST_CHANGE = 0x1000 class WSAIoctls(enum.IntEnum): SIO_BASE_HANDLE = 0x48000022 SIO_BSP_HANDLE_SELECT = 0x4800001C SIO_BSP_HANDLE_POLL = 0x4800001D class CompletionModes(enum.IntFlag): FILE_SKIP_COMPLETION_PORT_ON_SUCCESS = 0x1 FILE_SKIP_SET_EVENT_ON_HANDLE = 0x2 class IoControlCodes(enum.IntEnum): IOCTL_AFD_POLL = 
0x00012024 ################################################################ # Generic helpers ################################################################ def _handle(obj): # For now, represent handles as either cffi HANDLEs or as ints. If you # try to pass in a file descriptor instead, it's not going to work # out. (For that msvcrt.get_osfhandle does the trick, but I don't know if # we'll actually need that for anything...) For sockets this doesn't # matter, Python never allocates an fd. So let's wait until we actually # encounter the problem before worrying about it. if type(obj) is int: return ffi.cast("HANDLE", obj) else: return obj def raise_winerror(winerror=None, *, filename=None, filename2=None): if winerror is None: winerror, msg = ffi.getwinerror() else: _, msg = ffi.getwinerror(winerror) # https://docs.python.org/3/library/exceptions.html#OSError raise OSError(0, msg, filename, winerror, filename2) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1623751952.8060284 trio-0.19.0/trio/_core/tests/0000755000076500000000000000000000000000000015326 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/__init__.py0000644000076500000000000000000000000000000017425 0ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/conftest.py0000644000076500000000000000127700000000000017534 0ustar00quentinwheelimport pytest import inspect # XX this should move into a global something from ...testing import MockClock, trio_test @pytest.fixture def mock_clock(): return MockClock() @pytest.fixture def autojump_clock(): return MockClock(autojump_threshold=0) # FIXME: split off into a package (or just make part of Trio's public # interface?), with config file to enable? and I guess a mark option too; I # guess it's useful with the class- and file-level marking machinery (where # the raw @trio_test decorator isn't enough). @pytest.hookimpl(tryfirst=True) def pytest_pyfunc_call(pyfuncitem): if inspect.iscoroutinefunction(pyfuncitem.obj): pyfuncitem.obj = trio_test(pyfuncitem.obj) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_asyncgen.py0000644000076500000000000002471400000000000020556 0ustar00quentinwheelimport sys import weakref import pytest from math import inf from functools import partial from async_generator import aclosing from ... 
import _core from .tutil import gc_collect_harder, buggy_pypy_asyncgens, restore_unraisablehook def test_asyncgen_basics(): collected = [] async def example(cause): try: try: yield 42 except GeneratorExit: pass await _core.checkpoint() except _core.Cancelled: assert "exhausted" not in cause task_name = _core.current_task().name assert cause in task_name or task_name == "" assert _core.current_effective_deadline() == -inf with pytest.raises(_core.Cancelled): await _core.checkpoint() collected.append(cause) else: assert "async_main" in _core.current_task().name assert "exhausted" in cause assert _core.current_effective_deadline() == inf await _core.checkpoint() collected.append(cause) saved = [] async def async_main(): # GC'ed before exhausted with pytest.warns( ResourceWarning, match="Async generator.*collected before.*exhausted" ): assert 42 == await example("abandoned").asend(None) gc_collect_harder() await _core.wait_all_tasks_blocked() assert collected.pop() == "abandoned" # aclosing() ensures it's cleaned up at point of use async with aclosing(example("exhausted 1")) as aiter: assert 42 == await aiter.asend(None) assert collected.pop() == "exhausted 1" # Also fine if you exhaust it at point of use async for val in example("exhausted 2"): assert val == 42 assert collected.pop() == "exhausted 2" gc_collect_harder() # No problems saving the geniter when using either of these patterns async with aclosing(example("exhausted 3")) as aiter: saved.append(aiter) assert 42 == await aiter.asend(None) assert collected.pop() == "exhausted 3" # Also fine if you exhaust it at point of use saved.append(example("exhausted 4")) async for val in saved[-1]: assert val == 42 assert collected.pop() == "exhausted 4" # Leave one referenced-but-unexhausted and make sure it gets cleaned up if buggy_pypy_asyncgens: collected.append("outlived run") else: saved.append(example("outlived run")) assert 42 == await saved[-1].asend(None) assert collected == [] _core.run(async_main) assert collected.pop() == "outlived run" for agen in saved: assert agen.ag_frame is None # all should now be exhausted async def test_asyncgen_throws_during_finalization(caplog): record = [] async def agen(): try: yield 1 finally: await _core.cancel_shielded_checkpoint() record.append("crashing") raise ValueError("oops") with restore_unraisablehook(): await agen().asend(None) gc_collect_harder() await _core.wait_all_tasks_blocked() assert record == ["crashing"] exc_type, exc_value, exc_traceback = caplog.records[0].exc_info assert exc_type is ValueError assert str(exc_value) == "oops" assert "during finalization of async generator" in caplog.records[0].message @pytest.mark.skipif(buggy_pypy_asyncgens, reason="pypy 7.2.0 is buggy") def test_firstiter_after_closing(): saved = [] record = [] async def funky_agen(): try: yield 1 except GeneratorExit: record.append("cleanup 1") raise try: yield 2 finally: record.append("cleanup 2") await funky_agen().asend(None) async def async_main(): aiter = funky_agen() saved.append(aiter) assert 1 == await aiter.asend(None) assert 2 == await aiter.asend(None) _core.run(async_main) assert record == ["cleanup 2", "cleanup 1"] @pytest.mark.skipif(buggy_pypy_asyncgens, reason="pypy 7.2.0 is buggy") def test_interdependent_asyncgen_cleanup_order(): saved = [] record = [] async def innermost(): try: yield 1 finally: await _core.cancel_shielded_checkpoint() record.append("innermost") async def agen(label, inner): try: yield await inner.asend(None) finally: # Either `inner` has already been cleaned up, or # we're 
about to exhaust it. Either way, we wind # up with `record` containing the labels in # innermost-to-outermost order. with pytest.raises(StopAsyncIteration): await inner.asend(None) record.append(label) async def async_main(): # This makes a chain of 101 interdependent asyncgens: # agen(99)'s cleanup will iterate agen(98)'s will iterate # ... agen(0)'s will iterate innermost()'s ag_chain = innermost() for idx in range(100): ag_chain = agen(idx, ag_chain) saved.append(ag_chain) assert 1 == await ag_chain.asend(None) assert record == [] _core.run(async_main) assert record == ["innermost"] + list(range(100)) @restore_unraisablehook() def test_last_minute_gc_edge_case(): saved = [] record = [] needs_retry = True async def agen(): try: yield 1 finally: record.append("cleaned up") def collect_at_opportune_moment(token): runner = _core._run.GLOBAL_RUN_CONTEXT.runner if runner.system_nursery._closed and isinstance( runner.asyncgens.alive, weakref.WeakSet ): saved.clear() record.append("final collection") gc_collect_harder() record.append("done") else: try: token.run_sync_soon(collect_at_opportune_moment, token) except _core.RunFinishedError: # pragma: no cover nonlocal needs_retry needs_retry = True async def async_main(): token = _core.current_trio_token() token.run_sync_soon(collect_at_opportune_moment, token) saved.append(agen()) await saved[-1].asend(None) # Actually running into the edge case requires that the run_sync_soon task # execute in between the system nursery's closure and the strong-ification # of runner.asyncgens. There's about a 25% chance that it doesn't # (if the run_sync_soon task runs before init on one tick and after init # on the next tick); if we try enough times, we can make the chance of # failure as small as we want. for attempt in range(50): needs_retry = False del record[:] del saved[:] _core.run(async_main) if needs_retry: # pragma: no cover if not buggy_pypy_asyncgens: assert record == ["cleaned up"] else: assert record == ["final collection", "done", "cleaned up"] break else: # pragma: no cover pytest.fail( f"Didn't manage to hit the trailing_finalizer_asyncgens case " f"despite trying {attempt} times" ) async def step_outside_async_context(aiter): # abort_fns run outside of task context, at least if they're # triggered by a deadline expiry rather than a direct # cancellation. Thus, an asyncgen first iterated inside one # will appear non-Trio, and since no other hooks were installed, # will use the last-ditch fallback handling (that tries to mimic # CPython's behavior with no hooks). # # NB: the strangeness with aiter being an attribute of abort_fn is # to make it as easy as possible to ensure we don't hang onto a # reference to aiter inside the guts of the run loop. 
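# Because the .asend(None) below runs synchronously inside the abort_fn,
# outside any task context, Trio's firstiter hook treats the agen as
# foreign and it gets the last-ditch fallback finalization described
# above. The agen's first yield is 42, which .send(None) surfaces as a
# StopIteration whose str() is "42".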
def abort_fn(_): with pytest.raises(StopIteration, match="42"): abort_fn.aiter.asend(None).send(None) del abort_fn.aiter return _core.Abort.SUCCEEDED abort_fn.aiter = aiter async with _core.open_nursery() as nursery: nursery.start_soon(_core.wait_task_rescheduled, abort_fn) await _core.wait_all_tasks_blocked() nursery.cancel_scope.deadline = _core.current_time() @pytest.mark.skipif(buggy_pypy_asyncgens, reason="pypy 7.2.0 is buggy") async def test_fallback_when_no_hook_claims_it(capsys): async def well_behaved(): yield 42 async def yields_after_yield(): with pytest.raises(GeneratorExit): yield 42 yield 100 async def awaits_after_yield(): with pytest.raises(GeneratorExit): yield 42 await _core.cancel_shielded_checkpoint() with restore_unraisablehook(): await step_outside_async_context(well_behaved()) gc_collect_harder() assert capsys.readouterr().err == "" await step_outside_async_context(yields_after_yield()) gc_collect_harder() assert "ignored GeneratorExit" in capsys.readouterr().err await step_outside_async_context(awaits_after_yield()) gc_collect_harder() assert "awaited something during finalization" in capsys.readouterr().err @pytest.mark.skipif(buggy_pypy_asyncgens, reason="pypy 7.2.0 is buggy") def test_delegation_to_existing_hooks(): record = [] def my_firstiter(agen): record.append("firstiter " + agen.ag_frame.f_locals["arg"]) def my_finalizer(agen): record.append("finalizer " + agen.ag_frame.f_locals["arg"]) async def example(arg): try: yield 42 finally: with pytest.raises(_core.Cancelled): await _core.checkpoint() record.append("trio collected " + arg) async def async_main(): await step_outside_async_context(example("theirs")) assert 42 == await example("ours").asend(None) gc_collect_harder() assert record == ["firstiter theirs", "finalizer theirs"] record[:] = [] await _core.wait_all_tasks_blocked() assert record == ["trio collected ours"] with restore_unraisablehook(): old_hooks = sys.get_asyncgen_hooks() sys.set_asyncgen_hooks(my_firstiter, my_finalizer) try: _core.run(async_main) finally: assert sys.get_asyncgen_hooks() == (my_firstiter, my_finalizer) sys.set_asyncgen_hooks(*old_hooks) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_guest_mode.py0000644000076500000000000004415200000000000021100 0ustar00quentinwheelimport pytest import asyncio import contextvars import sys import traceback import queue from functools import partial from math import inf import signal import socket import threading import time import trio import trio.testing from .tutil import gc_collect_harder, buggy_pypy_asyncgens, restore_unraisablehook from ..._util import signal_raise # The simplest possible "host" loop. 
# Nice features: # - we can run code "outside" of trio using the schedule function passed to # our main # - final result is returned # - any unhandled exceptions cause an immediate crash def trivial_guest_run(trio_fn, **start_guest_run_kwargs): todo = queue.Queue() host_thread = threading.current_thread() def run_sync_soon_threadsafe(fn): if host_thread is threading.current_thread(): # pragma: no cover crash = partial( pytest.fail, "run_sync_soon_threadsafe called from host thread" ) todo.put(("run", crash)) todo.put(("run", fn)) def run_sync_soon_not_threadsafe(fn): if host_thread is not threading.current_thread(): # pragma: no cover crash = partial( pytest.fail, "run_sync_soon_not_threadsafe called from worker thread" ) todo.put(("run", crash)) todo.put(("run", fn)) def done_callback(outcome): todo.put(("unwrap", outcome)) trio.lowlevel.start_guest_run( trio_fn, run_sync_soon_not_threadsafe, run_sync_soon_threadsafe=run_sync_soon_threadsafe, run_sync_soon_not_threadsafe=run_sync_soon_not_threadsafe, done_callback=done_callback, **start_guest_run_kwargs, ) try: while True: op, obj = todo.get() if op == "run": obj() elif op == "unwrap": return obj.unwrap() else: # pragma: no cover assert False finally: # Make sure that exceptions raised here don't capture these, so that # if an exception does cause us to abandon a run then the Trio state # has a chance to be GC'ed and warn about it. del todo, run_sync_soon_threadsafe, done_callback def test_guest_trivial(): async def trio_return(in_host): await trio.sleep(0) return "ok" assert trivial_guest_run(trio_return) == "ok" async def trio_fail(in_host): raise KeyError("whoopsiedaisy") with pytest.raises(KeyError, match="whoopsiedaisy"): trivial_guest_run(trio_fail) def test_guest_can_do_io(): async def trio_main(in_host): record = [] a, b = trio.socket.socketpair() with a, b: async with trio.open_nursery() as nursery: async def do_receive(): record.append(await a.recv(1)) nursery.start_soon(do_receive) await trio.testing.wait_all_tasks_blocked() await b.send(b"x") assert record == [b"x"] trivial_guest_run(trio_main) def test_host_can_directly_wake_trio_task(): async def trio_main(in_host): ev = trio.Event() in_host(ev.set) await ev.wait() return "ok" assert trivial_guest_run(trio_main) == "ok" def test_host_altering_deadlines_wakes_trio_up(): def set_deadline(cscope, new_deadline): cscope.deadline = new_deadline async def trio_main(in_host): with trio.CancelScope() as cscope: in_host(lambda: set_deadline(cscope, -inf)) await trio.sleep_forever() assert cscope.cancelled_caught with trio.CancelScope() as cscope: # also do a change that doesn't affect the next deadline, just to # exercise that path in_host(lambda: set_deadline(cscope, 1e6)) in_host(lambda: set_deadline(cscope, -inf)) await trio.sleep(999) assert cscope.cancelled_caught return "ok" assert trivial_guest_run(trio_main) == "ok" def test_warn_set_wakeup_fd_overwrite(): assert signal.set_wakeup_fd(-1) == -1 async def trio_main(in_host): return "ok" a, b = socket.socketpair() with a, b: a.setblocking(False) # Warn if there's already a wakeup fd signal.set_wakeup_fd(a.fileno()) try: with pytest.warns(RuntimeWarning, match="signal handling code.*collided"): assert trivial_guest_run(trio_main) == "ok" finally: assert signal.set_wakeup_fd(-1) == a.fileno() signal.set_wakeup_fd(a.fileno()) try: with pytest.warns(RuntimeWarning, match="signal handling code.*collided"): assert ( trivial_guest_run(trio_main, host_uses_signal_set_wakeup_fd=False) == "ok" ) finally: assert signal.set_wakeup_fd(-1) == 
a.fileno() # Don't warn if there isn't already a wakeup fd with pytest.warns(None) as record: assert trivial_guest_run(trio_main) == "ok" # Apparently this is how you assert 'there were no RuntimeWarnings' with pytest.raises(AssertionError): record.pop(RuntimeWarning) with pytest.warns(None) as record: assert ( trivial_guest_run(trio_main, host_uses_signal_set_wakeup_fd=True) == "ok" ) with pytest.raises(AssertionError): record.pop(RuntimeWarning) # If there's already a wakeup fd, but we've been told to trust it, # then it's left alone and there's no warning signal.set_wakeup_fd(a.fileno()) try: async def trio_check_wakeup_fd_unaltered(in_host): fd = signal.set_wakeup_fd(-1) assert fd == a.fileno() signal.set_wakeup_fd(fd) return "ok" with pytest.warns(None) as record: assert ( trivial_guest_run( trio_check_wakeup_fd_unaltered, host_uses_signal_set_wakeup_fd=True, ) == "ok" ) with pytest.raises(AssertionError): record.pop(RuntimeWarning) finally: assert signal.set_wakeup_fd(-1) == a.fileno() def test_host_wakeup_doesnt_trigger_wait_all_tasks_blocked(): # This is designed to hit the branch in unrolled_run where: # idle_primed=True # runner.runq is empty # events is Truth-y # ...and confirm that in this case, wait_all_tasks_blocked does not get # triggered. def set_deadline(cscope, new_deadline): print(f"setting deadline {new_deadline}") cscope.deadline = new_deadline async def trio_main(in_host): async def sit_in_wait_all_tasks_blocked(watb_cscope): with watb_cscope: # Overall point of this test is that this # wait_all_tasks_blocked should *not* return normally, but # only by cancellation. await trio.testing.wait_all_tasks_blocked(cushion=9999) assert False # pragma: no cover assert watb_cscope.cancelled_caught async def get_woken_by_host_deadline(watb_cscope): with trio.CancelScope() as cscope: print("scheduling stuff to happen") # Altering the deadline from the host, to something in the # future, will cause the run loop to wake up, but then # discover that there is nothing to do and go back to sleep. # This should *not* trigger wait_all_tasks_blocked. # # So the 'before_io_wait' here will wait until we're blocking # with the wait_all_tasks_blocked primed, and then schedule a # deadline change. The critical test is that this should *not* # wake up 'sit_in_wait_all_tasks_blocked'. # # The after we've had a chance to wake up # 'sit_in_wait_all_tasks_blocked', we want the test to # actually end. So in after_io_wait we schedule a second host # call to tear things down. class InstrumentHelper: def __init__(self): self.primed = False def before_io_wait(self, timeout): print(f"before_io_wait({timeout})") if timeout == 9999: # pragma: no branch assert not self.primed in_host(lambda: set_deadline(cscope, 1e9)) self.primed = True def after_io_wait(self, timeout): if self.primed: # pragma: no branch print("instrument triggered") in_host(lambda: cscope.cancel()) trio.lowlevel.remove_instrument(self) trio.lowlevel.add_instrument(InstrumentHelper()) await trio.sleep_forever() assert cscope.cancelled_caught watb_cscope.cancel() async with trio.open_nursery() as nursery: watb_cscope = trio.CancelScope() nursery.start_soon(sit_in_wait_all_tasks_blocked, watb_cscope) await trio.testing.wait_all_tasks_blocked() nursery.start_soon(get_woken_by_host_deadline, watb_cscope) return "ok" assert trivial_guest_run(trio_main) == "ok" @restore_unraisablehook() def test_guest_warns_if_abandoned(): # This warning is emitted from the garbage collector. So we have to make # sure that our abandoned run is garbage. 
The easiest way to do this is to # put it into a function, so that we're sure all the local state, # traceback frames, etc. are garbage once it returns. def do_abandoned_guest_run(): async def abandoned_main(in_host): in_host(lambda: 1 / 0) while True: await trio.sleep(0) with pytest.raises(ZeroDivisionError): trivial_guest_run(abandoned_main) with pytest.warns(RuntimeWarning, match="Trio guest run got abandoned"): do_abandoned_guest_run() gc_collect_harder() # If you have problems some day figuring out what's holding onto a # reference to the unrolled_run generator and making this test fail, # then this might be useful to help track it down. (It assumes you # also hack start_guest_run so that it does 'global W; W = # weakref(unrolled_run_gen)'.) # # import gc # print(trio._core._run.W) # targets = [trio._core._run.W()] # for i in range(15): # new_targets = [] # for target in targets: # new_targets += gc.get_referrers(target) # new_targets.remove(targets) # print("#####################") # print(f"depth {i}: {len(new_targets)}") # print(new_targets) # targets = new_targets with pytest.raises(RuntimeError): trio.current_time() def aiotrio_run(trio_fn, *, pass_not_threadsafe=True, **start_guest_run_kwargs): loop = asyncio.new_event_loop() async def aio_main(): trio_done_fut = loop.create_future() def trio_done_callback(main_outcome): print(f"trio_fn finished: {main_outcome!r}") trio_done_fut.set_result(main_outcome) if pass_not_threadsafe: start_guest_run_kwargs["run_sync_soon_not_threadsafe"] = loop.call_soon trio.lowlevel.start_guest_run( trio_fn, run_sync_soon_threadsafe=loop.call_soon_threadsafe, done_callback=trio_done_callback, **start_guest_run_kwargs, ) return (await trio_done_fut).unwrap() try: return loop.run_until_complete(aio_main()) finally: loop.close() def test_guest_mode_on_asyncio(): async def trio_main(): print("trio_main!") to_trio, from_aio = trio.open_memory_channel(float("inf")) from_trio = asyncio.Queue() aio_task = asyncio.ensure_future(aio_pingpong(from_trio, to_trio)) # Make sure we have at least one tick where we don't need to go into # the thread await trio.sleep(0) from_trio.put_nowait(0) async for n in from_aio: print(f"trio got: {n}") from_trio.put_nowait(n + 1) if n >= 10: aio_task.cancel() return "trio-main-done" async def aio_pingpong(from_trio, to_trio): print("aio_pingpong!") try: while True: n = await from_trio.get() print(f"aio got: {n}") to_trio.send_nowait(n + 1) except asyncio.CancelledError: raise except: # pragma: no cover traceback.print_exc() raise assert ( aiotrio_run( trio_main, # Not all versions of asyncio we test on can actually be trusted, # but this test doesn't care about signal handling, and it's # easier to just avoid the warnings. host_uses_signal_set_wakeup_fd=True, ) == "trio-main-done" ) assert ( aiotrio_run( trio_main, # Also check that passing only call_soon_threadsafe works, via the # fallback path where we use it for everything. 
pass_not_threadsafe=False, host_uses_signal_set_wakeup_fd=True, ) == "trio-main-done" ) def test_guest_mode_internal_errors(monkeypatch, recwarn): with monkeypatch.context() as m: async def crash_in_run_loop(in_host): m.setattr("trio._core._run.GLOBAL_RUN_CONTEXT.runner.runq", "HI") await trio.sleep(1) with pytest.raises(trio.TrioInternalError): trivial_guest_run(crash_in_run_loop) with monkeypatch.context() as m: async def crash_in_io(in_host): m.setattr("trio._core._run.TheIOManager.get_events", None) await trio.sleep(0) with pytest.raises(trio.TrioInternalError): trivial_guest_run(crash_in_io) with monkeypatch.context() as m: async def crash_in_worker_thread_io(in_host): t = threading.current_thread() old_get_events = trio._core._run.TheIOManager.get_events def bad_get_events(*args): if threading.current_thread() is not t: raise ValueError("oh no!") else: return old_get_events(*args) m.setattr("trio._core._run.TheIOManager.get_events", bad_get_events) await trio.sleep(1) with pytest.raises(trio.TrioInternalError): trivial_guest_run(crash_in_worker_thread_io) gc_collect_harder() def test_guest_mode_ki(): assert signal.getsignal(signal.SIGINT) is signal.default_int_handler # Check SIGINT in Trio func and in host func async def trio_main(in_host): with pytest.raises(KeyboardInterrupt): signal_raise(signal.SIGINT) # Host SIGINT should get injected into Trio in_host(partial(signal_raise, signal.SIGINT)) await trio.sleep(10) with pytest.raises(KeyboardInterrupt) as excinfo: trivial_guest_run(trio_main) assert excinfo.value.__context__ is None # Signal handler should be restored properly on exit assert signal.getsignal(signal.SIGINT) is signal.default_int_handler # Also check chaining in the case where KI is injected after main exits final_exc = KeyError("whoa") async def trio_main_raising(in_host): in_host(partial(signal_raise, signal.SIGINT)) raise final_exc with pytest.raises(KeyboardInterrupt) as excinfo: trivial_guest_run(trio_main_raising) assert excinfo.value.__context__ is final_exc assert signal.getsignal(signal.SIGINT) is signal.default_int_handler def test_guest_mode_autojump_clock_threshold_changing(): # This is super obscure and probably no-one will ever notice, but # technically mutating the MockClock.autojump_threshold from the host # should wake up the guest, so let's test it. 
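# MockClock with autojump_threshold=0 jumps virtual time forward as soon
# as all tasks are blocked, so once the host-side setattr below reaches
# the guest's run loop, the 120-second virtual sleep should finish almost
# instantly in wall-clock terms; that's what the elapsed-time assertion
# at the end verifies.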
clock = trio.testing.MockClock() DURATION = 120 async def trio_main(in_host): assert trio.current_time() == 0 in_host(lambda: setattr(clock, "autojump_threshold", 0)) await trio.sleep(DURATION) assert trio.current_time() == DURATION start = time.monotonic() trivial_guest_run(trio_main, clock=clock) end = time.monotonic() # Should be basically instantaneous, but we'll leave a generous buffer to # account for any CI weirdness assert end - start < DURATION / 2 @pytest.mark.skipif(buggy_pypy_asyncgens, reason="PyPy 7.2 is buggy") @pytest.mark.xfail( sys.implementation.name == "pypy" and sys.version_info >= (3, 7), reason="async generator issue under investigation", ) @restore_unraisablehook() def test_guest_mode_asyncgens(): import sniffio record = set() async def agen(label): assert sniffio.current_async_library() == label try: yield 1 finally: library = sniffio.current_async_library() try: await sys.modules[library].sleep(0) except trio.Cancelled: pass record.add((label, library)) async def iterate_in_aio(): # "trio" gets inherited from our Trio caller if we don't set this sniffio.current_async_library_cvar.set("asyncio") await agen("asyncio").asend(None) async def trio_main(): task = asyncio.ensure_future(iterate_in_aio()) done_evt = trio.Event() task.add_done_callback(lambda _: done_evt.set()) with trio.fail_after(1): await done_evt.wait() await agen("trio").asend(None) gc_collect_harder() # Ensure we don't pollute the thread-level context if run under # an asyncio without contextvars support (3.6) context = contextvars.copy_context() context.run(aiotrio_run, trio_main, host_uses_signal_set_wakeup_fd=True) assert record == {("asyncio", "asyncio"), ("trio", "trio")} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_instrumentation.py0000644000076500000000000001630500000000000022207 0ustar00quentinwheelimport attr import pytest from ... import _core, _abc from .tutil import check_sequence_matches @attr.s(eq=False, hash=False) class TaskRecorder: record = attr.ib(factory=list) def before_run(self): self.record.append(("before_run",)) def task_scheduled(self, task): self.record.append(("schedule", task)) def before_task_step(self, task): assert task is _core.current_task() self.record.append(("before", task)) def after_task_step(self, task): assert task is _core.current_task() self.record.append(("after", task)) def after_run(self): self.record.append(("after_run",)) def filter_tasks(self, tasks): for item in self.record: if item[0] in ("schedule", "before", "after") and item[1] in tasks: yield item if item[0] in ("before_run", "after_run"): yield item def test_instruments(recwarn): r1 = TaskRecorder() r2 = TaskRecorder() r3 = TaskRecorder() task = None # We use a child task for this, because the main task does some extra # bookkeeping stuff that can leak into the instrument results, and we # don't want to deal with it. async def task_fn(): nonlocal task task = _core.current_task() for _ in range(4): await _core.checkpoint() # replace r2 with r3, to test that we can manipulate them as we go _core.remove_instrument(r2) with pytest.raises(KeyError): _core.remove_instrument(r2) # add is idempotent _core.add_instrument(r3) _core.add_instrument(r3) for _ in range(1): await _core.checkpoint() async def main(): async with _core.open_nursery() as nursery: nursery.start_soon(task_fn) _core.run(main, instruments=[r1, r2]) # It sleeps 5 times, so it runs 6 times. 
Note that checkpoint() # reschedules the task immediately upon yielding, before the # after_task_step event fires. expected = ( [("before_run",), ("schedule", task)] + [("before", task), ("schedule", task), ("after", task)] * 5 + [("before", task), ("after", task), ("after_run",)] ) assert r1.record == r2.record + r3.record assert list(r1.filter_tasks([task])) == expected def test_instruments_interleave(): tasks = {} async def two_step1(): tasks["t1"] = _core.current_task() await _core.checkpoint() async def two_step2(): tasks["t2"] = _core.current_task() await _core.checkpoint() async def main(): async with _core.open_nursery() as nursery: nursery.start_soon(two_step1) nursery.start_soon(two_step2) r = TaskRecorder() _core.run(main, instruments=[r]) expected = [ ("before_run",), ("schedule", tasks["t1"]), ("schedule", tasks["t2"]), { ("before", tasks["t1"]), ("schedule", tasks["t1"]), ("after", tasks["t1"]), ("before", tasks["t2"]), ("schedule", tasks["t2"]), ("after", tasks["t2"]), }, { ("before", tasks["t1"]), ("after", tasks["t1"]), ("before", tasks["t2"]), ("after", tasks["t2"]), }, ("after_run",), ] print(list(r.filter_tasks(tasks.values()))) check_sequence_matches(list(r.filter_tasks(tasks.values())), expected) def test_null_instrument(): # undefined instrument methods are skipped class NullInstrument: def something_unrelated(self): pass # pragma: no cover async def main(): await _core.checkpoint() _core.run(main, instruments=[NullInstrument()]) def test_instrument_before_after_run(): record = [] class BeforeAfterRun: def before_run(self): record.append("before_run") def after_run(self): record.append("after_run") async def main(): pass _core.run(main, instruments=[BeforeAfterRun()]) assert record == ["before_run", "after_run"] def test_instrument_task_spawn_exit(): record = [] class SpawnExitRecorder: def task_spawned(self, task): record.append(("spawned", task)) def task_exited(self, task): record.append(("exited", task)) async def main(): return _core.current_task() main_task = _core.run(main, instruments=[SpawnExitRecorder()]) assert ("spawned", main_task) in record assert ("exited", main_task) in record # This test also tests having a crash before the initial task is even spawned, # which is very difficult to handle. def test_instruments_crash(caplog): record = [] class BrokenInstrument: def task_scheduled(self, task): record.append("scheduled") raise ValueError("oops") def close(self): # Shouldn't be called -- tests that the instrument disabling logic # works right. 
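# (Once task_scheduled raises, the run loop drops the instrument from
# its registry and logs the error, as the caplog assertions below
# verify, so no hook on this instrument is ever invoked again.)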
record.append("closed") # pragma: no cover async def main(): record.append("main ran") return _core.current_task() r = TaskRecorder() main_task = _core.run(main, instruments=[r, BrokenInstrument()]) assert record == ["scheduled", "main ran"] # the TaskRecorder kept going throughout, even though the BrokenInstrument # was disabled assert ("after", main_task) in r.record assert ("after_run",) in r.record # And we got a log message exc_type, exc_value, exc_traceback = caplog.records[0].exc_info assert exc_type is ValueError assert str(exc_value) == "oops" assert "Instrument has been disabled" in caplog.records[0].message def test_instruments_monkeypatch(): class NullInstrument(_abc.Instrument): pass instrument = NullInstrument() async def main(): record = [] # Changing the set of hooks implemented by an instrument after # it's installed doesn't make them start being called right away instrument.before_task_step = record.append await _core.checkpoint() await _core.checkpoint() assert len(record) == 0 # But if we remove and re-add the instrument, the new hooks are # picked up _core.remove_instrument(instrument) _core.add_instrument(instrument) await _core.checkpoint() await _core.checkpoint() assert record.count(_core.current_task()) == 2 _core.remove_instrument(instrument) await _core.checkpoint() await _core.checkpoint() assert record.count(_core.current_task()) == 2 _core.run(main, instruments=[instrument]) def test_instrument_that_raises_on_getattr(): class EvilInstrument: def task_exited(self, task): assert False # pragma: no cover @property def after_run(self): raise ValueError("oops") async def main(): with pytest.raises(ValueError): _core.add_instrument(EvilInstrument()) # Make sure the instrument is fully removed from the per-method lists runner = _core.current_task()._runner assert "after_run" not in runner.instruments assert "task_exited" not in runner.instruments _core.run(main) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_io.py0000644000076500000000000003517600000000000017362 0ustar00quentinwheelimport pytest import socket as stdlib_socket import select import random import errno from contextlib import suppress from ... import _core from ...testing import wait_all_tasks_blocked, Sequencer, assert_checkpoints import trio # Cross-platform tests for IO handling def fill_socket(sock): try: while True: sock.send(b"x" * 65536) except BlockingIOError: pass def drain_socket(sock): try: while True: sock.recv(65536) except BlockingIOError: pass @pytest.fixture def socketpair(): pair = stdlib_socket.socketpair() for sock in pair: sock.setblocking(False) yield pair for sock in pair: sock.close() def using_fileno(fn): def fileno_wrapper(fileobj): return fn(fileobj.fileno()) name = "<{} on fileno>".format(fn.__name__) fileno_wrapper.__name__ = fileno_wrapper.__qualname__ = name return fileno_wrapper wait_readable_options = [trio.lowlevel.wait_readable] wait_writable_options = [trio.lowlevel.wait_writable] notify_closing_options = [trio.lowlevel.notify_closing] for options_list in [ wait_readable_options, wait_writable_options, notify_closing_options, ]: options_list += [using_fileno(f) for f in options_list] # Decorators that feed in different settings for wait_readable / wait_writable # / notify_closing. 
# Note that if you use all three decorators on the same test, it will run all # N**3 *combinations* read_socket_test = pytest.mark.parametrize( "wait_readable", wait_readable_options, ids=lambda fn: fn.__name__ ) write_socket_test = pytest.mark.parametrize( "wait_writable", wait_writable_options, ids=lambda fn: fn.__name__ ) notify_closing_test = pytest.mark.parametrize( "notify_closing", notify_closing_options, ids=lambda fn: fn.__name__ ) # XX These tests are all a bit dicey because they can't distinguish between # wait_on_{read,writ}able blocking the way it should, versus blocking # momentarily and then immediately resuming. @read_socket_test @write_socket_test async def test_wait_basic(socketpair, wait_readable, wait_writable): a, b = socketpair # They start out writable() with assert_checkpoints(): await wait_writable(a) # But readable() blocks until data arrives record = [] async def block_on_read(): try: with assert_checkpoints(): await wait_readable(a) except _core.Cancelled: record.append("cancelled") else: record.append("readable") assert a.recv(10) == b"x" async with _core.open_nursery() as nursery: nursery.start_soon(block_on_read) await wait_all_tasks_blocked() assert record == [] b.send(b"x") fill_socket(a) # Now writable will block, but readable won't with assert_checkpoints(): await wait_readable(b) record = [] async def block_on_write(): try: with assert_checkpoints(): await wait_writable(a) except _core.Cancelled: record.append("cancelled") else: record.append("writable") async with _core.open_nursery() as nursery: nursery.start_soon(block_on_write) await wait_all_tasks_blocked() assert record == [] drain_socket(b) # check cancellation record = [] async with _core.open_nursery() as nursery: nursery.start_soon(block_on_read) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() assert record == ["cancelled"] fill_socket(a) record = [] async with _core.open_nursery() as nursery: nursery.start_soon(block_on_write) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() assert record == ["cancelled"] @read_socket_test async def test_double_read(socketpair, wait_readable): a, b = socketpair # You can't have two tasks trying to read from a socket at the same time async with _core.open_nursery() as nursery: nursery.start_soon(wait_readable, a) await wait_all_tasks_blocked() with pytest.raises(_core.BusyResourceError): await wait_readable(a) nursery.cancel_scope.cancel() @write_socket_test async def test_double_write(socketpair, wait_writable): a, b = socketpair # You can't have two tasks trying to write to a socket at the same time fill_socket(a) async with _core.open_nursery() as nursery: nursery.start_soon(wait_writable, a) await wait_all_tasks_blocked() with pytest.raises(_core.BusyResourceError): await wait_writable(a) nursery.cancel_scope.cancel() @read_socket_test @write_socket_test @notify_closing_test async def test_interrupted_by_close( socketpair, wait_readable, wait_writable, notify_closing ): a, b = socketpair async def reader(): with pytest.raises(_core.ClosedResourceError): await wait_readable(a) async def writer(): with pytest.raises(_core.ClosedResourceError): await wait_writable(a) fill_socket(a) async with _core.open_nursery() as nursery: nursery.start_soon(reader) nursery.start_soon(writer) await wait_all_tasks_blocked() notify_closing(a) @read_socket_test @write_socket_test async def test_socket_simultaneous_read_write(socketpair, wait_readable, wait_writable): record = [] async def r_task(sock): await wait_readable(sock) record.append("r_task") 
async def w_task(sock): await wait_writable(sock) record.append("w_task") a, b = socketpair fill_socket(a) async with _core.open_nursery() as nursery: nursery.start_soon(r_task, a) nursery.start_soon(w_task, a) await wait_all_tasks_blocked() assert record == [] b.send(b"x") await wait_all_tasks_blocked() assert record == ["r_task"] drain_socket(b) await wait_all_tasks_blocked() assert record == ["r_task", "w_task"] @read_socket_test @write_socket_test async def test_socket_actual_streaming(socketpair, wait_readable, wait_writable): a, b = socketpair # Use a small send buffer on one of the sockets to increase the chance of # getting partial writes a.setsockopt(stdlib_socket.SOL_SOCKET, stdlib_socket.SO_SNDBUF, 10000) N = 1000000 # 1 megabyte MAX_CHUNK = 65536 results = {} async def sender(sock, seed, key): r = random.Random(seed) sent = 0 while sent < N: print("sent", sent) chunk = bytearray(r.randrange(MAX_CHUNK)) while chunk: with assert_checkpoints(): await wait_writable(sock) this_chunk_size = sock.send(chunk) sent += this_chunk_size del chunk[:this_chunk_size] sock.shutdown(stdlib_socket.SHUT_WR) results[key] = sent async def receiver(sock, key): received = 0 while True: print("received", received) with assert_checkpoints(): await wait_readable(sock) this_chunk_size = len(sock.recv(MAX_CHUNK)) if not this_chunk_size: break received += this_chunk_size results[key] = received async with _core.open_nursery() as nursery: nursery.start_soon(sender, a, 0, "send_a") nursery.start_soon(sender, b, 1, "send_b") nursery.start_soon(receiver, a, "recv_a") nursery.start_soon(receiver, b, "recv_b") assert results["send_a"] == results["recv_b"] assert results["send_b"] == results["recv_a"] async def test_notify_closing_on_invalid_object(): # It should either be a no-op (generally on Unix, where we don't know # which fds are valid), or an OSError (on Windows, where we currently only # support sockets, so we have to do some validation to figure out whether # it's a socket or a regular handle). got_oserror = False got_no_error = False try: trio.lowlevel.notify_closing(-1) except OSError: got_oserror = True else: got_no_error = True assert got_oserror or got_no_error async def test_wait_on_invalid_object(): # We definitely want to raise an error everywhere if you pass in an # invalid fd to wait_* for wait in [trio.lowlevel.wait_readable, trio.lowlevel.wait_writable]: with stdlib_socket.socket() as s: fileno = s.fileno() # We just closed the socket and don't do anything else in between, so # we can be confident that the fileno hasn't be reassigned. 
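# (Exactly which errno you get can vary by backend: typically EBADF
# from epoll/kqueue registration on Unix, or a socket-validation
# failure on Windows, but every backend should raise some OSError.)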
with pytest.raises(OSError): await wait(fileno) async def test_io_manager_statistics(): def check(*, expected_readers, expected_writers): statistics = _core.current_statistics() print(statistics) iostats = statistics.io_statistics if iostats.backend in ["epoll", "windows"]: assert iostats.tasks_waiting_read == expected_readers assert iostats.tasks_waiting_write == expected_writers else: assert iostats.backend == "kqueue" assert iostats.tasks_waiting == expected_readers + expected_writers a1, b1 = stdlib_socket.socketpair() a2, b2 = stdlib_socket.socketpair() a3, b3 = stdlib_socket.socketpair() for sock in [a1, b1, a2, b2, a3, b3]: sock.setblocking(False) with a1, b1, a2, b2, a3, b3: # let the call_soon_task settle down await wait_all_tasks_blocked() # 1 for call_soon_task check(expected_readers=1, expected_writers=0) # We want: # - one socket with a writer blocked # - two sockets with a reader blocked # - a socket with both blocked fill_socket(a1) fill_socket(a3) async with _core.open_nursery() as nursery: nursery.start_soon(_core.wait_writable, a1) nursery.start_soon(_core.wait_readable, a2) nursery.start_soon(_core.wait_readable, b2) nursery.start_soon(_core.wait_writable, a3) nursery.start_soon(_core.wait_readable, a3) await wait_all_tasks_blocked() # +1 for call_soon_task check(expected_readers=3 + 1, expected_writers=2) nursery.cancel_scope.cancel() # 1 for call_soon_task check(expected_readers=1, expected_writers=0) async def test_can_survive_unnotified_close(): # An "unnotified" close is when the user closes an fd/socket/handle # directly, without calling notify_closing first. This should never happen # -- users should call notify_closing before closing things. But, just in # case they don't, we would still like to avoid exploding. # # Acceptable behaviors: # - wait_* never return, but can be cancelled cleanly # - wait_* exit cleanly # - wait_* raise an OSError # # Not acceptable: # - getting stuck in an uncancellable state # - TrioInternalError blowing up the whole run # # This test exercises some tricky "unnotified close" scenarios, to make # sure we get the "acceptable" behaviors. async def allow_OSError(async_func, *args): with suppress(OSError): await async_func(*args) with stdlib_socket.socket() as s: async with trio.open_nursery() as nursery: nursery.start_soon(allow_OSError, trio.lowlevel.wait_readable, s) await wait_all_tasks_blocked() s.close() await wait_all_tasks_blocked() nursery.cancel_scope.cancel() # We hit different paths on Windows depending on whether we close the last # handle to the object (which produces a LOCAL_CLOSE notification and # wakes up wait_readable), or only close one of the handles (which leaves # wait_readable pending until cancelled). with stdlib_socket.socket() as s, s.dup() as s2: # noqa: F841 async with trio.open_nursery() as nursery: nursery.start_soon(allow_OSError, trio.lowlevel.wait_readable, s) await wait_all_tasks_blocked() s.close() await wait_all_tasks_blocked() nursery.cancel_scope.cancel() # A more elaborate case, with two tasks waiting. On windows and epoll, # the two tasks get muxed together onto a single underlying wait # operation. So when they're cancelled, there's a brief moment where one # of the tasks is cancelled but the other isn't, so we try to re-issue the # underlying wait operation. But here, the handle we were going to use to # do that has been pulled out from under our feet... so test that we can # survive this. 
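# The a.dup() below keeps the underlying kernel socket alive, so closing
# 'a' invalidates only the specific fd/handle that the I/O manager had
# registered: exactly the pulled-out-from-under-our-feet situation the
# re-issue path needs to survive.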
a, b = stdlib_socket.socketpair() with a, b, a.dup() as a2: # noqa: F841 a.setblocking(False) b.setblocking(False) fill_socket(a) async with trio.open_nursery() as nursery: nursery.start_soon(allow_OSError, trio.lowlevel.wait_readable, a) nursery.start_soon(allow_OSError, trio.lowlevel.wait_writable, a) await wait_all_tasks_blocked() a.close() nursery.cancel_scope.cancel() # A similar case, but now the single-task-wakeup happens due to I/O # arriving, not a cancellation, so the operation gets re-issued from # handle_io context rather than abort context. a, b = stdlib_socket.socketpair() with a, b, a.dup() as a2: # noqa: F841 print("a={}, b={}, a2={}".format(a.fileno(), b.fileno(), a2.fileno())) a.setblocking(False) b.setblocking(False) fill_socket(a) e = trio.Event() # We want to wait for the kernel to process the wakeup on 'a', if any. # But depending on the platform, we might not get a wakeup on 'a'. So # we put one task to sleep waiting on 'a', and we put a second task to # sleep waiting on 'a2', with the idea that the 'a2' notification will # definitely arrive, and when it does then we can assume that whatever # notification was going to arrive for 'a' has also arrived. async def wait_readable_a2_then_set(): await trio.lowlevel.wait_readable(a2) e.set() async with trio.open_nursery() as nursery: nursery.start_soon(allow_OSError, trio.lowlevel.wait_readable, a) nursery.start_soon(allow_OSError, trio.lowlevel.wait_writable, a) nursery.start_soon(wait_readable_a2_then_set) await wait_all_tasks_blocked() a.close() b.send(b"x") # Make sure that the wakeup has been received and everything has # settled before cancelling the wait_writable. await e.wait() await wait_all_tasks_blocked() nursery.cancel_scope.cancel() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_ki.py0000644000076500000000000004567000000000000017356 0ustar00quentinwheelimport outcome import pytest import sys import os import signal import threading import contextlib import time from async_generator import ( async_generator, yield_, isasyncgenfunction, asynccontextmanager, ) from ... 
import _core from ...testing import wait_all_tasks_blocked from ..._util import signal_raise, is_main_thread from ..._timeouts import sleep from .tutil import slow def ki_self(): signal_raise(signal.SIGINT) def test_ki_self(): with pytest.raises(KeyboardInterrupt): ki_self() async def test_ki_enabled(): # Regular tasks aren't KI-protected assert not _core.currently_ki_protected() # Low-level call-soon callbacks are KI-protected token = _core.current_trio_token() record = [] def check(): record.append(_core.currently_ki_protected()) token.run_sync_soon(check) await wait_all_tasks_blocked() assert record == [True] @_core.enable_ki_protection def protected(): assert _core.currently_ki_protected() unprotected() @_core.disable_ki_protection def unprotected(): assert not _core.currently_ki_protected() protected() @_core.enable_ki_protection async def aprotected(): assert _core.currently_ki_protected() await aunprotected() @_core.disable_ki_protection async def aunprotected(): assert not _core.currently_ki_protected() await aprotected() # make sure that the decorator here overrides the automatic manipulation # that start_soon() does: async with _core.open_nursery() as nursery: nursery.start_soon(aprotected) nursery.start_soon(aunprotected) @_core.enable_ki_protection def gen_protected(): assert _core.currently_ki_protected() yield for _ in gen_protected(): pass @_core.disable_ki_protection def gen_unprotected(): assert not _core.currently_ki_protected() yield for _ in gen_unprotected(): pass # This used to be broken due to # # https://bugs.python.org/issue29590 # # Specifically, after a coroutine is resumed with .throw(), then the stack # makes it look like the immediate caller is the function that called # .throw(), not the actual caller. So child() here would have a caller deep in # the guts of the run loop, and always be protected, even when it shouldn't # have been. (Solution: we don't use .throw() anymore.) 
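# A minimal sketch, with hypothetical names rather than Trio's real
# internals, of the frame-walking idea that bpo-29590 used to break:
# protection is discovered by following f_back links until a marker
# local is found, so resuming a coroutine with .throw() (which on
# affected CPythons left the resumed frame with a caller chain pointing
# into the run loop) made the walk give the wrong answer.

import sys

_SKETCH_MARKER = "_sketch_ki_protection"  # hypothetical marker-local name


def _sketch_currently_protected():
    # Walk outward from our caller, looking for the nearest frame that
    # declares the marker local; its value decides protected/unprotected.
    frame = sys._getframe(1)
    while frame is not None:
        if _SKETCH_MARKER in frame.f_locals:
            return frame.f_locals[_SKETCH_MARKER]
        frame = frame.f_back
    # No marker anywhere on the stack: default to protected, mirroring
    # what test_ki_disabled_out_of_context asserts below.
    return True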
async def test_ki_enabled_after_yield_briefly(): @_core.enable_ki_protection async def protected(): await child(True) @_core.disable_ki_protection async def unprotected(): await child(False) async def child(expected): import traceback traceback.print_stack() assert _core.currently_ki_protected() == expected await _core.checkpoint() traceback.print_stack() assert _core.currently_ki_protected() == expected await protected() await unprotected() # This also used to be broken due to # https://bugs.python.org/issue29590 async def test_generator_based_context_manager_throw(): @contextlib.contextmanager @_core.enable_ki_protection def protected_manager(): assert _core.currently_ki_protected() try: yield finally: assert _core.currently_ki_protected() with protected_manager(): assert not _core.currently_ki_protected() with pytest.raises(KeyError): # This is the one that used to fail with protected_manager(): raise KeyError async def test_agen_protection(): @_core.enable_ki_protection @async_generator async def agen_protected1(): assert _core.currently_ki_protected() try: await yield_() finally: assert _core.currently_ki_protected() @_core.disable_ki_protection @async_generator async def agen_unprotected1(): assert not _core.currently_ki_protected() try: await yield_() finally: assert not _core.currently_ki_protected() # Swap the order of the decorators: @async_generator @_core.enable_ki_protection async def agen_protected2(): assert _core.currently_ki_protected() try: await yield_() finally: assert _core.currently_ki_protected() @async_generator @_core.disable_ki_protection async def agen_unprotected2(): assert not _core.currently_ki_protected() try: await yield_() finally: assert not _core.currently_ki_protected() # Native async generators @_core.enable_ki_protection async def agen_protected3(): assert _core.currently_ki_protected() try: yield finally: assert _core.currently_ki_protected() @_core.disable_ki_protection async def agen_unprotected3(): assert not _core.currently_ki_protected() try: yield finally: assert not _core.currently_ki_protected() for agen_fn in [ agen_protected1, agen_protected2, agen_protected3, agen_unprotected1, agen_unprotected2, agen_unprotected3, ]: async for _ in agen_fn(): # noqa assert not _core.currently_ki_protected() # asynccontextmanager insists that the function passed must itself be an # async gen function, not a wrapper around one if isasyncgenfunction(agen_fn): async with asynccontextmanager(agen_fn)(): assert not _core.currently_ki_protected() # Another case that's tricky due to: # https://bugs.python.org/issue29590 with pytest.raises(KeyError): async with asynccontextmanager(agen_fn)(): raise KeyError # Test the case where there's no magic local anywhere in the call stack def test_ki_disabled_out_of_context(): assert _core.currently_ki_protected() def test_ki_disabled_in_del(): def nestedfunction(): return _core.currently_ki_protected() def __del__(): assert _core.currently_ki_protected() assert nestedfunction() @_core.disable_ki_protection def outerfunction(): assert not _core.currently_ki_protected() assert not nestedfunction() __del__() __del__() outerfunction() assert nestedfunction() def test_ki_protection_works(): async def sleeper(name, record): try: while True: await _core.checkpoint() except _core.Cancelled: record.add(name + " ok") async def raiser(name, record): try: # os.kill runs signal handlers before returning, so we don't need # to worry that the handler will be delayed print("killing, protection =", _core.currently_ki_protected()) ki_self() 
except KeyboardInterrupt: print("raised!") # Make sure we aren't getting cancelled as well as siginted await _core.checkpoint() record.add(name + " raise ok") raise else: print("didn't raise!") # If we didn't raise (b/c protected), then we *should* get # cancelled at the next opportunity try: await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED) except _core.Cancelled: record.add(name + " cancel ok") # simulated control-C during raiser, which is *unprotected* print("check 1") record = set() async def check_unprotected_kill(): async with _core.open_nursery() as nursery: nursery.start_soon(sleeper, "s1", record) nursery.start_soon(sleeper, "s2", record) nursery.start_soon(raiser, "r1", record) with pytest.raises(KeyboardInterrupt): _core.run(check_unprotected_kill) assert record == {"s1 ok", "s2 ok", "r1 raise ok"} # simulated control-C during raiser, which is *protected*, so the KI gets # delivered to the main task instead print("check 2") record = set() async def check_protected_kill(): async with _core.open_nursery() as nursery: nursery.start_soon(sleeper, "s1", record) nursery.start_soon(sleeper, "s2", record) nursery.start_soon(_core.enable_ki_protection(raiser), "r1", record) # __aexit__ blocks, and then receives the KI with pytest.raises(KeyboardInterrupt): _core.run(check_protected_kill) assert record == {"s1 ok", "s2 ok", "r1 cancel ok"} # kill at last moment still raises (run_sync_soon until it raises an # error, then kill) print("check 3") async def check_kill_during_shutdown(): token = _core.current_trio_token() def kill_during_shutdown(): assert _core.currently_ki_protected() try: token.run_sync_soon(kill_during_shutdown) except _core.RunFinishedError: # it's too late for regular handling! handle this! print("kill! kill!") ki_self() token.run_sync_soon(kill_during_shutdown) with pytest.raises(KeyboardInterrupt): _core.run(check_kill_during_shutdown) # KI arrives very early, before main is even spawned print("check 4") class InstrumentOfDeath: def before_run(self): ki_self() async def main(): await _core.checkpoint() with pytest.raises(KeyboardInterrupt): _core.run(main, instruments=[InstrumentOfDeath()]) # checkpoint_if_cancelled notices pending KI print("check 5") @_core.enable_ki_protection async def main(): assert _core.currently_ki_protected() ki_self() with pytest.raises(KeyboardInterrupt): await _core.checkpoint_if_cancelled() _core.run(main) # KI arrives while main task is not abortable, b/c already scheduled print("check 6") @_core.enable_ki_protection async def main(): assert _core.currently_ki_protected() ki_self() await _core.cancel_shielded_checkpoint() await _core.cancel_shielded_checkpoint() await _core.cancel_shielded_checkpoint() with pytest.raises(KeyboardInterrupt): await _core.checkpoint() _core.run(main) # KI arrives while main task is not abortable, b/c refuses to be aborted print("check 7") @_core.enable_ki_protection async def main(): assert _core.currently_ki_protected() ki_self() task = _core.current_task() def abort(_): _core.reschedule(task, outcome.Value(1)) return _core.Abort.FAILED assert await _core.wait_task_rescheduled(abort) == 1 with pytest.raises(KeyboardInterrupt): await _core.checkpoint() _core.run(main) # KI delivered via slow abort print("check 8") @_core.enable_ki_protection async def main(): assert _core.currently_ki_protected() ki_self() task = _core.current_task() def abort(raise_cancel): result = outcome.capture(raise_cancel) _core.reschedule(task, result) return _core.Abort.FAILED with pytest.raises(KeyboardInterrupt): 
            assert await _core.wait_task_rescheduled(abort)
        await _core.checkpoint()

    _core.run(main)

    # KI arrives just before main task exits, so the run_sync_soon machinery
    # is still functioning and will accept the callback to deliver the KI, but
    # by the time the callback is actually run, main has exited and can't be
    # aborted.
    print("check 9")

    @_core.enable_ki_protection
    async def main():
        ki_self()

    with pytest.raises(KeyboardInterrupt):
        _core.run(main)

    print("check 10")
    # KI in unprotected code, with
    # restrict_keyboard_interrupt_to_checkpoints=True
    record = []

    async def main():
        # We're not KI protected...
        assert not _core.currently_ki_protected()
        ki_self()
        # ...but even after the KI, we keep running uninterrupted...
        record.append("ok")
        # ...until we hit a checkpoint:
        with pytest.raises(KeyboardInterrupt):
            await sleep(10)

    _core.run(main, restrict_keyboard_interrupt_to_checkpoints=True)
    assert record == ["ok"]
    record = []
    # Exact same code raises KI early if we leave off the argument, and
    # doesn't even reach the record.append call:
    with pytest.raises(KeyboardInterrupt):
        _core.run(main)
    assert record == []

    # KI arrives while the main task is inside a cancelled cancellation
    # scope; the KeyboardInterrupt should take priority
    print("check 11")

    @_core.enable_ki_protection
    async def main():
        assert _core.currently_ki_protected()
        with _core.CancelScope() as cancel_scope:
            cancel_scope.cancel()
            with pytest.raises(_core.Cancelled):
                await _core.checkpoint()
            ki_self()
            with pytest.raises(KeyboardInterrupt):
                await _core.checkpoint()
            with pytest.raises(_core.Cancelled):
                await _core.checkpoint()

    _core.run(main)


def test_ki_is_good_neighbor():
    # in the unlikely event someone overwrites our signal handler, we leave
    # the overwritten one be
    try:
        orig = signal.getsignal(signal.SIGINT)

        def my_handler(signum, frame):  # pragma: no cover
            pass

        async def main():
            signal.signal(signal.SIGINT, my_handler)

        _core.run(main)
        assert signal.getsignal(signal.SIGINT) is my_handler
    finally:
        signal.signal(signal.SIGINT, orig)


# Regression test for #461
def test_ki_with_broken_threads():
    thread = threading.main_thread()

    # scary!
    original = threading._active[thread.ident]

    # put this in a try/finally so we don't have a chance of cascading a
    # breakage down to everything else
    try:
        del threading._active[thread.ident]

        @_core.enable_ki_protection
        async def inner():
            assert signal.getsignal(signal.SIGINT) != signal.default_int_handler

        _core.run(inner)
    finally:
        threading._active[thread.ident] = original


# For details on why this test is non-trivial, see:
#   https://github.com/python-trio/trio/issues/42
#   https://github.com/python-trio/trio/issues/109
@slow
def test_ki_wakes_us_up():
    assert is_main_thread()

    # This test is flaky due to a race condition on Windows; see:
    #   https://github.com/python-trio/trio/issues/119
    #   https://bugs.python.org/issue30038
    # I think the only fix is to wait for a fixed CPython to be released, so
    # in the meantime, on affected versions we send two signals (equivalent
    # to hitting control-C twice). This works because the problem is that the
    # C level signal handler does
    #
    #   write-to-fd -> set-flags
    #
    # and we need
    #
    #   set-flags -> write-to-fd
    #
    # so running the C level signal handler twice does
    #
    #   write-to-fd -> set-flags -> write-to-fd -> set-flags
    #
    # which contains the desired sequence.
    #
    # Affected versions of CPython include 3.6.1 and earlier. It's fixed in
    # 3.6.2 and 3.7+.
    #
    # PyPy was never affected.
    #
    # The problem technically can occur on Unix as well, if a signal is
    # delivered to a non-main thread, though we haven't observed this in
    # practice.
    #
    # There's also this theoretical problem, but hopefully it won't actually
    # bite us in practice:
    #   https://bugs.python.org/issue31119
    #   https://bitbucket.org/pypy/pypy/issues/2623
    import platform

    buggy_wakeup_fd = (
        sys.version_info < (3, 6, 2) and platform.python_implementation() == "CPython"
    )

    # The lock is only needed to avoid an annoying race condition where the
    # *second* ki_self() call arrives *after* the first one woke us up and its
    # KeyboardInterrupt was caught, and then generates a second
    # KeyboardInterrupt that aborts the test run. The kill_soon thread holds
    # the lock while doing the calls to ki_self, which means that it holds it
    # while the C-level signal handler is running. Then in the main thread,
    # when we're woken up we know that ki_self() has been run at least once;
    # if we then take the lock, that guarantees ki_self() has been run twice,
    # so if a second KeyboardInterrupt is going to arrive it should arrive by
    # the time we've acquired the lock. This lets us force it to happen
    # inside the pytest.raises block.
    #
    # It will be very nice when the buggy_wakeup_fd bug is fixed.
    lock = threading.Lock()

    def kill_soon():
        # We want the signal to be raised after the main thread has entered
        # the IO manager blocking primitive. There really is no way to
        # deterministically interlock with that, so we have to use sleep and
        # hope it's long enough.
        time.sleep(1.1)
        with lock:
            print("thread doing ki_self()")
            ki_self()
            if buggy_wakeup_fd:
                print("buggy_wakeup_fd =", buggy_wakeup_fd)
                ki_self()

    async def main():
        thread = threading.Thread(target=kill_soon)
        print("Starting thread")
        thread.start()
        try:
            with pytest.raises(KeyboardInterrupt):
                # To limit the damage on CI if this does get broken (as
                # compared to sleep_forever())
                print("Going to sleep")
                try:
                    await sleep(20)
                    print("Woke without raising?!")  # pragma: no cover
                # The only purpose of this finally: block is to soak up the
                # second KeyboardInterrupt that might arrive on
                # buggy_wakeup_fd platforms. It might get aborted at any
                # moment on some runs, so pragma: no cover avoids coverage
                # flapping:
                finally:  # pragma: no cover
                    print("waiting for lock")
                    with lock:
                        print("got lock")
                    # And then we want to force a PyErr_CheckSignals. Which is
                    # not so easy on Windows. Weird kluge: builtin_repr calls
                    # PyObject_Repr, which does an unconditional
                    # PyErr_CheckSignals for some reason.
                    print(repr(None))
                    # And finally, it's possible that the signal was delivered
                    # but at a moment when we had KI protection enabled, so we
                    # need to execute a checkpoint to ensure it's delivered
                    # before we exit main().
                    await _core.checkpoint()
        finally:
            print("joining thread", sys.exc_info())
            thread.join()

    start = time.perf_counter()
    try:
        _core.run(main)
    finally:
        end = time.perf_counter()
        print("duration", end - start)
        print("sys.exc_info", sys.exc_info())
    assert 1.0 <= (end - start) < 2


# ===== trio-0.19.0/trio/_core/tests/test_local.py =====

import pytest

from ... import _core


# scary runvar tests
def test_runvar_smoketest():
    t1 = _core.RunVar("test1")
    t2 = _core.RunVar("test2", default="catfish")

    assert "RunVar" in repr(t1)

    async def first_check():
        with pytest.raises(LookupError):
            t1.get()

        t1.set("swordfish")
        assert t1.get() == "swordfish"
        assert t2.get() == "catfish"
        assert t2.get(default="eel") == "eel"

        t2.set("goldfish")
        assert t2.get() == "goldfish"
        assert t2.get(default="tuna") == "goldfish"

    async def second_check():
        with pytest.raises(LookupError):
            t1.get()

        assert t2.get() == "catfish"

    _core.run(first_check)
    _core.run(second_check)


def test_runvar_resetting():
    t1 = _core.RunVar("test1")
    t2 = _core.RunVar("test2", default="dogfish")
    t3 = _core.RunVar("test3")

    async def reset_check():
        token = t1.set("moonfish")
        assert t1.get() == "moonfish"
        t1.reset(token)

        with pytest.raises(TypeError):
            t1.reset(None)

        with pytest.raises(LookupError):
            t1.get()

        token2 = t2.set("catdogfish")
        assert t2.get() == "catdogfish"
        t2.reset(token2)
        assert t2.get() == "dogfish"

        with pytest.raises(ValueError):
            t2.reset(token2)

        token3 = t3.set("basculin")
        assert t3.get() == "basculin"

        with pytest.raises(ValueError):
            t1.reset(token3)

    _core.run(reset_check)


def test_runvar_sync():
    t1 = _core.RunVar("test1")

    async def sync_check():
        async def task1():
            t1.set("plaice")
            assert t1.get() == "plaice"

        async def task2(tok):
            t1.reset(tok)

            with pytest.raises(LookupError):
                t1.get()

            t1.set("cod")

        async with _core.open_nursery() as n:
            token = t1.set("cod")
            assert t1.get() == "cod"

            n.start_soon(task1)
            await _core.wait_all_tasks_blocked()
            assert t1.get() == "plaice"

            n.start_soon(task2, token)
            await _core.wait_all_tasks_blocked()
            assert t1.get() == "cod"

    _core.run(sync_check)


def test_accessing_runvar_outside_run_call_fails():
    t1 = _core.RunVar("test1")

    with pytest.raises(RuntimeError):
        t1.set("asdf")

    with pytest.raises(RuntimeError):
        t1.get()

    async def get_token():
        return t1.set("ok")

    token = _core.run(get_token)

    with pytest.raises(RuntimeError):
        t1.reset(token)


# ===== trio-0.19.0/trio/_core/tests/test_mock_clock.py =====

from math import inf
import time

import pytest

from trio import sleep
from ... import _core
from .. import wait_all_tasks_blocked
from .._mock_clock import MockClock
from .tutil import slow


def test_mock_clock():
    REAL_NOW = 123.0
    c = MockClock()
    c._real_clock = lambda: REAL_NOW
    repr(c)  # smoke test
    assert c.rate == 0
    assert c.current_time() == 0
    c.jump(1.2)
    assert c.current_time() == 1.2
    with pytest.raises(ValueError):
        c.jump(-1)
    assert c.current_time() == 1.2
    assert c.deadline_to_sleep_time(1.1) == 0
    assert c.deadline_to_sleep_time(1.2) == 0
    assert c.deadline_to_sleep_time(1.3) > 999999

    with pytest.raises(ValueError):
        c.rate = -1
    assert c.rate == 0

    c.rate = 2
    assert c.current_time() == 1.2
    REAL_NOW += 1
    assert c.current_time() == 3.2
    assert c.deadline_to_sleep_time(3.1) == 0
    assert c.deadline_to_sleep_time(3.2) == 0
    assert c.deadline_to_sleep_time(4.2) == 0.5

    c.rate = 0.5
    assert c.current_time() == 3.2
    assert c.deadline_to_sleep_time(3.1) == 0
    assert c.deadline_to_sleep_time(3.2) == 0
    assert c.deadline_to_sleep_time(4.2) == 2.0

    c.jump(0.8)
    assert c.current_time() == 4.0
    REAL_NOW += 1
    assert c.current_time() == 4.5

    c2 = MockClock(rate=3)
    assert c2.rate == 3
    assert c2.current_time() < 10


async def test_mock_clock_autojump(mock_clock):
    assert mock_clock.autojump_threshold == inf

    mock_clock.autojump_threshold = 0
    assert mock_clock.autojump_threshold == 0

    real_start = time.perf_counter()

    virtual_start = _core.current_time()
    for i in range(10):
        print("sleeping {} seconds".format(10 * i))
        await sleep(10 * i)
        print("woke up!")
        assert virtual_start + 10 * i == _core.current_time()
        virtual_start = _core.current_time()

    real_duration = time.perf_counter() - real_start
    print("Slept {} seconds in {} seconds".format(10 * sum(range(10)), real_duration))
    assert real_duration < 1

    mock_clock.autojump_threshold = 0.02
    t = _core.current_time()
    # this should wake up before the autojump threshold triggers, so time
    # shouldn't change
    await wait_all_tasks_blocked()
    assert t == _core.current_time()
    # this should too
    await wait_all_tasks_blocked(0.01)
    assert t == _core.current_time()

    # set up a situation where the autojump task is blocked for a long long
    # time, to make sure that cancel-and-adjust-threshold logic is working
    mock_clock.autojump_threshold = 10000
    await wait_all_tasks_blocked()
    mock_clock.autojump_threshold = 0
    # if the above line didn't take effect immediately, then this would be
    # bad:
    await sleep(100000)


async def test_mock_clock_autojump_interference(mock_clock):
    mock_clock.autojump_threshold = 0.02

    mock_clock2 = MockClock()
    # messing with the autojump threshold of a clock that isn't actually
    # installed in the run loop shouldn't do anything.
    mock_clock2.autojump_threshold = 0.01

    # if the autojump_threshold of 0.01 were in effect, then the next line
    # would block forever, as the autojump task kept waking up to try to
    # jump the clock.
    await wait_all_tasks_blocked(0.015)

    # but the 0.02 limit does apply
    await sleep(100000)


def test_mock_clock_autojump_preset():
    # Check that we can set the autojump_threshold before the clock is
    # actually in use, and it gets picked up
    mock_clock = MockClock(autojump_threshold=0.1)
    mock_clock.autojump_threshold = 0.01
    real_start = time.perf_counter()
    _core.run(sleep, 10000, clock=mock_clock)
    assert time.perf_counter() - real_start < 1


async def test_mock_clock_autojump_0_and_wait_all_tasks_blocked_0(mock_clock):
    # Checks that autojump_threshold=0 doesn't interfere with
    # calling wait_all_tasks_blocked with the default cushion=0.
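    # A comment-only sketch of the autojump behaviour these tests exercise,
    # assuming only the public trio / trio.testing API:
    #
    #   import trio
    #   import trio.testing
    #
    #   async def main():
    #       await trio.sleep(100000)  # returns almost instantly
    #
    #   clock = trio.testing.MockClock(autojump_threshold=0)
    #   trio.run(main, clock=clock)
    #
    # Once every task has been blocked for at least autojump_threshold real
    # seconds, the clock jumps straight to the next deadline, so huge virtual
    # sleeps cost almost no wall-clock time.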
mock_clock.autojump_threshold = 0 record = [] async def sleeper(): await sleep(100) record.append("yawn") async def waiter(): await wait_all_tasks_blocked() record.append("waiter woke") await sleep(1000) record.append("waiter done") async with _core.open_nursery() as nursery: nursery.start_soon(sleeper) nursery.start_soon(waiter) assert record == ["waiter woke", "yawn", "waiter done"] @slow async def test_mock_clock_autojump_0_and_wait_all_tasks_blocked_nonzero(mock_clock): # Checks that autojump_threshold=0 doesn't interfere with # calling wait_all_tasks_blocked with a non-zero cushion. mock_clock.autojump_threshold = 0 record = [] async def sleeper(): await sleep(100) record.append("yawn") async def waiter(): await wait_all_tasks_blocked(1) record.append("waiter done") async with _core.open_nursery() as nursery: nursery.start_soon(sleeper) nursery.start_soon(waiter) assert record == ["waiter done", "yawn"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_multierror.py0000644000076500000000000005306000000000000021147 0ustar00quentinwheelimport logging import pytest from traceback import ( extract_tb, print_exception, format_exception, ) from traceback import _cause_message # type: ignore import sys import os import re from pathlib import Path import subprocess from .tutil import slow from .._multierror import MultiError, concat_tb from ..._core import open_nursery class NotHashableException(Exception): code = None def __init__(self, code): super().__init__() self.code = code def __eq__(self, other): if not isinstance(other, NotHashableException): return False return self.code == other.code async def raise_nothashable(code): raise NotHashableException(code) def raiser1(): raiser1_2() def raiser1_2(): raiser1_3() def raiser1_3(): raise ValueError("raiser1_string") def raiser2(): raiser2_2() def raiser2_2(): raise KeyError("raiser2_string") def raiser3(): raise NameError def get_exc(raiser): try: raiser() except Exception as exc: return exc def get_tb(raiser): return get_exc(raiser).__traceback__ def einfo(exc): return (type(exc), exc, exc.__traceback__) def test_concat_tb(): tb1 = get_tb(raiser1) tb2 = get_tb(raiser2) # These return a list of (filename, lineno, fn name, text) tuples # https://docs.python.org/3/library/traceback.html#traceback.extract_tb entries1 = extract_tb(tb1) entries2 = extract_tb(tb2) tb12 = concat_tb(tb1, tb2) assert extract_tb(tb12) == entries1 + entries2 tb21 = concat_tb(tb2, tb1) assert extract_tb(tb21) == entries2 + entries1 # Check degenerate cases assert extract_tb(concat_tb(None, tb1)) == entries1 assert extract_tb(concat_tb(tb1, None)) == entries1 assert concat_tb(None, None) is None # Make sure the original tracebacks didn't get mutated by mistake assert extract_tb(get_tb(raiser1)) == entries1 assert extract_tb(get_tb(raiser2)) == entries2 def test_MultiError(): exc1 = get_exc(raiser1) exc2 = get_exc(raiser2) assert MultiError([exc1]) is exc1 m = MultiError([exc1, exc2]) assert m.exceptions == [exc1, exc2] assert "ValueError" in str(m) assert "ValueError" in repr(m) with pytest.raises(TypeError): MultiError(object()) with pytest.raises(TypeError): MultiError([KeyError(), ValueError]) def test_MultiErrorOfSingleMultiError(): # For MultiError([MultiError]), ensure there is no bad recursion by the # constructor where __init__ is called if __new__ returns a bare MultiError. 
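    # Comment-only sketch of the collapsing rule this test depends on (public
    # MultiError constructor only):
    #
    #   err = ValueError("boom")
    #   assert MultiError([err]) is err           # one exception: pass-through
    #   pair = MultiError([ValueError(), KeyError()])
    #   assert len(pair.exceptions) == 2          # two exceptions: real wrapper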
exceptions = [KeyError(), ValueError()] a = MultiError(exceptions) b = MultiError([a]) assert b == a assert b.exceptions == exceptions async def test_MultiErrorNotHashable(): exc1 = NotHashableException(42) exc2 = NotHashableException(4242) exc3 = ValueError() assert exc1 != exc2 assert exc1 != exc3 with pytest.raises(MultiError): async with open_nursery() as nursery: nursery.start_soon(raise_nothashable, 42) nursery.start_soon(raise_nothashable, 4242) def test_MultiError_filter_NotHashable(): excs = MultiError([NotHashableException(42), ValueError()]) def handle_ValueError(exc): if isinstance(exc, ValueError): return None else: return exc filtered_excs = MultiError.filter(handle_ValueError, excs) assert isinstance(filtered_excs, NotHashableException) def test_traceback_recursion(): exc1 = RuntimeError() exc2 = KeyError() exc3 = NotHashableException(42) # Note how this creates a loop, where exc1 refers to exc1 # This could trigger an infinite recursion; the 'seen' set is supposed to prevent # this. exc1.__cause__ = MultiError([exc1, exc2, exc3]) # python traceback.TracebackException < 3.6.4 does not support unhashable exceptions # and raises a TypeError exception if sys.version_info < (3, 6, 4): with pytest.raises(TypeError): format_exception(*einfo(exc1)) else: format_exception(*einfo(exc1)) def make_tree(): # Returns an object like: # MultiError([ # MultiError([ # ValueError, # KeyError, # ]), # NameError, # ]) # where all exceptions except the root have a non-trivial traceback. exc1 = get_exc(raiser1) exc2 = get_exc(raiser2) exc3 = get_exc(raiser3) # Give m12 a non-trivial traceback try: raise MultiError([exc1, exc2]) except BaseException as m12: return MultiError([m12, exc3]) def assert_tree_eq(m1, m2): if m1 is None or m2 is None: assert m1 is m2 return assert type(m1) is type(m2) assert extract_tb(m1.__traceback__) == extract_tb(m2.__traceback__) assert_tree_eq(m1.__cause__, m2.__cause__) assert_tree_eq(m1.__context__, m2.__context__) if isinstance(m1, MultiError): assert len(m1.exceptions) == len(m2.exceptions) for e1, e2 in zip(m1.exceptions, m2.exceptions): assert_tree_eq(e1, e2) def test_MultiError_filter(): def null_handler(exc): return exc m = make_tree() assert_tree_eq(m, m) assert MultiError.filter(null_handler, m) is m assert_tree_eq(m, make_tree()) # Make sure we don't pick up any detritus if run in a context where # implicit exception chaining would like to kick in m = make_tree() try: raise ValueError except ValueError: assert MultiError.filter(null_handler, m) is m assert_tree_eq(m, make_tree()) def simple_filter(exc): if isinstance(exc, ValueError): return None if isinstance(exc, KeyError): return RuntimeError() return exc new_m = MultiError.filter(simple_filter, make_tree()) assert isinstance(new_m, MultiError) assert len(new_m.exceptions) == 2 # was: [[ValueError, KeyError], NameError] # ValueError disappeared & KeyError became RuntimeError, so now: assert isinstance(new_m.exceptions[0], RuntimeError) assert isinstance(new_m.exceptions[1], NameError) # implicit chaining: assert isinstance(new_m.exceptions[0].__context__, KeyError) # also, the traceback on the KeyError incorporates what used to be the # traceback on its parent MultiError orig = make_tree() # make sure we have the right path assert isinstance(orig.exceptions[0].exceptions[1], KeyError) # get original traceback summary orig_extracted = ( extract_tb(orig.__traceback__) + extract_tb(orig.exceptions[0].__traceback__) + extract_tb(orig.exceptions[0].exceptions[1].__traceback__) ) def p(exc): 
print_exception(type(exc), exc, exc.__traceback__) p(orig) p(orig.exceptions[0]) p(orig.exceptions[0].exceptions[1]) p(new_m.exceptions[0].__context__) # compare to the new path assert new_m.__traceback__ is None new_extracted = extract_tb(new_m.exceptions[0].__context__.__traceback__) assert orig_extracted == new_extracted # check preserving partial tree def filter_NameError(exc): if isinstance(exc, NameError): return None return exc m = make_tree() new_m = MultiError.filter(filter_NameError, m) # with the NameError gone, the other branch gets promoted assert new_m is m.exceptions[0] # check fully handling everything def filter_all(exc): return None assert MultiError.filter(filter_all, make_tree()) is None def test_MultiError_catch(): # No exception to catch def noop(_): pass # pragma: no cover with MultiError.catch(noop): pass # Simple pass-through of all exceptions m = make_tree() with pytest.raises(MultiError) as excinfo: with MultiError.catch(lambda exc: exc): raise m assert excinfo.value is m # Should be unchanged, except that we added a traceback frame by raising # it here assert m.__traceback__ is not None assert m.__traceback__.tb_frame.f_code.co_name == "test_MultiError_catch" assert m.__traceback__.tb_next is None m.__traceback__ = None assert_tree_eq(m, make_tree()) # Swallows everything with MultiError.catch(lambda _: None): raise make_tree() def simple_filter(exc): if isinstance(exc, ValueError): return None if isinstance(exc, KeyError): return RuntimeError() return exc with pytest.raises(MultiError) as excinfo: with MultiError.catch(simple_filter): raise make_tree() new_m = excinfo.value assert isinstance(new_m, MultiError) assert len(new_m.exceptions) == 2 # was: [[ValueError, KeyError], NameError] # ValueError disappeared & KeyError became RuntimeError, so now: assert isinstance(new_m.exceptions[0], RuntimeError) assert isinstance(new_m.exceptions[1], NameError) # Make sure that Python did not successfully attach the old MultiError to # our new MultiError's __context__ assert not new_m.__suppress_context__ assert new_m.__context__ is None # check preservation of __cause__ and __context__ v = ValueError() v.__cause__ = KeyError() with pytest.raises(ValueError) as excinfo: with MultiError.catch(lambda exc: exc): raise v assert isinstance(excinfo.value.__cause__, KeyError) v = ValueError() context = KeyError() v.__context__ = context with pytest.raises(ValueError) as excinfo: with MultiError.catch(lambda exc: exc): raise v assert excinfo.value.__context__ is context assert not excinfo.value.__suppress_context__ for suppress_context in [True, False]: v = ValueError() context = KeyError() v.__context__ = context v.__suppress_context__ = suppress_context distractor = RuntimeError() with pytest.raises(ValueError) as excinfo: def catch_RuntimeError(exc): if isinstance(exc, RuntimeError): return None else: return exc with MultiError.catch(catch_RuntimeError): raise MultiError([v, distractor]) assert excinfo.value.__context__ is context assert excinfo.value.__suppress_context__ == suppress_context def assert_match_in_seq(pattern_list, string): offset = 0 print("looking for pattern matches...") for pattern in pattern_list: print("checking pattern:", pattern) reobj = re.compile(pattern) match = reobj.search(string, offset) assert match is not None offset = match.end() def test_assert_match_in_seq(): assert_match_in_seq(["a", "b"], "xx a xx b xx") assert_match_in_seq(["b", "a"], "xx b xx a xx") with pytest.raises(AssertionError): assert_match_in_seq(["a", "b"], "xx b xx a xx") def 
test_format_exception(): exc = get_exc(raiser1) formatted = "".join(format_exception(*einfo(exc))) assert "raiser1_string" in formatted assert "in raiser1_3" in formatted assert "raiser2_string" not in formatted assert "in raiser2_2" not in formatted assert "direct cause" not in formatted assert "During handling" not in formatted exc = get_exc(raiser1) exc.__cause__ = get_exc(raiser2) formatted = "".join(format_exception(*einfo(exc))) assert "raiser1_string" in formatted assert "in raiser1_3" in formatted assert "raiser2_string" in formatted assert "in raiser2_2" in formatted assert "direct cause" in formatted assert "During handling" not in formatted # ensure cause included assert _cause_message in formatted exc = get_exc(raiser1) exc.__context__ = get_exc(raiser2) formatted = "".join(format_exception(*einfo(exc))) assert "raiser1_string" in formatted assert "in raiser1_3" in formatted assert "raiser2_string" in formatted assert "in raiser2_2" in formatted assert "direct cause" not in formatted assert "During handling" in formatted exc.__suppress_context__ = True formatted = "".join(format_exception(*einfo(exc))) assert "raiser1_string" in formatted assert "in raiser1_3" in formatted assert "raiser2_string" not in formatted assert "in raiser2_2" not in formatted assert "direct cause" not in formatted assert "During handling" not in formatted # chain=False exc = get_exc(raiser1) exc.__context__ = get_exc(raiser2) formatted = "".join(format_exception(*einfo(exc), chain=False)) assert "raiser1_string" in formatted assert "in raiser1_3" in formatted assert "raiser2_string" not in formatted assert "in raiser2_2" not in formatted assert "direct cause" not in formatted assert "During handling" not in formatted # limit exc = get_exc(raiser1) exc.__context__ = get_exc(raiser2) # get_exc adds a frame that counts against the limit, so limit=2 means we # get 1 deep into the raiser stack formatted = "".join(format_exception(*einfo(exc), limit=2)) print(formatted) assert "raiser1_string" in formatted assert "in raiser1" in formatted assert "in raiser1_2" not in formatted assert "raiser2_string" in formatted assert "in raiser2" in formatted assert "in raiser2_2" not in formatted exc = get_exc(raiser1) exc.__context__ = get_exc(raiser2) formatted = "".join(format_exception(*einfo(exc), limit=1)) print(formatted) assert "raiser1_string" in formatted assert "in raiser1" not in formatted assert "raiser2_string" in formatted assert "in raiser2" not in formatted # handles loops exc = get_exc(raiser1) exc.__cause__ = exc formatted = "".join(format_exception(*einfo(exc))) assert "raiser1_string" in formatted assert "in raiser1_3" in formatted assert "raiser2_string" not in formatted assert "in raiser2_2" not in formatted # ensure duplicate exception is not included as cause assert _cause_message not in formatted # MultiError formatted = "".join(format_exception(*einfo(make_tree()))) print(formatted) assert_match_in_seq( [ # Outer exception is MultiError r"MultiError:", # First embedded exception is the embedded MultiError r"\nDetails of embedded exception 1", # Which has a single stack frame from make_tree raising it r"in make_tree", # Then it has two embedded exceptions r" Details of embedded exception 1", r"in raiser1_2", # for some reason ValueError has no quotes r"ValueError: raiser1_string", r" Details of embedded exception 2", r"in raiser2_2", # But KeyError does have quotes r"KeyError: 'raiser2_string'", # And finally the NameError, which is a sibling of the embedded # MultiError r"\nDetails of embedded 
exception 2:", r"in raiser3", r"NameError", ], formatted, ) # Prints duplicate exceptions in sub-exceptions exc1 = get_exc(raiser1) def raise1_raiser1(): try: raise exc1 except: raise ValueError("foo") def raise2_raiser1(): try: raise exc1 except: raise KeyError("bar") exc2 = get_exc(raise1_raiser1) exc3 = get_exc(raise2_raiser1) try: raise MultiError([exc2, exc3]) except MultiError as e: exc = e formatted = "".join(format_exception(*einfo(exc))) print(formatted) assert_match_in_seq( [ r"Traceback", # Outer exception is MultiError r"MultiError:", # First embedded exception is the embedded ValueError with cause of raiser1 r"\nDetails of embedded exception 1", # Print details of exc1 r" Traceback", r"in get_exc", r"in raiser1", r"ValueError: raiser1_string", # Print details of exc2 r"\n During handling of the above exception, another exception occurred:", r" Traceback", r"in get_exc", r"in raise1_raiser1", r" ValueError: foo", # Second embedded exception is the embedded KeyError with cause of raiser1 r"\nDetails of embedded exception 2", # Print details of exc1 again r" Traceback", r"in get_exc", r"in raiser1", r"ValueError: raiser1_string", # Print details of exc3 r"\n During handling of the above exception, another exception occurred:", r" Traceback", r"in get_exc", r"in raise2_raiser1", r" KeyError: 'bar'", ], formatted, ) def test_logging(caplog): exc1 = get_exc(raiser1) exc2 = get_exc(raiser2) m = MultiError([exc1, exc2]) message = "test test test" try: raise m except MultiError as exc: logging.getLogger().exception(message) # Join lines together formatted = "".join(format_exception(type(exc), exc, exc.__traceback__)) assert message in caplog.text assert formatted in caplog.text def run_script(name, use_ipython=False): import trio trio_path = Path(trio.__file__).parent.parent script_path = Path(__file__).parent / "test_multierror_scripts" / name env = dict(os.environ) print("parent PYTHONPATH:", env.get("PYTHONPATH")) if "PYTHONPATH" in env: # pragma: no cover pp = env["PYTHONPATH"].split(os.pathsep) else: pp = [] pp.insert(0, str(trio_path)) pp.insert(0, str(script_path.parent)) env["PYTHONPATH"] = os.pathsep.join(pp) print("subprocess PYTHONPATH:", env.get("PYTHONPATH")) if use_ipython: lines = [script_path.read_text(), "exit()"] cmd = [ sys.executable, "-u", "-m", "IPython", # no startup files "--quick", "--TerminalIPythonApp.code_to_run=" + "\n".join(lines), ] else: cmd = [sys.executable, "-u", str(script_path)] print("running:", cmd) completed = subprocess.run( cmd, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT ) print("process output:") print(completed.stdout.decode("utf-8")) return completed def check_simple_excepthook(completed): assert_match_in_seq( [ "in ", "MultiError", "Details of embedded exception 1", "in exc1_fn", "ValueError", "Details of embedded exception 2", "in exc2_fn", "KeyError", ], completed.stdout.decode("utf-8"), ) def test_simple_excepthook(): completed = run_script("simple_excepthook.py") check_simple_excepthook(completed) def test_custom_excepthook(): # Check that user-defined excepthooks aren't overridden completed = run_script("custom_excepthook.py") assert_match_in_seq( [ # The warning "RuntimeWarning", "already have a custom", # The message printed by the custom hook, proving we didn't # override it "custom running!", # The MultiError "MultiError:", ], completed.stdout.decode("utf-8"), ) # This warning is triggered by ipython 7.5.0 on python 3.8 import warnings warnings.filterwarnings( "ignore", message='.*"@coroutine" decorator is 
deprecated', category=DeprecationWarning, module="IPython.*", ) try: import IPython except ImportError: # pragma: no cover have_ipython = False else: have_ipython = True need_ipython = pytest.mark.skipif(not have_ipython, reason="need IPython") @slow @need_ipython def test_ipython_exc_handler(): completed = run_script("simple_excepthook.py", use_ipython=True) check_simple_excepthook(completed) @slow @need_ipython def test_ipython_imported_but_unused(): completed = run_script("simple_excepthook_IPython.py") check_simple_excepthook(completed) @slow def test_partial_imported_but_unused(): # Check that a functools.partial as sys.excepthook doesn't cause an exception when # importing trio. This was a problem due to the lack of a .__name__ attribute and # happens when inside a pytest-qt test case for example. completed = run_script("simple_excepthook_partial.py") completed.check_returncode() @slow @need_ipython def test_ipython_custom_exc_handler(): # Check we get a nice warning (but only one!) if the user is using IPython # and already has some other set_custom_exc handler installed. completed = run_script("ipython_custom_exc.py", use_ipython=True) assert_match_in_seq( [ # The warning "RuntimeWarning", "IPython detected", "skip installing Trio", # The MultiError "MultiError", "ValueError", "KeyError", ], completed.stdout.decode("utf-8"), ) # Make sure our other warning doesn't show up assert "custom sys.excepthook" not in completed.stdout.decode("utf-8") @slow @pytest.mark.skipif( not Path("/usr/lib/python3/dist-packages/apport_python_hook.py").exists(), reason="need Ubuntu with python3-apport installed", ) def test_apport_excepthook_monkeypatch_interaction(): completed = run_script("apport_excepthook.py") stdout = completed.stdout.decode("utf-8") # No warning assert "custom sys.excepthook" not in stdout # Proper traceback assert_match_in_seq( ["Details of embedded", "KeyError", "Details of embedded", "ValueError"], stdout, ) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1623751952.813381 trio-0.19.0/trio/_core/tests/test_multierror_scripts/0000755000076500000000000000000000000000000022340 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_multierror_scripts/__init__.py0000644000076500000000000000023300000000000024447 0ustar00quentinwheel# This isn't really a package, everything in here is a standalone script. This # __init__.py is just to fool setup.py into actually installing the things. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_multierror_scripts/_common.py0000644000076500000000000000025300000000000024341 0ustar00quentinwheel# https://coverage.readthedocs.io/en/latest/subprocess.html try: import coverage except ImportError: # pragma: no cover pass else: coverage.process_startup() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_multierror_scripts/apport_excepthook.py0000644000076500000000000000060600000000000026452 0ustar00quentinwheel# The apport_python_hook package is only installed as part of Ubuntu's system # python, and not available in venvs. So before we can import it we have to # make sure it's on sys.path. 
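# Comment-only sketch of the interaction being tested here, assuming only
# that apport_python_hook replaces sys.excepthook when installed:
#
#   import sys, apport_python_hook
#   before = sys.excepthook
#   apport_python_hook.install()
#   assert sys.excepthook is not before  # apport's hook is now in place
#
# Trio then has to cooperate with whatever hook it finds installed, rather
# than assuming sys.__excepthook__.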
import sys sys.path.append("/usr/lib/python3/dist-packages") import apport_python_hook apport_python_hook.install() import trio raise trio.MultiError([KeyError("key_error"), ValueError("value_error")]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_multierror_scripts/custom_excepthook.py0000644000076500000000000000055400000000000026461 0ustar00quentinwheelimport _common import sys def custom_excepthook(*args): print("custom running!") return sys.__excepthook__(*args) sys.excepthook = custom_excepthook # Should warn that we'll get kinda-broken tracebacks import trio # The custom excepthook should run, because Trio was polite and didn't # override it raise trio.MultiError([ValueError(), KeyError()]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_multierror_scripts/ipython_custom_exc.py0000644000076500000000000000133000000000000026632 0ustar00quentinwheelimport _common # Override the regular excepthook too -- it doesn't change anything either way # because ipython doesn't use it, but we want to make sure Trio doesn't warn # about it. import sys def custom_excepthook(*args): print("custom running!") return sys.__excepthook__(*args) sys.excepthook = custom_excepthook import IPython ip = IPython.get_ipython() # Set this to some random nonsense class SomeError(Exception): pass def custom_exc_hook(etype, value, tb, tb_offset=None): ip.showtraceback() ip.set_custom_exc((SomeError,), custom_exc_hook) import trio # The custom excepthook should run, because Trio was polite and didn't # override it raise trio.MultiError([ValueError(), KeyError()]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_multierror_scripts/simple_excepthook.py0000644000076500000000000000052600000000000026437 0ustar00quentinwheelimport _common import trio def exc1_fn(): try: raise ValueError except Exception as exc: return exc def exc2_fn(): try: raise KeyError except Exception as exc: return exc # This should be printed nicely, because Trio overrode sys.excepthook raise trio.MultiError([exc1_fn(), exc2_fn()]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_multierror_scripts/simple_excepthook_IPython.py0000644000076500000000000000025400000000000030107 0ustar00quentinwheelimport _common # To tickle the "is IPython loaded?" logic, make sure that Trio tolerates # IPython loaded but not actually in use import IPython import simple_excepthook ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_multierror_scripts/simple_excepthook_partial.py0000644000076500000000000000040400000000000030146 0ustar00quentinwheelimport functools import sys import _common # just making sure importing Trio doesn't fail if sys.excepthook doesn't have a # .__name__ attribute sys.excepthook = functools.partial(sys.excepthook) assert not hasattr(sys.excepthook, "__name__") import trio ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_parking_lot.py0000644000076500000000000001321400000000000021251 0ustar00quentinwheelimport pytest from ... 
import _core from ...testing import wait_all_tasks_blocked from .._parking_lot import ParkingLot from .tutil import check_sequence_matches async def test_parking_lot_basic(): record = [] async def waiter(i, lot): record.append("sleep {}".format(i)) await lot.park() record.append("wake {}".format(i)) async with _core.open_nursery() as nursery: lot = ParkingLot() assert not lot assert len(lot) == 0 assert lot.statistics().tasks_waiting == 0 for i in range(3): nursery.start_soon(waiter, i, lot) await wait_all_tasks_blocked() assert len(record) == 3 assert bool(lot) assert len(lot) == 3 assert lot.statistics().tasks_waiting == 3 lot.unpark_all() assert lot.statistics().tasks_waiting == 0 await wait_all_tasks_blocked() assert len(record) == 6 check_sequence_matches( record, [{"sleep 0", "sleep 1", "sleep 2"}, {"wake 0", "wake 1", "wake 2"}] ) async with _core.open_nursery() as nursery: record = [] for i in range(3): nursery.start_soon(waiter, i, lot) await wait_all_tasks_blocked() assert len(record) == 3 for i in range(3): lot.unpark() await wait_all_tasks_blocked() # 1-by-1 wakeups are strict FIFO assert record == [ "sleep 0", "sleep 1", "sleep 2", "wake 0", "wake 1", "wake 2", ] # It's legal (but a no-op) to try and unpark while there's nothing parked lot.unpark() lot.unpark(count=1) lot.unpark(count=100) # Check unpark with count async with _core.open_nursery() as nursery: record = [] for i in range(3): nursery.start_soon(waiter, i, lot) await wait_all_tasks_blocked() lot.unpark(count=2) await wait_all_tasks_blocked() check_sequence_matches( record, ["sleep 0", "sleep 1", "sleep 2", {"wake 0", "wake 1"}] ) lot.unpark_all() async def cancellable_waiter(name, lot, scopes, record): with _core.CancelScope() as scope: scopes[name] = scope record.append("sleep {}".format(name)) try: await lot.park() except _core.Cancelled: record.append("cancelled {}".format(name)) else: record.append("wake {}".format(name)) async def test_parking_lot_cancel(): record = [] scopes = {} async with _core.open_nursery() as nursery: lot = ParkingLot() nursery.start_soon(cancellable_waiter, 1, lot, scopes, record) await wait_all_tasks_blocked() nursery.start_soon(cancellable_waiter, 2, lot, scopes, record) await wait_all_tasks_blocked() nursery.start_soon(cancellable_waiter, 3, lot, scopes, record) await wait_all_tasks_blocked() assert len(record) == 3 scopes[2].cancel() await wait_all_tasks_blocked() assert len(record) == 4 lot.unpark_all() await wait_all_tasks_blocked() assert len(record) == 6 check_sequence_matches( record, ["sleep 1", "sleep 2", "sleep 3", "cancelled 2", {"wake 1", "wake 3"}] ) async def test_parking_lot_repark(): record = [] scopes = {} lot1 = ParkingLot() lot2 = ParkingLot() with pytest.raises(TypeError): lot1.repark([]) async with _core.open_nursery() as nursery: nursery.start_soon(cancellable_waiter, 1, lot1, scopes, record) await wait_all_tasks_blocked() nursery.start_soon(cancellable_waiter, 2, lot1, scopes, record) await wait_all_tasks_blocked() nursery.start_soon(cancellable_waiter, 3, lot1, scopes, record) await wait_all_tasks_blocked() assert len(record) == 3 assert len(lot1) == 3 lot1.repark(lot2) assert len(lot1) == 2 assert len(lot2) == 1 lot2.unpark_all() await wait_all_tasks_blocked() assert len(record) == 4 assert record == ["sleep 1", "sleep 2", "sleep 3", "wake 1"] lot1.repark_all(lot2) assert len(lot1) == 0 assert len(lot2) == 2 scopes[2].cancel() await wait_all_tasks_blocked() assert len(lot2) == 1 assert record == [ "sleep 1", "sleep 2", "sleep 3", "wake 1", "cancelled 2", ] 
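        # Rough shape of the repark API driven above (comment-only sketch;
        # ParkingLot is trio._core internals, not public API):
        #
        #   lot1, lot2 = ParkingLot(), ParkingLot()
        #   # ...tasks block in `await lot1.park()`...
        #   lot1.repark(lot2)        # move one parked task (FIFO) to lot2
        #   lot1.repark_all(lot2)    # move all remaining parked tasks
        #   lot2.unpark_all()        # wake everything that was moved over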
lot2.unpark_all() await wait_all_tasks_blocked() assert record == [ "sleep 1", "sleep 2", "sleep 3", "wake 1", "cancelled 2", "wake 3", ] async def test_parking_lot_repark_with_count(): record = [] scopes = {} lot1 = ParkingLot() lot2 = ParkingLot() async with _core.open_nursery() as nursery: nursery.start_soon(cancellable_waiter, 1, lot1, scopes, record) await wait_all_tasks_blocked() nursery.start_soon(cancellable_waiter, 2, lot1, scopes, record) await wait_all_tasks_blocked() nursery.start_soon(cancellable_waiter, 3, lot1, scopes, record) await wait_all_tasks_blocked() assert len(record) == 3 assert len(lot1) == 3 assert len(lot2) == 0 lot1.repark(lot2, count=2) assert len(lot1) == 1 assert len(lot2) == 2 while lot2: lot2.unpark() await wait_all_tasks_blocked() assert record == [ "sleep 1", "sleep 2", "sleep 3", "wake 1", "wake 2", ] lot1.unpark_all() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_run.py0000644000076500000000000021644300000000000017555 0ustar00quentinwheelimport contextvars import functools import platform import sys import threading import time import types import warnings import weakref from contextlib import contextmanager, ExitStack from math import inf from textwrap import dedent import gc import attr import outcome import sniffio import pytest from .tutil import ( slow, check_sequence_matches, gc_collect_harder, ignore_coroutine_never_awaited_warnings, buggy_pypy_asyncgens, restore_unraisablehook, create_asyncio_future_in_new_loop, ) from ... import _core from .._run import DEADLINE_HEAP_MIN_PRUNE_THRESHOLD from ..._threads import to_thread_run_sync from ..._timeouts import sleep, fail_after from ...testing import ( wait_all_tasks_blocked, Sequencer, assert_checkpoints, ) # slightly different from _timeouts.sleep_forever because it returns the value # its rescheduled with, which is really only useful for tests of # rescheduling... async def sleep_forever(): return await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED) def test_basic(): async def trivial(x): return x assert _core.run(trivial, 8) == 8 with pytest.raises(TypeError): # Missing an argument _core.run(trivial) with pytest.raises(TypeError): # Not an async function _core.run(lambda: None) async def trivial2(x): await _core.checkpoint() return x assert _core.run(trivial2, 1) == 1 def test_initial_task_error(): async def main(x): raise ValueError(x) with pytest.raises(ValueError) as excinfo: _core.run(main, 17) assert excinfo.value.args == (17,) def test_run_nesting(): async def inception(): async def main(): # pragma: no cover pass return _core.run(main) with pytest.raises(RuntimeError) as excinfo: _core.run(inception) assert "from inside" in str(excinfo.value) async def test_nursery_warn_use_async_with(): with pytest.raises(RuntimeError) as excinfo: on = _core.open_nursery() with on: pass # pragma: no cover excinfo.match( r"use 'async with open_nursery\(...\)', not 'with open_nursery\(...\)'" ) # avoid unawaited coro. 
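    # For contrast, the supported pattern (comment-only sketch, public API):
    #
    #   async with trio.open_nursery() as nursery:
    #       nursery.start_soon(child_fn)
    #   # the `async with` exit waits for all children before continuing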
async with on: pass async def test_nursery_main_block_error_basic(): exc = ValueError("whoops") with pytest.raises(ValueError) as excinfo: async with _core.open_nursery(): raise exc assert excinfo.value is exc async def test_child_crash_basic(): exc = ValueError("uh oh") async def erroring(): raise exc try: # nursery.__aexit__ propagates exception from child back to parent async with _core.open_nursery() as nursery: nursery.start_soon(erroring) except ValueError as e: assert e is exc async def test_basic_interleave(): async def looper(whoami, record): for i in range(3): record.append((whoami, i)) await _core.checkpoint() record = [] async with _core.open_nursery() as nursery: nursery.start_soon(looper, "a", record) nursery.start_soon(looper, "b", record) check_sequence_matches( record, [{("a", 0), ("b", 0)}, {("a", 1), ("b", 1)}, {("a", 2), ("b", 2)}] ) def test_task_crash_propagation(): looper_record = [] async def looper(): try: while True: await _core.checkpoint() except _core.Cancelled: print("looper cancelled") looper_record.append("cancelled") async def crasher(): raise ValueError("argh") async def main(): async with _core.open_nursery() as nursery: nursery.start_soon(looper) nursery.start_soon(crasher) with pytest.raises(ValueError) as excinfo: _core.run(main) assert looper_record == ["cancelled"] assert excinfo.value.args == ("argh",) def test_main_and_task_both_crash(): # If main crashes and there's also a task crash, then we get both in a # MultiError async def crasher(): raise ValueError async def main(): async with _core.open_nursery() as nursery: nursery.start_soon(crasher) raise KeyError with pytest.raises(_core.MultiError) as excinfo: _core.run(main) print(excinfo.value) assert {type(exc) for exc in excinfo.value.exceptions} == { ValueError, KeyError, } def test_two_child_crashes(): async def crasher(etype): raise etype async def main(): async with _core.open_nursery() as nursery: nursery.start_soon(crasher, KeyError) nursery.start_soon(crasher, ValueError) with pytest.raises(_core.MultiError) as excinfo: _core.run(main) assert {type(exc) for exc in excinfo.value.exceptions} == { ValueError, KeyError, } async def test_child_crash_wakes_parent(): async def crasher(): raise ValueError with pytest.raises(ValueError): async with _core.open_nursery() as nursery: nursery.start_soon(crasher) await sleep_forever() async def test_reschedule(): t1 = None t2 = None async def child1(): nonlocal t1, t2 t1 = _core.current_task() print("child1 start") x = await sleep_forever() print("child1 woke") assert x == 0 print("child1 rescheduling t2") _core.reschedule(t2, outcome.Error(ValueError())) print("child1 exit") async def child2(): nonlocal t1, t2 print("child2 start") t2 = _core.current_task() _core.reschedule(t1, outcome.Value(0)) print("child2 sleep") with pytest.raises(ValueError): await sleep_forever() print("child2 successful exit") async with _core.open_nursery() as nursery: nursery.start_soon(child1) # let t1 run and fall asleep await _core.checkpoint() nursery.start_soon(child2) async def test_current_time(): t1 = _core.current_time() # Windows clock is pretty low-resolution -- appveyor tests fail unless we # sleep for a bit here. 
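    # Comment-only sketch of the resolution query used below (stdlib only):
    #
    #   import time
    #   info = time.get_clock_info("perf_counter")
    #   time.sleep(info.resolution)  # sleep one advertised clock tick
    #
    # After sleeping at least one tick, two perf_counter readings should
    # differ even on coarse-grained Windows clocks.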
time.sleep(time.get_clock_info("perf_counter").resolution) t2 = _core.current_time() assert t1 < t2 async def test_current_time_with_mock_clock(mock_clock): start = mock_clock.current_time() assert mock_clock.current_time() == _core.current_time() assert mock_clock.current_time() == _core.current_time() mock_clock.jump(3.14) assert start + 3.14 == mock_clock.current_time() == _core.current_time() async def test_current_clock(mock_clock): assert mock_clock is _core.current_clock() async def test_current_task(): parent_task = _core.current_task() async def child(): assert _core.current_task().parent_nursery.parent_task is parent_task async with _core.open_nursery() as nursery: nursery.start_soon(child) async def test_root_task(): root = _core.current_root_task() assert root.parent_nursery is root.eventual_parent_nursery is None def test_out_of_context(): with pytest.raises(RuntimeError): _core.current_task() with pytest.raises(RuntimeError): _core.current_time() async def test_current_statistics(mock_clock): # Make sure all the early startup stuff has settled down await wait_all_tasks_blocked() # A child that sticks around to make some interesting stats: async def child(): try: await sleep_forever() except _core.Cancelled: pass stats = _core.current_statistics() print(stats) # 2 system tasks + us assert stats.tasks_living == 3 assert stats.run_sync_soon_queue_size == 0 async with _core.open_nursery() as nursery: nursery.start_soon(child) await wait_all_tasks_blocked() token = _core.current_trio_token() token.run_sync_soon(lambda: None) token.run_sync_soon(lambda: None, idempotent=True) stats = _core.current_statistics() print(stats) # 2 system tasks + us + child assert stats.tasks_living == 4 # the exact value here might shift if we change how we do accounting # (currently it only counts tasks that we already know will be # runnable on the next pass), but still useful to at least test the # difference between now and after we wake up the child: assert stats.tasks_runnable == 0 assert stats.run_sync_soon_queue_size == 2 nursery.cancel_scope.cancel() stats = _core.current_statistics() print(stats) assert stats.tasks_runnable == 1 # Give the child a chance to die and the run_sync_soon a chance to clear await _core.checkpoint() await _core.checkpoint() with _core.CancelScope(deadline=_core.current_time() + 5): stats = _core.current_statistics() print(stats) assert stats.seconds_to_next_deadline == 5 stats = _core.current_statistics() print(stats) assert stats.seconds_to_next_deadline == inf async def test_cancel_scope_repr(mock_clock): scope = _core.CancelScope() assert "unbound" in repr(scope) with scope: assert "active" in repr(scope) scope.deadline = _core.current_time() - 1 assert "deadline is 1.00 seconds ago" in repr(scope) scope.deadline = _core.current_time() + 10 assert "deadline is 10.00 seconds from now" in repr(scope) # when not in async context, can't get the current time assert "deadline" not in await to_thread_run_sync(repr, scope) scope.cancel() assert "cancelled" in repr(scope) assert "exited" in repr(scope) def test_cancel_points(): async def main1(): with _core.CancelScope() as scope: await _core.checkpoint_if_cancelled() scope.cancel() with pytest.raises(_core.Cancelled): await _core.checkpoint_if_cancelled() _core.run(main1) async def main2(): with _core.CancelScope() as scope: await _core.checkpoint() scope.cancel() with pytest.raises(_core.Cancelled): await _core.checkpoint() _core.run(main2) async def main3(): with _core.CancelScope() as scope: scope.cancel() with 
pytest.raises(_core.Cancelled): await sleep_forever() _core.run(main3) async def main4(): with _core.CancelScope() as scope: scope.cancel() await _core.cancel_shielded_checkpoint() await _core.cancel_shielded_checkpoint() with pytest.raises(_core.Cancelled): await _core.checkpoint() _core.run(main4) async def test_cancel_edge_cases(): with _core.CancelScope() as scope: # Two cancels in a row -- idempotent scope.cancel() scope.cancel() await _core.checkpoint() assert scope.cancel_called assert scope.cancelled_caught with _core.CancelScope() as scope: # Check level-triggering scope.cancel() with pytest.raises(_core.Cancelled): await sleep_forever() with pytest.raises(_core.Cancelled): await sleep_forever() async def test_cancel_scope_multierror_filtering(): async def crasher(): raise KeyError try: with _core.CancelScope() as outer: try: async with _core.open_nursery() as nursery: # Two children that get cancelled by the nursery scope nursery.start_soon(sleep_forever) # t1 nursery.start_soon(sleep_forever) # t2 nursery.cancel_scope.cancel() with _core.CancelScope(shield=True): await wait_all_tasks_blocked() # One child that gets cancelled by the outer scope nursery.start_soon(sleep_forever) # t3 outer.cancel() # And one that raises a different error nursery.start_soon(crasher) # t4 # and then our __aexit__ also receives an outer Cancelled except _core.MultiError as multi_exc: # Since the outer scope became cancelled before the # nursery block exited, all cancellations inside the # nursery block continue propagating to reach the # outer scope. assert len(multi_exc.exceptions) == 5 summary = {} for exc in multi_exc.exceptions: summary.setdefault(type(exc), 0) summary[type(exc)] += 1 assert summary == {_core.Cancelled: 4, KeyError: 1} raise except AssertionError: # pragma: no cover raise except BaseException as exc: # This is outside the outer scope, so all the Cancelled # exceptions should have been absorbed, leaving just a regular # KeyError from crasher() assert type(exc) is KeyError else: # pragma: no cover assert False async def test_precancelled_task(): # a task that gets spawned into an already-cancelled nursery should begin # execution (https://github.com/python-trio/trio/issues/41), but get a # cancelled error at its first blocking call. 
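    # Comment-only sketch of the guarantee under test (public API):
    #
    #   async with trio.open_nursery() as nursery:
    #       nursery.cancel_scope.cancel()
    #       nursery.start_soon(child)  # child still *starts*...
    #   # ...but its first `await` raises trio.Cancelled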
record = [] async def blocker(): record.append("started") await sleep_forever() async with _core.open_nursery() as nursery: nursery.cancel_scope.cancel() nursery.start_soon(blocker) assert record == ["started"] async def test_cancel_shielding(): with _core.CancelScope() as outer: with _core.CancelScope() as inner: await _core.checkpoint() outer.cancel() with pytest.raises(_core.Cancelled): await _core.checkpoint() assert inner.shield is False with pytest.raises(TypeError): inner.shield = "hello" assert inner.shield is False inner.shield = True assert inner.shield is True # shield protects us from 'outer' await _core.checkpoint() with _core.CancelScope() as innerest: innerest.cancel() # but it doesn't protect us from scope inside inner with pytest.raises(_core.Cancelled): await _core.checkpoint() await _core.checkpoint() inner.shield = False # can disable shield again with pytest.raises(_core.Cancelled): await _core.checkpoint() # re-enable shield inner.shield = True await _core.checkpoint() # shield doesn't protect us from inner itself inner.cancel() # This should now raise, but be absorbed by the inner scope await _core.checkpoint() assert inner.cancelled_caught # make sure that cancellation propagates immediately to all children async def test_cancel_inheritance(): record = set() async def leaf(ident): try: await sleep_forever() except _core.Cancelled: record.add(ident) async def worker(ident): async with _core.open_nursery() as nursery: nursery.start_soon(leaf, ident + "-l1") nursery.start_soon(leaf, ident + "-l2") async with _core.open_nursery() as nursery: nursery.start_soon(worker, "w1") nursery.start_soon(worker, "w2") nursery.cancel_scope.cancel() assert record == {"w1-l1", "w1-l2", "w2-l1", "w2-l2"} async def test_cancel_shield_abort(): with _core.CancelScope() as outer: async with _core.open_nursery() as nursery: outer.cancel() nursery.cancel_scope.shield = True # The outer scope is cancelled, but this task is protected by the # shield, so it manages to get to sleep record = [] async def sleeper(): record.append("sleeping") try: await sleep_forever() except _core.Cancelled: record.append("cancelled") nursery.start_soon(sleeper) await wait_all_tasks_blocked() assert record == ["sleeping"] # now when we unshield, it should abort the sleep. nursery.cancel_scope.shield = False # wait for the task to finish before entering the nursery # __aexit__, because __aexit__ could make it spuriously look like # this worked by cancelling the nursery scope. (When originally # written, without these last few lines, the test spuriously # passed, even though shield assignment was buggy.) with _core.CancelScope(shield=True): await wait_all_tasks_blocked() assert record == ["sleeping", "cancelled"] async def test_basic_timeout(mock_clock): start = _core.current_time() with _core.CancelScope() as scope: assert scope.deadline == inf scope.deadline = start + 1 assert scope.deadline == start + 1 assert not scope.cancel_called mock_clock.jump(2) await _core.checkpoint() await _core.checkpoint() await _core.checkpoint() assert not scope.cancel_called start = _core.current_time() with _core.CancelScope(deadline=start + 1) as scope: mock_clock.jump(2) await sleep_forever() # But then the scope swallowed the exception... 
but we can still see it # here: assert scope.cancel_called assert scope.cancelled_caught # changing deadline start = _core.current_time() with _core.CancelScope() as scope: await _core.checkpoint() scope.deadline = start + 10 await _core.checkpoint() mock_clock.jump(5) await _core.checkpoint() scope.deadline = start + 1 with pytest.raises(_core.Cancelled): await _core.checkpoint() with pytest.raises(_core.Cancelled): await _core.checkpoint() async def test_cancel_scope_nesting(): # Nested scopes: if two triggering at once, the outer one wins with _core.CancelScope() as scope1: with _core.CancelScope() as scope2: with _core.CancelScope() as scope3: scope3.cancel() scope2.cancel() await sleep_forever() assert scope3.cancel_called assert not scope3.cancelled_caught assert scope2.cancel_called assert scope2.cancelled_caught assert not scope1.cancel_called assert not scope1.cancelled_caught # shielding with _core.CancelScope() as scope1: with _core.CancelScope() as scope2: scope1.cancel() with pytest.raises(_core.Cancelled): await _core.checkpoint() with pytest.raises(_core.Cancelled): await _core.checkpoint() scope2.shield = True await _core.checkpoint() scope2.cancel() with pytest.raises(_core.Cancelled): await _core.checkpoint() # if a scope is pending, but then gets popped off the stack, then it # isn't delivered with _core.CancelScope() as scope: scope.cancel() await _core.cancel_shielded_checkpoint() await _core.checkpoint() assert not scope.cancelled_caught # Regression test for https://github.com/python-trio/trio/issues/1175 async def test_unshield_while_cancel_propagating(): with _core.CancelScope() as outer: with _core.CancelScope() as inner: outer.cancel() try: await _core.checkpoint() finally: inner.shield = True assert outer.cancelled_caught and not inner.cancelled_caught async def test_cancel_unbound(): async def sleep_until_cancelled(scope): with scope, fail_after(1): await sleep_forever() # Cancel before entry scope = _core.CancelScope() scope.cancel() async with _core.open_nursery() as nursery: nursery.start_soon(sleep_until_cancelled, scope) # Cancel after entry scope = _core.CancelScope() async with _core.open_nursery() as nursery: nursery.start_soon(sleep_until_cancelled, scope) await wait_all_tasks_blocked() scope.cancel() # Shield before entry scope = _core.CancelScope() scope.shield = True with _core.CancelScope() as outer, scope: outer.cancel() await _core.checkpoint() scope.shield = False with pytest.raises(_core.Cancelled): await _core.checkpoint() # Can't reuse with _core.CancelScope() as scope: await _core.checkpoint() scope.cancel() await _core.checkpoint() assert scope.cancel_called assert not scope.cancelled_caught with pytest.raises(RuntimeError) as exc_info: with scope: pass # pragma: no cover assert "single 'with' block" in str(exc_info.value) # Can't reenter with _core.CancelScope() as scope: with pytest.raises(RuntimeError) as exc_info: with scope: pass # pragma: no cover assert "single 'with' block" in str(exc_info.value) # Can't enter from multiple tasks simultaneously scope = _core.CancelScope() async def enter_scope(): with scope: await sleep_forever() async with _core.open_nursery() as nursery: nursery.start_soon(enter_scope, name="this one") await wait_all_tasks_blocked() with pytest.raises(RuntimeError) as exc_info: with scope: pass # pragma: no cover assert "single 'with' block" in str(exc_info.value) nursery.cancel_scope.cancel() # If not yet entered, cancel_called is true when the deadline has passed # even if cancel() hasn't been called yet scope = 
_core.CancelScope(deadline=_core.current_time() + 1) assert not scope.cancel_called scope.deadline -= 1 assert scope.cancel_called scope.deadline += 1 assert scope.cancel_called # never become un-cancelled async def test_cancel_scope_misnesting(): outer = _core.CancelScope() inner = _core.CancelScope() with ExitStack() as stack: stack.enter_context(outer) with inner: with pytest.raises(RuntimeError, match="still within its child"): stack.close() # No further error is raised when exiting the inner context # If there are other tasks inside the abandoned part of the cancel tree, # they get cancelled when the misnesting is detected async def task1(): with pytest.raises(_core.Cancelled): await sleep_forever() # Even if inside another cancel scope async def task2(): with _core.CancelScope(): with pytest.raises(_core.Cancelled): await sleep_forever() with ExitStack() as stack: stack.enter_context(_core.CancelScope()) async with _core.open_nursery() as nursery: nursery.start_soon(task1) nursery.start_soon(task2) await wait_all_tasks_blocked() with pytest.raises(RuntimeError, match="still within its child"): stack.close() # Variant that makes the child tasks direct children of the scope # that noticed the misnesting: nursery_mgr = _core.open_nursery() nursery = await nursery_mgr.__aenter__() try: nursery.start_soon(task1) nursery.start_soon(task2) nursery.start_soon(sleep_forever) await wait_all_tasks_blocked() nursery.cancel_scope.__exit__(None, None, None) finally: with pytest.raises(RuntimeError) as exc_info: await nursery_mgr.__aexit__(*sys.exc_info()) assert "which had already been exited" in str(exc_info.value) assert type(exc_info.value.__context__) is _core.MultiError assert len(exc_info.value.__context__.exceptions) == 3 cancelled_in_context = False for exc in exc_info.value.__context__.exceptions: assert isinstance(exc, RuntimeError) assert "closed before the task exited" in str(exc) cancelled_in_context |= isinstance(exc.__context__, _core.Cancelled) assert cancelled_in_context # for the sleep_forever # Trying to exit a cancel scope from an unrelated task raises an error # without affecting any state async def task3(task_status): with _core.CancelScope() as scope: task_status.started(scope) await sleep_forever() async with _core.open_nursery() as nursery: scope = await nursery.start(task3) with pytest.raises(RuntimeError, match="from unrelated"): scope.__exit__(None, None, None) scope.cancel() @slow async def test_timekeeping(): # probably a good idea to use a real clock for *one* test anyway... 
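# --- editor's aside (not part of the original test module) ---------------
# A minimal sketch, using only the public APIs already imported here, of the
# shielding rule the cancel-scope tests above exercise: a shielded scope
# hides an enclosing cancellation until the shield is dropped. Runnable via
# _core.run(example_shield_sketch); the function name is illustrative.
async def example_shield_sketch():
    with _core.CancelScope() as outer:
        outer.cancel()
        with _core.CancelScope(shield=True):
            await _core.checkpoint()  # shielded: outer's Cancelled is not delivered
        with pytest.raises(_core.Cancelled):
            await _core.checkpoint()  # shield gone: Cancelled arrives now
    # we swallowed the Cancelled ourselves, so the scope never caught it
    assert outer.cancel_called and not outer.cancelled_caught
# --------------------------------------------------------------------------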
TARGET = 1.0 # give it a few tries in case of random CI server flakiness for _ in range(4): real_start = time.perf_counter() with _core.CancelScope() as scope: scope.deadline = _core.current_time() + TARGET await sleep_forever() real_duration = time.perf_counter() - real_start accuracy = real_duration / TARGET print(accuracy) # Actual time elapsed should always be >= target time # (== is possible depending on system behavior for time.perf_counter resolution if 1.0 <= accuracy < 2: # pragma: no branch break else: # pragma: no cover assert False async def test_failed_abort(): stubborn_task = [None] stubborn_scope = [None] record = [] async def stubborn_sleeper(): stubborn_task[0] = _core.current_task() with _core.CancelScope() as scope: stubborn_scope[0] = scope record.append("sleep") x = await _core.wait_task_rescheduled(lambda _: _core.Abort.FAILED) assert x == 1 record.append("woke") try: await _core.checkpoint_if_cancelled() except _core.Cancelled: record.append("cancelled") async with _core.open_nursery() as nursery: nursery.start_soon(stubborn_sleeper) await wait_all_tasks_blocked() assert record == ["sleep"] stubborn_scope[0].cancel() await wait_all_tasks_blocked() # cancel didn't wake it up assert record == ["sleep"] # wake it up again by hand _core.reschedule(stubborn_task[0], outcome.Value(1)) assert record == ["sleep", "woke", "cancelled"] @restore_unraisablehook() def test_broken_abort(): async def main(): # These yields are here to work around an annoying warning -- we're # going to crash the main loop, and if we (by chance) do this before # the run_sync_soon task runs for the first time, then Python gives us # a spurious warning about it not being awaited. (I mean, the warning # is correct, but here we're testing our ability to deliver a # semi-meaningful error after things have gone totally pear-shaped, so # it's not relevant.) By letting the run_sync_soon_task run first, we # avoid the warning. await _core.checkpoint() await _core.checkpoint() with _core.CancelScope() as scope: scope.cancel() # None is not a legal return value here await _core.wait_task_rescheduled(lambda _: None) with pytest.raises(_core.TrioInternalError): _core.run(main) # Because this crashes, various __del__ methods print complaints on # stderr. Make sure that they get run now, so the output is attached to # this test. gc_collect_harder() @restore_unraisablehook() def test_error_in_run_loop(): # Blow stuff up real good to check we at least get a TrioInternalError async def main(): task = _core.current_task() task._schedule_points = "hello!" 
await _core.checkpoint() with ignore_coroutine_never_awaited_warnings(): with pytest.raises(_core.TrioInternalError): _core.run(main) async def test_spawn_system_task(): record = [] async def system_task(x): record.append(("x", x)) record.append(("ki", _core.currently_ki_protected())) await _core.checkpoint() _core.spawn_system_task(system_task, 1) await wait_all_tasks_blocked() assert record == [("x", 1), ("ki", True)] # intentionally make a system task crash def test_system_task_crash(): async def crasher(): raise KeyError async def main(): _core.spawn_system_task(crasher) await sleep_forever() with pytest.raises(_core.TrioInternalError): _core.run(main) def test_system_task_crash_MultiError(): async def crasher1(): raise KeyError async def crasher2(): raise ValueError async def system_task(): async with _core.open_nursery() as nursery: nursery.start_soon(crasher1) nursery.start_soon(crasher2) async def main(): _core.spawn_system_task(system_task) await sleep_forever() with pytest.raises(_core.TrioInternalError) as excinfo: _core.run(main) me = excinfo.value.__cause__ assert isinstance(me, _core.MultiError) assert len(me.exceptions) == 2 for exc in me.exceptions: assert isinstance(exc, (KeyError, ValueError)) def test_system_task_crash_plus_Cancelled(): # Set up a situation where a system task crashes with a # MultiError([Cancelled, ValueError]) async def crasher(): try: await sleep_forever() except _core.Cancelled: raise ValueError async def cancelme(): await sleep_forever() async def system_task(): async with _core.open_nursery() as nursery: nursery.start_soon(crasher) nursery.start_soon(cancelme) async def main(): _core.spawn_system_task(system_task) # then we exit, triggering a cancellation with pytest.raises(_core.TrioInternalError) as excinfo: _core.run(main) assert type(excinfo.value.__cause__) is ValueError def test_system_task_crash_KeyboardInterrupt(): async def ki(): raise KeyboardInterrupt async def main(): _core.spawn_system_task(ki) await sleep_forever() with pytest.raises(_core.TrioInternalError) as excinfo: _core.run(main) assert isinstance(excinfo.value.__cause__, KeyboardInterrupt) # This used to fail because checkpoint was a yield followed by an immediate # reschedule. So we had: # 1) this task yields # 2) this task is rescheduled # ... # 3) next iteration of event loop starts, runs timeouts # 4) this task has timed out # 5) ...but it's on the run queue, so the timeout is queued to be delivered # the next time that it's blocked. async def test_yield_briefly_checks_for_timeout(mock_clock): with _core.CancelScope(deadline=_core.current_time() + 5): await _core.checkpoint() with pytest.raises(_core.Cancelled): mock_clock.jump(10) await _core.checkpoint() # This tests that sys.exc_info is properly saved/restored as we swap between # tasks. It turns out that the interpreter automagically handles this for us # so there's no special code in Trio required to pass this test, but it's # still nice to know that it works :-). # # Update: it turns out I was right to be nervous! see the next test... 
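# --- editor's aside (hedged sketch, not from the original file) -----------
# The property test_exc_info() verifies below, in miniature: the interpreter
# saves and restores sys.exc_info() per task across checkpoints, so an
# in-flight exception context survives a task switch.
async def example_exc_info_sketch():
    try:
        raise KeyError("mine")
    except KeyError:
        await _core.checkpoint()  # other tasks may run (and raise) here...
        assert sys.exc_info()[0] is KeyError  # ...but our context survives
# --------------------------------------------------------------------------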
async def test_exc_info(): record = [] seq = Sequencer() async def child1(): with pytest.raises(ValueError) as excinfo: try: async with seq(0): pass # we don't yield until seq(2) below record.append("child1 raise") raise ValueError("child1") except ValueError: record.append("child1 sleep") async with seq(2): pass assert "child2 wake" in record record.append("child1 re-raise") raise assert excinfo.value.__context__ is None record.append("child1 success") async def child2(): with pytest.raises(KeyError) as excinfo: async with seq(1): pass # we don't yield until seq(3) below assert "child1 sleep" in record record.append("child2 wake") assert sys.exc_info() == (None, None, None) try: raise KeyError("child2") except KeyError: record.append("child2 sleep again") async with seq(3): pass assert "child1 re-raise" in record record.append("child2 re-raise") raise assert excinfo.value.__context__ is None record.append("child2 success") async with _core.open_nursery() as nursery: nursery.start_soon(child1) nursery.start_soon(child2) assert record == [ "child1 raise", "child1 sleep", "child2 wake", "child2 sleep again", "child1 re-raise", "child1 success", "child2 re-raise", "child2 success", ] # At least as of CPython 3.6, using .throw() to raise an exception inside a # coroutine/generator causes the original exc_info state to be lost, so things # like re-raising and exception chaining are broken. # # https://bugs.python.org/issue29587 async def test_exc_info_after_yield_error(): child_task = None async def child(): nonlocal child_task child_task = _core.current_task() try: raise KeyError except Exception: try: await sleep_forever() except Exception: pass raise with pytest.raises(KeyError): async with _core.open_nursery() as nursery: nursery.start_soon(child) await wait_all_tasks_blocked() _core.reschedule(child_task, outcome.Error(ValueError())) # Similar to previous test -- if the ValueError() gets sent in via 'throw', # then Python's normal implicit chaining stuff is broken. 
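# --- editor's aside (hedged sketch) ----------------------------------------
# The `outcome` objects these tests inject via _core.reschedule(), in
# miniature: Value hands back its payload on unwrap(), Error re-raises the
# wrapped exception.
def example_outcome_sketch():
    assert outcome.Value(42).unwrap() == 42
    with pytest.raises(ValueError):
        outcome.Error(ValueError("boom")).unwrap()
# --------------------------------------------------------------------------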
async def test_exception_chaining_after_yield_error(): child_task = None async def child(): nonlocal child_task child_task = _core.current_task() try: raise KeyError except Exception: await sleep_forever() with pytest.raises(ValueError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(child) await wait_all_tasks_blocked() _core.reschedule(child_task, outcome.Error(ValueError())) assert isinstance(excinfo.value.__context__, KeyError) async def test_nursery_exception_chaining_doesnt_make_context_loops(): async def crasher(): raise KeyError with pytest.raises(_core.MultiError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(crasher) raise ValueError # the MultiError should not have the KeyError or ValueError as context assert excinfo.value.__context__ is None def test_TrioToken_identity(): async def get_and_check_token(): token = _core.current_trio_token() # Two calls in the same run give the same object assert token is _core.current_trio_token() return token t1 = _core.run(get_and_check_token) t2 = _core.run(get_and_check_token) assert t1 is not t2 assert t1 != t2 assert hash(t1) != hash(t2) async def test_TrioToken_run_sync_soon_basic(): record = [] def cb(x): record.append(("cb", x)) token = _core.current_trio_token() token.run_sync_soon(cb, 1) assert not record await wait_all_tasks_blocked() assert record == [("cb", 1)] def test_TrioToken_run_sync_soon_too_late(): token = None async def main(): nonlocal token token = _core.current_trio_token() _core.run(main) assert token is not None with pytest.raises(_core.RunFinishedError): token.run_sync_soon(lambda: None) # pragma: no branch async def test_TrioToken_run_sync_soon_idempotent(): record = [] def cb(x): record.append(x) token = _core.current_trio_token() token.run_sync_soon(cb, 1) token.run_sync_soon(cb, 1, idempotent=True) token.run_sync_soon(cb, 1, idempotent=True) token.run_sync_soon(cb, 1, idempotent=True) token.run_sync_soon(cb, 2, idempotent=True) token.run_sync_soon(cb, 2, idempotent=True) await wait_all_tasks_blocked() assert len(record) == 3 assert sorted(record) == [1, 1, 2] # ordering test record = [] for _ in range(3): for i in range(100): token.run_sync_soon(cb, i, idempotent=True) await wait_all_tasks_blocked() # We guarantee FIFO assert record == list(range(100)) def test_TrioToken_run_sync_soon_idempotent_requeue(): # We guarantee that if a call has finished, queueing it again will call it # again. Due to the lack of synchronization, this effectively means that # we have to guarantee that once a call has *started*, queueing it again # will call it again. 
Also this is much easier to test :-)

    record = []

    def redo(token):
        record.append(None)
        try:
            token.run_sync_soon(redo, token, idempotent=True)
        except _core.RunFinishedError:
            pass

    async def main():
        token = _core.current_trio_token()
        token.run_sync_soon(redo, token, idempotent=True)
        await _core.checkpoint()
        await _core.checkpoint()
        await _core.checkpoint()

    _core.run(main)

    assert len(record) >= 2


def test_TrioToken_run_sync_soon_after_main_crash():
    record = []

    async def main():
        token = _core.current_trio_token()
        # After main exits but before finally cleaning up, callback processed
        # normally
        token.run_sync_soon(lambda: record.append("sync-cb"))
        raise ValueError

    with pytest.raises(ValueError):
        _core.run(main)

    assert record == ["sync-cb"]


def test_TrioToken_run_sync_soon_crashes():
    record = set()

    async def main():
        token = _core.current_trio_token()
        token.run_sync_soon(lambda: dict()["nope"])
        # check that a crashing run_sync_soon callback doesn't stop further
        # calls to run_sync_soon
        token.run_sync_soon(lambda: record.add("2nd run_sync_soon ran"))
        try:
            await sleep_forever()
        except _core.Cancelled:
            record.add("cancelled!")

    with pytest.raises(_core.TrioInternalError) as excinfo:
        _core.run(main)

    assert type(excinfo.value.__cause__) is KeyError
    assert record == {"2nd run_sync_soon ran", "cancelled!"}


async def test_TrioToken_run_sync_soon_FIFO():
    N = 100
    record = []
    token = _core.current_trio_token()
    for i in range(N):
        token.run_sync_soon(lambda j: record.append(j), i)
    await wait_all_tasks_blocked()
    assert record == list(range(N))


def test_TrioToken_run_sync_soon_starvation_resistance():
    # Even if we push callbacks in from callbacks, so that the callback queue
    # never empties out, then we still can't starve out other tasks from
    # running.
    token = None
    record = []

    def naughty_cb(i):
        nonlocal token
        try:
            token.run_sync_soon(naughty_cb, i + 1)
        except _core.RunFinishedError:
            record.append(("run finished", i))

    async def main():
        nonlocal token
        token = _core.current_trio_token()
        token.run_sync_soon(naughty_cb, 0)
        record.append("starting")
        for _ in range(20):
            await _core.checkpoint()

    _core.run(main)
    assert len(record) == 2
    assert record[0] == "starting"
    assert record[1][0] == "run finished"
    assert record[1][1] >= 19


def test_TrioToken_run_sync_soon_threaded_stress_test():
    cb_counter = 0

    def cb():
        nonlocal cb_counter
        cb_counter += 1

    def stress_thread(token):
        try:
            while True:
                token.run_sync_soon(cb)
                time.sleep(0)
        except _core.RunFinishedError:
            pass

    async def main():
        token = _core.current_trio_token()
        thread = threading.Thread(target=stress_thread, args=(token,))
        thread.start()
        for _ in range(10):
            start_value = cb_counter
            while cb_counter == start_value:
                await sleep(0.01)

    _core.run(main)
    print(cb_counter)


async def test_TrioToken_run_sync_soon_massive_queue():
    # There are edge cases in the wakeup fd code when the wakeup fd overflows,
    # so let's try to make that happen. This is also just a good stress test
    # in general. (With the current-as-of-2017-02-14 code using a socketpair
    # with minimal buffer, Linux takes 6 wakeups to fill the buffer and macOS
    # takes 1 wakeup. So 1000 is overkill if anything. Windows OTOH takes
    # ~600,000 wakeups, but has the same code paths...)
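# --- editor's aside (hedged sketch) ----------------------------------------
# The two run_sync_soon delivery modes exercised above, side by side: plain
# calls queue every time, while idempotent=True coalesces a (fn, args) pair
# that is still pending. Only coalescing is asserted here, not cross-queue
# ordering.
async def example_run_sync_soon_sketch():
    seen = []
    token = _core.current_trio_token()
    token.run_sync_soon(seen.append, "a")
    token.run_sync_soon(seen.append, "b", idempotent=True)
    token.run_sync_soon(seen.append, "b", idempotent=True)  # merged with above
    await wait_all_tasks_blocked()
    assert sorted(seen) == ["a", "b"]  # "b" delivered once despite two calls
# --------------------------------------------------------------------------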
COUNT = 1000 token = _core.current_trio_token() counter = [0] def cb(i): # This also tests FIFO ordering of callbacks assert counter[0] == i counter[0] += 1 for i in range(COUNT): token.run_sync_soon(cb, i) await wait_all_tasks_blocked() assert counter[0] == COUNT @pytest.mark.skipif(buggy_pypy_asyncgens, reason="PyPy 7.2 is buggy") def test_TrioToken_run_sync_soon_late_crash(): # Crash after system nursery is closed -- easiest way to do that is # from an async generator finalizer. record = [] saved = [] async def agen(): token = _core.current_trio_token() try: yield 1 finally: token.run_sync_soon(lambda: {}["nope"]) token.run_sync_soon(lambda: record.append("2nd ran")) async def main(): saved.append(agen()) await saved[-1].asend(None) record.append("main exiting") with pytest.raises(_core.TrioInternalError) as excinfo: _core.run(main) assert type(excinfo.value.__cause__) is KeyError assert record == ["main exiting", "2nd ran"] async def test_slow_abort_basic(): with _core.CancelScope() as scope: scope.cancel() with pytest.raises(_core.Cancelled): task = _core.current_task() token = _core.current_trio_token() def slow_abort(raise_cancel): result = outcome.capture(raise_cancel) token.run_sync_soon(_core.reschedule, task, result) return _core.Abort.FAILED await _core.wait_task_rescheduled(slow_abort) async def test_slow_abort_edge_cases(): record = [] async def slow_aborter(): task = _core.current_task() token = _core.current_trio_token() def slow_abort(raise_cancel): record.append("abort-called") result = outcome.capture(raise_cancel) token.run_sync_soon(_core.reschedule, task, result) return _core.Abort.FAILED with pytest.raises(_core.Cancelled): record.append("sleeping") await _core.wait_task_rescheduled(slow_abort) record.append("cancelled") # blocking again, this time it's okay, because we're shielded await _core.checkpoint() record.append("done") with _core.CancelScope() as outer1: with _core.CancelScope() as outer2: async with _core.open_nursery() as nursery: # So we have a task blocked on an operation that can't be # aborted immediately nursery.start_soon(slow_aborter) await wait_all_tasks_blocked() assert record == ["sleeping"] # And then we cancel it, so the abort callback gets run outer1.cancel() assert record == ["sleeping", "abort-called"] # In fact that happens twice! (This used to cause the abort # callback to be run twice) outer2.cancel() assert record == ["sleeping", "abort-called"] # But then before the abort finishes, the task gets shielded! nursery.cancel_scope.shield = True # Now we wait for the task to finish... 
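# --- editor's aside (hedged sketch) ----------------------------------------
# The Abort protocol driving the "slow abort" tests around here, reduced to
# its skeleton. Returning Abort.SUCCEEDED lets Trio deliver Cancelled and
# reschedule the task itself; Abort.FAILED (as slow_abort() does above) means
# someone must eventually call _core.reschedule(task, some_outcome) by hand.
# This sketch is never invoked: with no canceller or rescheduler, it would
# simply park forever.
async def example_park_and_abort_sketch():
    def abort_fn(raise_cancel):
        return _core.Abort.SUCCEEDED  # cancellation may wake us directly

    await _core.wait_task_rescheduled(abort_fn)  # parks until rescheduled
# --------------------------------------------------------------------------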
# The cancellation was delivered, even though it was shielded assert record == ["sleeping", "abort-called", "cancelled", "done"] async def test_task_tree_introspection(): tasks = {} nurseries = {} async def parent(task_status=_core.TASK_STATUS_IGNORED): tasks["parent"] = _core.current_task() assert tasks["parent"].child_nurseries == [] async with _core.open_nursery() as nursery1: async with _core.open_nursery() as nursery2: assert tasks["parent"].child_nurseries == [nursery1, nursery2] assert tasks["parent"].child_nurseries == [] async with _core.open_nursery() as nursery: nurseries["parent"] = nursery await nursery.start(child1) # Upward links survive after tasks/nurseries exit assert nurseries["parent"].parent_task is tasks["parent"] assert tasks["child1"].parent_nursery is nurseries["parent"] assert nurseries["child1"].parent_task is tasks["child1"] assert tasks["child2"].parent_nursery is nurseries["child1"] nursery = _core.current_task().parent_nursery # Make sure that chaining eventually gives a nursery of None (and not, # for example, an error) while nursery is not None: t = nursery.parent_task nursery = t.parent_nursery async def child2(): tasks["child2"] = _core.current_task() assert tasks["parent"].child_nurseries == [nurseries["parent"]] assert nurseries["parent"].child_tasks == frozenset({tasks["child1"]}) assert tasks["child1"].child_nurseries == [nurseries["child1"]] assert nurseries["child1"].child_tasks == frozenset({tasks["child2"]}) assert tasks["child2"].child_nurseries == [] async def child1(task_status=_core.TASK_STATUS_IGNORED): me = tasks["child1"] = _core.current_task() assert me.parent_nursery.parent_task is tasks["parent"] assert me.parent_nursery is not nurseries["parent"] assert me.eventual_parent_nursery is nurseries["parent"] task_status.started() assert me.parent_nursery is nurseries["parent"] assert me.eventual_parent_nursery is None # Wait for the start() call to return and close its internal nursery, to # ensure consistent results in child2: await _core.wait_all_tasks_blocked() async with _core.open_nursery() as nursery: nurseries["child1"] = nursery nursery.start_soon(child2) async with _core.open_nursery() as nursery: nursery.start_soon(parent) # There are no pending starts, so no one should have a non-None # eventual_parent_nursery for task in tasks.values(): assert task.eventual_parent_nursery is None async def test_nursery_closure(): async def child1(nursery): # We can add new tasks to the nursery even after entering __aexit__, # so long as there are still tasks running nursery.start_soon(child2) async def child2(): pass async with _core.open_nursery() as nursery: nursery.start_soon(child1, nursery) # But once we've left __aexit__, the nursery is closed with pytest.raises(RuntimeError): nursery.start_soon(child2) async def test_spawn_name(): async def func1(expected): task = _core.current_task() assert expected in task.name async def func2(): # pragma: no cover pass async with _core.open_nursery() as nursery: for spawn_fn in [nursery.start_soon, _core.spawn_system_task]: spawn_fn(func1, "func1") spawn_fn(func1, "func2", name=func2) spawn_fn(func1, "func3", name="func3") spawn_fn(functools.partial(func1, "func1")) spawn_fn(func1, "object", name=object()) async def test_current_effective_deadline(mock_clock): assert _core.current_effective_deadline() == inf with _core.CancelScope(deadline=5) as scope1: with _core.CancelScope(deadline=10) as scope2: assert _core.current_effective_deadline() == 5 scope2.deadline = 3 assert 
_core.current_effective_deadline() == 3 scope2.deadline = 10 assert _core.current_effective_deadline() == 5 scope2.shield = True assert _core.current_effective_deadline() == 10 scope2.shield = False assert _core.current_effective_deadline() == 5 scope1.cancel() assert _core.current_effective_deadline() == -inf scope2.shield = True assert _core.current_effective_deadline() == 10 assert _core.current_effective_deadline() == -inf assert _core.current_effective_deadline() == inf def test_nice_error_on_bad_calls_to_run_or_spawn(): def bad_call_run(*args): _core.run(*args) def bad_call_spawn(*args): async def main(): async with _core.open_nursery() as nursery: nursery.start_soon(*args) _core.run(main) for bad_call in bad_call_run, bad_call_spawn: async def f(): # pragma: no cover pass with pytest.raises(TypeError, match="expecting an async function"): bad_call(f()) async def async_gen(arg): # pragma: no cover yield arg with pytest.raises( TypeError, match="expected an async function but got an async generator" ): bad_call(async_gen, 0) def test_calling_asyncio_function_gives_nice_error(): async def child_xyzzy(): await create_asyncio_future_in_new_loop() async def misguided(): await child_xyzzy() with pytest.raises(TypeError) as excinfo: _core.run(misguided) assert "asyncio" in str(excinfo.value) # The traceback should point to the location of the foreign await assert any( # pragma: no branch entry.name == "child_xyzzy" for entry in excinfo.traceback ) async def test_asyncio_function_inside_nursery_does_not_explode(): # Regression test for https://github.com/python-trio/trio/issues/552 with pytest.raises(TypeError) as excinfo: async with _core.open_nursery() as nursery: import asyncio nursery.start_soon(sleep_forever) await create_asyncio_future_in_new_loop() assert "asyncio" in str(excinfo.value) async def test_trivial_yields(): with assert_checkpoints(): await _core.checkpoint() with assert_checkpoints(): await _core.checkpoint_if_cancelled() await _core.cancel_shielded_checkpoint() with assert_checkpoints(): async with _core.open_nursery(): pass with _core.CancelScope() as cancel_scope: cancel_scope.cancel() with pytest.raises(_core.MultiError) as excinfo: async with _core.open_nursery(): raise KeyError assert len(excinfo.value.exceptions) == 2 assert {type(e) for e in excinfo.value.exceptions} == { KeyError, _core.Cancelled, } async def test_nursery_start(autojump_clock): async def no_args(): # pragma: no cover pass # Errors in calling convention get raised immediately from start async with _core.open_nursery() as nursery: with pytest.raises(TypeError): await nursery.start(no_args) async def sleep_then_start(seconds, *, task_status=_core.TASK_STATUS_IGNORED): repr(task_status) # smoke test await sleep(seconds) task_status.started(seconds) await sleep(seconds) # Basic happy-path check: start waits for the task to call started(), then # returns, passes back the value, and the given nursery then waits for it # to exit. 
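# --- editor's aside (hedged sketch) ----------------------------------------
# The nursery.start() handshake in its simplest form, matching the behavior
# checked below: task_status.started(value) unblocks the start() call and
# hands the value back. Names are illustrative.
async def example_nursery_start_sketch():
    async def server(task_status=_core.TASK_STATUS_IGNORED):
        task_status.started("ready")  # unblocks nursery.start(...)
        await sleep_forever()

    async with _core.open_nursery() as nursery:
        value = await nursery.start(server)
        assert value == "ready"
        nursery.cancel_scope.cancel()
# --------------------------------------------------------------------------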
for seconds in [1, 2]: async with _core.open_nursery() as nursery: assert len(nursery.child_tasks) == 0 t0 = _core.current_time() assert await nursery.start(sleep_then_start, seconds) == seconds assert _core.current_time() - t0 == seconds assert len(nursery.child_tasks) == 1 assert _core.current_time() - t0 == 2 * seconds # Make sure TASK_STATUS_IGNORED works so task function can be called # directly t0 = _core.current_time() await sleep_then_start(3) assert _core.current_time() - t0 == 2 * 3 # calling started twice async def double_started(task_status=_core.TASK_STATUS_IGNORED): task_status.started() with pytest.raises(RuntimeError): task_status.started() async with _core.open_nursery() as nursery: await nursery.start(double_started) # child crashes before calling started -> error comes out of .start() async def raise_keyerror(task_status=_core.TASK_STATUS_IGNORED): raise KeyError("oops") async with _core.open_nursery() as nursery: with pytest.raises(KeyError): await nursery.start(raise_keyerror) # child exiting cleanly before calling started -> triggers a RuntimeError async def nothing(task_status=_core.TASK_STATUS_IGNORED): return async with _core.open_nursery() as nursery: with pytest.raises(RuntimeError) as excinfo: await nursery.start(nothing) assert "exited without calling" in str(excinfo.value) # if the call to start() is cancelled, then the call to started() does # nothing -- the child keeps executing under start(). The value it passed # is ignored; start() raises Cancelled. async def just_started(task_status=_core.TASK_STATUS_IGNORED): task_status.started("hi") async with _core.open_nursery() as nursery: with _core.CancelScope() as cs: cs.cancel() with pytest.raises(_core.Cancelled): await nursery.start(just_started) # and if after the no-op started(), the child crashes, the error comes out # of start() async def raise_keyerror_after_started(task_status=_core.TASK_STATUS_IGNORED): task_status.started() raise KeyError("whoopsiedaisy") async with _core.open_nursery() as nursery: with _core.CancelScope() as cs: cs.cancel() with pytest.raises(_core.MultiError) as excinfo: await nursery.start(raise_keyerror_after_started) assert {type(e) for e in excinfo.value.exceptions} == { _core.Cancelled, KeyError, } # trying to start in a closed nursery raises an error immediately async with _core.open_nursery() as closed_nursery: pass t0 = _core.current_time() with pytest.raises(RuntimeError): await closed_nursery.start(sleep_then_start, 7) assert _core.current_time() == t0 async def test_task_nursery_stack(): task = _core.current_task() assert task._child_nurseries == [] async with _core.open_nursery() as nursery1: assert task._child_nurseries == [nursery1] with pytest.raises(KeyError): async with _core.open_nursery() as nursery2: assert task._child_nurseries == [nursery1, nursery2] raise KeyError assert task._child_nurseries == [nursery1] assert task._child_nurseries == [] async def test_nursery_start_with_cancelled_nursery(): # This function isn't testing task_status, it's using task_status as a # convenient way to get a nursery that we can test spawning stuff into. async def setup_nursery(task_status=_core.TASK_STATUS_IGNORED): async with _core.open_nursery() as nursery: task_status.started(nursery) await sleep_forever() # Calls started() while children are asleep, so we can make sure # that the cancellation machinery notices and aborts when a sleeping task # is moved into a cancelled scope. 
async def sleeping_children(fn, *, task_status=_core.TASK_STATUS_IGNORED): async with _core.open_nursery() as nursery: nursery.start_soon(sleep_forever) nursery.start_soon(sleep_forever) await wait_all_tasks_blocked() fn() task_status.started() # Cancelling the setup_nursery just *before* calling started() async with _core.open_nursery() as nursery: target_nursery = await nursery.start(setup_nursery) await target_nursery.start( sleeping_children, target_nursery.cancel_scope.cancel ) # Cancelling the setup_nursery just *after* calling started() async with _core.open_nursery() as nursery: target_nursery = await nursery.start(setup_nursery) await target_nursery.start(sleeping_children, lambda: None) target_nursery.cancel_scope.cancel() async def test_nursery_start_keeps_nursery_open(autojump_clock): async def sleep_a_bit(task_status=_core.TASK_STATUS_IGNORED): await sleep(2) task_status.started() await sleep(3) async with _core.open_nursery() as nursery1: t0 = _core.current_time() async with _core.open_nursery() as nursery2: # Start the 'start' call running in the background nursery1.start_soon(nursery2.start, sleep_a_bit) # Sleep a bit await sleep(1) # Start another one. nursery1.start_soon(nursery2.start, sleep_a_bit) # Then exit this nursery. At this point, there are no tasks # present in this nursery -- the only thing keeping it open is # that the tasks will be placed into it soon, when they call # started(). assert _core.current_time() - t0 == 6 # Check that it still works even if the task that the nursery is waiting # for ends up crashing, and never actually enters the nursery. async def sleep_then_crash(task_status=_core.TASK_STATUS_IGNORED): await sleep(7) raise KeyError async def start_sleep_then_crash(nursery): with pytest.raises(KeyError): await nursery.start(sleep_then_crash) async with _core.open_nursery() as nursery1: t0 = _core.current_time() async with _core.open_nursery() as nursery2: nursery1.start_soon(start_sleep_then_crash, nursery2) await wait_all_tasks_blocked() assert _core.current_time() - t0 == 7 async def test_nursery_explicit_exception(): with pytest.raises(KeyError): async with _core.open_nursery(): raise KeyError() async def test_nursery_stop_iteration(): async def fail(): raise ValueError try: async with _core.open_nursery() as nursery: nursery.start_soon(fail) raise StopIteration except _core.MultiError as e: assert tuple(map(type, e.exceptions)) == (StopIteration, ValueError) async def test_nursery_stop_async_iteration(): class it: def __init__(self, count): self.count = count self.val = 0 async def __anext__(self): await sleep(0) val = self.val if val >= self.count: raise StopAsyncIteration self.val += 1 return val class async_zip: def __init__(self, *largs): self.nexts = [obj.__anext__ for obj in largs] async def _accumulate(self, f, items, i): items[i] = await f() def __aiter__(self): return self async def __anext__(self): nexts = self.nexts items = [None] * len(nexts) got_stop = False def handle(exc): nonlocal got_stop if isinstance(exc, StopAsyncIteration): got_stop = True return None else: # pragma: no cover return exc with _core.MultiError.catch(handle): async with _core.open_nursery() as nursery: for i, f in enumerate(nexts): nursery.start_soon(self._accumulate, f, items, i) if got_stop: raise StopAsyncIteration return items result = [] async for vals in async_zip(it(4), it(2)): result.append(vals) assert result == [[0, 0], [1, 1]] async def test_traceback_frame_removal(): async def my_child_task(): raise KeyError() try: # Trick: For now cancel/nursery 
scopes still leave a bunch of tb gunk # behind. But if there's a MultiError, they leave it on the MultiError, # which lets us get a clean look at the KeyError itself. Someday I # guess this will always be a MultiError (#611), but for now we can # force it by raising two exceptions. async with _core.open_nursery() as nursery: nursery.start_soon(my_child_task) nursery.start_soon(my_child_task) except _core.MultiError as exc: first_exc = exc.exceptions[0] assert isinstance(first_exc, KeyError) # The top frame in the exception traceback should be inside the child # task, not trio/contextvars internals. And there's only one frame # inside the child task, so this will also detect if our frame-removal # is too eager. frame = first_exc.__traceback__.tb_frame assert frame.f_code is my_child_task.__code__ def test_contextvar_support(): var = contextvars.ContextVar("test") var.set("before") assert var.get() == "before" async def inner(): task = _core.current_task() assert task.context.get(var) == "before" assert var.get() == "before" var.set("after") assert var.get() == "after" assert var in task.context assert task.context.get(var) == "after" _core.run(inner) assert var.get() == "before" async def test_contextvar_multitask(): var = contextvars.ContextVar("test", default="hmmm") async def t1(): assert var.get() == "hmmm" var.set("hmmmm") assert var.get() == "hmmmm" async def t2(): assert var.get() == "hmmmm" async with _core.open_nursery() as n: n.start_soon(t1) await wait_all_tasks_blocked() assert var.get() == "hmmm" var.set("hmmmm") n.start_soon(t2) await wait_all_tasks_blocked() def test_system_task_contexts(): cvar = contextvars.ContextVar("qwilfish") cvar.set("water") async def system_task(): assert cvar.get() == "water" async def regular_task(): assert cvar.get() == "poison" async def inner(): async with _core.open_nursery() as nursery: cvar.set("poison") nursery.start_soon(regular_task) _core.spawn_system_task(system_task) await wait_all_tasks_blocked() _core.run(inner) def test_Nursery_init(): with pytest.raises(TypeError): _core._run.Nursery(None, None) async def test_Nursery_private_init(): # context manager creation should not raise async with _core.open_nursery() as nursery: assert False == nursery._closed def test_Nursery_subclass(): with pytest.raises(TypeError): class Subclass(_core._run.Nursery): pass def test_Cancelled_init(): with pytest.raises(TypeError): raise _core.Cancelled with pytest.raises(TypeError): _core.Cancelled() # private constructor should not raise _core.Cancelled._create() def test_Cancelled_str(): cancelled = _core.Cancelled._create() assert str(cancelled) == "Cancelled" def test_Cancelled_subclass(): with pytest.raises(TypeError): class Subclass(_core.Cancelled): pass def test_CancelScope_subclass(): with pytest.raises(TypeError): class Subclass(_core.CancelScope): pass def test_sniffio_integration(): with pytest.raises(sniffio.AsyncLibraryNotFoundError): sniffio.current_async_library() async def check_inside_trio(): assert sniffio.current_async_library() == "trio" _core.run(check_inside_trio) with pytest.raises(sniffio.AsyncLibraryNotFoundError): sniffio.current_async_library() async def test_Task_custom_sleep_data(): task = _core.current_task() assert task.custom_sleep_data is None task.custom_sleep_data = 1 assert task.custom_sleep_data == 1 await _core.checkpoint() assert task.custom_sleep_data is None @types.coroutine def async_yield(value): yield value async def test_permanently_detach_coroutine_object(): task = None pdco_outcome = None async def 
detachable_coroutine(task_outcome, yield_value): await sleep(0) nonlocal task, pdco_outcome task = _core.current_task() pdco_outcome = await outcome.acapture( _core.permanently_detach_coroutine_object, task_outcome ) await async_yield(yield_value) async with _core.open_nursery() as nursery: nursery.start_soon(detachable_coroutine, outcome.Value(None), "I'm free!") # If we get here then Trio thinks the task has exited... but the coroutine # is still iterable assert pdco_outcome is None assert task.coro.send("be free!") == "I'm free!" assert pdco_outcome == outcome.Value("be free!") with pytest.raises(StopIteration): task.coro.send(None) # Check the exception paths too task = None pdco_outcome = None with pytest.raises(KeyError): async with _core.open_nursery() as nursery: nursery.start_soon(detachable_coroutine, outcome.Error(KeyError()), "uh oh") throw_in = ValueError() assert task.coro.throw(throw_in) == "uh oh" assert pdco_outcome == outcome.Error(throw_in) with pytest.raises(StopIteration): task.coro.send(None) async def bad_detach(): async with _core.open_nursery(): with pytest.raises(RuntimeError) as excinfo: await _core.permanently_detach_coroutine_object(outcome.Value(None)) assert "open nurser" in str(excinfo.value) async with _core.open_nursery() as nursery: nursery.start_soon(bad_detach) async def test_detach_and_reattach_coroutine_object(): unrelated_task = None task = None async def unrelated_coroutine(): nonlocal unrelated_task unrelated_task = _core.current_task() async def reattachable_coroutine(): await sleep(0) nonlocal task task = _core.current_task() def abort_fn(_): # pragma: no cover return _core.Abort.FAILED got = await _core.temporarily_detach_coroutine_object(abort_fn) assert got == "not trio!" await async_yield(1) await async_yield(2) with pytest.raises(RuntimeError) as excinfo: await _core.reattach_detached_coroutine_object(unrelated_task, None) assert "does not match" in str(excinfo.value) await _core.reattach_detached_coroutine_object(task, "byebye") await sleep(0) async with _core.open_nursery() as nursery: nursery.start_soon(unrelated_coroutine) nursery.start_soon(reattachable_coroutine) await wait_all_tasks_blocked() assert unrelated_task is not None assert task is not None # Okay, it's detached. Here's our coroutine runner: assert task.coro.send("not trio!") == 1 assert task.coro.send(None) == 2 assert task.coro.send(None) == "byebye" # Now it's been reattached, and we can leave the nursery async def test_detached_coroutine_cancellation(): abort_fn_called = False task = None async def reattachable_coroutine(): await sleep(0) nonlocal task task = _core.current_task() def abort_fn(_): nonlocal abort_fn_called abort_fn_called = True return _core.Abort.FAILED await _core.temporarily_detach_coroutine_object(abort_fn) await _core.reattach_detached_coroutine_object(task, None) with pytest.raises(_core.Cancelled): await sleep(0) async with _core.open_nursery() as nursery: nursery.start_soon(reattachable_coroutine) await wait_all_tasks_blocked() assert task is not None nursery.cancel_scope.cancel() task.coro.send(None) assert abort_fn_called @restore_unraisablehook() def test_async_function_implemented_in_C(): # These used to crash because we'd try to mutate the coroutine object's # cr_frame, but C functions don't have Python frames. 
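# --- editor's aside (hedged sketch) ----------------------------------------
# The detach/reattach dance from the tests above, reduced to its skeleton.
# While detached, an external coroutine runner drives the task with plain
# task.coro.send() calls; reattaching hands control back to Trio. Never
# invoked here -- it only works when such a runner exists.
async def example_detach_sketch():
    task = _core.current_task()

    def abort_fn(_):  # pragma: no cover - no cancellation in this sketch
        return _core.Abort.FAILED

    sent = await _core.temporarily_detach_coroutine_object(abort_fn)
    # ...at this point an outside coroutine runner owns us; `sent` is
    # whatever that runner passed to task.coro.send()...
    await _core.reattach_detached_coroutine_object(task, "handoff value")
# --------------------------------------------------------------------------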
async def agen_fn(record): assert not _core.currently_ki_protected() record.append("the generator ran") yield run_record = [] agen = agen_fn(run_record) _core.run(agen.__anext__) assert run_record == ["the generator ran"] async def main(): start_soon_record = [] agen = agen_fn(start_soon_record) async with _core.open_nursery() as nursery: nursery.start_soon(agen.__anext__) assert start_soon_record == ["the generator ran"] _core.run(main) async def test_very_deep_cancel_scope_nesting(): # This used to crash with a RecursionError in CancelStatus.recalculate with ExitStack() as exit_stack: outermost_scope = _core.CancelScope() exit_stack.enter_context(outermost_scope) for _ in range(5000): exit_stack.enter_context(_core.CancelScope()) outermost_scope.cancel() async def test_cancel_scope_deadline_duplicates(): # This exercises an assert in Deadlines._prune, by intentionally creating # duplicate entries in the deadline heap. now = _core.current_time() with _core.CancelScope() as cscope: for _ in range(DEADLINE_HEAP_MIN_PRUNE_THRESHOLD * 2): cscope.deadline = now + 9998 cscope.deadline = now + 9999 await sleep(0.01) @pytest.mark.skipif( sys.implementation.name != "cpython", reason="Only makes sense with refcounting GC" ) async def test_simple_cancel_scope_usage_doesnt_create_cyclic_garbage(): # https://github.com/python-trio/trio/issues/1770 gc.collect() async def do_a_cancel(): with _core.CancelScope() as cscope: cscope.cancel() await sleep_forever() old_flags = gc.get_debug() try: gc.collect() gc.set_debug(gc.DEBUG_SAVEALL) await do_a_cancel() await do_a_cancel() async with _core.open_nursery() as nursery: nursery.start_soon(do_a_cancel) gc.collect() assert not gc.garbage finally: gc.set_debug(old_flags) gc.garbage.clear() @pytest.mark.skipif( sys.implementation.name != "cpython", reason="Only makes sense with refcounting GC" ) async def test_nursery_cancel_doesnt_create_cyclic_garbage(): # https://github.com/python-trio/trio/issues/1770#issuecomment-730229423 gc.collect() old_flags = gc.get_debug() try: for i in range(3): async with _core.open_nursery() as nursery: gc.collect() gc.set_debug(gc.DEBUG_LEAK) nursery.cancel_scope.cancel() gc.collect() gc.set_debug(0) assert not gc.garbage finally: gc.set_debug(old_flags) gc.garbage.clear() @pytest.mark.skipif( sys.implementation.name != "cpython", reason="Only makes sense with refcounting GC" ) async def test_locals_destroyed_promptly_on_cancel(): destroyed = False def finalizer(): nonlocal destroyed destroyed = True class A: pass async def task(): a = A() weakref.finalize(a, finalizer) await _core.checkpoint() async with _core.open_nursery() as nursery: nursery.start_soon(task) nursery.cancel_scope.cancel() assert destroyed ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_thread_cache.py0000644000076500000000000001163300000000000021335 0ustar00quentinwheelimport pytest import threading from queue import Queue import time import sys from contextlib import contextmanager from .tutil import slow, gc_collect_harder, disable_threading_excepthook from .. 
import _thread_cache from .._thread_cache import start_thread_soon, ThreadCache def test_thread_cache_basics(): q = Queue() def fn(): raise RuntimeError("hi") def deliver(outcome): q.put(outcome) start_thread_soon(fn, deliver) outcome = q.get() with pytest.raises(RuntimeError, match="hi"): outcome.unwrap() def test_thread_cache_deref(): res = [False] class del_me: def __call__(self): return 42 def __del__(self): res[0] = True q = Queue() def deliver(outcome): q.put(outcome) start_thread_soon(del_me(), deliver) outcome = q.get() assert outcome.unwrap() == 42 gc_collect_harder() assert res[0] @slow def test_spawning_new_thread_from_deliver_reuses_starting_thread(): # We know that no-one else is using the thread cache, so if we keep # submitting new jobs the instant the previous one is finished, we should # keep getting the same thread over and over. This tests both that the # thread cache is LIFO, and that threads can be assigned new work *before* # deliver exits. # Make sure there are a few threads running, so if we weren't LIFO then we # could grab the wrong one. q = Queue() COUNT = 5 for _ in range(COUNT): start_thread_soon(lambda: time.sleep(1), lambda result: q.put(result)) for _ in range(COUNT): q.get().unwrap() seen_threads = set() done = threading.Event() def deliver(n, _): print(n) seen_threads.add(threading.current_thread()) if n == 0: done.set() else: start_thread_soon(lambda: None, lambda _: deliver(n - 1, _)) start_thread_soon(lambda: None, lambda _: deliver(5, _)) done.wait() assert len(seen_threads) == 1 @slow def test_idle_threads_exit(monkeypatch): # Temporarily set the idle timeout to something tiny, to speed up the # test. (But non-zero, so that the worker loop will at least yield the # CPU.) monkeypatch.setattr(_thread_cache, "IDLE_TIMEOUT", 0.0001) q = Queue() start_thread_soon(lambda: None, lambda _: q.put(threading.current_thread())) seen_thread = q.get() # Since the idle timeout is 0, after sleeping for 1 second, the thread # should have exited time.sleep(1) assert not seen_thread.is_alive() @contextmanager def _join_started_threads(): before = frozenset(threading.enumerate()) try: yield finally: for thread in threading.enumerate(): if thread not in before: thread.join() def test_race_between_idle_exit_and_job_assignment(monkeypatch): # This is a lock where the first few times you try to acquire it with a # timeout, it waits until the lock is available and then pretends to time # out. Using this in our thread cache implementation causes the following # sequence: # # 1. start_thread_soon grabs the worker thread, assigns it a job, and # releases its lock. # 2. The worker thread wakes up (because the lock has been released), but # the JankyLock lies to it and tells it that the lock timed out. So the # worker thread tries to exit. # 3. The worker thread checks for the race between exiting and being # assigned a job, and discovers that it *is* in the process of being # assigned a job, so it loops around and tries to acquire the lock # again. # 4. Eventually the JankyLock admits that the lock is available, and # everything proceeds as normal. 
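# --- editor's aside (hedged sketch) ----------------------------------------
# The start_thread_soon() contract the tests in this file rely on, in
# miniature: `deliver` receives an outcome object capturing fn()'s return
# value or exception. Mirrors test_thread_cache_basics above.
def example_start_thread_soon_sketch():
    q = Queue()
    start_thread_soon(lambda: 6 * 7, q.put)  # deliver=q.put gets an outcome
    assert q.get().unwrap() == 42
# --------------------------------------------------------------------------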
    class JankyLock:
        def __init__(self):
            self._lock = threading.Lock()
            self._counter = 3

        def acquire(self, timeout=None):
            self._lock.acquire()
            if timeout is None:
                return True
            else:
                if self._counter > 0:
                    self._counter -= 1
                    self._lock.release()
                    return False
                return True

        def release(self):
            self._lock.release()

    monkeypatch.setattr(_thread_cache, "Lock", JankyLock)
    with disable_threading_excepthook(), _join_started_threads():
        tc = ThreadCache()
        done = threading.Event()
        tc.start_thread_soon(lambda: None, lambda _: done.set())
        done.wait()

        # Let's kill the thread we started, so it doesn't hang around until the
        # test suite finishes. Doesn't really do any harm, but it can be confusing
        # to see it in debug output. This is hacky, and leaves our ThreadCache
        # object in an inconsistent state... but it doesn't matter, because we're
        # not going to use it again anyway.
        tc.start_thread_soon(lambda: None, lambda _: sys.exit())


# ==== trio-0.19.0/trio/_core/tests/test_tutil.py ====

import pytest

from .tutil import check_sequence_matches


def test_check_sequence_matches():
    check_sequence_matches([1, 2, 3], [1, 2, 3])
    with pytest.raises(AssertionError):
        check_sequence_matches([1, 3, 2], [1, 2, 3])
    check_sequence_matches([1, 2, 3, 4], [1, {2, 3}, 4])
    check_sequence_matches([1, 3, 2, 4], [1, {2, 3}, 4])
    with pytest.raises(AssertionError):
        check_sequence_matches([1, 2, 4, 3], [1, {2, 3}, 4])


# ==== trio-0.19.0/trio/_core/tests/test_unbounded_queue.py ====

import itertools

import pytest

from ...
import _core from ...testing import assert_checkpoints, wait_all_tasks_blocked pytestmark = pytest.mark.filterwarnings( "ignore:.*UnboundedQueue:trio.TrioDeprecationWarning" ) async def test_UnboundedQueue_basic(): q = _core.UnboundedQueue() q.put_nowait("hi") assert await q.get_batch() == ["hi"] with pytest.raises(_core.WouldBlock): q.get_batch_nowait() q.put_nowait(1) q.put_nowait(2) q.put_nowait(3) assert q.get_batch_nowait() == [1, 2, 3] assert q.empty() assert q.qsize() == 0 q.put_nowait(None) assert not q.empty() assert q.qsize() == 1 stats = q.statistics() assert stats.qsize == 1 assert stats.tasks_waiting == 0 # smoke test repr(q) async def test_UnboundedQueue_blocking(): record = [] q = _core.UnboundedQueue() async def get_batch_consumer(): while True: batch = await q.get_batch() assert batch record.append(batch) async def aiter_consumer(): async for batch in q: assert batch record.append(batch) for consumer in (get_batch_consumer, aiter_consumer): record.clear() async with _core.open_nursery() as nursery: nursery.start_soon(consumer) await _core.wait_all_tasks_blocked() stats = q.statistics() assert stats.qsize == 0 assert stats.tasks_waiting == 1 q.put_nowait(10) q.put_nowait(11) await _core.wait_all_tasks_blocked() q.put_nowait(12) await _core.wait_all_tasks_blocked() assert record == [[10, 11], [12]] nursery.cancel_scope.cancel() async def test_UnboundedQueue_fairness(): q = _core.UnboundedQueue() # If there's no-one else around, we can put stuff in and take it out # again, no problem q.put_nowait(1) q.put_nowait(2) assert q.get_batch_nowait() == [1, 2] result = None async def get_batch(q): nonlocal result result = await q.get_batch() # But if someone else is waiting to read, then they get dibs async with _core.open_nursery() as nursery: nursery.start_soon(get_batch, q) await _core.wait_all_tasks_blocked() q.put_nowait(3) q.put_nowait(4) with pytest.raises(_core.WouldBlock): q.get_batch_nowait() assert result == [3, 4] # If two tasks are trying to read, they alternate record = [] async def reader(name): while True: record.append((name, await q.get_batch())) async with _core.open_nursery() as nursery: nursery.start_soon(reader, "a") await _core.wait_all_tasks_blocked() nursery.start_soon(reader, "b") await _core.wait_all_tasks_blocked() for i in range(20): q.put_nowait(i) await _core.wait_all_tasks_blocked() nursery.cancel_scope.cancel() assert record == list(zip(itertools.cycle("ab"), [[i] for i in range(20)])) async def test_UnboundedQueue_trivial_yields(): q = _core.UnboundedQueue() q.put_nowait(None) with assert_checkpoints(): await q.get_batch() q.put_nowait(None) with assert_checkpoints(): async for _ in q: # noqa # pragma: no branch break async def test_UnboundedQueue_no_spurious_wakeups(): # If we have two tasks waiting, and put two items into the queue... 
then # only one task wakes up record = [] async def getter(q, i): got = await q.get_batch() record.append((i, got)) async with _core.open_nursery() as nursery: q = _core.UnboundedQueue() nursery.start_soon(getter, q, 1) await wait_all_tasks_blocked() nursery.start_soon(getter, q, 2) await wait_all_tasks_blocked() for i in range(10): q.put_nowait(i) await wait_all_tasks_blocked() assert record == [(1, list(range(10)))] nursery.cancel_scope.cancel() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_util.py0000644000076500000000000000001600000000000017711 0ustar00quentinwheelimport pytest ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/test_windows.py0000644000076500000000000001717700000000000020446 0ustar00quentinwheelimport os import tempfile from contextlib import contextmanager import pytest on_windows = os.name == "nt" # Mark all the tests in this file as being windows-only pytestmark = pytest.mark.skipif(not on_windows, reason="windows only") from .tutil import slow, gc_collect_harder, restore_unraisablehook from ... import _core, sleep, move_on_after from ...testing import wait_all_tasks_blocked if on_windows: from .._windows_cffi import ( ffi, kernel32, INVALID_HANDLE_VALUE, raise_winerror, FileFlags, ) # The undocumented API that this is testing should be changed to stop using # UnboundedQueue (or just removed until we have time to redo it), but until # then we filter out the warning. @pytest.mark.filterwarnings("ignore:.*UnboundedQueue:trio.TrioDeprecationWarning") async def test_completion_key_listen(): async def post(key): iocp = ffi.cast("HANDLE", _core.current_iocp()) for i in range(10): print("post", i) if i % 3 == 0: await _core.checkpoint() success = kernel32.PostQueuedCompletionStatus(iocp, i, key, ffi.NULL) assert success with _core.monitor_completion_key() as (key, queue): async with _core.open_nursery() as nursery: nursery.start_soon(post, key) i = 0 print("loop") async for batch in queue: # pragma: no branch print("got some", batch) for info in batch: assert info.lpOverlapped == 0 assert info.dwNumberOfBytesTransferred == i i += 1 if i == 10: break print("end loop") async def test_readinto_overlapped(): data = b"1" * 1024 + b"2" * 1024 + b"3" * 1024 + b"4" * 1024 buffer = bytearray(len(data)) with tempfile.TemporaryDirectory() as tdir: tfile = os.path.join(tdir, "numbers.txt") with open(tfile, "wb") as fp: fp.write(data) fp.flush() rawname = tfile.encode("utf-16le") + b"\0\0" rawname_buf = ffi.from_buffer(rawname) handle = kernel32.CreateFileW( ffi.cast("LPCWSTR", rawname_buf), FileFlags.GENERIC_READ, FileFlags.FILE_SHARE_READ, ffi.NULL, # no security attributes FileFlags.OPEN_EXISTING, FileFlags.FILE_FLAG_OVERLAPPED, ffi.NULL, # no template file ) if handle == INVALID_HANDLE_VALUE: # pragma: no cover raise_winerror() try: with memoryview(buffer) as buffer_view: async def read_region(start, end): await _core.readinto_overlapped( handle, buffer_view[start:end], start ) _core.register_with_iocp(handle) async with _core.open_nursery() as nursery: for start in range(0, 4096, 512): nursery.start_soon(read_region, start, start + 512) assert buffer == data with pytest.raises(BufferError): await _core.readinto_overlapped(handle, b"immutable") finally: kernel32.CloseHandle(handle) @contextmanager def pipe_with_overlapped_read(): from asyncio.windows_utils import pipe import msvcrt read_handle, write_handle = 
pipe(overlapped=(True, False)) try: write_fd = msvcrt.open_osfhandle(write_handle, 0) yield os.fdopen(write_fd, "wb", closefd=False), read_handle finally: kernel32.CloseHandle(ffi.cast("HANDLE", read_handle)) kernel32.CloseHandle(ffi.cast("HANDLE", write_handle)) @restore_unraisablehook() def test_forgot_to_register_with_iocp(): with pipe_with_overlapped_read() as (write_fp, read_handle): with write_fp: write_fp.write(b"test\n") left_run_yet = False async def main(): target = bytearray(1) try: async with _core.open_nursery() as nursery: nursery.start_soon( _core.readinto_overlapped, read_handle, target, name="xyz" ) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() finally: # Run loop is exited without unwinding running tasks, so # we don't get here until the main() coroutine is GC'ed assert left_run_yet with pytest.raises(_core.TrioInternalError) as exc_info: _core.run(main) left_run_yet = True assert "Failed to cancel overlapped I/O in xyz " in str(exc_info.value) assert "forget to call register_with_iocp()?" in str(exc_info.value) # Make sure the Nursery.__del__ assertion about dangling children # gets put with the correct test del exc_info gc_collect_harder() @slow async def test_too_late_to_cancel(): import time with pipe_with_overlapped_read() as (write_fp, read_handle): _core.register_with_iocp(read_handle) target = bytearray(6) async with _core.open_nursery() as nursery: # Start an async read in the background nursery.start_soon(_core.readinto_overlapped, read_handle, target) await wait_all_tasks_blocked() # Synchronous write to the other end of the pipe with write_fp: write_fp.write(b"test1\ntest2\n") # Note: not trio.sleep! We're making sure the OS level # ReadFile completes, before Trio has a chance to execute # another checkpoint and notice it completed. time.sleep(1) nursery.cancel_scope.cancel() assert target[:6] == b"test1\n" # Do another I/O to make sure we've actually processed the # fallback completion that was posted when CancelIoEx failed. assert await _core.readinto_overlapped(read_handle, target) == 6 assert target[:6] == b"test2\n" def test_lsp_that_hooks_select_gives_good_error(monkeypatch): from .._windows_cffi import WSAIoctls, _handle from .. import _io_windows def patched_get_underlying(sock, *, which=WSAIoctls.SIO_BASE_HANDLE): if hasattr(sock, "fileno"): # pragma: no branch sock = sock.fileno() if which == WSAIoctls.SIO_BSP_HANDLE_SELECT: return _handle(sock + 1) else: return _handle(sock) monkeypatch.setattr(_io_windows, "_get_underlying_socket", patched_get_underlying) with pytest.raises( RuntimeError, match="SIO_BASE_HANDLE and SIO_BSP_HANDLE_SELECT differ" ): _core.run(sleep, 0) def test_lsp_that_completely_hides_base_socket_gives_good_error(monkeypatch): # This tests behavior with an LSP that fails SIO_BASE_HANDLE and returns # self for SIO_BSP_HANDLE_SELECT (like Komodia), but also returns # self for SIO_BSP_HANDLE_POLL. No known LSP does this, but we want to # make sure we get an error rather than an infinite loop. from .._windows_cffi import WSAIoctls, _handle from .. 
import _io_windows def patched_get_underlying(sock, *, which=WSAIoctls.SIO_BASE_HANDLE): if hasattr(sock, "fileno"): # pragma: no branch sock = sock.fileno() if which == WSAIoctls.SIO_BASE_HANDLE: raise OSError("nope") else: return _handle(sock) monkeypatch.setattr(_io_windows, "_get_underlying_socket", patched_get_underlying) with pytest.raises( RuntimeError, match="SIO_BASE_HANDLE failed and SIO_BSP_HANDLE_POLL didn't return a diff", ): _core.run(sleep, 0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_core/tests/tutil.py0000644000076500000000000001106300000000000017042 0ustar00quentinwheel# Utilities for testing import asyncio import socket as stdlib_socket import threading import os import sys from typing import TYPE_CHECKING import pytest import warnings from contextlib import contextmanager, closing import gc # See trio/tests/conftest.py for the other half of this from trio.tests.conftest import RUN_SLOW slow = pytest.mark.skipif(not RUN_SLOW, reason="use --run-slow to run slow tests") # PyPy 7.2 was released with a bug that just never called the async # generator 'firstiter' hook at all. This impacts tests of end-of-run # finalization (nothing gets added to runner.asyncgens) and tests of # "foreign" async generator behavior (since the firstiter hook is what # marks the asyncgen as foreign), but most tests of GC-mediated # finalization still work. buggy_pypy_asyncgens = ( not TYPE_CHECKING and sys.implementation.name == "pypy" and sys.pypy_version_info < (7, 3) ) try: s = stdlib_socket.socket(stdlib_socket.AF_INET6, stdlib_socket.SOCK_STREAM, 0) except OSError: # pragma: no cover # Some systems don't even support creating an IPv6 socket, let alone # binding it. (ex: Linux with 'ipv6.disable=1' in the kernel command line) # We don't have any of those in our CI, and there's nothing that gets # tested _only_ if can_create_ipv6 = False, so we'll just no-cover this. can_create_ipv6 = False can_bind_ipv6 = False else: can_create_ipv6 = True with s: try: s.bind(("::1", 0)) except OSError: can_bind_ipv6 = False else: can_bind_ipv6 = True creates_ipv6 = pytest.mark.skipif(not can_create_ipv6, reason="need IPv6") binds_ipv6 = pytest.mark.skipif(not can_bind_ipv6, reason="need IPv6") def gc_collect_harder(): # In the test suite we sometimes want to call gc.collect() to make sure # that any objects with noisy __del__ methods (e.g. unawaited coroutines) # get collected before we continue, so their noise doesn't leak into # unrelated tests. # # On PyPy, coroutine objects (for example) can survive at least 1 round of # garbage collection, because executing their __del__ method to print the # warning can cause them to be resurrected. So we call collect a few times # to make sure. for _ in range(4): gc.collect() # Some of our tests need to leak coroutines, and thus trigger the # "RuntimeWarning: coroutine '...' was never awaited" message. This context # manager should be used anywhere this happens to hide those messages, because # when expected they're clutter. @contextmanager def ignore_coroutine_never_awaited_warnings(): with warnings.catch_warnings(): warnings.filterwarnings("ignore", message="coroutine '.*' was never awaited") try: yield finally: # Make sure to trigger any coroutine __del__ methods now, before # we leave the context manager. 
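# --- editor's aside (hedged usage note) -------------------------------------
# Typical use of the context manager above:
#
#     with ignore_coroutine_never_awaited_warnings():
#         make_coroutine_and_drop_it()   # illustrative name
#
# The deliberate "coroutine ... was never awaited" RuntimeWarning is
# suppressed, and the trailing gc_collect_harder() flushes any leftover
# coroutine __del__ noise before the next test runs.
# ----------------------------------------------------------------------------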
gc_collect_harder() def _noop(*args, **kwargs): pass if sys.version_info >= (3, 8): @contextmanager def restore_unraisablehook(): sys.unraisablehook, prev = sys.__unraisablehook__, sys.unraisablehook try: yield finally: sys.unraisablehook = prev @contextmanager def disable_threading_excepthook(): if sys.version_info >= (3, 10): threading.excepthook, prev = threading.__excepthook__, threading.excepthook else: threading.excepthook, prev = _noop, threading.excepthook try: yield finally: threading.excepthook = prev else: @contextmanager def restore_unraisablehook(): # pragma: no cover yield @contextmanager def disable_threading_excepthook(): # pragma: no cover yield # template is like: # [1, {2.1, 2.2}, 3] -> matches [1, 2.1, 2.2, 3] or [1, 2.2, 2.1, 3] def check_sequence_matches(seq, template): i = 0 for pattern in template: if not isinstance(pattern, set): pattern = {pattern} got = set(seq[i : i + len(pattern)]) assert got == pattern i += len(got) # https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=246350 skip_if_fbsd_pipes_broken = pytest.mark.skipif( sys.platform != "win32" # prevent mypy from complaining about missing uname and hasattr(os, "uname") and os.uname().sysname == "FreeBSD" and os.uname().release[:4] < "12.2", reason="hangs on FreeBSD 12.1 and earlier, due to FreeBSD bug #246350", ) def create_asyncio_future_in_new_loop(): with closing(asyncio.new_event_loop()) as loop: return loop.create_future() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_deprecate.py0000644000076500000000000001112400000000000015541 0ustar00quentinwheelimport sys from functools import wraps from types import ModuleType import warnings import attr # We want our warnings to be visible by default (at least for now), but we # also want it to be possible to override that using the -W switch. AFAICT # this means we cannot inherit from DeprecationWarning, because the only way # to make it visible by default then would be to add our own filter at import # time, but that would override -W switches... class TrioDeprecationWarning(FutureWarning): """Warning emitted if you use deprecated Trio functionality. As a young project, Trio is currently quite aggressive about deprecating and/or removing functionality that we realize was a bad idea. If you use Trio, you should subscribe to `issue #1 `__ to get information about upcoming deprecations and other backwards compatibility breaking changes. Despite the name, this class currently inherits from :class:`FutureWarning`, not :class:`DeprecationWarning`, because while we're in young-and-aggressive mode we want these warnings to be visible by default. You can hide them by installing a filter or with the ``-W`` switch: see the :mod:`warnings` documentation for details. """ def _url_for_issue(issue): return "https://github.com/python-trio/trio/issues/{}".format(issue) def _stringify(thing): if hasattr(thing, "__module__") and hasattr(thing, "__qualname__"): return "{}.{}".format(thing.__module__, thing.__qualname__) return str(thing) def warn_deprecated(thing, version, *, issue, instead, stacklevel=2): stacklevel += 1 msg = "{} is deprecated since Trio {}".format(_stringify(thing), version) if instead is None: msg += " with no replacement" else: msg += "; use {} instead".format(_stringify(instead)) if issue is not None: msg += " ({})".format(_url_for_issue(issue)) warnings.warn(TrioDeprecationWarning(msg), stacklevel=stacklevel) # @deprecated("0.2.0", issue=..., instead=...) # def ... 
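#
# For example (an illustrative sketch -- ``old_fn``/``new_fn`` and the
# version/issue numbers here are made up), decorating a function like:
#
#     @deprecated("0.2.0", issue=1, instead=new_fn)
#     def old_fn(*args):
#         ...
#
# makes every call to old_fn() emit a TrioDeprecationWarning whose message,
# per warn_deprecated above, reads along the lines of:
#
#     trio.old_fn is deprecated since Trio 0.2.0; use trio.new_fn instead
#     (https://github.com/python-trio/trio/issues/1)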
def deprecated(version, *, thing=None, issue, instead): def do_wrap(fn): nonlocal thing @wraps(fn) def wrapper(*args, **kwargs): warn_deprecated(thing, version, instead=instead, issue=issue) return fn(*args, **kwargs) # If our __module__ or __qualname__ get modified, we want to pick up # on that, so we read them off the wrapper object instead of the (now # hidden) fn object if thing is None: thing = wrapper if wrapper.__doc__ is not None: doc = wrapper.__doc__ doc = doc.rstrip() doc += "\n\n" doc += ".. deprecated:: {}\n".format(version) if instead is not None: doc += " Use {} instead.\n".format(_stringify(instead)) if issue is not None: doc += " For details, see `issue #{} <{}>`__.\n".format( issue, _url_for_issue(issue) ) doc += "\n" wrapper.__doc__ = doc return wrapper return do_wrap def deprecated_alias(old_qualname, new_fn, version, *, issue): @deprecated(version, issue=issue, instead=new_fn) @wraps(new_fn, assigned=("__module__", "__annotations__")) def wrapper(*args, **kwargs): "Deprecated alias." return new_fn(*args, **kwargs) wrapper.__qualname__ = old_qualname wrapper.__name__ = old_qualname.rpartition(".")[-1] return wrapper @attr.s(frozen=True) class DeprecatedAttribute: _not_set = object() value = attr.ib() version = attr.ib() issue = attr.ib() instead = attr.ib(default=_not_set) class _ModuleWithDeprecations(ModuleType): def __getattr__(self, name): if name in self.__deprecated_attributes__: info = self.__deprecated_attributes__[name] instead = info.instead if instead is DeprecatedAttribute._not_set: instead = info.value thing = "{}.{}".format(self.__name__, name) warn_deprecated(thing, info.version, issue=info.issue, instead=instead) return info.value msg = "module '{}' has no attribute '{}'" raise AttributeError(msg.format(self.__name__, name)) def enable_attribute_deprecations(module_name): module = sys.modules[module_name] module.__class__ = _ModuleWithDeprecations # Make sure that this is always defined so that # _ModuleWithDeprecations.__getattr__ can access it without jumping # through hoops or risking infinite recursion. module.__deprecated_attributes__ = {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_file_io.py0000644000076500000000000001071200000000000015215 0ustar00quentinwheelfrom functools import partial import io from .abc import AsyncResource from ._util import async_wraps import trio # This list is also in the docs, make sure to keep them in sync _FILE_SYNC_ATTRS = { "closed", "encoding", "errors", "fileno", "isatty", "newlines", "readable", "seekable", "writable", # not defined in *IOBase: "buffer", "raw", "line_buffering", "closefd", "name", "mode", "getvalue", "getbuffer", } # This list is also in the docs, make sure to keep them in sync _FILE_ASYNC_METHODS = { "flush", "read", "read1", "readall", "readinto", "readline", "readlines", "seek", "tell", "truncate", "write", "writelines", # not defined in *IOBase: "readinto1", "peek", } class AsyncIOWrapper(AsyncResource): """A generic :class:`~io.IOBase` wrapper that implements the :term:`asynchronous file object` interface. Wrapped methods that could block are executed in :meth:`trio.to_thread.run_sync`. All properties and methods defined in in :mod:`~io` are exposed by this wrapper, if they exist in the wrapped file object. 
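    For example (an illustrative sketch, using the :func:`trio.wrap_file`
    helper defined later in this module to construct the wrapper)::

        import io

        async_file = trio.wrap_file(io.BytesIO(b"hello"))
        assert async_file.seekable()                # sync attr, forwarded
        assert await async_file.read() == b"hello"  # async method, run in a thread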
""" def __init__(self, file): self._wrapped = file @property def wrapped(self): """object: A reference to the wrapped file object""" return self._wrapped def __getattr__(self, name): if name in _FILE_SYNC_ATTRS: return getattr(self._wrapped, name) if name in _FILE_ASYNC_METHODS: meth = getattr(self._wrapped, name) @async_wraps(self.__class__, self._wrapped.__class__, name) async def wrapper(*args, **kwargs): func = partial(meth, *args, **kwargs) return await trio.to_thread.run_sync(func) # cache the generated method setattr(self, name, wrapper) return wrapper raise AttributeError(name) def __dir__(self): attrs = set(super().__dir__()) attrs.update(a for a in _FILE_SYNC_ATTRS if hasattr(self.wrapped, a)) attrs.update(a for a in _FILE_ASYNC_METHODS if hasattr(self.wrapped, a)) return attrs def __aiter__(self): return self async def __anext__(self): line = await self.readline() if line: return line else: raise StopAsyncIteration async def detach(self): """Like :meth:`io.BufferedIOBase.detach`, but async. This also re-wraps the result in a new :term:`asynchronous file object` wrapper. """ raw = await trio.to_thread.run_sync(self._wrapped.detach) return wrap_file(raw) async def aclose(self): """Like :meth:`io.IOBase.close`, but async. This is also shielded from cancellation; if a cancellation scope is cancelled, the wrapped file object will still be safely closed. """ # ensure the underling file is closed during cancellation with trio.CancelScope(shield=True): await trio.to_thread.run_sync(self._wrapped.close) await trio.lowlevel.checkpoint_if_cancelled() async def open_file( file, mode="r", buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None, ): """Asynchronous version of :func:`io.open`. Returns: An :term:`asynchronous file object` Example:: async with await trio.open_file(filename) as f: async for line in f: pass assert f.closed See also: :func:`trio.Path.open` """ _file = wrap_file( await trio.to_thread.run_sync( io.open, file, mode, buffering, encoding, errors, newline, closefd, opener ) ) return _file def wrap_file(file): """This wraps any file object in a wrapper that provides an asynchronous file object interface. Args: file: a :term:`file object` Returns: An :term:`asynchronous file object` that wraps ``file`` Example:: async_file = trio.wrap_file(StringIO('asdf')) assert await async_file.read() == 'asdf' """ def has(attr): return hasattr(file, attr) and callable(getattr(file, attr)) if not (has("close") and (has("read") or has("write"))): raise TypeError( "{} does not implement required duck-file methods: " "close and (read or write)".format(file) ) return AsyncIOWrapper(file) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_highlevel_generic.py0000644000076500000000000000753000000000000017256 0ustar00quentinwheelimport attr import trio from .abc import HalfCloseableStream from trio._util import Final async def aclose_forcefully(resource): """Close an async resource or async generator immediately, without blocking to do any graceful cleanup. :class:`~trio.abc.AsyncResource` objects guarantee that if their :meth:`~trio.abc.AsyncResource.aclose` method is cancelled, then they will still close the resource (albeit in a potentially ungraceful fashion). :func:`aclose_forcefully` is a convenience function that exploits this behavior to let you force a resource to be closed without blocking: it works by calling ``await resource.aclose()`` and then cancelling it immediately. 
Most users won't need this, but it may be useful on cleanup paths where you can't afford to block, or if you want to close a resource and don't care about handling it gracefully. For example, if :class:`~trio.SSLStream` encounters an error and cannot perform its own graceful close, then there's no point in waiting to gracefully shut down the underlying transport either, so it calls ``await aclose_forcefully(self.transport_stream)``. Note that this function is async, and that it acts as a checkpoint, but unlike most async functions it cannot block indefinitely (at least, assuming the underlying resource object is correctly implemented). """ with trio.CancelScope() as cs: cs.cancel() await resource.aclose() @attr.s(eq=False, hash=False) class StapledStream(HalfCloseableStream, metaclass=Final): """This class `staples `__ together two unidirectional streams to make single bidirectional stream. Args: send_stream (~trio.abc.SendStream): The stream to use for sending. receive_stream (~trio.abc.ReceiveStream): The stream to use for receiving. Example: A silly way to make a stream that echoes back whatever you write to it:: left, right = trio.testing.memory_stream_pair() echo_stream = StapledStream(SocketStream(left), SocketStream(right)) await echo_stream.send_all(b"x") assert await echo_stream.receive_some() == b"x" :class:`StapledStream` objects implement the methods in the :class:`~trio.abc.HalfCloseableStream` interface. They also have two additional public attributes: .. attribute:: send_stream The underlying :class:`~trio.abc.SendStream`. :meth:`send_all` and :meth:`wait_send_all_might_not_block` are delegated to this object. .. attribute:: receive_stream The underlying :class:`~trio.abc.ReceiveStream`. :meth:`receive_some` is delegated to this object. """ send_stream = attr.ib() receive_stream = attr.ib() async def send_all(self, data): """Calls ``self.send_stream.send_all``.""" return await self.send_stream.send_all(data) async def wait_send_all_might_not_block(self): """Calls ``self.send_stream.wait_send_all_might_not_block``.""" return await self.send_stream.wait_send_all_might_not_block() async def send_eof(self): """Shuts down the send side of the stream. If ``self.send_stream.send_eof`` exists, then calls it. Otherwise, calls ``self.send_stream.aclose()``. """ if hasattr(self.send_stream, "send_eof"): return await self.send_stream.send_eof() else: return await self.send_stream.aclose() async def receive_some(self, max_bytes=None): """Calls ``self.receive_stream.receive_some``.""" return await self.receive_stream.receive_some(max_bytes) async def aclose(self): """Calls ``aclose`` on both underlying streams.""" try: await self.send_stream.aclose() finally: await self.receive_stream.aclose() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_highlevel_open_tcp_listeners.py0000644000076500000000000002155600000000000021545 0ustar00quentinwheelimport errno import sys from math import inf import trio from . import socket as tsocket # Default backlog size: # # Having the backlog too low can cause practical problems (a perfectly healthy # service that starts failing to accept connections if they arrive in a # burst). # # Having it too high doesn't really cause any problems. Like any buffer, you # want backlog queue to be zero usually, and it won't save you if you're # getting connection attempts faster than you can call accept() on an ongoing # basis. But unlike other buffers, this one doesn't really provide any # backpressure. 
If a connection gets stuck waiting in the backlog queue, then # from the peer's point of view the connection succeeded but then their # send/recv will stall until we get to it, possibly for a long time. OTOH if # there isn't room in the backlog queue... then their connect stalls, possibly # for a long time, which is pretty much the same thing. # # A large backlog can also use a bit more kernel memory, but this seems fairly # negligible these days. # # So this suggests we should make the backlog as large as possible. This also # matches what Golang does. However, they do it in a weird way, where they # have a bunch of code to sniff out the configured upper limit for backlog on # different operating systems. But on every system, passing in a too-large # backlog just causes it to be silently truncated to the configured maximum, # so this is unnecessary -- we can just pass in "infinity" and get the maximum # that way. (Verified on Windows, Linux, macOS using # notes-to-self/measure-listen-backlog.py) def _compute_backlog(backlog): if backlog is None: backlog = inf # Many systems (Linux, BSDs, ...) store the backlog in a uint16 and are # missing overflow protection, so we apply our own overflow protection. # https://github.com/golang/go/issues/5030 return min(backlog, 0xFFFF) async def open_tcp_listeners(port, *, host=None, backlog=None): """Create :class:`SocketListener` objects to listen for TCP connections. Args: port (int): The port to listen on. If you use 0 as your port, then the kernel will automatically pick an arbitrary open port. But be careful: if you use this feature when binding to multiple IP addresses, then each IP address will get its own random port, and the returned listeners will probably be listening on different ports. In particular, this will happen if you use ``host=None`` – which is the default – because in this case :func:`open_tcp_listeners` will bind to both the IPv4 wildcard address (``0.0.0.0``) and also the IPv6 wildcard address (``::``). host (str, bytes-like, or None): The local interface to bind to. This is passed to :func:`~socket.getaddrinfo` with the ``AI_PASSIVE`` flag set. If you want to bind to the wildcard address on both IPv4 and IPv6, in order to accept connections on all available interfaces, then pass ``None``. This is the default. If you have a specific interface you want to bind to, pass its IP address or hostname here. If a hostname resolves to multiple IP addresses, this function will open one listener on each of them. If you want to use only IPv4, or only IPv6, but want to accept on all interfaces, pass the family-specific wildcard address: ``"0.0.0.0"`` for IPv4-only and ``"::"`` for IPv6-only. backlog (int or None): The listen backlog to use. If you leave this as ``None`` then Trio will pick a good default. (Currently: whatever your system has configured as the maximum backlog.) Returns: list of :class:`SocketListener` """ # getaddrinfo sometimes allows port=None, sometimes not (depending on # whether host=None). 
And on some systems it treats "" as 0, others it # doesn't: # http://klickverbot.at/blog/2012/01/getaddrinfo-edge-case-behavior-on-windows-linux-and-osx/ if not isinstance(port, int): raise TypeError("port must be an int not {!r}".format(port)) backlog = _compute_backlog(backlog) addresses = await tsocket.getaddrinfo( host, port, type=tsocket.SOCK_STREAM, flags=tsocket.AI_PASSIVE ) listeners = [] unsupported_address_families = [] try: for family, type, proto, _, sockaddr in addresses: try: sock = tsocket.socket(family, type, proto) except OSError as ex: if ex.errno == errno.EAFNOSUPPORT: # If a system only supports IPv4, or only IPv6, it # is still likely that getaddrinfo will return # both an IPv4 and an IPv6 address. As long as at # least one of the returned addresses can be # turned into a socket, we won't complain about a # failure to create the other. unsupported_address_families.append(ex) continue else: raise try: # See https://github.com/python-trio/trio/issues/39 if sys.platform != "win32": sock.setsockopt(tsocket.SOL_SOCKET, tsocket.SO_REUSEADDR, 1) if family == tsocket.AF_INET6: sock.setsockopt(tsocket.IPPROTO_IPV6, tsocket.IPV6_V6ONLY, 1) await sock.bind(sockaddr) sock.listen(backlog) listeners.append(trio.SocketListener(sock)) except: sock.close() raise except: for listener in listeners: listener.socket.close() raise if unsupported_address_families and not listeners: raise OSError( errno.EAFNOSUPPORT, "This system doesn't support any of the kinds of " "socket that that address could use", ) from trio.MultiError(unsupported_address_families) return listeners async def serve_tcp( handler, port, *, host=None, backlog=None, handler_nursery=None, task_status=trio.TASK_STATUS_IGNORED, ): """Listen for incoming TCP connections, and for each one start a task running ``handler(stream)``. This is a thin convenience wrapper around :func:`open_tcp_listeners` and :func:`serve_listeners` – see them for full details. .. warning:: If ``handler`` raises an exception, then this function doesn't do anything special to catch it – so by default the exception will propagate out and crash your server. If you don't want this, then catch exceptions inside your ``handler``, or use a ``handler_nursery`` object that responds to exceptions in some other way. When used with ``nursery.start`` you get back the newly opened listeners. So, for example, if you want to start a server in your test suite and then connect to it to check that it's working properly, you can use something like:: from trio.testing import open_stream_to_socket_listener async with trio.open_nursery() as nursery: listeners = await nursery.start(serve_tcp, handler, 0) client_stream = await open_stream_to_socket_listener(listeners[0]) # Then send and receive data on 'client_stream', for example: await client_stream.send_all(b"GET / HTTP/1.0\\r\\n\\r\\n") This avoids several common pitfalls: 1. It lets the kernel pick a random open port, so your test suite doesn't depend on any particular port being open. 2. It waits for the server to be accepting connections on that port before ``start`` returns, so there's no race condition where the incoming connection arrives before the server is ready. 3. It uses the Listener object to find out which port was picked, so it can connect to the right place. Args: handler: The handler to start for each incoming connection. Passed to :func:`serve_listeners`. port: The port to listen on. Use 0 to let the kernel pick an open port. Passed to :func:`open_tcp_listeners`. 
host (str, bytes, or None): The host interface to listen on; use ``None`` to bind to the wildcard address. Passed to :func:`open_tcp_listeners`. backlog: The listen backlog, or None to have a good default picked. Passed to :func:`open_tcp_listeners`. handler_nursery: The nursery to start handlers in, or None to use an internal nursery. Passed to :func:`serve_listeners`. task_status: This function can be used with ``nursery.start``. Returns: This function only returns when cancelled. """ listeners = await trio.open_tcp_listeners(port, host=host, backlog=backlog) await trio.serve_listeners( handler, listeners, handler_nursery=handler_nursery, task_status=task_status ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_highlevel_open_tcp_stream.py0000644000076500000000000004213100000000000021020 0ustar00quentinwheelfrom contextlib import contextmanager import trio from trio.socket import getaddrinfo, SOCK_STREAM, socket # Implementation of RFC 6555 "Happy eyeballs" # https://tools.ietf.org/html/rfc6555 # # Basically, the problem here is that if we want to connect to some host, and # DNS returns multiple IP addresses, then we don't know which of them will # actually work -- it can happen that some of them are reachable, and some of # them are not. One particularly common situation where this happens is on a # host that thinks it has ipv6 connectivity, but really doesn't. But in # principle this could happen for any kind of multi-home situation (e.g. the # route to one mirror is down but another is up). # # The naive algorithm (e.g. the stdlib's socket.create_connection) would be to # pick one of the IP addresses and try to connect; if that fails, try the # next; etc. The problem with this is that TCP is stubborn, and if the first # address is a blackhole then it might take a very long time (tens of seconds) # before that connection attempt fails. # # That's where RFC 6555 comes in. It tells us that what we do is: # - get the list of IPs from getaddrinfo, trusting the order it gives us (with # one exception noted in section 5.4) # - start a connection attempt to the first IP # - when this fails OR if it's still going after DELAY seconds, then start a # connection attempt to the second IP # - when this fails OR if it's still going after another DELAY seconds, then # start a connection attempt to the third IP # - ... repeat until we run out of IPs. # # Our implementation is similarly straightforward: we spawn a chain of tasks, # where each one (a) waits until the previous connection has failed or DELAY # seconds have passed, (b) spawns the next task, (c) attempts to connect. As # soon as any task crashes or succeeds, we cancel all the tasks and return. # # Note: this currently doesn't attempt to cache any results, so if you make # multiple connections to the same host it'll re-run the happy-eyeballs # algorithm each time. RFC 6555 is pretty confusing about whether this is # allowed. Section 4 describes an algorithm that attempts ipv4 and ipv6 # simultaneously, and then says "The client MUST cache information regarding # the outcome of each connection attempt, and it uses that information to # avoid thrashing the network with subsequent attempts." Then section 4.2 says # "implementations MUST prefer the first IP address family returned by the # host's address preference policy, unless implementing a stateful # algorithm". Here "stateful" means "one that caches information about # previous attempts". 
So my reading of this is that IF you're starting ipv4 # and ipv6 at the same time then you MUST cache the result for ~ten minutes, # but IF you're "preferring" one protocol by trying it first (like we are), # then you don't need to cache. # # Caching is quite tricky: to get it right you need to do things like detect # when the network interfaces are reconfigured, and if you get it wrong then # connection attempts basically just don't work. So we don't even try. # "Firefox and Chrome use 300 ms" # https://tools.ietf.org/html/rfc6555#section-6 # Though # https://www.researchgate.net/profile/Vaibhav_Bajpai3/publication/304568993_Measuring_the_Effects_of_Happy_Eyeballs/links/5773848e08ae6f328f6c284c/Measuring-the-Effects-of-Happy-Eyeballs.pdf # claims that Firefox actually uses 0 ms, unless an about:config option is # toggled and then it uses 250 ms. DEFAULT_DELAY = 0.250 # How should we call getaddrinfo? In particular, should we use AI_ADDRCONFIG? # # The idea of AI_ADDRCONFIG is that it only returns addresses that might # work. E.g., if getaddrinfo knows that you don't have any IPv6 connectivity, # then it doesn't return any IPv6 addresses. And this is kinda nice, because # it means maybe you can skip sending AAAA requests entirely. But in practice, # it doesn't really work right. # # - on Linux/glibc, empirically, the default is to return all addresses, and # with AI_ADDRCONFIG then it only returns IPv6 addresses if there is at least # one non-loopback IPv6 address configured... but this can be a link-local # address, so in practice I guess this is basically always configured if IPv6 # is enabled at all. OTOH if you pass in "::1" as the target address with # AI_ADDRCONFIG and there's no *external* IPv6 address configured, you get an # error. So AI_ADDRCONFIG mostly doesn't do anything, even when you would want # it to, and when it does do something it might break things that would have # worked. # # - on Windows 10, empirically, if no IPv6 address is configured then by # default they are also suppressed from getaddrinfo (flags=0 and # flags=AI_ADDRCONFIG seem to do the same thing). If you pass AI_ALL, then you # get the full list. # ...except for localhost! getaddrinfo("localhost", "80") gives me ::1, even # though there's no ipv6 and other queries only return ipv4. # If you pass in and IPv6 IP address as the target address, then that's always # returned OK, even with AI_ADDRCONFIG set and no IPv6 configured. # # But I guess other versions of windows messed this up, judging from these bug # reports: # https://bugs.chromium.org/p/chromium/issues/detail?id=5234 # https://bugs.chromium.org/p/chromium/issues/detail?id=32522#c50 # # So basically the options are either to use AI_ADDRCONFIG and then add some # complicated special cases to work around its brokenness, or else don't use # AI_ADDRCONFIG and accept that sometimes on legacy/misconfigured networks # we'll waste 300 ms trying to connect to a blackholed destination. # # Twisted and Tornado always uses default flags. I think we'll do the same. @contextmanager def close_all(): sockets_to_close = set() try: yield sockets_to_close finally: errs = [] for sock in sockets_to_close: try: sock.close() except BaseException as exc: errs.append(exc) if errs: raise trio.MultiError(errs) def reorder_for_rfc_6555_section_5_4(targets): # RFC 6555 section 5.4 says that if getaddrinfo returns multiple address # families (e.g. 
IPv4 and IPv6), then you should make sure that your first # and second attempts use different families: # # https://tools.ietf.org/html/rfc6555#section-5.4 # # This function post-processes the results from getaddrinfo, in-place, to # satisfy this requirement. for i in range(1, len(targets)): if targets[i][0] != targets[0][0]: # Found the first entry with a different address family; move it # so that it becomes the second item on the list. if i != 1: targets.insert(1, targets.pop(i)) break def format_host_port(host, port): host = host.decode("ascii") if isinstance(host, bytes) else host if ":" in host: return "[{}]:{}".format(host, port) else: return "{}:{}".format(host, port) # Twisted's HostnameEndpoint has a good set of configurables: # https://twistedmatrix.com/documents/current/api/twisted.internet.endpoints.HostnameEndpoint.html # # - per-connection timeout # this doesn't seem useful -- we let you set a timeout on the whole thing # using Trio's normal mechanisms, and that seems like enough # - delay between attempts # - bind address (but not port!) # they *don't* support multiple address bindings, like giving the ipv4 and # ipv6 addresses of the host. # I think maybe our semantics should be: we accept a list of bind addresses, # and we bind to the first one that is compatible with the # connection attempt we want to make, and if none are compatible then we # don't try to connect to that target. # # XX TODO: implement bind address support # # Actually, the best option is probably to be explicit: {AF_INET: "...", # AF_INET6: "..."} # this might be simpler after async def open_tcp_stream( host, port, *, happy_eyeballs_delay=DEFAULT_DELAY, local_address=None ): """Connect to the given host and port over TCP. If the given ``host`` has multiple IP addresses associated with it, then we have a problem: which one do we use? One approach would be to attempt to connect to the first one, and then if that fails, attempt to connect to the second one ... until we've tried all of them. But the problem with this is that if the first IP address is unreachable (for example, because it's an IPv6 address and our network discards IPv6 packets), then we might end up waiting tens of seconds for the first connection attempt to timeout before we try the second address. Another approach would be to attempt to connect to all of the addresses at the same time, in parallel, and then use whichever connection succeeds first, abandoning the others. This would be fast, but create a lot of unnecessary load on the network and the remote server. This function strikes a balance between these two extremes: it works its way through the available addresses one at a time, like the first approach; but, if ``happy_eyeballs_delay`` seconds have passed and it's still waiting for an attempt to succeed or fail, then it gets impatient and starts the next connection attempt in parallel. As soon as any one connection attempt succeeds, all the other attempts are cancelled. This avoids unnecessary load because most connections will succeed after just one or two attempts, but if one of the addresses is unreachable then it doesn't slow us down too much. This is known as a "happy eyeballs" algorithm, and our particular variant is modelled after how Chrome connects to webservers; see `RFC 6555 `__ for more details. Args: host (str or bytes): The host to connect to. Can be an IPv4 address, IPv6 address, or a hostname. port (int): The port to connect to. 
happy_eyeballs_delay (float): How many seconds to wait for each connection attempt to succeed or fail before getting impatient and starting another one in parallel. Set to `math.inf` if you want to limit to only one connection attempt at a time (like :func:`socket.create_connection`). Default: 0.25 (250 ms). local_address (None or str): The local IP address or hostname to use as the source for outgoing connections. If ``None``, we let the OS pick the source IP. This is useful in some exotic networking configurations where your host has multiple IP addresses, and you want to force the use of a specific one. Note that if you pass an IPv4 ``local_address``, then you won't be able to connect to IPv6 hosts, and vice-versa. If you want to take advantage of this to force the use of IPv4 or IPv6 without specifying an exact source address, you can use the IPv4 wildcard address ``local_address="0.0.0.0"``, or the IPv6 wildcard address ``local_address="::"``. Returns: SocketStream: a :class:`~trio.abc.Stream` connected to the given server. Raises: OSError: if the connection fails. See also: open_ssl_over_tcp_stream """ # To keep our public API surface smaller, rule out some cases that # getaddrinfo will accept in some circumstances, but that act weird or # have non-portable behavior or are just plain not useful. # No type check on host though b/c we want to allow bytes-likes. if host is None: raise ValueError("host cannot be None") if not isinstance(port, int): raise TypeError("port must be int, not {!r}".format(port)) if happy_eyeballs_delay is None: happy_eyeballs_delay = DEFAULT_DELAY targets = await getaddrinfo(host, port, type=SOCK_STREAM) # I don't think this can actually happen -- if there are no results, # getaddrinfo should have raised OSError instead of returning an empty # list. But let's be paranoid and handle it anyway: if not targets: msg = "no results found for hostname lookup: {}".format( format_host_port(host, port) ) raise OSError(msg) reorder_for_rfc_6555_section_5_4(targets) # This list records all the connection failures that we ignored. oserrors = [] # Keeps track of the socket that we're going to complete with, # need to make sure this isn't automatically closed winning_socket = None # Try connecting to the specified address. Possible outcomes: # - success: record connected socket in winning_socket and cancel # concurrent attempts # - failure: record exception in oserrors, set attempt_failed allowing # the next connection attempt to start early # code needs to ensure sockets can be closed appropriately in the # face of crash or cancellation async def attempt_connect(socket_args, sockaddr, attempt_failed): nonlocal winning_socket try: sock = socket(*socket_args) open_sockets.add(sock) if local_address is not None: # TCP connections are identified by a 4-tuple: # # (local IP, local port, remote IP, remote port) # # So if a single local IP wants to make multiple connections # to the same (remote IP, remote port) pair, then those # connections have to use different local ports, or else TCP # won't be able to tell them apart. OTOH, if you have multiple # connections to different remote IP/ports, then those # connections can share a local port. # # Normally, when you call bind(), the kernel will immediately # assign a specific local port to your socket. At this point # the kernel doesn't know which (remote IP, remote port) # you're going to use, so it has to pick a local port that # *no* other connection is using. 
That's the only way to # guarantee that this local port will be usable later when we # call connect(). (Alternatively, you can set SO_REUSEADDR to # allow multiple nascent connections to share the same port, # but then connect() might fail with EADDRNOTAVAIL if we get # unlucky and our TCP 4-tuple ends up colliding with another # unrelated connection.) # # So calling bind() before connect() works, but it disables # sharing of local ports. This is inefficient: it makes you # more likely to run out of local ports. # # But on some versions of Linux, we can re-enable sharing of # local ports by setting a special flag. This flag tells # bind() to only bind the IP, and not the port. That way, # connect() is allowed to pick the the port, and it can do a # better job of it because it knows the remote IP/port. try: sock.setsockopt( trio.socket.IPPROTO_IP, trio.socket.IP_BIND_ADDRESS_NO_PORT, 1 ) except (OSError, AttributeError): pass try: await sock.bind((local_address, 0)) except OSError: raise OSError( f"local_address={local_address!r} is incompatible " f"with remote address {sockaddr}" ) await sock.connect(sockaddr) # Success! Save the winning socket and cancel all outstanding # connection attempts. winning_socket = sock nursery.cancel_scope.cancel() except OSError as exc: # This connection attempt failed, but the next one might # succeed. Save the error for later so we can report it if # everything fails, and tell the next attempt that it should go # ahead (if it hasn't already). oserrors.append(exc) attempt_failed.set() with close_all() as open_sockets: # nursery spawns a task for each connection attempt, will be # cancelled by the task that gets a successful connection async with trio.open_nursery() as nursery: for *sa, _, addr in targets: # create an event to indicate connection failure, # allowing the next target to be tried early attempt_failed = trio.Event() nursery.start_soon(attempt_connect, sa, addr, attempt_failed) # give this attempt at most this time before moving on with trio.move_on_after(happy_eyeballs_delay): await attempt_failed.wait() # nothing succeeded if winning_socket is None: assert len(oserrors) == len(targets) msg = "all attempts to connect to {} failed".format( format_host_port(host, port) ) raise OSError(msg) from trio.MultiError(oserrors) else: stream = trio.SocketStream(winning_socket) open_sockets.remove(winning_socket) return stream ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_highlevel_open_unix_stream.py0000644000076500000000000000233700000000000021221 0ustar00quentinwheelimport os from contextlib import contextmanager import trio from trio.socket import socket, SOCK_STREAM try: from trio.socket import AF_UNIX has_unix = True except ImportError: has_unix = False @contextmanager def close_on_error(obj): try: yield obj except: obj.close() raise async def open_unix_socket(filename): """Opens a connection to the specified `Unix domain socket `__. You must have read/write permission on the specified file to connect. Args: filename (str or bytes): The filename to open the connection to. Returns: SocketStream: a :class:`~trio.abc.Stream` connected to the given file. Raises: OSError: If the socket file could not be connected to. RuntimeError: If AF_UNIX sockets are not supported. 
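    Example (a sketch -- assumes something is already listening on the
    hypothetical path ``/tmp/demo.sock``)::

        stream = await trio.open_unix_socket("/tmp/demo.sock")
        async with stream:
            await stream.send_all(b"hello")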
""" if not has_unix: raise RuntimeError("Unix sockets are not supported on this platform") # much more simplified logic vs tcp sockets - one socket type and only one # possible location to connect to sock = socket(AF_UNIX, SOCK_STREAM) with close_on_error(sock): await sock.connect(os.fspath(filename)) return trio.SocketStream(sock) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_highlevel_serve_listeners.py0000644000076500000000000001023200000000000021047 0ustar00quentinwheelimport errno import logging import os import trio # Errors that accept(2) can return, and which indicate that the system is # overloaded ACCEPT_CAPACITY_ERRNOS = { errno.EMFILE, errno.ENFILE, errno.ENOMEM, errno.ENOBUFS, } # How long to sleep when we get one of those errors SLEEP_TIME = 0.100 # The logger we use to complain when this happens LOGGER = logging.getLogger("trio.serve_listeners") async def _run_handler(stream, handler): try: await handler(stream) finally: await trio.aclose_forcefully(stream) async def _serve_one_listener(listener, handler_nursery, handler): async with listener: while True: try: stream = await listener.accept() except OSError as exc: if exc.errno in ACCEPT_CAPACITY_ERRNOS: LOGGER.error( "accept returned %s (%s); retrying in %s seconds", errno.errorcode[exc.errno], os.strerror(exc.errno), SLEEP_TIME, exc_info=True, ) await trio.sleep(SLEEP_TIME) else: raise else: handler_nursery.start_soon(_run_handler, stream, handler) async def serve_listeners( handler, listeners, *, handler_nursery=None, task_status=trio.TASK_STATUS_IGNORED ): r"""Listen for incoming connections on ``listeners``, and for each one start a task running ``handler(stream)``. .. warning:: If ``handler`` raises an exception, then this function doesn't do anything special to catch it – so by default the exception will propagate out and crash your server. If you don't want this, then catch exceptions inside your ``handler``, or use a ``handler_nursery`` object that responds to exceptions in some other way. Args: handler: An async callable, that will be invoked like ``handler_nursery.start_soon(handler, stream)`` for each incoming connection. listeners: A list of :class:`~trio.abc.Listener` objects. :func:`serve_listeners` takes responsibility for closing them. handler_nursery: The nursery used to start handlers, or any object with a ``start_soon`` method. If ``None`` (the default), then :func:`serve_listeners` will create a new nursery internally and use that. task_status: This function can be used with ``nursery.start``, which will return ``listeners``. Returns: This function never returns unless cancelled. Resource handling: If ``handler`` neglects to close the ``stream``, then it will be closed using :func:`trio.aclose_forcefully`. Error handling: Most errors coming from :meth:`~trio.abc.Listener.accept` are allowed to propagate out (crashing the server in the process). However, some errors – those which indicate that the server is temporarily overloaded – are handled specially. These are :class:`OSError`\s with one of the following errnos: * ``EMFILE``: process is out of file descriptors * ``ENFILE``: system is out of file descriptors * ``ENOBUFS``, ``ENOMEM``: the kernel hit some sort of memory limitation when trying to create a socket object When :func:`serve_listeners` gets one of these errors, then it: * Logs the error to the standard library logger ``trio.serve_listeners`` (level = ERROR, with exception information included). 
By default this causes it to be printed to stderr. * Waits 100 ms before calling ``accept`` again, in hopes that the system will recover. """ async with trio.open_nursery() as nursery: if handler_nursery is None: handler_nursery = nursery for listener in listeners: nursery.start_soon(_serve_one_listener, listener, handler_nursery, handler) # The listeners are already queueing connections when we're called, # but we wait until the end to call started() just in case we get an # error or whatever. task_status.started(listeners) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_highlevel_socket.py0000644000076500000000000003424700000000000017137 0ustar00quentinwheel# "High-level" networking interface import errno from contextlib import contextmanager import trio from . import socket as tsocket from ._util import ConflictDetector, Final from .abc import HalfCloseableStream, Listener # XX TODO: this number was picked arbitrarily. We should do experiments to # tune it. (Or make it dynamic -- one idea is to start small and increase it # if we observe single reads filling up the whole buffer, at least within some # limits.) DEFAULT_RECEIVE_SIZE = 65536 _closed_stream_errnos = { # Unix errno.EBADF, # Windows errno.ENOTSOCK, } @contextmanager def _translate_socket_errors_to_stream_errors(): try: yield except OSError as exc: if exc.errno in _closed_stream_errnos: raise trio.ClosedResourceError("this socket was already closed") from None else: raise trio.BrokenResourceError( "socket connection broken: {}".format(exc) ) from exc class SocketStream(HalfCloseableStream, metaclass=Final): """An implementation of the :class:`trio.abc.HalfCloseableStream` interface based on a raw network socket. Args: socket: The Trio socket object to wrap. Must have type ``SOCK_STREAM``, and be connected. By default for TCP sockets, :class:`SocketStream` enables ``TCP_NODELAY``, and (on platforms where it's supported) enables ``TCP_NOTSENT_LOWAT`` with a reasonable buffer size (currently 16 KiB) – see `issue #72 `__ for discussion. You can of course override these defaults by calling :meth:`setsockopt`. Once a :class:`SocketStream` object is constructed, it implements the full :class:`trio.abc.HalfCloseableStream` interface. In addition, it provides a few extra features: .. attribute:: socket The Trio socket object that this stream wraps. """ def __init__(self, socket): if not isinstance(socket, tsocket.SocketType): raise TypeError("SocketStream requires a Trio socket object") if socket.type != tsocket.SOCK_STREAM: raise ValueError("SocketStream requires a SOCK_STREAM socket") self.socket = socket self._send_conflict_detector = ConflictDetector( "another task is currently sending data on this SocketStream" ) # Socket defaults: # Not supported on e.g. unix domain sockets try: self.setsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NODELAY, True) except OSError: pass if hasattr(tsocket, "TCP_NOTSENT_LOWAT"): try: # 16 KiB is pretty arbitrary and could probably do with some # tuning. (Apple is also setting this by default in CFNetwork # apparently -- I'm curious what value they're using, though I # couldn't find it online trivially. CFNetwork-129.20 source # has no mentions of TCP_NOTSENT_LOWAT. This presentation says # "typically 8 kilobytes": # http://devstreaming.apple.com/videos/wwdc/2015/719ui2k57m/719/719_your_app_and_next_generation_networks.pdf?dl=1 # ). The theory is that you want it to be bandwidth * # rescheduling interval. 
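            # (For reference: 2 ** 14 bytes == 16384, i.e. the "16 KiB"
            # mentioned above. If that default doesn't suit your workload,
            # the class docstring notes you can override it after
            # construction -- an illustrative sketch:
            #
            #     stream = SocketStream(sock)
            #     stream.setsockopt(
            #         tsocket.IPPROTO_TCP, tsocket.TCP_NOTSENT_LOWAT, 2 ** 16
            #     )
            # )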
self.setsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NOTSENT_LOWAT, 2 ** 14) except OSError: pass async def send_all(self, data): if self.socket.did_shutdown_SHUT_WR: raise trio.ClosedResourceError("can't send data after sending EOF") with self._send_conflict_detector: with _translate_socket_errors_to_stream_errors(): with memoryview(data) as data: if not data: if self.socket.fileno() == -1: raise trio.ClosedResourceError("socket was already closed") await trio.lowlevel.checkpoint() return total_sent = 0 while total_sent < len(data): with data[total_sent:] as remaining: sent = await self.socket.send(remaining) total_sent += sent async def wait_send_all_might_not_block(self): with self._send_conflict_detector: if self.socket.fileno() == -1: raise trio.ClosedResourceError with _translate_socket_errors_to_stream_errors(): await self.socket.wait_writable() async def send_eof(self): with self._send_conflict_detector: await trio.lowlevel.checkpoint() # On macOS, calling shutdown a second time raises ENOTCONN, but # send_eof needs to be idempotent. if self.socket.did_shutdown_SHUT_WR: return with _translate_socket_errors_to_stream_errors(): self.socket.shutdown(tsocket.SHUT_WR) async def receive_some(self, max_bytes=None): if max_bytes is None: max_bytes = DEFAULT_RECEIVE_SIZE if max_bytes < 1: raise ValueError("max_bytes must be >= 1") with _translate_socket_errors_to_stream_errors(): return await self.socket.recv(max_bytes) async def aclose(self): self.socket.close() await trio.lowlevel.checkpoint() # __aenter__, __aexit__ inherited from HalfCloseableStream are OK def setsockopt(self, level, option, value): """Set an option on the underlying socket. See :meth:`socket.socket.setsockopt` for details. """ return self.socket.setsockopt(level, option, value) def getsockopt(self, level, option, buffersize=0): """Check the current value of an option on the underlying socket. See :meth:`socket.socket.getsockopt` for details. """ # This is to work around # https://bitbucket.org/pypy/pypy/issues/2561 # We should be able to drop it when the next PyPy3 beta is released. if buffersize == 0: return self.socket.getsockopt(level, option) else: return self.socket.getsockopt(level, option, buffersize) ################################################################ # SocketListener ################################################################ # Accept error handling # ===================== # # Literature review # ----------------- # # Here's a list of all the possible errors that accept() can return, according # to the POSIX spec or the Linux, FreeBSD, macOS, and Windows docs: # # Can't happen with a Trio socket: # - EAGAIN/(WSA)EWOULDBLOCK # - EINTR # - WSANOTINITIALISED # - WSAEINPROGRESS: a blocking call is already in progress # - WSAEINTR: someone called WSACancelBlockingCall, but we don't make blocking # calls in the first place # # Something is wrong with our call: # - EBADF: not a file descriptor # - (WSA)EINVAL: socket isn't listening, or (Linux, BSD) bad flags # - (WSA)ENOTSOCK: not a socket # - (WSA)EOPNOTSUPP: this kind of socket doesn't support accept # - (Linux, FreeBSD, Windows) EFAULT: the sockaddr pointer points to readonly # memory # # Something is wrong with the environment: # - (WSA)EMFILE: this process hit its fd limit # - ENFILE: the system hit its fd limit # - (WSA)ENOBUFS, ENOMEM: unspecified memory problems # # Something is wrong with the connection we were going to accept. 
There's a # ton of variability between systems here: # - ECONNABORTED: documented everywhere, but apparently only the BSDs do this # (signals a connection was closed/reset before being accepted) # - EPROTO: unspecified protocol error # - (Linux) EPERM: firewall rule prevented connection # - (Linux) ENETDOWN, EPROTO, ENOPROTOOPT, EHOSTDOWN, ENONET, EHOSTUNREACH, # EOPNOTSUPP, ENETUNREACH, ENOSR, ESOCKTNOSUPPORT, EPROTONOSUPPORT, # ETIMEDOUT, ... or any other error that the socket could give, because # apparently if an error happens on a connection before it's accept()ed, # Linux will report that error from accept(). # - (Windows) WSAECONNRESET, WSAENETDOWN # # # Code review # ----------- # # What do other libraries do? # # Twisted on Unix or when using nonblocking I/O on Windows: # - ignores EPERM, with comment about Linux firewalls # - logs and ignores EMFILE, ENOBUFS, ENFILE, ENOMEM, ECONNABORTED # Comment notes that ECONNABORTED is a BSDism and that Linux returns the # socket before having it fail, and macOS just silently discards it. # - other errors are raised, which is logged + kills the socket # ref: src/twisted/internet/tcp.py, Port.doRead # # Twisted using IOCP on Windows: # - logs and ignores all errors # ref: src/twisted/internet/iocpreactor/tcp.py, Port.handleAccept # # Tornado: # - ignore ECONNABORTED (comments notes that it was observed on FreeBSD) # - everything else raised, but all this does (by default) is cause it to be # logged and then ignored # (ref: tornado/netutil.py, tornado/ioloop.py) # # libuv on Unix: # - ignores ECONNABORTED # - does a "trick" for EMFILE or ENFILE # - all other errors passed to the connection_cb to be handled # (ref: src/unix/stream.c:uv__server_io, uv__emfile_trick) # # libuv on Windows: # src/win/tcp.c:uv_tcp_queue_accept # this calls AcceptEx, and then arranges to call: # src/win/tcp.c:uv_process_tcp_accept_req # this gets the result from AcceptEx. If the original AcceptEx call failed, # then "we stop accepting connections and report this error to the # connection callback". I think this is for things like ENOTSOCK. If # AcceptEx successfully queues an overlapped operation, and then that # reports an error, it's just discarded. # # asyncio, selector mode: # - ignores EWOULDBLOCK, EINTR, ECONNABORTED # - on EMFILE, ENFILE, ENOBUFS, ENOMEM, logs an error and then disables the # listening loop for 1 second # - everything else raises, but then the event loop just logs and ignores it # (selector_events.py: BaseSelectorEventLoop._accept_connection) # # # What should we do? # ------------------ # # When accept() returns an error, we can either ignore it or raise it. # # We have a long list of errors that should be ignored, and a long list of # errors that should be raised. The big question is what to do with an error # that isn't on either list. On Linux apparently you can get nearly arbitrary # errors from accept() and they should be ignored, because it just indicates a # socket that crashed before it began, and there isn't really anything to be # done about this, plus on other platforms you may not get any indication at # all, so programs have to tolerate not getting any indication too. OTOH if we # get an unexpected error then it could indicate something arbitrarily bad -- # after all, it's unexpected. # # Given that we know that other libraries seem to be getting along fine with a # fairly minimal list of errors to ignore, I think we'll be OK if we write # down that list and then raise on everything else. 
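#
# In code-shaped terms, the plan arrived at above looks roughly like this
# (illustrative pseudocode only -- the real loop, which also handles
# closed-socket errnos, is in SocketListener.accept below):
#
#     while True:
#         try:
#             sock, _ = await self.socket.accept()
#         except OSError as exc:
#             if exc.errno not in _ignorable_accept_errnos:
#                 raise
#         else:
#             return SocketStream(sock)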
# # The other question is what to do about the capacity problem errors: EMFILE, # ENFILE, ENOBUFS, ENOMEM. Just flat out ignoring these is clearly not optimal # -- at the very least you want to log them, and probably you want to take # some remedial action. And if we ignore them then it prevents higher levels # from doing anything clever with them. So we raise them. _ignorable_accept_errno_names = [ # Linux can do this when the a connection is denied by the firewall "EPERM", # BSDs with an early close/reset "ECONNABORTED", # All the other miscellany noted above -- may not happen in practice, but # whatever. "EPROTO", "ENETDOWN", "ENOPROTOOPT", "EHOSTDOWN", "ENONET", "EHOSTUNREACH", "EOPNOTSUPP", "ENETUNREACH", "ENOSR", "ESOCKTNOSUPPORT", "EPROTONOSUPPORT", "ETIMEDOUT", "ECONNRESET", ] # Not all errnos are defined on all platforms _ignorable_accept_errnos = set() for name in _ignorable_accept_errno_names: try: _ignorable_accept_errnos.add(getattr(errno, name)) except AttributeError: pass class SocketListener(Listener[SocketStream], metaclass=Final): """A :class:`~trio.abc.Listener` that uses a listening socket to accept incoming connections as :class:`SocketStream` objects. Args: socket: The Trio socket object to wrap. Must have type ``SOCK_STREAM``, and be listening. Note that the :class:`SocketListener` "takes ownership" of the given socket; closing the :class:`SocketListener` will also close the socket. .. attribute:: socket The Trio socket object that this stream wraps. """ def __init__(self, socket): if not isinstance(socket, tsocket.SocketType): raise TypeError("SocketListener requires a Trio socket object") if socket.type != tsocket.SOCK_STREAM: raise ValueError("SocketListener requires a SOCK_STREAM socket") try: listening = socket.getsockopt(tsocket.SOL_SOCKET, tsocket.SO_ACCEPTCONN) except OSError: # SO_ACCEPTCONN fails on macOS; we just have to trust the user. pass else: if not listening: raise ValueError("SocketListener requires a listening socket") self.socket = socket async def accept(self): """Accept an incoming connection. Returns: :class:`SocketStream` Raises: OSError: if the underlying call to ``accept`` raises an unexpected error. ClosedResourceError: if you already closed the socket. This method handles routine errors like ``ECONNABORTED``, but passes other errors on to its caller. In particular, it does *not* make any special effort to handle resource exhaustion errors like ``EMFILE``, ``ENFILE``, ``ENOBUFS``, ``ENOMEM``. """ while True: try: sock, _ = await self.socket.accept() except OSError as exc: if exc.errno in _closed_stream_errnos: raise trio.ClosedResourceError if exc.errno not in _ignorable_accept_errnos: raise else: return SocketStream(sock) async def aclose(self): """Close this listener and its underlying socket.""" self.socket.close() await trio.lowlevel.checkpoint() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_highlevel_ssl_helpers.py0000644000076500000000000001312500000000000020162 0ustar00quentinwheelimport trio import ssl from ._highlevel_open_tcp_stream import DEFAULT_DELAY # It might have been nice to take a ssl_protocols= argument here to set up # NPN/ALPN, but to do this we have to mutate the context object, which is OK # if it's one we created, but not OK if it's one that was passed in... and # the one major protocol using NPN/ALPN is HTTP/2, which mandates that you use # a specially configured SSLContext anyway! 
I also thought maybe we could copy # the given SSLContext and then mutate the copy, but it's no good as SSLContext # objects can't be copied: https://bugs.python.org/issue33023. # So... let's punt on that for now. Hopefully we'll be getting a new Python # TLS API soon and can revisit this then. async def open_ssl_over_tcp_stream( host, port, *, https_compatible=False, ssl_context=None, # No trailing comma b/c bpo-9232 (fixed in py36) happy_eyeballs_delay=DEFAULT_DELAY, ): """Make a TLS-encrypted Connection to the given host and port over TCP. This is a convenience wrapper that calls :func:`open_tcp_stream` and wraps the result in an :class:`~trio.SSLStream`. This function does not perform the TLS handshake; you can do it manually by calling :meth:`~trio.SSLStream.do_handshake`, or else it will be performed automatically the first time you send or receive data. Args: host (bytes or str): The host to connect to. We require the server to have a TLS certificate valid for this hostname. port (int): The port to connect to. https_compatible (bool): Set this to True if you're connecting to a web server. See :class:`~trio.SSLStream` for details. Default: False. ssl_context (:class:`~ssl.SSLContext` or None): The SSL context to use. If None (the default), :func:`ssl.create_default_context` will be called to create a context. happy_eyeballs_delay (float): See :func:`open_tcp_stream`. Returns: trio.SSLStream: the encrypted connection to the server. """ tcp_stream = await trio.open_tcp_stream( host, port, happy_eyeballs_delay=happy_eyeballs_delay ) if ssl_context is None: ssl_context = ssl.create_default_context() return trio.SSLStream( tcp_stream, ssl_context, server_hostname=host, https_compatible=https_compatible ) async def open_ssl_over_tcp_listeners( port, ssl_context, *, host=None, https_compatible=False, backlog=None ): """Start listening for SSL/TLS-encrypted TCP connections to the given port. Args: port (int): The port to listen on. See :func:`open_tcp_listeners`. ssl_context (~ssl.SSLContext): The SSL context to use for all incoming connections. host (str, bytes, or None): The address to bind to; use ``None`` to bind to the wildcard address. See :func:`open_tcp_listeners`. https_compatible (bool): See :class:`~trio.SSLStream` for details. backlog (int or None): See :func:`open_tcp_listeners` for details. """ tcp_listeners = await trio.open_tcp_listeners(port, host=host, backlog=backlog) ssl_listeners = [ trio.SSLListener(tcp_listener, ssl_context, https_compatible=https_compatible) for tcp_listener in tcp_listeners ] return ssl_listeners async def serve_ssl_over_tcp( handler, port, ssl_context, *, host=None, https_compatible=False, backlog=None, handler_nursery=None, task_status=trio.TASK_STATUS_IGNORED, ): """Listen for incoming TCP connections, and for each one start a task running ``handler(stream)``. This is a thin convenience wrapper around :func:`open_ssl_over_tcp_listeners` and :func:`serve_listeners` – see them for full details. .. warning:: If ``handler`` raises an exception, then this function doesn't do anything special to catch it – so by default the exception will propagate out and crash your server. If you don't want this, then catch exceptions inside your ``handler``, or use a ``handler_nursery`` object that responds to exceptions in some other way. When used with ``nursery.start`` you get back the newly opened listeners. See the documentation for :func:`serve_tcp` for an example where this is useful. Args: handler: The handler to start for each incoming connection. 
Passed to :func:`serve_listeners`. port (int): The port to listen on. Use 0 to let the kernel pick an open port. Ultimately passed to :func:`open_tcp_listeners`. ssl_context (~ssl.SSLContext): The SSL context to use for all incoming connections. Passed to :func:`open_ssl_over_tcp_listeners`. host (str, bytes, or None): The address to bind to; use ``None`` to bind to the wildcard address. Ultimately passed to :func:`open_tcp_listeners`. https_compatible (bool): Set this to True if you want to use "HTTPS-style" TLS. See :class:`~trio.SSLStream` for details. backlog (int or None): See :class:`~trio.SSLStream` for details. handler_nursery: The nursery to start handlers in, or None to use an internal nursery. Passed to :func:`serve_listeners`. task_status: This function can be used with ``nursery.start``. Returns: This function only returns when cancelled. """ listeners = await trio.open_ssl_over_tcp_listeners( port, ssl_context, host=host, https_compatible=https_compatible, backlog=backlog, ) await trio.serve_listeners( handler, listeners, handler_nursery=handler_nursery, task_status=task_status ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_path.py0000644000076500000000000001425300000000000014547 0ustar00quentinwheel# type: ignore from functools import wraps, partial import os import types import pathlib import trio from trio._util import async_wraps, Final # re-wrap return value from methods that return new instances of pathlib.Path def rewrap_path(value): if isinstance(value, pathlib.Path): value = Path(value) return value def _forward_factory(cls, attr_name, attr): @wraps(attr) def wrapper(self, *args, **kwargs): attr = getattr(self._wrapped, attr_name) value = attr(*args, **kwargs) return rewrap_path(value) return wrapper def _forward_magic(cls, attr): sentinel = object() @wraps(attr) def wrapper(self, other=sentinel): if other is sentinel: return attr(self._wrapped) if isinstance(other, cls): other = other._wrapped value = attr(self._wrapped, other) return rewrap_path(value) return wrapper def iter_wrapper_factory(cls, meth_name): @async_wraps(cls, cls._wraps, meth_name) async def wrapper(self, *args, **kwargs): meth = getattr(self._wrapped, meth_name) func = partial(meth, *args, **kwargs) # Make sure that the full iteration is performed in the thread # by converting the generator produced by pathlib into a list items = await trio.to_thread.run_sync(lambda: list(func())) return (rewrap_path(item) for item in items) return wrapper def thread_wrapper_factory(cls, meth_name): @async_wraps(cls, cls._wraps, meth_name) async def wrapper(self, *args, **kwargs): meth = getattr(self._wrapped, meth_name) func = partial(meth, *args, **kwargs) value = await trio.to_thread.run_sync(func) return rewrap_path(value) return wrapper def classmethod_wrapper_factory(cls, meth_name): @classmethod @async_wraps(cls, cls._wraps, meth_name) async def wrapper(cls, *args, **kwargs): meth = getattr(cls._wraps, meth_name) func = partial(meth, *args, **kwargs) value = await trio.to_thread.run_sync(func) return rewrap_path(value) return wrapper class AsyncAutoWrapperType(Final): def __init__(cls, name, bases, attrs): super().__init__(name, bases, attrs) cls._forward = [] type(cls).generate_forwards(cls, attrs) type(cls).generate_wraps(cls, attrs) type(cls).generate_magic(cls, attrs) type(cls).generate_iter(cls, attrs) def generate_forwards(cls, attrs): # forward functions of _forwards for attr_name, attr in cls._forwards.__dict__.items(): if 
attr_name.startswith("_") or attr_name in attrs: continue if isinstance(attr, property): cls._forward.append(attr_name) elif isinstance(attr, types.FunctionType): wrapper = _forward_factory(cls, attr_name, attr) setattr(cls, attr_name, wrapper) else: raise TypeError(attr_name, type(attr)) def generate_wraps(cls, attrs): # generate wrappers for functions of _wraps for attr_name, attr in cls._wraps.__dict__.items(): # .z. exclude cls._wrap_iter if attr_name.startswith("_") or attr_name in attrs: continue if isinstance(attr, classmethod): wrapper = classmethod_wrapper_factory(cls, attr_name) setattr(cls, attr_name, wrapper) elif isinstance(attr, types.FunctionType): wrapper = thread_wrapper_factory(cls, attr_name) setattr(cls, attr_name, wrapper) else: raise TypeError(attr_name, type(attr)) def generate_magic(cls, attrs): # generate wrappers for magic for attr_name in cls._forward_magic: attr = getattr(cls._forwards, attr_name) wrapper = _forward_magic(cls, attr) setattr(cls, attr_name, wrapper) def generate_iter(cls, attrs): # generate wrappers for methods that return iterators for attr_name, attr in cls._wraps.__dict__.items(): if attr_name in cls._wrap_iter: wrapper = iter_wrapper_factory(cls, attr_name) setattr(cls, attr_name, wrapper) class Path(metaclass=AsyncAutoWrapperType): """A :class:`pathlib.Path` wrapper that executes blocking methods in :meth:`trio.to_thread.run_sync`. """ _wraps = pathlib.Path _forwards = pathlib.PurePath _forward_magic = [ "__str__", "__bytes__", "__truediv__", "__rtruediv__", "__eq__", "__lt__", "__le__", "__gt__", "__ge__", "__hash__", ] _wrap_iter = ["glob", "rglob", "iterdir"] def __init__(self, *args): self._wrapped = pathlib.Path(*args) def __getattr__(self, name): if name in self._forward: value = getattr(self._wrapped, name) return rewrap_path(value) raise AttributeError(name) def __dir__(self): return super().__dir__() + self._forward def __repr__(self): return "trio.Path({})".format(repr(str(self))) def __fspath__(self): return os.fspath(self._wrapped) @wraps(pathlib.Path.open) async def open(self, *args, **kwargs): """Open the file pointed to by the path, like the :func:`trio.open_file` function does. """ func = partial(self._wrapped.open, *args, **kwargs) value = await trio.to_thread.run_sync(func) return trio.wrap_file(value) Path.iterdir.__doc__ = """ Like :meth:`pathlib.Path.iterdir`, but async. This is an async method that returns a synchronous iterator, so you use it like:: for subpath in await mypath.iterdir(): ... Note that it actually loads the whole directory list into memory immediately, during the initial call. (See `issue #501 `__ for discussion.) """ # The value of Path.absolute.__doc__ makes a reference to # :meth:~pathlib.Path.absolute, which does not exist. Removing this makes more # sense than inventing our own special docstring for this. del Path.absolute.__doc__ os.PathLike.register(Path) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_signals.py0000644000076500000000000001451700000000000015256 0ustar00quentinwheelimport signal from contextlib import contextmanager from collections import OrderedDict import trio from ._util import signal_raise, is_main_thread, ConflictDetector # Discussion of signal handling strategies: # # - On Windows signals barely exist. There are no options; signal handlers are # the only available API. # # - On Linux signalfd is arguably the natural way. Semantics: signalfd acts as # an *alternative* signal delivery mechanism. 
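# (Contrast with the kqueue option below, which is an *extra* delivery
# mechanism layered on top of normal delivery, not a replacement for it.)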
The way you use it is to mask # out the relevant signals process-wide (so that they don't get delivered # the normal way), and then when you read from signalfd that actually counts # as delivering it (despite the mask). The problem with this is that we # don't have any reliable way to mask out signals process-wide -- the only # way to do that in Python is to call pthread_sigmask from the main thread # *before starting any other threads*, and as a library we can't really # impose that, and the failure mode is annoying (signals get delivered via # signal handlers whether we want them to or not). # # - on macOS/*BSD, kqueue is the natural way. Semantics: kqueue acts as an # *extra* signal delivery mechanism. Signals are delivered the normal # way, *and* are delivered to kqueue. So you want to set them to SIG_IGN so # that they don't end up pending forever (I guess?). I can't find any actual # docs on how masking and EVFILT_SIGNAL interact. I did see someone note # that if a signal is pending when the kqueue filter is added then you # *don't* get notified of that, which makes sense. But still, we have to # manipulate signal state (e.g. setting SIG_IGN) which as far as Python is # concerned means we have to do this from the main thread. # # So in summary, there don't seem to be any compelling advantages to using the # platform-native signal notification systems; they're kinda nice, but it's # simpler to implement the naive signal-handler-based system once and be # done. (The big advantage would be if there were a reliable way to monitor # for SIGCHLD from outside the main thread and without interfering with other # libraries that also want to monitor for SIGCHLD. But there isn't. I guess # kqueue might give us that, but in kqueue we don't need it, because kqueue # can directly monitor for child process state changes.) @contextmanager def _signal_handler(signals, handler): original_handlers = {} try: for signum in set(signals): original_handlers[signum] = signal.signal(signum, handler) yield finally: for signum, original_handler in original_handlers.items(): signal.signal(signum, original_handler) class SignalReceiver: def __init__(self): # {signal num: None} self._pending = OrderedDict() self._lot = trio.lowlevel.ParkingLot() self._conflict_detector = ConflictDetector( "only one task can iterate on a signal receiver at a time" ) self._closed = False def _add(self, signum): if self._closed: signal_raise(signum) else: self._pending[signum] = None self._lot.unpark() def _redeliver_remaining(self): # First make sure that any signals still in the delivery pipeline will # get redelivered self._closed = True # And then redeliver any that are sitting in pending. This is done # using a weird recursive construct to make sure we process everything # even if some of the handlers raise exceptions. def deliver_next(): if self._pending: signum, _ = self._pending.popitem(last=False) try: signal_raise(signum) finally: deliver_next() deliver_next() # Helper for tests, not public or otherwise used def _pending_signal_count(self): return len(self._pending) def __aiter__(self): return self async def __anext__(self): if self._closed: raise RuntimeError("open_signal_receiver block already exited") # In principle it would be possible to support multiple concurrent # calls to __anext__, but doing it without race conditions is quite # tricky, and there doesn't seem to be any point in trying. 
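        # (Concretely: two tasks doing ``async for signum in receiver:`` at
        # the same time would race over the single ParkingLot and the shared
        # _pending dict; the ConflictDetector below turns that mistake into
        # an immediate error rather than a subtle lost-wakeup bug.)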
with self._conflict_detector: if not self._pending: await self._lot.park() else: await trio.lowlevel.checkpoint() signum, _ = self._pending.popitem(last=False) return signum @contextmanager def open_signal_receiver(*signals): """A context manager for catching signals. Entering this context manager starts listening for the given signals and returns an async iterator; exiting the context manager stops listening. The async iterator blocks until a signal arrives, and then yields it. Note that if you leave the ``with`` block while the iterator has unextracted signals still pending inside it, then they will be re-delivered using Python's regular signal handling logic. This avoids a race condition when signals arrives just before we exit the ``with`` block. Args: signals: the signals to listen for. Raises: TypeError: if no signals were provided. RuntimeError: if you try to use this anywhere except Python's main thread. (This is a Python limitation.) Example: A common convention for Unix daemons is that they should reload their configuration when they receive a ``SIGHUP``. Here's a sketch of what that might look like using :func:`open_signal_receiver`:: with trio.open_signal_receiver(signal.SIGHUP) as signal_aiter: async for signum in signal_aiter: assert signum == signal.SIGHUP reload_configuration() """ if not signals: raise TypeError("No signals were provided") if not is_main_thread(): raise RuntimeError( "Sorry, open_signal_receiver is only possible when running in " "Python interpreter's main thread" ) token = trio.lowlevel.current_trio_token() queue = SignalReceiver() def handler(signum, _): token.run_sync_soon(queue._add, signum, idempotent=True) try: with _signal_handler(signals, handler): yield queue finally: queue._redeliver_remaining() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_socket.py0000644000076500000000000007304300000000000015105 0ustar00quentinwheelimport os import sys import select import socket as _stdlib_socket from functools import wraps as _wraps from typing import TYPE_CHECKING import idna as _idna import trio from . 
import _core # Usage: # # async with _try_sync(): # return sync_call_that_might_fail_with_exception() # # we only get here if the sync call in fact did fail with a # # BlockingIOError # return await do_it_properly_with_a_check_point() # class _try_sync: def __init__(self, blocking_exc_override=None): self._blocking_exc_override = blocking_exc_override def _is_blocking_io_error(self, exc): if self._blocking_exc_override is None: return isinstance(exc, BlockingIOError) else: return self._blocking_exc_override(exc) async def __aenter__(self): await trio.lowlevel.checkpoint_if_cancelled() async def __aexit__(self, etype, value, tb): if value is not None and self._is_blocking_io_error(value): # Discard the exception and fall through to the code below the # block return True else: await trio.lowlevel.cancel_shielded_checkpoint() # Let the return or exception propagate return False ################################################################ # CONSTANTS ################################################################ try: from socket import IPPROTO_IPV6 except ImportError: # As of at least 3.6, python on Windows is missing IPPROTO_IPV6 # https://bugs.python.org/issue29515 if sys.platform == "win32": # pragma: no branch IPPROTO_IPV6 = 41 ################################################################ # Overrides ################################################################ _resolver = _core.RunVar("hostname_resolver") _socket_factory = _core.RunVar("socket_factory") def set_custom_hostname_resolver(hostname_resolver): """Set a custom hostname resolver. By default, Trio's :func:`getaddrinfo` and :func:`getnameinfo` functions use the standard system resolver functions. This function allows you to customize that behavior. The main intended use case is for testing, but it might also be useful for using third-party resolvers like `c-ares `__ (though be warned that these rarely make perfect drop-in replacements for the system resolver). See :class:`trio.abc.HostnameResolver` for more details. Setting a custom hostname resolver affects all future calls to :func:`getaddrinfo` and :func:`getnameinfo` within the enclosing call to :func:`trio.run`. All other hostname resolution in Trio is implemented in terms of these functions. Generally you should call this function just once, right at the beginning of your program. Args: hostname_resolver (trio.abc.HostnameResolver or None): The new custom hostname resolver, or None to restore the default behavior. Returns: The previous hostname resolver (which may be None). """ old = _resolver.get(None) _resolver.set(hostname_resolver) return old def set_custom_socket_factory(socket_factory): """Set a custom socket object factory. This function allows you to replace Trio's normal socket class with a custom class. This is very useful for testing, and probably a bad idea in any other circumstance. See :class:`trio.abc.HostnameResolver` for more details. Setting a custom socket factory affects all future calls to :func:`socket` within the enclosing call to :func:`trio.run`. Generally you should call this function just once, right at the beginning of your program. Args: socket_factory (trio.abc.SocketFactory or None): The new custom socket factory, or None to restore the default behavior. Returns: The previous socket factory (which may be None). 
""" old = _socket_factory.get(None) _socket_factory.set(socket_factory) return old ################################################################ # getaddrinfo and friends ################################################################ _NUMERIC_ONLY = _stdlib_socket.AI_NUMERICHOST | _stdlib_socket.AI_NUMERICSERV async def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0): """Look up a numeric address given a name. Arguments and return values are identical to :func:`socket.getaddrinfo`, except that this version is async. Also, :func:`trio.socket.getaddrinfo` correctly uses IDNA 2008 to process non-ASCII domain names. (:func:`socket.getaddrinfo` uses IDNA 2003, which can give the wrong result in some cases and cause you to connect to a different host than the one you intended; see `bpo-17305 `__.) This function's behavior can be customized using :func:`set_custom_hostname_resolver`. """ # If host and port are numeric, then getaddrinfo doesn't block and we can # skip the whole thread thing, which seems worthwhile. So we try first # with the _NUMERIC_ONLY flags set, and then only spawn a thread if that # fails with EAI_NONAME: def numeric_only_failure(exc): return ( isinstance(exc, _stdlib_socket.gaierror) and exc.errno == _stdlib_socket.EAI_NONAME ) async with _try_sync(numeric_only_failure): return _stdlib_socket.getaddrinfo( host, port, family, type, proto, flags | _NUMERIC_ONLY ) # That failed; it's a real hostname. We better use a thread. # # Also, it might be a unicode hostname, in which case we want to do our # own encoding using the idna module, rather than letting Python do # it. (Python will use the old IDNA 2003 standard, and possibly get the # wrong answer - see bpo-17305). However, the idna module is picky, and # will refuse to process some valid hostname strings, like "::1". So if # it's already ascii, we pass it through; otherwise, we encode it to. if isinstance(host, str): try: host = host.encode("ascii") except UnicodeEncodeError: # UTS-46 defines various normalizations; in particular, by default # idna.encode will error out if the hostname has Capital Letters # in it; with uts46=True it will lowercase them instead. host = _idna.encode(host, uts46=True) hr = _resolver.get(None) if hr is not None: return await hr.getaddrinfo(host, port, family, type, proto, flags) else: return await trio.to_thread.run_sync( _stdlib_socket.getaddrinfo, host, port, family, type, proto, flags, cancellable=True, ) async def getnameinfo(sockaddr, flags): """Look up a name given a numeric address. Arguments and return values are identical to :func:`socket.getnameinfo`, except that this version is async. This function's behavior can be customized using :func:`set_custom_hostname_resolver`. """ hr = _resolver.get(None) if hr is not None: return await hr.getnameinfo(sockaddr, flags) else: return await trio.to_thread.run_sync( _stdlib_socket.getnameinfo, sockaddr, flags, cancellable=True ) async def getprotobyname(name): """Look up a protocol number by name. (Rarely used.) Like :func:`socket.getprotobyname`, but async. """ return await trio.to_thread.run_sync( _stdlib_socket.getprotobyname, name, cancellable=True ) # obsolete gethostbyname etc. 
intentionally omitted # likewise for create_connection (use open_tcp_stream instead) ################################################################ # Socket "constructors" ################################################################ def from_stdlib_socket(sock): """Convert a standard library :func:`socket.socket` object into a Trio socket object. """ return _SocketType(sock) @_wraps(_stdlib_socket.fromfd, assigned=(), updated=()) def fromfd(fd, family, type, proto=0): """Like :func:`socket.fromfd`, but returns a Trio socket object.""" family, type, proto = _sniff_sockopts_for_fileno(family, type, proto, fd) return from_stdlib_socket(_stdlib_socket.fromfd(fd, family, type, proto)) if sys.platform == "win32" or ( not TYPE_CHECKING and hasattr(_stdlib_socket, "fromshare") ): @_wraps(_stdlib_socket.fromshare, assigned=(), updated=()) def fromshare(*args, **kwargs): return from_stdlib_socket(_stdlib_socket.fromshare(*args, **kwargs)) @_wraps(_stdlib_socket.socketpair, assigned=(), updated=()) def socketpair(*args, **kwargs): """Like :func:`socket.socketpair`, but returns a pair of Trio socket objects. """ left, right = _stdlib_socket.socketpair(*args, **kwargs) return (from_stdlib_socket(left), from_stdlib_socket(right)) @_wraps(_stdlib_socket.socket, assigned=(), updated=()) def socket( family=_stdlib_socket.AF_INET, type=_stdlib_socket.SOCK_STREAM, proto=0, fileno=None, ): """Create a new Trio socket, like :func:`socket.socket`. This function's behavior can be customized using :func:`set_custom_socket_factory`. """ if fileno is None: sf = _socket_factory.get(None) if sf is not None: return sf.socket(family, type, proto) else: family, type, proto = _sniff_sockopts_for_fileno(family, type, proto, fileno) stdlib_socket = _stdlib_socket.socket(family, type, proto, fileno) return from_stdlib_socket(stdlib_socket) def _sniff_sockopts_for_fileno(family, type, proto, fileno): """Correct SOCKOPTS for given fileno, falling back to provided values.""" # Wrap the raw fileno into a Python socket object # This object might have the wrong metadata, but it lets us easily call getsockopt # and then we'll throw it away and construct a new one with the correct metadata. if sys.platform != "linux": return family, type, proto from socket import SO_DOMAIN, SO_PROTOCOL, SOL_SOCKET, SO_TYPE sockobj = _stdlib_socket.socket(family, type, proto, fileno=fileno) try: family = sockobj.getsockopt(SOL_SOCKET, SO_DOMAIN) proto = sockobj.getsockopt(SOL_SOCKET, SO_PROTOCOL) type = sockobj.getsockopt(SOL_SOCKET, SO_TYPE) finally: # Unwrap it again, so that sockobj.__del__ doesn't try to close our socket sockobj.detach() return family, type, proto ################################################################ # _SocketType ################################################################ # sock.type gets weird stuff set in it, in particular on Linux: # # https://bugs.python.org/issue21327 # # But on other platforms (e.g. Windows) SOCK_NONBLOCK and SOCK_CLOEXEC aren't # even defined. To recover the actual socket type (e.g. SOCK_STREAM) from a # socket.type attribute, mask with this: _SOCK_TYPE_MASK = ~( getattr(_stdlib_socket, "SOCK_NONBLOCK", 0) | getattr(_stdlib_socket, "SOCK_CLOEXEC", 0) ) # This function will modify the given socket to match the behavior in python # 3.7. This will become unnecessary and can be removed when support for versions # older than 3.7 is dropped. 
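# (For example, on Linux under Python < 3.7, a socket created with
# SOCK_STREAM | SOCK_NONBLOCK reports that whole bitmask as its .type;
# masking recovers the plain type:
#
#     real_socket_type(SOCK_STREAM | SOCK_NONBLOCK) == SOCK_STREAM
# )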
def real_socket_type(type_num): return type_num & _SOCK_TYPE_MASK def _make_simple_sock_method_wrapper(methname, wait_fn, maybe_avail=False): fn = getattr(_stdlib_socket.socket, methname) @_wraps(fn, assigned=("__name__",), updated=()) async def wrapper(self, *args, **kwargs): return await self._nonblocking_helper(fn, args, kwargs, wait_fn) wrapper.__doc__ = f"""Like :meth:`socket.socket.{methname}`, but async. """ if maybe_avail: wrapper.__doc__ += ( f"Only available on platforms where :meth:`socket.socket.{methname}` is " "available." ) return wrapper class SocketType: def __init__(self): raise TypeError( "SocketType is an abstract class; use trio.socket.socket if you " "want to construct a socket object" ) class _SocketType(SocketType): def __init__(self, sock): if type(sock) is not _stdlib_socket.socket: # For example, ssl.SSLSocket subclasses socket.socket, but we # certainly don't want to blindly wrap one of those. raise TypeError( "expected object of type 'socket.socket', not '{}".format( type(sock).__name__ ) ) self._sock = sock self._sock.setblocking(False) self._did_shutdown_SHUT_WR = False ################################################################ # Simple + portable methods and attributes ################################################################ # NB this doesn't work because for loops don't create a scope # for _name in [ # ]: # _meth = getattr(_stdlib_socket.socket, _name) # @_wraps(_meth, assigned=("__name__", "__doc__"), updated=()) # def _wrapped(self, *args, **kwargs): # return getattr(self._sock, _meth)(*args, **kwargs) # locals()[_meth] = _wrapped # del _name, _meth, _wrapped _forward = { "detach", "get_inheritable", "set_inheritable", "fileno", "getpeername", "getsockname", "getsockopt", "setsockopt", "listen", "share", } def __getattr__(self, name): if name in self._forward: return getattr(self._sock, name) raise AttributeError(name) def __dir__(self): return super().__dir__() + list(self._forward) def __enter__(self): return self def __exit__(self, *exc_info): return self._sock.__exit__(*exc_info) @property def family(self): return self._sock.family @property def type(self): # Modify the socket type do match what is done on python 3.7. When # support for versions older than 3.7 is dropped, this can be updated # to just return self._sock.type return real_socket_type(self._sock.type) @property def proto(self): return self._sock.proto @property def did_shutdown_SHUT_WR(self): return self._did_shutdown_SHUT_WR def __repr__(self): return repr(self._sock).replace("socket.socket", "trio.socket.socket") def dup(self): """Same as :meth:`socket.socket.dup`.""" return _SocketType(self._sock.dup()) def close(self): if self._sock.fileno() != -1: trio.lowlevel.notify_closing(self._sock) self._sock.close() async def bind(self, address): address = await self._resolve_local_address_nocp(address) if ( hasattr(_stdlib_socket, "AF_UNIX") and self.family == _stdlib_socket.AF_UNIX and address[0] ): # Use a thread for the filesystem traversal (unless it's an # abstract domain socket) return await trio.to_thread.run_sync(self._sock.bind, address) else: # POSIX actually says that bind can return EWOULDBLOCK and # complete asynchronously, like connect. But in practice AFAICT # there aren't yet any real systems that do this, so we'll worry # about it when it happens. 
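            # (Meanwhile, the explicit checkpoint below keeps this synchronous
            # fast path consistent with Trio's convention that every async
            # method checkpoints, matching the threaded AF_UNIX branch above.)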
            await trio.lowlevel.checkpoint()
            return self._sock.bind(address)

    def shutdown(self, flag):
        # no need to worry about return value b/c always returns None:
        self._sock.shutdown(flag)
        # only do this if the call succeeded:
        if flag in [_stdlib_socket.SHUT_WR, _stdlib_socket.SHUT_RDWR]:
            self._did_shutdown_SHUT_WR = True

    def is_readable(self):
        # use select.select on Windows, and select.poll everywhere else
        if sys.platform == "win32":
            rready, _, _ = select.select([self._sock], [], [], 0)
            return bool(rready)
        p = select.poll()
        p.register(self._sock, select.POLLIN)
        return bool(p.poll(0))

    async def wait_writable(self):
        await _core.wait_writable(self._sock)

    ################################################################
    # Address handling
    ################################################################

    # Takes an address in Python's representation, and returns a new address
    # in the same representation, but with names resolved to numbers, etc.
    #
    # NOTE: this function does not always checkpoint
    async def _resolve_address_nocp(self, address, flags):
        # Do some pre-checking (or exit early for non-IP sockets)
        if self._sock.family == _stdlib_socket.AF_INET:
            if not isinstance(address, tuple) or not len(address) == 2:
                raise ValueError("address should be a (host, port) tuple")
        elif self._sock.family == _stdlib_socket.AF_INET6:
            if not isinstance(address, tuple) or not 2 <= len(address) <= 4:
                raise ValueError(
                    "address should be a (host, port, [flowinfo, [scopeid]]) tuple"
                )
        elif self._sock.family == _stdlib_socket.AF_UNIX:
            # unwrap path-likes
            return os.fspath(address)
        else:
            return address

        # -- From here on we know we have IPv4 or IPv6 --
        host, port, *_ = address
        # Fast path for the simple case: already-resolved IP address,
        # already-resolved port. This is particularly important for UDP, since
        # every sendto call goes through here.
        if isinstance(port, int):
            try:
                _stdlib_socket.inet_pton(self._sock.family, address[0])
            except (OSError, TypeError):
                pass
            else:
                return address
        # Special cases to match the stdlib, see gh-277
        if host == "":
            host = None
        if host == "<broadcast>":
            host = "255.255.255.255"
        # Since we always pass in an explicit family here, AI_ADDRCONFIG
        # doesn't add any value -- if we have no ipv6 connectivity and are
        # working with an ipv6 socket, then things will break soon enough! And
        # if we do enable it, then it makes it impossible to even run tests
        # for ipv6 address resolution on travis-ci, which as of 2017-03-07 has
        # no ipv6.
        # flags |= AI_ADDRCONFIG
        if self._sock.family == _stdlib_socket.AF_INET6:
            if not self._sock.getsockopt(IPPROTO_IPV6, _stdlib_socket.IPV6_V6ONLY):
                flags |= _stdlib_socket.AI_V4MAPPED
        gai_res = await getaddrinfo(
            host, port, self._sock.family, self.type, self._sock.proto, flags
        )
        # AFAICT from the spec it's not possible for getaddrinfo to return an
        # empty list.
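        # (POSIX only defines getaddrinfo failure via a nonzero error return
        # such as EAI_NONAME; a "successful" empty result isn't contemplated,
        # hence the assert below.)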
assert len(gai_res) >= 1 # Address is the last item in the first entry (*_, normed), *_ = gai_res # The above ignored any flowid and scopeid in the passed-in address, # so restore them if present: if self._sock.family == _stdlib_socket.AF_INET6: normed = list(normed) assert len(normed) == 4 if len(address) >= 3: normed[2] = address[2] if len(address) >= 4: normed[3] = address[3] normed = tuple(normed) return normed # Returns something appropriate to pass to bind() # # NOTE: this function does not always checkpoint async def _resolve_local_address_nocp(self, address): return await self._resolve_address_nocp(address, _stdlib_socket.AI_PASSIVE) # Returns something appropriate to pass to connect()/sendto()/sendmsg() # # NOTE: this function does not always checkpoint async def _resolve_remote_address_nocp(self, address): return await self._resolve_address_nocp(address, 0) async def _nonblocking_helper(self, fn, args, kwargs, wait_fn): # We have to reconcile two conflicting goals: # - We want to make it look like we always blocked in doing these # operations. The obvious way is to always do an IO wait before # calling the function. # - But, we also want to provide the correct semantics, and part # of that means giving correct errors. So, for example, if you # haven't called .listen(), then .accept() raises an error # immediately. But in this same circumstance, then on macOS, the # socket does not register as readable. So if we block waiting # for read *before* we call accept, then we'll be waiting # forever instead of properly raising an error. (On Linux, # interestingly, AFAICT a socket that can't possible read/write # *does* count as readable/writable for select() purposes. But # not on macOS.) # # So, we have to call the function once, with the appropriate # cancellation/yielding sandwich if it succeeds, and if it gives # BlockingIOError *then* we fall back to IO wait. # # XX think if this can be combined with the similar logic for IOCP # submission... async with _try_sync(): return fn(self._sock, *args, **kwargs) # First attempt raised BlockingIOError: while True: await wait_fn(self._sock) try: return fn(self._sock, *args, **kwargs) except BlockingIOError: pass ################################################################ # accept ################################################################ _accept = _make_simple_sock_method_wrapper("accept", _core.wait_readable) async def accept(self): """Like :meth:`socket.socket.accept`, but async.""" sock, addr = await self._accept() return from_stdlib_socket(sock), addr ################################################################ # connect ################################################################ async def connect(self, address): # nonblocking connect is weird -- you call it to start things # off, then the socket becomes writable as a completion # notification. This means it isn't really cancellable... we close the # socket if cancelled, to avoid confusion. try: address = await self._resolve_remote_address_nocp(address) async with _try_sync(): # An interesting puzzle: can a non-blocking connect() return EINTR # (= raise InterruptedError)? PEP 475 specifically left this as # the one place where it lets an InterruptedError escape instead # of automatically retrying. This is based on the idea that EINTR # from connect means that the connection was already started, and # will continue in the background. 
For a blocking connect, this # sort of makes sense: if it returns EINTR then the connection # attempt is continuing in the background, and on many system you # can't then call connect() again because there is already a # connect happening. See: # # http://www.madore.org/~david/computers/connect-intr.html # # For a non-blocking connect, it doesn't make as much sense -- # surely the interrupt didn't happen after we successfully # initiated the connect and are just waiting for it to complete, # because a non-blocking connect does not wait! And the spec # describes the interaction between EINTR/blocking connect, but # doesn't have anything useful to say about non-blocking connect: # # http://pubs.opengroup.org/onlinepubs/007904975/functions/connect.html # # So we have a conundrum: if EINTR means that the connect() hasn't # happened (like it does for essentially every other syscall), # then InterruptedError should be caught and retried. If EINTR # means that the connect() has successfully started, then # InterruptedError should be caught and ignored. Which should we # do? # # In practice, the resolution is probably that non-blocking # connect simply never returns EINTR, so the question of how to # handle it is moot. Someone spelunked macOS/FreeBSD and # confirmed this is true there: # # https://stackoverflow.com/questions/14134440/eintr-and-non-blocking-calls # # and exarkun seems to think it's true in general of non-blocking # calls: # # https://twistedmatrix.com/pipermail/twisted-python/2010-September/022864.html # (and indeed, AFAICT twisted doesn't try to handle # InterruptedError). # # So we don't try to catch InterruptedError. This way if it # happens, someone will hopefully tell us, and then hopefully we # can investigate their system to figure out what its semantics # are. return self._sock.connect(address) # It raised BlockingIOError, meaning that it's started the # connection attempt. We wait for it to complete: await _core.wait_writable(self._sock) except trio.Cancelled: # We can't really cancel a connect, and the socket is in an # indeterminate state. Better to close it so we don't get # confused. 
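            # (This matches POSIX's advice for a failed connect: the socket is
            # left in an unspecified state, so portable code should close it
            # and create a fresh socket before retrying.)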
self._sock.close() raise # Okay, the connect finished, but it might have failed: err = self._sock.getsockopt(_stdlib_socket.SOL_SOCKET, _stdlib_socket.SO_ERROR) if err != 0: raise OSError(err, "Error in connect: " + os.strerror(err)) ################################################################ # recv ################################################################ recv = _make_simple_sock_method_wrapper("recv", _core.wait_readable) ################################################################ # recv_into ################################################################ recv_into = _make_simple_sock_method_wrapper("recv_into", _core.wait_readable) ################################################################ # recvfrom ################################################################ recvfrom = _make_simple_sock_method_wrapper("recvfrom", _core.wait_readable) ################################################################ # recvfrom_into ################################################################ recvfrom_into = _make_simple_sock_method_wrapper( "recvfrom_into", _core.wait_readable ) ################################################################ # recvmsg ################################################################ if hasattr(_stdlib_socket.socket, "recvmsg"): recvmsg = _make_simple_sock_method_wrapper( "recvmsg", _core.wait_readable, maybe_avail=True ) ################################################################ # recvmsg_into ################################################################ if hasattr(_stdlib_socket.socket, "recvmsg_into"): recvmsg_into = _make_simple_sock_method_wrapper( "recvmsg_into", _core.wait_readable, maybe_avail=True ) ################################################################ # send ################################################################ send = _make_simple_sock_method_wrapper("send", _core.wait_writable) ################################################################ # sendto ################################################################ @_wraps(_stdlib_socket.socket.sendto, assigned=(), updated=()) async def sendto(self, *args): """Similar to :meth:`socket.socket.sendto`, but async.""" # args is: data[, flags], address) # and kwargs are not accepted args = list(args) args[-1] = await self._resolve_remote_address_nocp(args[-1]) return await self._nonblocking_helper( _stdlib_socket.socket.sendto, args, {}, _core.wait_writable ) ################################################################ # sendmsg ################################################################ if sys.platform != "win32" or ( not TYPE_CHECKING and hasattr(_stdlib_socket.socket, "sendmsg") ): @_wraps(_stdlib_socket.socket.sendmsg, assigned=(), updated=()) async def sendmsg(self, *args): """Similar to :meth:`socket.socket.sendmsg`, but async. Only available on platforms where :meth:`socket.socket.sendmsg` is available. 
""" # args is: buffers[, ancdata[, flags[, address]]] # and kwargs are not accepted if len(args) == 4 and args[-1] is not None: args = list(args) args[-1] = await self._resolve_remote_address_nocp(args[-1]) return await self._nonblocking_helper( _stdlib_socket.socket.sendmsg, args, {}, _core.wait_writable ) ################################################################ # sendfile ################################################################ # Not implemented yet: # async def sendfile(self, file, offset=0, count=None): # XX # Intentionally omitted: # sendall # makefile # setblocking/getblocking # settimeout/gettimeout # timeout ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_ssl.py0000644000076500000000000012614400000000000014417 0ustar00quentinwheel# General theory of operation: # # We implement an API that closely mirrors the stdlib ssl module's blocking # API, and we do it using the stdlib ssl module's non-blocking in-memory API. # The stdlib non-blocking in-memory API is barely documented, and acts as a # thin wrapper around openssl, whose documentation also leaves something to be # desired. So here's the main things you need to know to understand the code # in this file: # # We use an ssl.SSLObject, which exposes the four main I/O operations: # # - do_handshake: performs the initial handshake. Must be called once at the # beginning of each connection; is a no-op once it's completed once. # # - write: takes some unencrypted data and attempts to send it to the remote # peer. # - read: attempts to decrypt and return some data from the remote peer. # # - unwrap: this is weirdly named; maybe it helps to realize that the thing it # wraps is called SSL_shutdown. It sends a cryptographically signed message # saying "I'm closing this connection now", and then waits to receive the # same from the remote peer (unless we already received one, in which case # it returns immediately). # # All of these operations read and write from some in-memory buffers called # "BIOs", which are an opaque OpenSSL-specific object that's basically # semantically equivalent to a Python bytearray. When they want to send some # bytes to the remote peer, they append them to the outgoing BIO, and when # they want to receive some bytes from the remote peer, they try to pull them # out of the incoming BIO. "Sending" always succeeds, because the outgoing BIO # can always be extended to hold more data. "Receiving" acts sort of like a # non-blocking socket: it might manage to get some data immediately, or it # might fail and need to be tried again later. We can also directly add or # remove data from the BIOs whenever we want. # # Now the problem is that while these I/O operations are opaque atomic # operations from the point of view of us calling them, under the hood they # might require some arbitrary sequence of sends and receives from the remote # peer. This is particularly true for do_handshake, which generally requires a # few round trips, but it's also true for write and read, due to an evil thing # called "renegotiation". # # Renegotiation is the process by which one of the peers might arbitrarily # decide to redo the handshake at any time. Did I mention it's evil? It's # pretty evil, and almost universally hated. The HTTP/2 spec forbids the use # of TLS renegotiation for HTTP/2 connections. TLS 1.3 removes it from the # protocol entirely. It's impossible to trigger a renegotiation if using # Python's ssl module. 
OpenSSL's renegotiation support is pretty buggy [1]. # Nonetheless, it does get used in real life, mostly in two cases: # # 1) Normally in TLS 1.2 and below, when the client side of a connection wants # to present a certificate to prove their identity, that certificate gets sent # in plaintext. This is bad, because it means that anyone eavesdropping can # see who's connecting – it's like sending your username in plain text. Not as # bad as sending your password in plain text, but still, pretty bad. However, # renegotiations *are* encrypted. So as a workaround, it's not uncommon for # systems that want to use client certificates to first do an anonymous # handshake, and then to turn around and do a second handshake (= # renegotiation) and this time ask for a client cert. Or sometimes this is # done on a case-by-case basis, e.g. a web server might accept a connection, # read the request, and then once it sees the page you're asking for it might # stop and ask you for a certificate. # # 2) In principle the same TLS connection can be used for an arbitrarily long # time, and might transmit arbitrarily large amounts of data. But this creates # a cryptographic problem: an attacker who has access to arbitrarily large # amounts of data that's all encrypted using the same key may eventually be # able to use this to figure out the key. Is this a real practical problem? I # have no idea, I'm not a cryptographer. In any case, some people worry that # it's a problem, so their TLS libraries are designed to automatically trigger # a renegotiation every once in a while on some sort of timer. # # The end result is that you might be going along, minding your own business, # and then *bam*! a wild renegotiation appears! And you just have to cope. # # The reason that coping with renegotiations is difficult is that some # unassuming "read" or "write" call might find itself unable to progress until # it does a handshake, which remember is a process with multiple round # trips. So read might have to send data, and write might have to receive # data, and this might happen multiple times. And some of those attempts might # fail because there isn't any data yet, and need to be retried. Managing all # this is pretty complicated. # # Here's how openssl (and thus the stdlib ssl module) handle this. All of the # I/O operations above follow the same rules. When you call one of them: # # - it might write some data to the outgoing BIO # - it might read some data from the incoming BIO # - it might raise SSLWantReadError if it can't complete without reading more # data from the incoming BIO. This is important: the "read" in ReadError # refers to reading from the *underlying* stream. # - (and in principle it might raise SSLWantWriteError too, but that never # happens when using memory BIOs, so never mind) # # If it doesn't raise an error, then the operation completed successfully # (though we still need to take any outgoing data out of the memory buffer and # put it onto the wire). If it *does* raise an error, then we need to retry # *exactly that method call* later – in particular, if a 'write' failed, we # need to try again later *with the same data*, because openssl might have # already committed some of the initial parts of our data to its output even # though it didn't tell us that, and has remembered that the next time we call # write it needs to skip the first 1024 bytes or whatever it is. 
(Well, # technically, we're actually allowed to call 'write' again with a data buffer # which is the same as our old one PLUS some extra stuff added onto the end, # but in Trio that never comes up so never mind.) # # There are some people online who claim that once you've gotten a Want*Error # then the *very next call* you make to openssl *must* be the same as the # previous one. I'm pretty sure those people are wrong. In particular, it's # okay to call write, get a WantReadError, and then call read a few times; # it's just that *the next time you call write*, it has to be with the same # data. # # One final wrinkle: we want our SSLStream to support full-duplex operation, # i.e. it should be possible for one task to be calling send_all while another # task is calling receive_some. But renegotiation makes this a big hassle, because # even if SSLStream's restricts themselves to one task calling send_all and one # task calling receive_some, those two tasks might end up both wanting to call # send_all, or both to call receive_some at the same time *on the underlying # stream*. So we have to do some careful locking to hide this problem from our # users. # # (Renegotiation is evil.) # # So our basic strategy is to define a single helper method called "_retry", # which has generic logic for dealing with SSLWantReadError, pushing data from # the outgoing BIO to the wire, reading data from the wire to the incoming # BIO, retrying an I/O call until it works, and synchronizing with other tasks # that might be calling _retry concurrently. Basically it takes an SSLObject # non-blocking in-memory method and converts it into a Trio async blocking # method. _retry is only about 30 lines of code, but all these cases # multiplied by concurrent calls make it extremely tricky, so there are lots # of comments down below on the details, and a really extensive test suite in # test_ssl.py. And now you know *why* it's so tricky, and can probably # understand how it works. # # [1] https://rt.openssl.org/Ticket/Display.html?id=3712 # XX how closely should we match the stdlib API? # - maybe suppress_ragged_eofs=False is a better default? # - maybe check crypto folks for advice? # - this is also interesting: https://bugs.python.org/issue8108#msg102867 # Definitely keep an eye on Cory's TLS API ideas on security-sig etc. # XX document behavior on cancellation/error (i.e.: all is lost abandon # stream) # docs will need to make very clear that this is different from all the other # cancellations in core Trio import operator as _operator import ssl as _stdlib_ssl from enum import Enum as _Enum import trio from .abc import Stream, Listener from ._highlevel_generic import aclose_forcefully from . import _sync from ._util import ConflictDetector, Final ################################################################ # SSLStream ################################################################ # Ideally, when the user calls SSLStream.receive_some() with no argument, then # we should do exactly one call to self.transport_stream.receive_some(), # decrypt everything we got, and return it. Unfortunately, the way openssl's # API works, we have to pick how much data we want to allow when we call # read(), and then it (potentially) triggers a call to # transport_stream.receive_some(). So at the time we pick the amount of data # to decrypt, we don't know how much data we've read. 
As a simple heuristic, # we record the max amount of data returned by previous calls to # transport_stream.receive_some(), and we use that for future calls to read(). # But what do we use for the very first call? That's what this constant sets. # # Note that the value passed to read() is a limit on the amount of # *decrypted* data, but we can only see the size of the *encrypted* data # returned by transport_stream.receive_some(). TLS adds a small amount of # framing overhead, and TLS compression is rarely used these days because it's # insecure. So the size of the encrypted data should be a slight over-estimate # of the size of the decrypted data, which is exactly what we want. # # The specific value is not really based on anything; it might be worth tuning # at some point. But, if you have an TCP connection with the typical 1500 byte # MTU and an initial window of 10 (see RFC 6928), then the initial burst of # data will be limited to ~15000 bytes (or a bit less due to IP-level framing # overhead), so this is chosen to be larger than that. STARTING_RECEIVE_SIZE = 16384 class NeedHandshakeError(Exception): """Some :class:`SSLStream` methods can't return any meaningful data until after the handshake. If you call them before the handshake, they raise this error. """ class _Once: def __init__(self, afn, *args): self._afn = afn self._args = args self.started = False self._done = _sync.Event() async def ensure(self, *, checkpoint): if not self.started: self.started = True await self._afn(*self._args) self._done.set() elif not checkpoint and self._done.is_set(): return else: await self._done.wait() @property def done(self): return self._done.is_set() _State = _Enum("_State", ["OK", "BROKEN", "CLOSED"]) class SSLStream(Stream, metaclass=Final): r"""Encrypted communication using SSL/TLS. :class:`SSLStream` wraps an arbitrary :class:`~trio.abc.Stream`, and allows you to perform encrypted communication over it using the usual :class:`~trio.abc.Stream` interface. You pass regular data to :meth:`send_all`, then it encrypts it and sends the encrypted data on the underlying :class:`~trio.abc.Stream`; :meth:`receive_some` takes encrypted data out of the underlying :class:`~trio.abc.Stream` and decrypts it before returning it. You should read the standard library's :mod:`ssl` documentation carefully before attempting to use this class, and probably other general documentation on SSL/TLS as well. SSL/TLS is subtle and quick to anger. Really. I'm not kidding. Args: transport_stream (~trio.abc.Stream): The stream used to transport encrypted data. Required. ssl_context (~ssl.SSLContext): The :class:`~ssl.SSLContext` used for this connection. Required. Usually created by calling :func:`ssl.create_default_context`. server_hostname (str or None): The name of the server being connected to. Used for `SNI `__ and for validating the server's certificate (if hostname checking is enabled). This is effectively mandatory for clients, and actually mandatory if ``ssl_context.check_hostname`` is ``True``. server_side (bool): Whether this stream is acting as a client or server. Defaults to False, i.e. client mode. https_compatible (bool): There are two versions of SSL/TLS commonly encountered in the wild: the standard version, and the version used for HTTPS (HTTP-over-SSL/TLS). Standard-compliant SSL/TLS implementations always send a cryptographically signed ``close_notify`` message before closing the connection. 
This is important because if the underlying transport were simply closed, then there wouldn't be any way for the other side to know whether the connection was intentionally closed by the peer that they negotiated a cryptographic connection to, or by some `man-in-the-middle `__ attacker who can't manipulate the cryptographic stream, but can manipulate the transport layer (a so-called "truncation attack"). However, this part of the standard is widely ignored by real-world HTTPS implementations, which means that if you want to interoperate with them, then you NEED to ignore it too. Fortunately this isn't as bad as it sounds, because the HTTP protocol already includes its own equivalent of ``close_notify``, so doing this again at the SSL/TLS level is redundant. But not all protocols do! Therefore, by default Trio implements the safer standard-compliant version (``https_compatible=False``). But if you're speaking HTTPS or some other protocol where ``close_notify``\s are commonly skipped, then you should set ``https_compatible=True``; with this setting, Trio will neither expect nor send ``close_notify`` messages. If you have code that was written to use :class:`ssl.SSLSocket` and now you're porting it to Trio, then it may be useful to know that a difference between :class:`SSLStream` and :class:`ssl.SSLSocket` is that :class:`~ssl.SSLSocket` implements the ``https_compatible=True`` behavior by default. Attributes: transport_stream (trio.abc.Stream): The underlying transport stream that was passed to ``__init__``. An example of when this would be useful is if you're using :class:`SSLStream` over a :class:`~trio.SocketStream` and want to call the :class:`~trio.SocketStream`'s :meth:`~trio.SocketStream.setsockopt` method. Internally, this class is implemented using an instance of :class:`ssl.SSLObject`, and all of :class:`~ssl.SSLObject`'s methods and attributes are re-exported as methods and attributes on this class. However, there is one difference: :class:`~ssl.SSLObject` has several methods that return information about the encrypted connection, like :meth:`~ssl.SSLSocket.cipher` or :meth:`~ssl.SSLSocket.selected_alpn_protocol`. If you call them before the handshake, when they can't possibly return useful data, then :class:`ssl.SSLObject` returns None, but :class:`trio.SSLStream` raises :exc:`NeedHandshakeError`. This also means that if you register a SNI callback using `~ssl.SSLContext.sni_callback`, then the first argument your callback receives will be a :class:`ssl.SSLObject`. """ # Note: any new arguments here should likely also be added to # SSLListener.__init__, and maybe the open_ssl_over_tcp_* helpers. 
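    # A hedged usage sketch (``tcp_stream`` stands for an already-connected
    # trio.SocketStream; this mirrors what the open_ssl_over_tcp_stream
    # helper does for you):
    #
    #     import ssl
    #     ctx = ssl.create_default_context()
    #     ssl_stream = trio.SSLStream(
    #         tcp_stream, ctx, server_hostname="example.com"
    #     )
    #     await ssl_stream.do_handshake()
    #     await ssl_stream.send_all(b"GET / HTTP/1.0\r\n\r\n")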
def __init__( self, transport_stream, ssl_context, *, server_hostname=None, server_side=False, https_compatible=False, ): self.transport_stream = transport_stream self._state = _State.OK self._https_compatible = https_compatible self._outgoing = _stdlib_ssl.MemoryBIO() self._delayed_outgoing = None self._incoming = _stdlib_ssl.MemoryBIO() self._ssl_object = ssl_context.wrap_bio( self._incoming, self._outgoing, server_side=server_side, server_hostname=server_hostname, ) # Tracks whether we've already done the initial handshake self._handshook = _Once(self._do_handshake) # These are used to synchronize access to self.transport_stream self._inner_send_lock = _sync.StrictFIFOLock() self._inner_recv_count = 0 self._inner_recv_lock = _sync.Lock() # These are used to make sure that our caller doesn't attempt to make # multiple concurrent calls to send_all/wait_send_all_might_not_block # or to receive_some. self._outer_send_conflict_detector = ConflictDetector( "another task is currently sending data on this SSLStream" ) self._outer_recv_conflict_detector = ConflictDetector( "another task is currently receiving data on this SSLStream" ) self._estimated_receive_size = STARTING_RECEIVE_SIZE _forwarded = { "context", "server_side", "server_hostname", "session", "session_reused", "getpeercert", "selected_npn_protocol", "cipher", "shared_ciphers", "compression", "pending", "get_channel_binding", "selected_alpn_protocol", "version", } _after_handshake = { "session_reused", "getpeercert", "selected_npn_protocol", "cipher", "shared_ciphers", "compression", "get_channel_binding", "selected_alpn_protocol", "version", } def __getattr__(self, name): if name in self._forwarded: if name in self._after_handshake and not self._handshook.done: raise NeedHandshakeError( "call do_handshake() before calling {!r}".format(name) ) return getattr(self._ssl_object, name) else: raise AttributeError(name) def __setattr__(self, name, value): if name in self._forwarded: setattr(self._ssl_object, name, value) else: super().__setattr__(name, value) def __dir__(self): return super().__dir__() + list(self._forwarded) def _check_status(self): if self._state is _State.OK: return elif self._state is _State.BROKEN: raise trio.BrokenResourceError elif self._state is _State.CLOSED: raise trio.ClosedResourceError else: # pragma: no cover assert False # This is probably the single trickiest function in Trio. It has lots of # comments, though, just make sure to think carefully if you ever have to # touch it. The big comment at the top of this file will help explain # too. async def _retry(self, fn, *args, ignore_want_read=False, is_handshake=False): await trio.lowlevel.checkpoint_if_cancelled() yielded = False finished = False while not finished: # WARNING: this code needs to be very careful with when it # calls 'await'! There might be multiple tasks calling this # function at the same time trying to do different operations, # so we need to be careful to: # # 1) interact with the SSLObject, then # 2) await on exactly one thing that lets us make forward # progress, then # 3) loop or exit # # In particular we don't want to yield while interacting with # the SSLObject (because it's shared state, so someone else # might come in and mess with it while we're suspended), and # we don't want to yield *before* starting the operation that # will help us make progress, because then someone else might # come in and leapfrog us. # Call the SSLObject method, and get its result. 
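            # (Here fn is a bound SSLObject method -- do_handshake, read,
            # write, or unwrap -- and args are whatever the caller captured in
            # _retry's signature above.)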
# # NB: despite what the docs say, SSLWantWriteError can't # happen – "Writes to memory BIOs will always succeed if # memory is available: that is their size can grow # indefinitely." # https://wiki.openssl.org/index.php/Manual:BIO_s_mem(3) want_read = False ret = None try: ret = fn(*args) except _stdlib_ssl.SSLWantReadError: want_read = True except (_stdlib_ssl.SSLError, _stdlib_ssl.CertificateError) as exc: self._state = _State.BROKEN raise trio.BrokenResourceError from exc else: finished = True if ignore_want_read: want_read = False finished = True to_send = self._outgoing.read() # Some versions of SSL_do_handshake have a bug in how they handle # the TLS 1.3 handshake on the server side: after the handshake # finishes, they automatically send session tickets, even though # the client may not be expecting data to arrive at this point and # sending it could cause a deadlock or lost data. This applies at # least to OpenSSL 1.1.1c and earlier, and the OpenSSL devs # currently have no plans to fix it: # # https://github.com/openssl/openssl/issues/7948 # https://github.com/openssl/openssl/issues/7967 # # The correct behavior is to wait to send session tickets on the # first call to SSL_write. (This is what BoringSSL does.) So, we # use a heuristic to detect when OpenSSL has tried to send session # tickets, and we manually delay sending them until the # appropriate moment. For more discussion see: # # https://github.com/python-trio/trio/issues/819#issuecomment-517529763 if ( is_handshake and not want_read and self._ssl_object.server_side and self._ssl_object.version() == "TLSv1.3" ): assert self._delayed_outgoing is None self._delayed_outgoing = to_send to_send = b"" # Outputs from the above code block are: # # - to_send: bytestring; if non-empty then we need to send # this data to make forward progress # # - want_read: True if we need to receive_some some data to make # forward progress # # - finished: False means that we need to retry the call to # fn(*args) again, after having pushed things forward. True # means we still need to do whatever was said (in particular # send any data in to_send), but once we do then we're # done. # # - ret: the operation's return value. (Meaningless unless # finished is True.) # # Invariant: want_read and finished can't both be True at the # same time. # # Now we need to move things forward. There are two things we # might have to do, and any given operation might require # either, both, or neither to proceed: # # - send the data in to_send # # - receive_some some data and put it into the incoming BIO # # Our strategy is: if there's data to send, send it; # *otherwise* if there's data to receive_some, receive_some it. # # If both need to happen, then we only send. Why? Well, we # know that *right now* we have to both send and receive_some # before the operation can complete. But as soon as we yield, # that information becomes potentially stale – e.g. while # we're sending, some other task might go and receive_some the # data we need and put it into the incoming BIO. And if it # does, then we *definitely don't* want to do a receive_some – # there might not be any more data coming, and we'd deadlock! # We could do something tricky to keep track of whether a # receive_some happens while we're sending, but the case where # we have to do both is very unusual (only during a # renegotiation), so it's better to keep things simple. So we # do just one potentially-blocking operation, then check again # for fresh information. 
# # And we prioritize sending over receiving because, if there # are multiple tasks that want to receive_some, then it # doesn't matter what order they go in. But if there are # multiple tasks that want to send, then they each have # different data, and the data needs to get put onto the wire # in the same order that it was retrieved from the outgoing # BIO. So if we have data to send, that *needs* to be the # *very* *next* *thing* we do, to make sure no-one else sneaks # in before us. Or if we can't send immediately because # someone else is, then we at least need to get in line # immediately. if to_send: # NOTE: This relies on the lock being strict FIFO fair! async with self._inner_send_lock: yielded = True try: if self._delayed_outgoing is not None: to_send = self._delayed_outgoing + to_send self._delayed_outgoing = None await self.transport_stream.send_all(to_send) except: # Some unknown amount of our data got sent, and we # don't know how much. This stream is doomed. self._state = _State.BROKEN raise elif want_read: # It's possible that someone else is already blocked in # transport_stream.receive_some. If so then we want to # wait for them to finish, but we don't want to call # transport_stream.receive_some again ourselves; we just # want to loop around and check if their contribution # helped anything. So we make a note of how many times # some task has been through here before taking the lock, # and if it's changed by the time we get the lock, then we # skip calling transport_stream.receive_some and loop # around immediately. recv_count = self._inner_recv_count async with self._inner_recv_lock: yielded = True if recv_count == self._inner_recv_count: data = await self.transport_stream.receive_some() if not data: self._incoming.write_eof() else: self._estimated_receive_size = max( self._estimated_receive_size, len(data) ) self._incoming.write(data) self._inner_recv_count += 1 if not yielded: await trio.lowlevel.cancel_shielded_checkpoint() return ret async def _do_handshake(self): try: await self._retry(self._ssl_object.do_handshake, is_handshake=True) except: self._state = _State.BROKEN raise async def do_handshake(self): """Ensure that the initial handshake has completed. The SSL protocol requires an initial handshake to exchange certificates, select cryptographic keys, and so forth, before any actual data can be sent or received. You don't have to call this method; if you don't, then :class:`SSLStream` will automatically perform the handshake as needed, the first time you try to send or receive data. But if you want to trigger it manually – for example, because you want to look at the peer's certificate before you start talking to them – then you can call this method. If the initial handshake is already in progress in another task, this waits for it to complete and then returns. If the initial handshake has already completed, this returns immediately without doing anything (except executing a checkpoint). .. warning:: If this method is cancelled, then it may leave the :class:`SSLStream` in an unusable state. If this happens then any future attempt to use the object will raise :exc:`trio.BrokenResourceError`. """ self._check_status() await self._handshook.ensure(checkpoint=True) # Most things work if we don't explicitly force do_handshake to be called # before calling receive_some or send_all, because openssl will # automatically perform the handshake on the first SSL_{read,write} # call. BUT, allowing openssl to do this will disable Python's hostname # checking!!! 
See: # https://bugs.python.org/issue30141 # So we *definitely* have to make sure that do_handshake is called # before doing anything else. async def receive_some(self, max_bytes=None): """Read some data from the underlying transport, decrypt it, and return it. See :meth:`trio.abc.ReceiveStream.receive_some` for details. .. warning:: If this method is cancelled while the initial handshake or a renegotiation are in progress, then it may leave the :class:`SSLStream` in an unusable state. If this happens then any future attempt to use the object will raise :exc:`trio.BrokenResourceError`. """ with self._outer_recv_conflict_detector: self._check_status() try: await self._handshook.ensure(checkpoint=False) except trio.BrokenResourceError as exc: # For some reason, EOF before handshake sometimes raises # SSLSyscallError instead of SSLEOFError (e.g. on my linux # laptop, but not on appveyor). Thanks openssl. if self._https_compatible and isinstance( exc.__cause__, (_stdlib_ssl.SSLEOFError, _stdlib_ssl.SSLSyscallError), ): await trio.lowlevel.checkpoint() return b"" else: raise if max_bytes is None: # If we somehow have more data already in our pending buffer # than the estimate receive size, bump up our size a bit for # this read only. max_bytes = max(self._estimated_receive_size, self._incoming.pending) else: max_bytes = _operator.index(max_bytes) if max_bytes < 1: raise ValueError("max_bytes must be >= 1") try: return await self._retry(self._ssl_object.read, max_bytes) except trio.BrokenResourceError as exc: # This isn't quite equivalent to just returning b"" in the # first place, because we still end up with self._state set to # BROKEN. But that's actually fine, because after getting an # EOF on TLS then the only thing you can do is close the # stream, and closing doesn't care about the state. if self._https_compatible and isinstance( exc.__cause__, _stdlib_ssl.SSLEOFError ): await trio.lowlevel.checkpoint() return b"" else: raise async def send_all(self, data): """Encrypt some data and then send it on the underlying transport. See :meth:`trio.abc.SendStream.send_all` for details. .. warning:: If this method is cancelled, then it may leave the :class:`SSLStream` in an unusable state. If this happens then any attempt to use the object will raise :exc:`trio.BrokenResourceError`. """ with self._outer_send_conflict_detector: self._check_status() await self._handshook.ensure(checkpoint=False) # SSLObject interprets write(b"") as an EOF for some reason, which # is not what we want. if not data: await trio.lowlevel.checkpoint() return await self._retry(self._ssl_object.write, data) async def unwrap(self): """Cleanly close down the SSL/TLS encryption layer, allowing the underlying stream to be used for unencrypted communication. You almost certainly don't need this. Returns: A pair ``(transport_stream, trailing_bytes)``, where ``transport_stream`` is the underlying transport stream, and ``trailing_bytes`` is a byte string. Since :class:`SSLStream` doesn't necessarily know where the end of the encrypted data will be, it can happen that it accidentally reads too much from the underlying stream. ``trailing_bytes`` contains this extra data; you should process it as if it was returned from a call to ``transport_stream.receive_some(...)``. 
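For example, a protocol that switches back to plaintext after an
encrypted phase might do something like this (an illustrative sketch,
not from the original docs)::

    transport_stream, trailing_bytes = await ssl_stream.unwrap()
    # The TLS layer is now shut down; keep talking in plaintext. Any
    # bytes that were read past the end of the encrypted data must be
    # processed before reading from the transport again:
    data = trailing_bytes + await transport_stream.receive_some()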
""" with self._outer_recv_conflict_detector, self._outer_send_conflict_detector: self._check_status() await self._handshook.ensure(checkpoint=False) await self._retry(self._ssl_object.unwrap) transport_stream = self.transport_stream self.transport_stream = None self._state = _State.CLOSED return (transport_stream, self._incoming.read()) async def aclose(self): """Gracefully shut down this connection, and close the underlying transport. If ``https_compatible`` is False (the default), then this attempts to first send a ``close_notify`` and then close the underlying stream by calling its :meth:`~trio.abc.AsyncResource.aclose` method. If ``https_compatible`` is set to True, then this simply closes the underlying stream and marks this stream as closed. """ if self._state is _State.CLOSED: await trio.lowlevel.checkpoint() return if self._state is _State.BROKEN or self._https_compatible: self._state = _State.CLOSED await self.transport_stream.aclose() return try: # https_compatible=False, so we're in spec-compliant mode and have # to send close_notify so that the other side gets a cryptographic # assurance that we've called aclose. Of course, we can't do # anything cryptographic until after we've completed the # handshake: await self._handshook.ensure(checkpoint=False) # Then, we call SSL_shutdown *once*, because we want to send a # close_notify but *not* wait for the other side to send back a # response. In principle it would be more polite to wait for the # other side to reply with their own close_notify. However, if # they aren't paying attention (e.g., if they're just sending # data and not receiving) then we will never notice our # close_notify and we'll be waiting forever. Eventually we'll time # out (hopefully), but it's still kind of nasty. And we can't # require the other side to always be receiving, because (a) # backpressure is kind of important, and (b) I bet there are # broken TLS implementations out there that don't receive all the # time. (Like e.g. anyone using Python ssl in synchronous mode.) # # The send-then-immediately-close behavior is explicitly allowed # by the TLS specs, so we're ok on that. # # Subtlety: SSLObject.unwrap will immediately call it a second # time, and the second time will raise SSLWantReadError because # there hasn't been time for the other side to respond # yet. (Unless they spontaneously sent a close_notify before we # called this, and it's either already been processed or gets # pulled out of the buffer by Python's second call.) So the way to # do what we want is to ignore SSLWantReadError on this call. # # Also, because the other side might have already sent # close_notify and closed their connection then it's possible that # our attempt to send close_notify will raise # BrokenResourceError. This is totally legal, and in fact can happen # with two well-behaved Trio programs talking to each other, so we # don't want to raise an error. So we suppress BrokenResourceError # here. (This is safe, because literally the only thing this call # to _retry will do is send the close_notify alert, so that's # surely where the error comes from.) # # FYI in some cases this could also raise SSLSyscallError which I # think is because SSL_shutdown is terrible. (Check out that note # at the bottom of the man page saying that it sometimes gets # raised spuriously.) I haven't seen this since we switched to # immediately closing the socket, and I don't know exactly what # conditions cause it and how to respond, so for now we're just # letting that happen. 
But if you start seeing it, then hopefully # this will give you a little head start on tracking it down, # because whoa did this puzzle us at the 2017 PyCon sprints. # # Also, if someone else is blocked in send/receive, then we aren't # going to be able to do a clean shutdown. If that happens, we'll # just do an unclean shutdown. try: await self._retry(self._ssl_object.unwrap, ignore_want_read=True) except (trio.BrokenResourceError, trio.BusyResourceError): pass except: # Failure! Kill the stream and move on. await aclose_forcefully(self.transport_stream) raise else: # Success! Gracefully close the underlying stream. await self.transport_stream.aclose() finally: self._state = _State.CLOSED async def wait_send_all_might_not_block(self): """See :meth:`trio.abc.SendStream.wait_send_all_might_not_block`.""" # This method's implementation is deceptively simple. # # First, we take the outer send lock, because of Trio's standard # semantics that wait_send_all_might_not_block and send_all # conflict. with self._outer_send_conflict_detector: self._check_status() # Then we take the inner send lock. We know that no other tasks # are calling self.send_all or self.wait_send_all_might_not_block, # because we have the outer_send_lock. But! There might be another # task calling self.receive_some -> transport_stream.send_all, in # which case if we were to call # transport_stream.wait_send_all_might_not_block directly we'd # have two tasks doing write-related operations on # transport_stream simultaneously, which is not allowed. We # *don't* want to raise this conflict to our caller, because it's # purely an internal affair – all they did was call # wait_send_all_might_not_block and receive_some at the same time, # which is totally valid. And waiting for the lock is OK, because # a call to send_all certainly wouldn't complete while the other # task holds the lock. async with self._inner_send_lock: # Now we have the lock, which creates another potential # problem: what if a call to self.receive_some attempts to do # transport_stream.send_all now? It'll have to wait for us to # finish! But that's OK, because we release the lock as soon # as the underlying stream becomes writable, and the # self.receive_some call wasn't going to make any progress # until then anyway. # # Of course, this does mean we might return *before* the # stream is logically writable, because immediately after we # return self.receive_some might write some data and make it # non-writable again. But that's OK too, # wait_send_all_might_not_block only guarantees that it # doesn't return late. await self.transport_stream.wait_send_all_might_not_block() class SSLListener(Listener[SSLStream], metaclass=Final): """A :class:`~trio.abc.Listener` for SSL/TLS-encrypted servers. :class:`SSLListener` wraps around another Listener, and converts all incoming connections to encrypted connections by wrapping them in a :class:`SSLStream`. Args: transport_listener (~trio.abc.Listener): The listener whose incoming connections will be wrapped in :class:`SSLStream`. ssl_context (~ssl.SSLContext): The :class:`~ssl.SSLContext` that will be used for incoming connections. https_compatible (bool): Passed on to :class:`SSLStream`. Attributes: transport_listener (trio.abc.Listener): The underlying listener that was passed to ``__init__``. 
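A typical server setup might look like this (an illustrative sketch, not
from the original docs; ``server_ssl_context`` and ``handle_connection``
are placeholders)::

    tcp_listeners = await trio.open_tcp_listeners(443)
    ssl_listeners = [
        trio.SSLListener(listener, server_ssl_context)
        for listener in tcp_listeners
    ]
    await trio.serve_listeners(handle_connection, ssl_listeners)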
""" def __init__( self, transport_listener, ssl_context, *, https_compatible=False, ): self.transport_listener = transport_listener self._ssl_context = ssl_context self._https_compatible = https_compatible async def accept(self): """Accept the next connection and wrap it in an :class:`SSLStream`. See :meth:`trio.abc.Listener.accept` for details. """ transport_stream = await self.transport_listener.accept() return SSLStream( transport_stream, self._ssl_context, server_side=True, https_compatible=self._https_compatible, ) async def aclose(self): """Close the transport listener.""" await self.transport_listener.aclose() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_subprocess.py0000644000076500000000000006424000000000000016004 0ustar00quentinwheel# coding: utf-8 import os import subprocess import sys from typing import Optional from functools import partial import warnings from typing import TYPE_CHECKING from ._abc import AsyncResource, SendStream, ReceiveStream from ._highlevel_generic import StapledStream from ._sync import Lock from ._subprocess_platform import ( wait_child_exiting, create_pipe_to_child_stdin, create_pipe_from_child_output, ) from ._util import NoPublicConstructor import trio # Linux-specific, but has complex lifetime management stuff so we hard-code it # here instead of hiding it behind the _subprocess_platform abstraction can_try_pidfd_open: bool if TYPE_CHECKING: def pidfd_open(fd: int, flags: int) -> int: ... else: can_try_pidfd_open = True try: from os import pidfd_open except ImportError: if sys.platform == "linux": import ctypes _cdll_for_pidfd_open = ctypes.CDLL(None, use_errno=True) _cdll_for_pidfd_open.syscall.restype = ctypes.c_long # pid and flags are actually int-sized, but the syscall() function # always takes longs. (Except on x32 where long is 32-bits and syscall # takes 64-bit arguments. But in the unlikely case that anyone is # using x32, this will still work, b/c we only need to pass in 32 bits # of data, and the C ABI doesn't distinguish between passing 32-bit vs # 64-bit integers; our 32-bit values will get loaded into 64-bit # registers where syscall() will find them.) _cdll_for_pidfd_open.syscall.argtypes = [ ctypes.c_long, # syscall number ctypes.c_long, # pid ctypes.c_long, # flags ] __NR_pidfd_open = 434 def pidfd_open(fd: int, flags: int) -> int: result = _cdll_for_pidfd_open.syscall(__NR_pidfd_open, fd, flags) if result < 0: err = ctypes.get_errno() raise OSError(err, os.strerror(err)) return result else: can_try_pidfd_open = False class Process(AsyncResource, metaclass=NoPublicConstructor): r"""A child process. Like :class:`subprocess.Popen`, but async. This class has no public constructor. To create a child process, use `open_process`:: process = await trio.open_process(...) `Process` implements the `~trio.abc.AsyncResource` interface. In order to make sure your process doesn't end up getting abandoned by mistake or after an exception, you can use ``async with``:: async with await trio.open_process(...) as process: ... "Closing" a :class:`Process` will close any pipes to the child and wait for it to exit; if cancelled, the child will be forcibly killed and we will ensure it has finished exiting before allowing the cancellation to propagate. Attributes: args (str or list): The ``command`` passed at construction time, specifying the process to execute and its arguments. pid (int): The process ID of the child process managed by this object. 
stdin (trio.abc.SendStream or None): A stream connected to the child's standard input stream: when you write bytes here, they become available for the child to read. Only available if the :class:`Process` was constructed using ``stdin=PIPE``; otherwise this will be None. stdout (trio.abc.ReceiveStream or None): A stream connected to the child's standard output stream: when the child writes to standard output, the written bytes become available for you to read here. Only available if the :class:`Process` was constructed using ``stdout=PIPE``; otherwise this will be None. stderr (trio.abc.ReceiveStream or None): A stream connected to the child's standard error stream: when the child writes to standard error, the written bytes become available for you to read here. Only available if the :class:`Process` was constructed using ``stderr=PIPE``; otherwise this will be None. stdio (trio.StapledStream or None): A stream that sends data to the child's standard input and receives from the child's standard output. Only available if both :attr:`stdin` and :attr:`stdout` are available; otherwise this will be None. """ universal_newlines = False encoding = None errors = None # Available for the per-platform wait_child_exiting() implementations # to stash some state; waitid platforms use this to avoid spawning # arbitrarily many threads if wait() keeps getting cancelled. _wait_for_exit_data = None def __init__(self, popen, stdin, stdout, stderr): self._proc = popen self.stdin = stdin # type: Optional[SendStream] self.stdout = stdout # type: Optional[ReceiveStream] self.stderr = stderr # type: Optional[ReceiveStream] self.stdio = None # type: Optional[StapledStream] if self.stdin is not None and self.stdout is not None: self.stdio = StapledStream(self.stdin, self.stdout) self._wait_lock = Lock() self._pidfd = None if can_try_pidfd_open: try: fd = pidfd_open(self._proc.pid, 0) except OSError: # Well, we tried, but it didn't work (probably because we're # running on an older kernel, or in an older sandbox, that # hasn't been updated to support pidfd_open). We'll fall back # on waitid instead. pass else: # It worked! Wrap the raw fd up in a Python file object to # make sure it'll get closed. self._pidfd = open(fd) self.args = self._proc.args self.pid = self._proc.pid def __repr__(self): returncode = self.returncode if returncode is None: status = "running with PID {}".format(self.pid) else: if returncode < 0: status = "exited with signal {}".format(-returncode) else: status = "exited with status {}".format(returncode) return "".format(self.args, status) @property def returncode(self): """The exit status of the process (an integer), or ``None`` if it's still running. By convention, a return code of zero indicates success. On UNIX, negative values indicate termination due to a signal, e.g., -11 if terminated by signal 11 (``SIGSEGV``). On Windows, a process that exits due to a call to :meth:`Process.terminate` will have an exit status of 1. Unlike the standard library `subprocess.Popen.returncode`, you don't have to call `poll` or `wait` to update this attribute; it's automatically updated as needed, and will always give you the latest information. """ result = self._proc.poll() if result is not None: self._close_pidfd() return result async def aclose(self): """Close any pipes we have to the process (both input and output) and wait for it to exit. If cancelled, kills the process and waits for it to finish exiting before propagating the cancellation. 
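For example, here is a sketch (illustrative, not from the original docs)
of talking to a child process over its pipes, using ``async with`` to
invoke this method automatically on the way out::

    import subprocess

    async with await trio.open_process(
        ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE
    ) as proc:
        await proc.stdin.send_all(b"hello")
        await proc.stdin.aclose()  # send EOF so ``cat`` exits
        print(await proc.stdout.receive_some())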
""" with trio.CancelScope(shield=True): if self.stdin is not None: await self.stdin.aclose() if self.stdout is not None: await self.stdout.aclose() if self.stderr is not None: await self.stderr.aclose() try: await self.wait() finally: if self._proc.returncode is None: self.kill() with trio.CancelScope(shield=True): await self.wait() def _close_pidfd(self): if self._pidfd is not None: self._pidfd.close() self._pidfd = None async def wait(self): """Block until the process exits. Returns: The exit status of the process; see :attr:`returncode`. """ async with self._wait_lock: if self.poll() is None: if self._pidfd is not None: await trio.lowlevel.wait_readable(self._pidfd) else: await wait_child_exiting(self) # We have to use .wait() here, not .poll(), because on macOS # (and maybe other systems, who knows), there's a race # condition inside the kernel that creates a tiny window where # kqueue reports that the process has exited, but # waitpid(WNOHANG) can't yet reap it. So this .wait() may # actually block for a tiny fraction of a second. self._proc.wait() self._close_pidfd() assert self._proc.returncode is not None return self._proc.returncode def poll(self): """Returns the exit status of the process (an integer), or ``None`` if it's still running. Note that on Trio (unlike the standard library `subprocess.Popen`), ``process.poll()`` and ``process.returncode`` always give the same result. See `returncode` for more details. This method is only included to make it easier to port code from `subprocess`. """ return self.returncode def send_signal(self, sig): """Send signal ``sig`` to the process. On UNIX, ``sig`` may be any signal defined in the :mod:`signal` module, such as ``signal.SIGINT`` or ``signal.SIGTERM``. On Windows, it may be anything accepted by the standard library :meth:`subprocess.Popen.send_signal`. """ self._proc.send_signal(sig) def terminate(self): """Terminate the process, politely if possible. On UNIX, this is equivalent to ``send_signal(signal.SIGTERM)``; by convention this requests graceful termination, but a misbehaving or buggy process might ignore it. On Windows, :meth:`terminate` forcibly terminates the process in the same manner as :meth:`kill`. """ self._proc.terminate() def kill(self): """Immediately terminate the process. On UNIX, this is equivalent to ``send_signal(signal.SIGKILL)``. On Windows, it calls ``TerminateProcess``. In both cases, the process cannot prevent itself from being killed, but the termination will be delivered asynchronously; use :meth:`wait` if you want to ensure the process is actually dead before proceeding. """ self._proc.kill() async def open_process( command, *, stdin=None, stdout=None, stderr=None, **options ) -> Process: r"""Execute a child program in a new process. After construction, you can interact with the child process by writing data to its `~Process.stdin` stream (a `~trio.abc.SendStream`), reading data from its `~Process.stdout` and/or `~Process.stderr` streams (both `~trio.abc.ReceiveStream`\s), sending it signals using `~Process.terminate`, `~Process.kill`, or `~Process.send_signal`, and waiting for it to exit using `~Process.wait`. See `Process` for details. Each standard stream is only available if you specify that a pipe should be created for it. For example, if you pass ``stdin=subprocess.PIPE``, you can write to the `~Process.stdin` stream, else `~Process.stdin` will be ``None``. Args: command (list or str): The command to run. 
Typically this is a sequence of strings such as ``['ls', '-l', 'directory with spaces']``, where the first element names the executable to invoke and the other elements specify its arguments. With ``shell=True`` in the ``**options``, or on Windows, ``command`` may alternatively be a string, which will be parsed following platform-dependent :ref:`quoting rules `. stdin: Specifies what the child process's standard input stream should connect to: output written by the parent (``subprocess.PIPE``), nothing (``subprocess.DEVNULL``), or an open file (pass a file descriptor or something whose ``fileno`` method returns one). If ``stdin`` is unspecified, the child process will have the same standard input stream as its parent. stdout: Like ``stdin``, but for the child process's standard output stream. stderr: Like ``stdin``, but for the child process's standard error stream. An additional value ``subprocess.STDOUT`` is supported, which causes the child's standard output and standard error messages to be intermixed on a single standard output stream, attached to whatever the ``stdout`` option says to attach it to. **options: Other :ref:`general subprocess options ` are also accepted. Returns: A new `Process` object. Raises: OSError: if the process spawning fails, for example because the specified command could not be found. """ for key in ("universal_newlines", "text", "encoding", "errors", "bufsize"): if options.get(key): raise TypeError( "trio.Process only supports communicating over " "unbuffered byte streams; the '{}' option is not supported".format(key) ) if os.name == "posix": if isinstance(command, str) and not options.get("shell"): raise TypeError( "command must be a sequence (not a string) if shell=False " "on UNIX systems" ) if not isinstance(command, str) and options.get("shell"): raise TypeError( "command must be a string (not a sequence) if shell=True " "on UNIX systems" ) trio_stdin = None # type: Optional[SendStream] trio_stdout = None # type: Optional[ReceiveStream] trio_stderr = None # type: Optional[ReceiveStream] if stdin == subprocess.PIPE: trio_stdin, stdin = create_pipe_to_child_stdin() if stdout == subprocess.PIPE: trio_stdout, stdout = create_pipe_from_child_output() if stderr == subprocess.STDOUT: # If we created a pipe for stdout, pass the same pipe for # stderr. If stdout was some non-pipe thing (DEVNULL or a # given FD), pass the same thing. If stdout was passed as # None, keep stderr as STDOUT to allow subprocess to dup # our stdout. Regardless of which of these is applicable, # don't create a new Trio stream for stderr -- if stdout # is piped, stderr will be intermixed on the stdout stream. if stdout is not None: stderr = stdout elif stderr == subprocess.PIPE: trio_stderr, stderr = create_pipe_from_child_output() try: popen = await trio.to_thread.run_sync( partial( subprocess.Popen, command, stdin=stdin, stdout=stdout, stderr=stderr, **options, ) ) finally: # Close the parent's handle for each child side of a pipe; # we want the child to have the only copy, so that when # it exits we can read EOF on our side. 
if trio_stdin is not None: os.close(stdin) if trio_stdout is not None: os.close(stdout) if trio_stderr is not None: os.close(stderr) return Process._create(popen, trio_stdin, trio_stdout, trio_stderr) async def _windows_deliver_cancel(p): try: p.terminate() except OSError as exc: warnings.warn(RuntimeWarning(f"TerminateProcess on {p!r} failed with: {exc!r}")) async def _posix_deliver_cancel(p): try: p.terminate() await trio.sleep(5) warnings.warn( RuntimeWarning( f"process {p!r} ignored SIGTERM for 5 seconds. " f"(Maybe you should pass a custom deliver_cancel?) " f"Trying SIGKILL." ) ) p.kill() except OSError as exc: warnings.warn( RuntimeWarning(f"tried to kill process {p!r}, but failed with: {exc!r}") ) async def run_process( command, *, stdin=b"", capture_stdout=False, capture_stderr=False, check=True, deliver_cancel=None, **options, ): """Run ``command`` in a subprocess, wait for it to complete, and return a :class:`subprocess.CompletedProcess` instance describing the results. If cancelled, :func:`run_process` terminates the subprocess and waits for it to exit before propagating the cancellation, like :meth:`Process.aclose`. **Input:** The subprocess's standard input stream is set up to receive the bytes provided as ``stdin``. Once the given input has been fully delivered, or if none is provided, the subprocess will receive end-of-file when reading from its standard input. Alternatively, if you want the subprocess to read its standard input from the same place as the parent Trio process, you can pass ``stdin=None``. **Output:** By default, any output produced by the subprocess is passed through to the standard output and error streams of the parent Trio process. If you would like to capture this output and do something with it, you can pass ``capture_stdout=True`` to capture the subprocess's standard output, and/or ``capture_stderr=True`` to capture its standard error. Captured data is provided as the :attr:`~subprocess.CompletedProcess.stdout` and/or :attr:`~subprocess.CompletedProcess.stderr` attributes of the returned :class:`~subprocess.CompletedProcess` object. The value for any stream that was not captured will be ``None``. If you want to capture both stdout and stderr while keeping them separate, pass ``capture_stdout=True, capture_stderr=True``. If you want to capture both stdout and stderr but mixed together in the order they were printed, use: ``capture_stdout=True, stderr=subprocess.STDOUT``. This directs the child's stderr into its stdout, so the combined output will be available in the `~subprocess.CompletedProcess.stdout` attribute. **Error checking:** If the subprocess exits with a nonzero status code, indicating failure, :func:`run_process` raises a :exc:`subprocess.CalledProcessError` exception rather than returning normally. The captured outputs are still available as the :attr:`~subprocess.CalledProcessError.stdout` and :attr:`~subprocess.CalledProcessError.stderr` attributes of that exception. To disable this behavior, so that :func:`run_process` returns normally even if the subprocess exits abnormally, pass ``check=False``. Args: command (list or str): The command to run. Typically this is a sequence of strings such as ``['ls', '-l', 'directory with spaces']``, where the first element names the executable to invoke and the other elements specify its arguments. With ``shell=True`` in the ``**options``, or on Windows, ``command`` may alternatively be a string, which will be parsed following platform-dependent :ref:`quoting rules `. 
stdin (:obj:`bytes`, file descriptor, or None): The bytes to provide to the subprocess on its standard input stream, or ``None`` if the subprocess's standard input should come from the same place as the parent Trio process's standard input. As is the case with the :mod:`subprocess` module, you can also pass a file descriptor or an object with a ``fileno()`` method, in which case the subprocess's standard input will come from that file. capture_stdout (bool): If true, capture the bytes that the subprocess writes to its standard output stream and return them in the :attr:`~subprocess.CompletedProcess.stdout` attribute of the returned :class:`~subprocess.CompletedProcess` object. capture_stderr (bool): If true, capture the bytes that the subprocess writes to its standard error stream and return them in the :attr:`~subprocess.CompletedProcess.stderr` attribute of the returned :class:`~subprocess.CompletedProcess` object. check (bool): If false, don't validate that the subprocess exits successfully. You should be sure to check the ``returncode`` attribute of the returned object if you pass ``check=False``, so that errors don't pass silently. deliver_cancel (async function or None): If `run_process` is cancelled, then it needs to kill the child process. There are multiple ways to do this, so we let you customize it. If you pass None (the default), then the behavior depends on the platform: - On Windows, Trio calls ``TerminateProcess``, which should kill the process immediately. - On Unix-likes, the default behavior is to send a ``SIGTERM``, wait 5 seconds, and send a ``SIGKILL``. Alternatively, you can customize this behavior by passing in an arbitrary async function, which will be called with the `Process` object as an argument. For example, the default Unix behavior could be implemented like this:: async def my_deliver_cancel(process): process.send_signal(signal.SIGTERM) await trio.sleep(5) process.send_signal(signal.SIGKILL) When the process actually exits, the ``deliver_cancel`` function will automatically be cancelled – so if the process exits after ``SIGTERM``, then we'll never reach the ``SIGKILL``. In any case, `run_process` will always wait for the child process to exit before raising `Cancelled`. **options: :func:`run_process` also accepts any :ref:`general subprocess options ` and passes them on to the :class:`~trio.Process` constructor. This includes the ``stdout`` and ``stderr`` options, which provide additional redirection possibilities such as ``stderr=subprocess.STDOUT``, ``stdout=subprocess.DEVNULL``, or file descriptors. Returns: A :class:`subprocess.CompletedProcess` instance describing the return code and outputs. Raises: UnicodeError: if ``stdin`` is specified as a Unicode string, rather than bytes ValueError: if multiple redirections are specified for the same stream, e.g., both ``capture_stdout=True`` and ``stdout=subprocess.DEVNULL`` subprocess.CalledProcessError: if ``check=False`` is not passed and the process exits with a nonzero exit status OSError: if an error is encountered starting or communicating with the process .. note:: The child process runs in the same process group as the parent Trio process, so a Ctrl+C will be delivered simultaneously to both parent and child. If you don't want this behavior, consult your platform's documentation for starting child processes in a different process group. 
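For example (an illustrative sketch, not from the original docs)::

    # Capture output; a nonzero exit raises CalledProcessError:
    result = await trio.run_process(["ls", "-l"], capture_stdout=True)
    print(result.stdout.decode())

    # Feed input on stdin, and tolerate a nonzero exit status:
    result = await trio.run_process(
        ["grep", "needle"], stdin=b"haystack\n", check=False
    )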
""" if isinstance(stdin, str): raise UnicodeError("process stdin must be bytes, not str") if stdin == subprocess.PIPE: raise ValueError( "stdin=subprocess.PIPE doesn't make sense since the pipe " "is internal to run_process(); pass the actual data you " "want to send over that pipe instead" ) if isinstance(stdin, (bytes, bytearray, memoryview)): input = stdin options["stdin"] = subprocess.PIPE else: # stdin should be something acceptable to Process # (None, DEVNULL, a file descriptor, etc) and Process # will raise if it's not input = None options["stdin"] = stdin if capture_stdout: if "stdout" in options: raise ValueError("can't specify both stdout and capture_stdout") options["stdout"] = subprocess.PIPE if capture_stderr: if "stderr" in options: raise ValueError("can't specify both stderr and capture_stderr") options["stderr"] = subprocess.PIPE if deliver_cancel is None: if os.name == "nt": deliver_cancel = _windows_deliver_cancel else: assert os.name == "posix" deliver_cancel = _posix_deliver_cancel stdout_chunks = [] stderr_chunks = [] async with await open_process(command, **options) as proc: async def feed_input(): async with proc.stdin: try: await proc.stdin.send_all(input) except trio.BrokenResourceError: pass async def read_output(stream, chunks): async with stream: async for chunk in stream: chunks.append(chunk) async with trio.open_nursery() as nursery: if proc.stdin is not None: nursery.start_soon(feed_input) if proc.stdout is not None: nursery.start_soon(read_output, proc.stdout, stdout_chunks) if proc.stderr is not None: nursery.start_soon(read_output, proc.stderr, stderr_chunks) try: await proc.wait() except trio.Cancelled: with trio.CancelScope(shield=True): killer_cscope = trio.CancelScope(shield=True) async def killer(): with killer_cscope: await deliver_cancel(proc) nursery.start_soon(killer) await proc.wait() killer_cscope.cancel() raise stdout = b"".join(stdout_chunks) if proc.stdout is not None else None stderr = b"".join(stderr_chunks) if proc.stderr is not None else None if proc.returncode and check: raise subprocess.CalledProcessError( proc.returncode, proc.args, output=stdout, stderr=stderr ) else: return subprocess.CompletedProcess(proc.args, proc.returncode, stdout, stderr) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1623751952.816625 trio-0.19.0/trio/_subprocess_platform/0000755000076500000000000000000000000000000017330 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_subprocess_platform/__init__.py0000644000076500000000000001030700000000000021442 0ustar00quentinwheel# Platform-specific subprocess bits'n'pieces. import os import sys from typing import Optional, Tuple, TYPE_CHECKING from .. import _core, _subprocess from .._abc import SendStream, ReceiveStream _wait_child_exiting_error: Optional[ImportError] = None _create_child_pipe_error: Optional[ImportError] = None # Fallback versions of the functions provided -- implementations # per OS are imported atop these at the bottom of the module. async def wait_child_exiting(process: "_subprocess.Process") -> None: """Block until the child process managed by ``process`` is exiting. It is invalid to call this function if the process has already been waited on; that is, ``process.returncode`` must be None. When this function returns, it indicates that a call to :meth:`subprocess.Popen.wait` will immediately be able to return the process's exit status. 
The actual exit status is not consumed by this call, since :class:`~subprocess.Popen` wants to be able to do that itself. """ raise NotImplementedError from _wait_child_exiting_error # pragma: no cover def create_pipe_to_child_stdin() -> Tuple[SendStream, int]: """Create a new pipe suitable for sending data from this process to the standard input of a child we're about to spawn. Returns: A pair ``(trio_end, subprocess_end)`` where ``trio_end`` is a :class:`~trio.abc.SendStream` and ``subprocess_end`` is something suitable for passing as the ``stdin`` argument of :class:`subprocess.Popen`. """ raise NotImplementedError from _create_child_pipe_error # pragma: no cover def create_pipe_from_child_output() -> Tuple[ReceiveStream, int]: """Create a new pipe suitable for receiving data into this process from the standard output or error stream of a child we're about to spawn. Returns: A pair ``(trio_end, subprocess_end)`` where ``trio_end`` is a :class:`~trio.abc.ReceiveStream` and ``subprocess_end`` is something suitable for passing as the ``stdin`` argument of :class:`subprocess.Popen`. """ raise NotImplementedError from _create_child_pipe_error # pragma: no cover try: if sys.platform == "win32": from .windows import wait_child_exiting # noqa: F811 elif sys.platform != "linux" and (TYPE_CHECKING or hasattr(_core, "wait_kevent")): from .kqueue import wait_child_exiting # noqa: F811 else: from .waitid import wait_child_exiting # noqa: F811 except ImportError as ex: # pragma: no cover _wait_child_exiting_error = ex try: if TYPE_CHECKING: # Not worth type checking these definitions pass elif os.name == "posix": from ..lowlevel import FdStream def create_pipe_to_child_stdin(): # noqa: F811 rfd, wfd = os.pipe() return FdStream(wfd), rfd def create_pipe_from_child_output(): # noqa: F811 rfd, wfd = os.pipe() return FdStream(rfd), wfd elif os.name == "nt": from .._windows_pipes import PipeSendStream, PipeReceiveStream # This isn't exported or documented, but it's also not # underscore-prefixed, and seems kosher to use. The asyncio docs # for 3.5 included an example that imported socketpair from # windows_utils (before socket.socketpair existed on Windows), and # when asyncio.windows_utils.socketpair was removed in 3.7, the # removal was mentioned in the release notes. from asyncio.windows_utils import pipe as windows_pipe import msvcrt def create_pipe_to_child_stdin(): # noqa: F811 # for stdin, we want the write end (our end) to use overlapped I/O rh, wh = windows_pipe(overlapped=(False, True)) return PipeSendStream(wh), msvcrt.open_osfhandle(rh, os.O_RDONLY) def create_pipe_from_child_output(): # noqa: F811 # for stdout/err, it's the read end that's overlapped rh, wh = windows_pipe(overlapped=(True, False)) return PipeReceiveStream(rh), msvcrt.open_osfhandle(wh, 0) else: # pragma: no cover raise ImportError("pipes not implemented on this platform") except ImportError as ex: # pragma: no cover _create_child_pipe_error = ex ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_subprocess_platform/kqueue.py0000644000076500000000000000321500000000000021202 0ustar00quentinwheelimport sys import select from typing import TYPE_CHECKING from .. 
import _core, _subprocess assert (sys.platform != "win32" and sys.platform != "linux") or not TYPE_CHECKING async def wait_child_exiting(process: "_subprocess.Process") -> None: kqueue = _core.current_kqueue() try: from select import KQ_NOTE_EXIT except ImportError: # pragma: no cover # pypy doesn't define KQ_NOTE_EXIT: # https://bitbucket.org/pypy/pypy/issues/2921/ # I verified this value against both Darwin and FreeBSD KQ_NOTE_EXIT = 0x80000000 make_event = lambda flags: select.kevent( process.pid, filter=select.KQ_FILTER_PROC, flags=flags, fflags=KQ_NOTE_EXIT ) try: kqueue.control([make_event(select.KQ_EV_ADD | select.KQ_EV_ONESHOT)], 0) except ProcessLookupError: # pragma: no cover # This can supposedly happen if the process is in the process # of exiting, and it can even be the case that kqueue says the # process doesn't exist before waitpid(WNOHANG) says it hasn't # exited yet. See the discussion in https://chromium.googlesource.com/ # chromium/src/base/+/master/process/kill_mac.cc . # We haven't actually seen this error occur since we added # locking to prevent multiple calls to wait_child_exiting() # for the same process simultaneously, but given the explanation # in Chromium it seems we should still keep the check. return def abort(_): kqueue.control([make_event(select.KQ_EV_DELETE)], 0) return _core.Abort.SUCCEEDED await _core.wait_kevent(process.pid, select.KQ_FILTER_PROC, abort) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_subprocess_platform/waitid.py0000644000076500000000000000713000000000000021164 0ustar00quentinwheelimport errno import math import os import sys from .. import _core, _subprocess from .._sync import CapacityLimiter, Event from .._threads import to_thread_run_sync try: from os import waitid def sync_wait_reapable(pid): waitid(os.P_PID, pid, os.WEXITED | os.WNOWAIT) except ImportError: # pypy doesn't define os.waitid so we need to pull it out ourselves # using cffi: https://bitbucket.org/pypy/pypy/issues/2922/ import cffi waitid_ffi = cffi.FFI() # Believe it or not, siginfo_t starts with fields in the # same layout on both Linux and Darwin. The Linux structure # is bigger so that's what we use to size `pad`; while # there are a few extra fields in there, most of it is # true padding which would not be written by the syscall. waitid_ffi.cdef( """ typedef struct siginfo_s { int si_signo; int si_errno; int si_code; int si_pid; int si_uid; int si_status; int pad[26]; } siginfo_t; int waitid(int idtype, int id, siginfo_t* result, int options); """ ) waitid = waitid_ffi.dlopen(None).waitid def sync_wait_reapable(pid): P_PID = 1 WEXITED = 0x00000004 if sys.platform == "darwin": # pragma: no cover # waitid() is not exposed on Python on Darwin but does # work through CFFI; note that we typically won't get # here since Darwin also defines kqueue WNOWAIT = 0x00000020 else: WNOWAIT = 0x01000000 result = waitid_ffi.new("siginfo_t *") while waitid(P_PID, pid, result, WEXITED | WNOWAIT) < 0: got_errno = waitid_ffi.errno if got_errno == errno.EINTR: continue raise OSError(got_errno, os.strerror(got_errno)) # adapted from # https://github.com/python-trio/trio/issues/4#issuecomment-398967572 waitid_limiter = CapacityLimiter(math.inf) async def _waitid_system_task(pid: int, event: Event) -> None: """Spawn a thread that waits for ``pid`` to exit, then wake any tasks that were waiting on it. """ # cancellable=True: if this task is cancelled, then we abandon the # thread to keep running waitpid in the background. 
Since this is # always run as a system task, this will only happen if the whole # call to trio.run is shutting down. try: await to_thread_run_sync( sync_wait_reapable, pid, cancellable=True, limiter=waitid_limiter ) except OSError: # If waitid fails, waitpid will fail too, so it still makes # sense to wake up the callers of wait_process_exiting(). The # most likely reason for this error in practice is a child # exiting when wait() is not possible because SIGCHLD is # ignored. pass finally: event.set() async def wait_child_exiting(process: "_subprocess.Process") -> None: # Logic of this function: # - The first time we get called, we create an Event and start # an instance of _waitid_system_task that will set the Event # when waitid() completes. If that Event is set before # we get cancelled, we're good. # - Otherwise, a following call after the cancellation must # reuse the Event created during the first call, lest we # create an arbitrary number of threads waiting on the same # process. if process._wait_for_exit_data is None: process._wait_for_exit_data = event = Event() # type: ignore _core.spawn_system_task(_waitid_system_task, process.pid, event) assert isinstance(process._wait_for_exit_data, Event) await process._wait_for_exit_data.wait() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_subprocess_platform/windows.py0000644000076500000000000000032000000000000021367 0ustar00quentinwheelfrom .. import _subprocess from .._wait_for_object import WaitForSingleObject async def wait_child_exiting(process: "_subprocess.Process") -> None: await WaitForSingleObject(int(process._proc._handle)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_sync.py0000644000076500000000000006565500000000000014603 0ustar00quentinwheelimport math import attr import trio from . import _core from ._core import enable_ki_protection, ParkingLot from ._util import Final @attr.s(frozen=True) class _EventStatistics: tasks_waiting = attr.ib() @attr.s(repr=False, eq=False, hash=False, slots=True) class Event(metaclass=Final): """A waitable boolean value useful for inter-task synchronization, inspired by :class:`threading.Event`. An event object has an internal boolean flag, representing whether the event has happened yet. The flag is initially False, and the :meth:`wait` method waits until the flag is True. If the flag is already True, then :meth:`wait` returns immediately. (If the event has already happened, there's nothing to wait for.) The :meth:`set` method sets the flag to True, and wakes up any waiters. This behavior is useful because it helps avoid race conditions and lost wakeups: it doesn't matter whether :meth:`set` gets called just before or after :meth:`wait`. If you want a lower-level wakeup primitive that doesn't have this protection, consider :class:`Condition` or :class:`trio.lowlevel.ParkingLot`. .. note:: Unlike `threading.Event`, `trio.Event` has no `~threading.Event.clear` method. In Trio, once an `Event` has happened, it cannot un-happen. If you need to represent a series of events, consider creating a new `Event` object for each one (they're cheap!), or other synchronization methods like :ref:`channels ` or `trio.lowlevel.ParkingLot`. 
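A minimal usage sketch (illustrative, not from the original docs)::

    event = trio.Event()

    async def waiter():
        await event.wait()  # blocks until someone calls event.set()
        print("event happened!")

    async def setter():
        await trio.sleep(1)
        event.set()

    async with trio.open_nursery() as nursery:
        nursery.start_soon(waiter)
        nursery.start_soon(setter)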
""" _tasks = attr.ib(factory=set, init=False) _flag = attr.ib(default=False, init=False) def is_set(self): """Return the current value of the internal flag.""" return self._flag @enable_ki_protection def set(self): """Set the internal flag value to True, and wake any waiting tasks.""" if not self._flag: self._flag = True for task in self._tasks: _core.reschedule(task) self._tasks.clear() async def wait(self): """Block until the internal flag value becomes True. If it's already True, then this method returns immediately. """ if self._flag: await trio.lowlevel.checkpoint() else: task = _core.current_task() self._tasks.add(task) def abort_fn(_): self._tasks.remove(task) return _core.Abort.SUCCEEDED await _core.wait_task_rescheduled(abort_fn) def statistics(self): """Return an object containing debugging information. Currently the following fields are defined: * ``tasks_waiting``: The number of tasks blocked on this event's :meth:`wait` method. """ return _EventStatistics(tasks_waiting=len(self._tasks)) def async_cm(cls): @enable_ki_protection async def __aenter__(self): await self.acquire() __aenter__.__qualname__ = cls.__qualname__ + ".__aenter__" cls.__aenter__ = __aenter__ @enable_ki_protection async def __aexit__(self, *args): self.release() __aexit__.__qualname__ = cls.__qualname__ + ".__aexit__" cls.__aexit__ = __aexit__ return cls @attr.s(frozen=True) class _CapacityLimiterStatistics: borrowed_tokens = attr.ib() total_tokens = attr.ib() borrowers = attr.ib() tasks_waiting = attr.ib() @async_cm class CapacityLimiter(metaclass=Final): """An object for controlling access to a resource with limited capacity. Sometimes you need to put a limit on how many tasks can do something at the same time. For example, you might want to use some threads to run multiple blocking I/O operations in parallel... but if you use too many threads at once, then your system can become overloaded and it'll actually make things slower. One popular solution is to impose a policy like "run up to 40 threads at the same time, but no more". But how do you implement a policy like this? That's what :class:`CapacityLimiter` is for. You can think of a :class:`CapacityLimiter` object as a sack that starts out holding some fixed number of tokens:: limit = trio.CapacityLimiter(40) Then tasks can come along and borrow a token out of the sack:: # Borrow a token: async with limit: # We are holding a token! await perform_expensive_operation() # Exiting the 'async with' block puts the token back into the sack And crucially, if you try to borrow a token but the sack is empty, then you have to wait for another task to finish what it's doing and put its token back first before you can take it and continue. Another way to think of it: a :class:`CapacityLimiter` is like a sofa with a fixed number of seats, and if they're all taken then you have to wait for someone to get up before you can sit down. By default, :func:`trio.to_thread.run_sync` uses a :class:`CapacityLimiter` to limit the number of threads running at once; see `trio.to_thread.current_default_thread_limiter` for details. If you're familiar with semaphores, then you can think of this as a restricted semaphore that's specialized for one common use case, with additional error checking. For a more traditional semaphore, see :class:`Semaphore`. .. note:: Don't confuse this with the `"leaky bucket" `__ or `"token bucket" `__ algorithms used to limit bandwidth usage on networks. 
The basic idea of using tokens to track a resource limit is similar, but this is a very simple sack where tokens aren't automatically created or destroyed over time; they're just borrowed and then put back. """ def __init__(self, total_tokens): self._lot = ParkingLot() self._borrowers = set() # Maps tasks attempting to acquire -> borrower, to handle on-behalf-of self._pending_borrowers = {} # invoke the property setter for validation self.total_tokens = total_tokens assert self._total_tokens == total_tokens def __repr__(self): return "".format( id(self), len(self._borrowers), self._total_tokens, len(self._lot) ) @property def total_tokens(self): """The total capacity available. You can change :attr:`total_tokens` by assigning to this attribute. If you make it larger, then the appropriate number of waiting tasks will be woken immediately to take the new tokens. If you decrease total_tokens below the number of tasks that are currently using the resource, then all current tasks will be allowed to finish as normal, but no new tasks will be allowed in until the total number of tasks drops below the new total_tokens. """ return self._total_tokens @total_tokens.setter def total_tokens(self, new_total_tokens): if not isinstance(new_total_tokens, int) and new_total_tokens != math.inf: raise TypeError("total_tokens must be an int or math.inf") if new_total_tokens < 1: raise ValueError("total_tokens must be >= 1") self._total_tokens = new_total_tokens self._wake_waiters() def _wake_waiters(self): available = self._total_tokens - len(self._borrowers) for woken in self._lot.unpark(count=available): self._borrowers.add(self._pending_borrowers.pop(woken)) @property def borrowed_tokens(self): """The amount of capacity that's currently in use.""" return len(self._borrowers) @property def available_tokens(self): """The amount of capacity that's available to use.""" return self.total_tokens - self.borrowed_tokens @enable_ki_protection def acquire_nowait(self): """Borrow a token from the sack, without blocking. Raises: WouldBlock: if no tokens are available. RuntimeError: if the current task already holds one of this sack's tokens. """ self.acquire_on_behalf_of_nowait(trio.lowlevel.current_task()) @enable_ki_protection def acquire_on_behalf_of_nowait(self, borrower): """Borrow a token from the sack on behalf of ``borrower``, without blocking. Args: borrower: A :class:`trio.lowlevel.Task` or arbitrary opaque object used to record who is borrowing this token. This is used by :func:`trio.to_thread.run_sync` to allow threads to "hold tokens", with the intention in the future of using it to `allow deadlock detection and other useful things `__ Raises: WouldBlock: if no tokens are available. RuntimeError: if ``borrower`` already holds one of this sack's tokens. """ if borrower in self._borrowers: raise RuntimeError( "this borrower is already holding one of this " "CapacityLimiter's tokens" ) if len(self._borrowers) < self._total_tokens and not self._lot: self._borrowers.add(borrower) else: raise trio.WouldBlock @enable_ki_protection async def acquire(self): """Borrow a token from the sack, blocking if necessary. Raises: RuntimeError: if the current task already holds one of this sack's tokens. """ await self.acquire_on_behalf_of(trio.lowlevel.current_task()) @enable_ki_protection async def acquire_on_behalf_of(self, borrower): """Borrow a token from the sack on behalf of ``borrower``, blocking if necessary. 
        Args:
          borrower: A :class:`trio.lowlevel.Task` or arbitrary opaque object
             used to record who is borrowing this token; see
             :meth:`acquire_on_behalf_of_nowait` for details.

        Raises:
          RuntimeError: if ``borrower`` already holds one of this sack's
              tokens.

        """
        await trio.lowlevel.checkpoint_if_cancelled()
        try:
            self.acquire_on_behalf_of_nowait(borrower)
        except trio.WouldBlock:
            task = trio.lowlevel.current_task()
            self._pending_borrowers[task] = borrower
            try:
                await self._lot.park()
            except trio.Cancelled:
                self._pending_borrowers.pop(task)
                raise
        else:
            await trio.lowlevel.cancel_shielded_checkpoint()

    @enable_ki_protection
    def release(self):
        """Put a token back into the sack.

        Raises:
          RuntimeError: if the current task has not acquired one of this
              sack's tokens.

        """
        self.release_on_behalf_of(trio.lowlevel.current_task())

    @enable_ki_protection
    def release_on_behalf_of(self, borrower):
        """Put a token back into the sack on behalf of ``borrower``.

        Raises:
          RuntimeError: if the given borrower has not acquired one of this
              sack's tokens.

        """
        if borrower not in self._borrowers:
            raise RuntimeError(
                "this borrower isn't holding any of this CapacityLimiter's tokens"
            )
        self._borrowers.remove(borrower)
        self._wake_waiters()

    def statistics(self):
        """Return an object containing debugging information.

        Currently the following fields are defined:

        * ``borrowed_tokens``: The number of tokens currently borrowed from
          the sack.
        * ``total_tokens``: The total number of tokens in the sack. Usually
          this will be larger than ``borrowed_tokens``, but it's possible for
          it to be smaller if :attr:`total_tokens` was recently decreased.
        * ``borrowers``: A list of all tasks or other entities that currently
          hold a token.
        * ``tasks_waiting``: The number of tasks blocked on this
          :class:`CapacityLimiter`\'s :meth:`acquire` or
          :meth:`acquire_on_behalf_of` methods.

        """
        return _CapacityLimiterStatistics(
            borrowed_tokens=len(self._borrowers),
            total_tokens=self._total_tokens,
            # Use a list instead of a frozenset just in case we start to allow
            # one borrower to hold multiple tokens in the future
            borrowers=list(self._borrowers),
            tasks_waiting=len(self._lot),
        )


@async_cm
class Semaphore(metaclass=Final):
    """A `semaphore <https://en.wikipedia.org/wiki/Semaphore_(programming)>`__.

    A semaphore holds an integer value, which can be incremented by calling
    :meth:`release` and decremented by calling :meth:`acquire` – but the
    value is never allowed to drop below zero. If the value is zero, then
    :meth:`acquire` will block until someone calls :meth:`release`.

    If you're looking for a :class:`Semaphore` to limit the number of tasks
    that can access some resource simultaneously, then consider using a
    :class:`CapacityLimiter` instead.

    This object's interface is similar to, but different from, that of
    :class:`threading.Semaphore`.

    A :class:`Semaphore` object can be used as an async context manager; it
    blocks on entry but not on exit.

    Args:
      initial_value (int): A non-negative integer giving the semaphore's
        initial value.
      max_value (int or None): If given, makes this a "bounded" semaphore
        that raises an error if the value is about to exceed the given
        ``max_value``.
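    Example (a minimal sketch; ``fetch_page`` is a hypothetical stand-in for
    your own async code)::

        # Allow at most 3 tasks into the critical section at once:
        sem = trio.Semaphore(3)

        async def fetch_politely(url):
            async with sem:  # blocks while the semaphore's value is zero
                return await fetch_page(url)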
""" def __init__(self, initial_value, *, max_value=None): if not isinstance(initial_value, int): raise TypeError("initial_value must be an int") if initial_value < 0: raise ValueError("initial value must be >= 0") if max_value is not None: if not isinstance(max_value, int): raise TypeError("max_value must be None or an int") if max_value < initial_value: raise ValueError("max_values must be >= initial_value") # Invariants: # bool(self._lot) implies self._value == 0 # (or equivalently: self._value > 0 implies not self._lot) self._lot = trio.lowlevel.ParkingLot() self._value = initial_value self._max_value = max_value def __repr__(self): if self._max_value is None: max_value_str = "" else: max_value_str = ", max_value={}".format(self._max_value) return "".format( self._value, max_value_str, id(self) ) @property def value(self): """The current value of the semaphore.""" return self._value @property def max_value(self): """The maximum allowed value. May be None to indicate no limit.""" return self._max_value @enable_ki_protection def acquire_nowait(self): """Attempt to decrement the semaphore value, without blocking. Raises: WouldBlock: if the value is zero. """ if self._value > 0: assert not self._lot self._value -= 1 else: raise trio.WouldBlock @enable_ki_protection async def acquire(self): """Decrement the semaphore value, blocking if necessary to avoid letting it drop below zero. """ await trio.lowlevel.checkpoint_if_cancelled() try: self.acquire_nowait() except trio.WouldBlock: await self._lot.park() else: await trio.lowlevel.cancel_shielded_checkpoint() @enable_ki_protection def release(self): """Increment the semaphore value, possibly waking a task blocked in :meth:`acquire`. Raises: ValueError: if incrementing the value would cause it to exceed :attr:`max_value`. """ if self._lot: assert self._value == 0 self._lot.unpark(count=1) else: if self._max_value is not None and self._value == self._max_value: raise ValueError("semaphore released too many times") self._value += 1 def statistics(self): """Return an object containing debugging information. Currently the following fields are defined: * ``tasks_waiting``: The number of tasks blocked on this semaphore's :meth:`acquire` method. """ return self._lot.statistics() @attr.s(frozen=True) class _LockStatistics: locked = attr.ib() owner = attr.ib() tasks_waiting = attr.ib() @async_cm @attr.s(eq=False, hash=False, repr=False) class _LockImpl: _lot = attr.ib(factory=ParkingLot, init=False) _owner = attr.ib(default=None, init=False) def __repr__(self): if self.locked(): s1 = "locked" s2 = " with {} waiters".format(len(self._lot)) else: s1 = "unlocked" s2 = "" return "<{} {} object at {:#x}{}>".format( s1, self.__class__.__name__, id(self), s2 ) def locked(self): """Check whether the lock is currently held. Returns: bool: True if the lock is held, False otherwise. """ return self._owner is not None @enable_ki_protection def acquire_nowait(self): """Attempt to acquire the lock, without blocking. Raises: WouldBlock: if the lock is held. 
""" task = trio.lowlevel.current_task() if self._owner is task: raise RuntimeError("attempt to re-acquire an already held Lock") elif self._owner is None and not self._lot: # No-one owns it self._owner = task else: raise trio.WouldBlock @enable_ki_protection async def acquire(self): """Acquire the lock, blocking if necessary.""" await trio.lowlevel.checkpoint_if_cancelled() try: self.acquire_nowait() except trio.WouldBlock: # NOTE: it's important that the contended acquire path is just # "_lot.park()", because that's how Condition.wait() acquires the # lock as well. await self._lot.park() else: await trio.lowlevel.cancel_shielded_checkpoint() @enable_ki_protection def release(self): """Release the lock. Raises: RuntimeError: if the calling task does not hold the lock. """ task = trio.lowlevel.current_task() if task is not self._owner: raise RuntimeError("can't release a Lock you don't own") if self._lot: (self._owner,) = self._lot.unpark(count=1) else: self._owner = None def statistics(self): """Return an object containing debugging information. Currently the following fields are defined: * ``locked``: boolean indicating whether the lock is held. * ``owner``: the :class:`trio.lowlevel.Task` currently holding the lock, or None if the lock is not held. * ``tasks_waiting``: The number of tasks blocked on this lock's :meth:`acquire` method. """ return _LockStatistics( locked=self.locked(), owner=self._owner, tasks_waiting=len(self._lot) ) class Lock(_LockImpl, metaclass=Final): """A classic `mutex `__. This is a non-reentrant, single-owner lock. Unlike :class:`threading.Lock`, only the owner of the lock is allowed to release it. A :class:`Lock` object can be used as an async context manager; it blocks on entry but not on exit. """ class StrictFIFOLock(_LockImpl, metaclass=Final): r"""A variant of :class:`Lock` where tasks are guaranteed to acquire the lock in strict first-come-first-served order. An example of when this is useful is if you're implementing something like :class:`trio.SSLStream` or an HTTP/2 server using `h2 `__, where you have multiple concurrent tasks that are interacting with a shared state machine, and at unpredictable moments the state machine requests that a chunk of data be sent over the network. (For example, when using h2 simply reading incoming data can occasionally `create outgoing data to send `__.) The challenge is to make sure that these chunks are sent in the correct order, without being garbled. One option would be to use a regular :class:`Lock`, and wrap it around every interaction with the state machine:: # This approach is sometimes workable but often sub-optimal; see below async with lock: state_machine.do_something() if state_machine.has_data_to_send(): await conn.sendall(state_machine.get_data_to_send()) But this can be problematic. If you're using h2 then *usually* reading incoming data doesn't create the need to send any data, so we don't want to force every task that tries to read from the network to sit and wait a potentially long time for ``sendall`` to finish. And in some situations this could even potentially cause a deadlock, if the remote peer is waiting for you to read some data before it accepts the data you're sending. :class:`StrictFIFOLock` provides an alternative. We can rewrite our example like:: # Note: no awaits between when we start using the state machine and # when we block to take the lock! 
state_machine.do_something() if state_machine.has_data_to_send(): # Notice that we fetch the data to send out of the state machine # *before* sleeping, so that other tasks won't see it. chunk = state_machine.get_data_to_send() async with strict_fifo_lock: await conn.sendall(chunk) First we do all our interaction with the state machine in a single scheduling quantum (notice there are no ``await``\s in there), so it's automatically atomic with respect to other tasks. And then if and only if we have data to send, we get in line to send it – and :class:`StrictFIFOLock` guarantees that each task will send its data in the same order that the state machine generated it. Currently, :class:`StrictFIFOLock` is identical to :class:`Lock`, but (a) this may not always be true in the future, especially if Trio ever implements `more sophisticated scheduling policies `__, and (b) the above code is relying on a pretty subtle property of its lock. Using a :class:`StrictFIFOLock` acts as an executable reminder that you're relying on this property. """ @attr.s(frozen=True) class _ConditionStatistics: tasks_waiting = attr.ib() lock_statistics = attr.ib() @async_cm class Condition(metaclass=Final): """A classic `condition variable `__, similar to :class:`threading.Condition`. A :class:`Condition` object can be used as an async context manager to acquire the underlying lock; it blocks on entry but not on exit. Args: lock (Lock): the lock object to use. If given, must be a :class:`trio.Lock`. If None, a new :class:`Lock` will be allocated and used. """ def __init__(self, lock=None): if lock is None: lock = Lock() if not type(lock) is Lock: raise TypeError("lock must be a trio.Lock") self._lock = lock self._lot = trio.lowlevel.ParkingLot() def locked(self): """Check whether the underlying lock is currently held. Returns: bool: True if the lock is held, False otherwise. """ return self._lock.locked() def acquire_nowait(self): """Attempt to acquire the underlying lock, without blocking. Raises: WouldBlock: if the lock is currently held. """ return self._lock.acquire_nowait() async def acquire(self): """Acquire the underlying lock, blocking if necessary.""" await self._lock.acquire() def release(self): """Release the underlying lock.""" self._lock.release() @enable_ki_protection async def wait(self): """Wait for another task to call :meth:`notify` or :meth:`notify_all`. When calling this method, you must hold the lock. It releases the lock while waiting, and then re-acquires it before waking up. There is a subtlety with how this method interacts with cancellation: when cancelled it will block to re-acquire the lock before raising :exc:`Cancelled`. This may cause cancellation to be less prompt than expected. The advantage is that it makes code like this work:: async with condition: await condition.wait() If we didn't re-acquire the lock before waking up, and :meth:`wait` were cancelled here, then we'd crash in ``condition.__aexit__`` when we tried to release the lock we no longer held. Raises: RuntimeError: if the calling task does not hold the lock. """ if trio.lowlevel.current_task() is not self._lock._owner: raise RuntimeError("must hold the lock to wait") self.release() # NOTE: we go to sleep on self._lot, but we'll wake up on # self._lock._lot. That's all that's required to acquire a Lock. try: await self._lot.park() except: with trio.CancelScope(shield=True): await self.acquire() raise def notify(self, n=1): """Wake one or more tasks that are blocked in :meth:`wait`. Args: n (int): The number of tasks to wake. 
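        A minimal producer/consumer sketch (``data_ready`` is a hypothetical
        flag of your own)::

            async def consumer(cond):
                async with cond:
                    while not data_ready:
                        await cond.wait()  # releases the lock while sleeping

            async def producer(cond):
                async with cond:
                    # ... set data_ready ...
                    cond.notify()  # must hold the lock to notify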
Raises: RuntimeError: if the calling task does not hold the lock. """ if trio.lowlevel.current_task() is not self._lock._owner: raise RuntimeError("must hold the lock to notify") self._lot.repark(self._lock._lot, count=n) def notify_all(self): """Wake all tasks that are currently blocked in :meth:`wait`. Raises: RuntimeError: if the calling task does not hold the lock. """ if trio.lowlevel.current_task() is not self._lock._owner: raise RuntimeError("must hold the lock to notify") self._lot.repark_all(self._lock._lot) def statistics(self): r"""Return an object containing debugging information. Currently the following fields are defined: * ``tasks_waiting``: The number of tasks blocked on this condition's :meth:`wait` method. * ``lock_statistics``: The result of calling the underlying :class:`Lock`\s :meth:`~Lock.statistics` method. """ return _ConditionStatistics( tasks_waiting=len(self._lot), lock_statistics=self._lock.statistics() ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_threads.py0000644000076500000000000003170400000000000015245 0ustar00quentinwheel# coding: utf-8 import threading import queue as stdlib_queue from itertools import count import attr import inspect import outcome import trio from ._sync import CapacityLimiter from ._core import ( enable_ki_protection, disable_ki_protection, RunVar, TrioToken, start_thread_soon, ) from ._util import coroutine_or_error # Global due to Threading API, thread local storage for trio token TOKEN_LOCAL = threading.local() _limiter_local = RunVar("limiter") # I pulled this number out of the air; it isn't based on anything. Probably we # should make some kind of measurements to pick a good value. DEFAULT_LIMIT = 40 _thread_counter = count() def current_default_thread_limiter(): """Get the default `~trio.CapacityLimiter` used by `trio.to_thread.run_sync`. The most common reason to call this would be if you want to modify its :attr:`~trio.CapacityLimiter.total_tokens` attribute. """ try: limiter = _limiter_local.get() except LookupError: limiter = CapacityLimiter(DEFAULT_LIMIT) _limiter_local.set(limiter) return limiter # Eventually we might build this into a full-fledged deadlock-detection # system; see https://github.com/python-trio/trio/issues/182 # But for now we just need an object to stand in for the thread, so we can # keep track of who's holding the CapacityLimiter's token. @attr.s(frozen=True, eq=False, hash=False) class ThreadPlaceholder: name = attr.ib() @enable_ki_protection async def to_thread_run_sync(sync_fn, *args, cancellable=False, limiter=None): """Convert a blocking operation into an async operation using a thread. These two lines are equivalent:: sync_fn(*args) await trio.to_thread.run_sync(sync_fn, *args) except that if ``sync_fn`` takes a long time, then the first line will block the Trio loop while it runs, while the second line allows other Trio tasks to continue working while ``sync_fn`` runs. This is accomplished by pushing the call to ``sync_fn(*args)`` off into a worker thread. From inside the worker thread, you can get back into Trio using the functions in `trio.from_thread`. Args: sync_fn: An arbitrary synchronous callable. *args: Positional arguments to pass to sync_fn. If you need keyword arguments, use :func:`functools.partial`. cancellable (bool): Whether to allow cancellation of this operation. See discussion below. limiter (None, or CapacityLimiter-like object): An object used to limit the number of simultaneous threads. 
Most commonly this will be a `~trio.CapacityLimiter`, but it could be anything providing compatible :meth:`~trio.CapacityLimiter.acquire_on_behalf_of` and :meth:`~trio.CapacityLimiter.release_on_behalf_of` methods. This function will call ``acquire_on_behalf_of`` before starting the thread, and ``release_on_behalf_of`` after the thread has finished. If None (the default), uses the default `~trio.CapacityLimiter`, as returned by :func:`current_default_thread_limiter`. **Cancellation handling**: Cancellation is a tricky issue here, because neither Python nor the operating systems it runs on provide any general mechanism for cancelling an arbitrary synchronous function running in a thread. This function will always check for cancellation on entry, before starting the thread. But once the thread is running, there are two ways it can handle being cancelled: * If ``cancellable=False``, the function ignores the cancellation and keeps going, just like if we had called ``sync_fn`` synchronously. This is the default behavior. * If ``cancellable=True``, then this function immediately raises `~trio.Cancelled`. In this case **the thread keeps running in background** – we just abandon it to do whatever it's going to do, and silently discard any return value or errors that it raises. Only use this if you know that the operation is safe and side-effect free. (For example: :func:`trio.socket.getaddrinfo` uses a thread with ``cancellable=True``, because it doesn't really affect anything if a stray hostname lookup keeps running in the background.) The ``limiter`` is only released after the thread has *actually* finished – which in the case of cancellation may be some time after this function has returned. If :func:`trio.run` finishes before the thread does, then the limiter release method will never be called at all. .. warning:: You should not use this function to call long-running CPU-bound functions! In addition to the usual GIL-related reasons why using threads for CPU-bound work is not very effective in Python, there is an additional problem: on CPython, `CPU-bound threads tend to "starve out" IO-bound threads `__, so using threads for CPU-bound work is likely to adversely affect the main thread running Trio. If you need to do this, you're better off using a worker process, or perhaps PyPy (which still has a GIL, but may do a better job of fairly allocating CPU time between threads). Returns: Whatever ``sync_fn(*args)`` returns. Raises: Exception: Whatever ``sync_fn(*args)`` raises. """ await trio.lowlevel.checkpoint_if_cancelled() if limiter is None: limiter = current_default_thread_limiter() # Holds a reference to the task that's blocked in this function waiting # for the result – or None if this function was cancelled and we should # discard the result. task_register = [trio.lowlevel.current_task()] name = f"trio.to_thread.run_sync-{next(_thread_counter)}" placeholder = ThreadPlaceholder(name) # This function gets scheduled into the Trio run loop to deliver the # thread's result. def report_back_in_trio_thread_fn(result): def do_release_then_return_result(): # release_on_behalf_of is an arbitrary user-defined method, so it # might raise an error. If it does, we want that error to # replace the regular return value, and if the regular return was # already an exception then we want them to chain. 
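            # (The try/finally just below achieves this: if
            # release_on_behalf_of() raises while result.unwrap() is also
            # raising, Python implicitly attaches the original exception as
            # the new one's __context__, so neither error is silently lost.)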
            try:
                return result.unwrap()
            finally:
                limiter.release_on_behalf_of(placeholder)

        result = outcome.capture(do_release_then_return_result)
        if task_register[0] is not None:
            trio.lowlevel.reschedule(task_register[0], result)

    current_trio_token = trio.lowlevel.current_trio_token()

    def worker_fn():
        TOKEN_LOCAL.token = current_trio_token
        try:
            ret = sync_fn(*args)

            if inspect.iscoroutine(ret):
                # Manually close coroutine to avoid RuntimeWarnings
                ret.close()
                raise TypeError(
                    "Trio expected a sync function, but {!r} appears to be "
                    "asynchronous".format(getattr(sync_fn, "__qualname__", sync_fn))
                )

            return ret
        finally:
            del TOKEN_LOCAL.token

    def deliver_worker_fn_result(result):
        try:
            current_trio_token.run_sync_soon(report_back_in_trio_thread_fn, result)
        except trio.RunFinishedError:
            # The entire run finished, so the task we're trying to contact is
            # certainly long gone -- it must have been cancelled and abandoned
            # us.
            pass

    await limiter.acquire_on_behalf_of(placeholder)
    try:
        start_thread_soon(worker_fn, deliver_worker_fn_result)
    except:
        limiter.release_on_behalf_of(placeholder)
        raise

    def abort(_):
        if cancellable:
            task_register[0] = None
            return trio.lowlevel.Abort.SUCCEEDED
        else:
            return trio.lowlevel.Abort.FAILED

    return await trio.lowlevel.wait_task_rescheduled(abort)


def _run_fn_as_system_task(cb, fn, *args, trio_token=None):
    """Helper function for from_thread.run and from_thread.run_sync.

    Since this internally uses TrioToken.run_sync_soon, all warnings about
    raised exceptions canceling all tasks should be noted.
    """

    if trio_token and not isinstance(trio_token, TrioToken):
        raise RuntimeError("Passed kwarg trio_token is not of type TrioToken")

    if not trio_token:
        try:
            trio_token = TOKEN_LOCAL.token
        except AttributeError:
            raise RuntimeError(
                "this thread wasn't created by Trio, pass kwarg trio_token=..."
            )

    # Avoid deadlock by making sure we're not called from Trio thread
    try:
        trio.lowlevel.current_task()
    except RuntimeError:
        pass
    else:
        raise RuntimeError("this is a blocking function; call it from a thread")

    q = stdlib_queue.Queue()
    trio_token.run_sync_soon(cb, q, fn, args)
    return q.get().unwrap()


def from_thread_run(afn, *args, trio_token=None):
    """Run the given async function in the parent Trio thread, blocking until
    it is complete.

    Returns:
      Whatever ``afn(*args)`` returns.

    Returns or raises whatever the given function returns or raises. It can
    also raise exceptions of its own:

    Raises:
        RunFinishedError: if the corresponding call to :func:`trio.run` has
            already completed, or if the run has started its final cleanup
            phase and can no longer spawn new system tasks.
        Cancelled: if the corresponding call to :func:`trio.run` completes
            while ``afn(*args)`` is running, then ``afn`` is likely to raise
            :exc:`trio.Cancelled`, and this will propagate out into this
            function call.
        RuntimeError: if you try calling this from inside the Trio thread,
            which would otherwise cause a deadlock.
        AttributeError: if no ``trio_token`` was provided, and we can't infer
            one from context.
        TypeError: if ``afn`` is not an asynchronous function.

    **Locating a Trio Token**: There are two ways to specify which
    `trio.run` loop to reenter:

        - Spawn this thread from `trio.to_thread.run_sync`. Trio will
          automatically capture the relevant Trio token and use it when you
          want to re-enter Trio.
        - Pass a keyword argument, ``trio_token`` specifying a specific
          `trio.run` loop to re-enter. This is useful in case you have a
          "foreign" thread, spawned using some other framework, and still
          want to enter Trio.
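    Example: from a worker thread spawned with `trio.to_thread.run_sync`, hop
    back into the Trio loop to sleep without blocking other tasks (a minimal
    sketch)::

        def thread_fn():
            # Blocks this thread for ~1 second, while the Trio loop and its
            # other tasks keep running:
            trio.from_thread.run(trio.sleep, 1)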
""" def callback(q, afn, args): @disable_ki_protection async def unprotected_afn(): coro = coroutine_or_error(afn, *args) return await coro async def await_in_trio_thread_task(): q.put_nowait(await outcome.acapture(unprotected_afn)) try: trio.lowlevel.spawn_system_task(await_in_trio_thread_task, name=afn) except RuntimeError: # system nursery is closed q.put_nowait( outcome.Error(trio.RunFinishedError("system nursery is closed")) ) return _run_fn_as_system_task(callback, afn, *args, trio_token=trio_token) def from_thread_run_sync(fn, *args, trio_token=None): """Run the given sync function in the parent Trio thread, blocking until it is complete. Returns: Whatever ``fn(*args)`` returns. Returns or raises whatever the given function returns or raises. It can also raise exceptions of its own: Raises: RunFinishedError: if the corresponding call to `trio.run` has already completed. RuntimeError: if you try calling this from inside the Trio thread, which would otherwise cause a deadlock. AttributeError: if no ``trio_token`` was provided, and we can't infer one from context. TypeError: if ``fn`` is an async function. **Locating a Trio Token**: There are two ways to specify which `trio.run` loop to reenter: - Spawn this thread from `trio.to_thread.run_sync`. Trio will automatically capture the relevant Trio token and use it when you want to re-enter Trio. - Pass a keyword argument, ``trio_token`` specifying a specific `trio.run` loop to re-enter. This is useful in case you have a "foreign" thread, spawned using some other framework, and still want to enter Trio. """ def callback(q, fn, args): @disable_ki_protection def unprotected_fn(): ret = fn(*args) if inspect.iscoroutine(ret): # Manually close coroutine to avoid RuntimeWarnings ret.close() raise TypeError( "Trio expected a sync function, but {!r} appears to be " "asynchronous".format(getattr(fn, "__qualname__", fn)) ) return ret res = outcome.capture(unprotected_fn) q.put_nowait(res) return _run_fn_as_system_task(callback, fn, *args, trio_token=trio_token) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_timeouts.py0000644000076500000000000000751100000000000015463 0ustar00quentinwheelfrom contextlib import contextmanager import trio def move_on_at(deadline): """Use as a context manager to create a cancel scope with the given absolute deadline. Args: deadline (float): The deadline. """ return trio.CancelScope(deadline=deadline) def move_on_after(seconds): """Use as a context manager to create a cancel scope whose deadline is set to now + *seconds*. Args: seconds (float): The timeout. Raises: ValueError: if timeout is less than zero. """ if seconds < 0: raise ValueError("timeout must be non-negative") return move_on_at(trio.current_time() + seconds) async def sleep_forever(): """Pause execution of the current task forever (or until cancelled). Equivalent to calling ``await sleep(math.inf)``. """ await trio.lowlevel.wait_task_rescheduled(lambda _: trio.lowlevel.Abort.SUCCEEDED) async def sleep_until(deadline): """Pause execution of the current task until the given time. The difference between :func:`sleep` and :func:`sleep_until` is that the former takes a relative time and the latter takes an absolute time according to Trio's internal clock (as returned by :func:`current_time`). Args: deadline (float): The time at which we should wake up again. May be in the past, in which case this function executes a checkpoint but does not block. 
""" with move_on_at(deadline): await sleep_forever() async def sleep(seconds): """Pause execution of the current task for the given number of seconds. Args: seconds (float): The number of seconds to sleep. May be zero to insert a checkpoint without actually blocking. Raises: ValueError: if *seconds* is negative. """ if seconds < 0: raise ValueError("duration must be non-negative") if seconds == 0: await trio.lowlevel.checkpoint() else: await sleep_until(trio.current_time() + seconds) class TooSlowError(Exception): """Raised by :func:`fail_after` and :func:`fail_at` if the timeout expires. """ @contextmanager def fail_at(deadline): """Creates a cancel scope with the given deadline, and raises an error if it is actually cancelled. This function and :func:`move_on_at` are similar in that both create a cancel scope with a given absolute deadline, and if the deadline expires then both will cause :exc:`Cancelled` to be raised within the scope. The difference is that when the :exc:`Cancelled` exception reaches :func:`move_on_at`, it's caught and discarded. When it reaches :func:`fail_at`, then it's caught and :exc:`TooSlowError` is raised in its place. Raises: TooSlowError: if a :exc:`Cancelled` exception is raised in this scope and caught by the context manager. """ with move_on_at(deadline) as scope: yield scope if scope.cancelled_caught: raise TooSlowError def fail_after(seconds): """Creates a cancel scope with the given timeout, and raises an error if it is actually cancelled. This function and :func:`move_on_after` are similar in that both create a cancel scope with a given timeout, and if the timeout expires then both will cause :exc:`Cancelled` to be raised within the scope. The difference is that when the :exc:`Cancelled` exception reaches :func:`move_on_after`, it's caught and discarded. When it reaches :func:`fail_after`, then it's caught and :exc:`TooSlowError` is raised in its place. Raises: TooSlowError: if a :exc:`Cancelled` exception is raised in this scope and caught by the context manager. ValueError: if *seconds* is less than zero. """ if seconds < 0: raise ValueError("timeout must be non-negative") return fail_at(trio.current_time() + seconds) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1623751952.8212595 trio-0.19.0/trio/_tools/0000755000076500000000000000000000000000000014374 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_tools/__init__.py0000644000076500000000000000000000000000000016473 0ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_tools/gen_exports.py0000755000076500000000000001341600000000000017313 0ustar00quentinwheel#! /usr/bin/env python3 # -*- coding: utf-8 -`- """ Code generation script for class methods to be exported as public API """ import argparse import ast import astor import os from pathlib import Path import sys from textwrap import indent PREFIX = "_generated" HEADER = """# *********************************************************** # ******* WARNING: AUTOGENERATED! 
ALL EDITS WILL BE LOST ****** # ************************************************************* from ._run import GLOBAL_RUN_CONTEXT, _NO_SEND from ._ki import LOCALS_KEY_KI_PROTECTION_ENABLED from ._instrumentation import Instrument # fmt: off """ FOOTER = """# fmt: on """ TEMPLATE = """locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True try: return{}GLOBAL_RUN_CONTEXT.{}.{} except AttributeError: raise RuntimeError("must be called from async context") """ def is_function(node): """Check if the AST node is either a function or an async function """ if isinstance(node, ast.FunctionDef) or isinstance(node, ast.AsyncFunctionDef): return True return False def is_public(node): """Check if the AST node has a _public decorator""" if not is_function(node): return False for decorator in node.decorator_list: if isinstance(decorator, ast.Name) and decorator.id == "_public": return True return False def get_public_methods(tree): """Return a list of methods marked as public. The function walks the given tree and extracts all objects that are functions which are marked public. """ for node in ast.walk(tree): if is_public(node): yield node def create_passthrough_args(funcdef): """Given a function definition, create a string that represents taking all the arguments from the function, and passing them through to another invocation of the same function. Example input: ast.parse("def f(a, *, b): ...") Example output: "(a, b=b)" """ call_args = [] for arg in funcdef.args.args: call_args.append(arg.arg) if funcdef.args.vararg: call_args.append("*" + funcdef.args.vararg.arg) for arg in funcdef.args.kwonlyargs: call_args.append(arg.arg + "=" + arg.arg) if funcdef.args.kwarg: call_args.append("**" + funcdef.args.kwarg.arg) return "({})".format(", ".join(call_args)) def gen_public_wrappers_source(source_path: Path, lookup_path: str) -> str: """Scan the given .py file for @_public decorators, and generate wrapper functions. 
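    For example, a runner method ``def current_time(self)`` marked
    ``@_public`` produces (roughly) a wrapper of this shape, following
    ``TEMPLATE`` above::

        def current_time():
            locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True
            try:
                return GLOBAL_RUN_CONTEXT.runner.current_time()
            except AttributeError:
                raise RuntimeError("must be called from async context")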
""" generated = [HEADER] source = astor.code_to_ast.parse_file(source_path) for method in get_public_methods(source): # Remove self from arguments assert method.args.args[0].arg == "self" del method.args.args[0] # Remove decorators method.decorator_list = [] # Create pass through arguments new_args = create_passthrough_args(method) # Remove method body without the docstring if ast.get_docstring(method) is None: del method.body[:] else: # The first entry is always the docstring del method.body[1:] # Create the function definition including the body func = astor.to_source(method, indent_with=" " * 4) # Create export function body template = TEMPLATE.format( " await " if isinstance(method, ast.AsyncFunctionDef) else " ", lookup_path, method.name + new_args, ) # Assemble function definition arguments and body snippet = func + indent(template, " " * 4) # Append the snippet to the corresponding module generated.append(snippet) generated.append(FOOTER) return "\n\n".join(generated) def matches_disk_files(new_files): for new_path, new_source in new_files.items(): if not os.path.exists(new_path): return False with open(new_path, "r", encoding="utf-8") as old_file: old_source = old_file.read() if old_source != new_source: return False return True def process(sources_and_lookups, *, do_test): new_files = {} for source_path, lookup_path in sources_and_lookups: print("Scanning:", source_path) new_source = gen_public_wrappers_source(source_path, lookup_path) dirname, basename = os.path.split(source_path) new_path = os.path.join(dirname, PREFIX + basename) new_files[new_path] = new_source if do_test: if not matches_disk_files(new_files): print("Generated sources are outdated. Please regenerate.") sys.exit(1) else: print("Generated sources are up to date.") else: for new_path, new_source in new_files.items(): with open(new_path, "w", encoding="utf-8") as f: f.write(new_source) print("Regenerated sources successfully.") # This is in fact run in CI, but only in the formatting check job, which # doesn't collect coverage. def main(): # pragma: no cover parser = argparse.ArgumentParser( description="Generate python code for public api wrappers" ) parser.add_argument( "--test", "-t", action="store_true", help="test if code is still up to date" ) parsed_args = parser.parse_args() source_root = Path.cwd() # Double-check we found the right directory assert (source_root / "LICENSE").exists() core = source_root / "trio/_core" to_wrap = [ (core / "_run.py", "runner"), (core / "_instrumentation.py", "runner.instruments"), (core / "_io_windows.py", "runner.io_manager"), (core / "_io_epoll.py", "runner.io_manager"), (core / "_io_kqueue.py", "runner.io_manager"), ] process(to_wrap, do_test=parsed_args.test) if __name__ == "__main__": # pragma: no cover main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_unix_pipes.py0000644000076500000000000001725500000000000016003 0ustar00quentinwheelimport os import errno from ._abc import Stream from ._util import ConflictDetector, Final import trio if os.name != "posix": # We raise an error here rather than gating the import in lowlevel.py # in order to keep jedi static analysis happy. raise ImportError # XX TODO: is this a good number? who knows... it does match the default Linux # pipe capacity though. DEFAULT_RECEIVE_SIZE = 65536 class _FdHolder: # This class holds onto a raw file descriptor, in non-blocking mode, and # is responsible for managing its lifecycle. 
In particular, it's # responsible for making sure it gets closed, and also for tracking # whether it's been closed. # # The way we track closure is to set the .fd field to -1, discarding the # original value. You might think that this is a strange idea, since it # overloads the same field to do two different things. Wouldn't it be more # natural to have a dedicated .closed field? But that would be more # error-prone. Fds are represented by small integers, and once an fd is # closed, its integer value may be reused immediately. If we accidentally # used the old fd after being closed, we might end up doing something to # another unrelated fd that happened to get assigned the same integer # value. By throwing away the integer value immediately, it becomes # impossible to make this mistake – we'll just get an EBADF. # # (This trick was copied from the stdlib socket module.) def __init__(self, fd: int): # make sure self.fd is always initialized to *something*, because even # if we error out here then __del__ will run and access it. self.fd = -1 if not isinstance(fd, int): raise TypeError("file descriptor must be an int") self.fd = fd # Store original state, and ensure non-blocking mode is enabled self._original_is_blocking = os.get_blocking(fd) os.set_blocking(fd, False) @property def closed(self): return self.fd == -1 def _raw_close(self): # This doesn't assume it's in a Trio context, so it can be called from # __del__. You should never call it from Trio context, because it # skips calling notify_fd_close. But from __del__, skipping that is # OK, because notify_fd_close just wakes up other tasks that are # waiting on this fd, and those tasks hold a reference to this object. # So if __del__ is being called, we know there aren't any tasks that # need to be woken. if self.closed: return fd = self.fd self.fd = -1 os.set_blocking(fd, self._original_is_blocking) os.close(fd) def __del__(self): self._raw_close() async def aclose(self): if not self.closed: trio.lowlevel.notify_closing(self.fd) self._raw_close() await trio.lowlevel.checkpoint() class FdStream(Stream, metaclass=Final): """ Represents a stream given the file descriptor to a pipe, TTY, etc. *fd* must refer to a file that is open for reading and/or writing and supports non-blocking I/O (pipes and TTYs will work, on-disk files probably not). The returned stream takes ownership of the fd, so closing the stream will close the fd too. As with `os.fdopen`, you should not directly use an fd after you have wrapped it in a stream using this function. To be used as a Trio stream, an open file must be placed in non-blocking mode. Unfortunately, this impacts all I/O that goes through the underlying open file, including I/O that uses a different file descriptor than the one that was passed to Trio. If other threads or processes are using file descriptors that are related through `os.dup` or inheritance across `os.fork` to the one that Trio is using, they are unlikely to be prepared to have non-blocking I/O semantics suddenly thrust upon them. For example, you can use ``FdStream(os.dup(sys.stdin.fileno()))`` to obtain a stream for reading from standard input, but it is only safe to do so with heavy caveats: your stdin must not be shared by any other processes and you must not make any calls to synchronous methods of `sys.stdin` until the stream returned by `FdStream` is closed. See `issue #174 `__ for a discussion of the challenges involved in relaxing this restriction. Args: fd (int): The fd to be wrapped. Returns: A new `FdStream` object. 
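    Example: wrap the read end of a freshly created pipe (a minimal
    sketch)::

        import os

        r, w = os.pipe()
        stream = trio.lowlevel.FdStream(r)
        os.write(w, b"hi")
        assert await stream.receive_some() == b"hi"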
""" def __init__(self, fd: int): self._fd_holder = _FdHolder(fd) self._send_conflict_detector = ConflictDetector( "another task is using this stream for send" ) self._receive_conflict_detector = ConflictDetector( "another task is using this stream for receive" ) async def send_all(self, data: bytes): with self._send_conflict_detector: # have to check up front, because send_all(b"") on a closed pipe # should raise if self._fd_holder.closed: raise trio.ClosedResourceError("file was already closed") await trio.lowlevel.checkpoint() length = len(data) # adapted from the SocketStream code with memoryview(data) as view: sent = 0 while sent < length: with view[sent:] as remaining: try: sent += os.write(self._fd_holder.fd, remaining) except BlockingIOError: await trio.lowlevel.wait_writable(self._fd_holder.fd) except OSError as e: if e.errno == errno.EBADF: raise trio.ClosedResourceError( "file was already closed" ) from None else: raise trio.BrokenResourceError from e async def wait_send_all_might_not_block(self) -> None: with self._send_conflict_detector: if self._fd_holder.closed: raise trio.ClosedResourceError("file was already closed") try: await trio.lowlevel.wait_writable(self._fd_holder.fd) except BrokenPipeError as e: # kqueue: raises EPIPE on wait_writable instead # of sending, which is annoying raise trio.BrokenResourceError from e async def receive_some(self, max_bytes=None) -> bytes: with self._receive_conflict_detector: if max_bytes is None: max_bytes = DEFAULT_RECEIVE_SIZE else: if not isinstance(max_bytes, int): raise TypeError("max_bytes must be integer >= 1") if max_bytes < 1: raise ValueError("max_bytes must be integer >= 1") await trio.lowlevel.checkpoint() while True: try: data = os.read(self._fd_holder.fd, max_bytes) except BlockingIOError: await trio.lowlevel.wait_readable(self._fd_holder.fd) except OSError as e: if e.errno == errno.EBADF: raise trio.ClosedResourceError( "file was already closed" ) from None else: raise trio.BrokenResourceError from e else: break return data async def aclose(self): await self._fd_holder.aclose() def fileno(self): return self._fd_holder.fd ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_util.py0000644000076500000000000003054100000000000014566 0ustar00quentinwheel# coding: utf-8 # Little utilities we use internally from abc import ABCMeta import os import signal import sys import pathlib from functools import wraps, update_wrapper import typing as t import threading import collections from async_generator import isasyncgen import trio # Equivalent to the C function raise(), which Python doesn't wrap if os.name == "nt": # On windows, os.kill exists but is really weird. # # If you give it CTRL_C_EVENT or CTRL_BREAK_EVENT, it tries to deliver # those using GenerateConsoleCtrlEvent. But I found that when I tried # to run my test normally, it would freeze waiting... unless I added # print statements, in which case the test suddenly worked. So I guess # these signals are only delivered if/when you access the console? I # don't really know what was going on there. From reading the # GenerateConsoleCtrlEvent docs I don't know how it worked at all. # # I later spent a bunch of time trying to make GenerateConsoleCtrlEvent # work for creating synthetic control-C events, and... failed # utterly. 
There are lots of details in the code and comments # removed/added at this commit: # https://github.com/python-trio/trio/commit/95843654173e3e826c34d70a90b369ba6edf2c23 # # OTOH, if you pass os.kill any *other* signal number... then CPython # just calls TerminateProcess (wtf). # # So, anyway, os.kill is not so useful for testing purposes. Instead # we use raise(): # # https://msdn.microsoft.com/en-us/library/dwwzkt4c.aspx # # Have to import cffi inside the 'if os.name' block because we don't # depend on cffi on non-Windows platforms. (It would be easy to switch # this to ctypes though if we ever remove the cffi dependency.) # # Some more information: # https://bugs.python.org/issue26350 # # Anyway, we use this for two things: # - redelivering unhandled signals # - generating synthetic signals for tests # and for both of those purposes, 'raise' works fine. import cffi _ffi = cffi.FFI() _ffi.cdef("int raise(int);") _lib = _ffi.dlopen("api-ms-win-crt-runtime-l1-1-0.dll") signal_raise = getattr(_lib, "raise") else: def signal_raise(signum): signal.pthread_kill(threading.get_ident(), signum) # See: #461 as to why this is needed. # The gist is that threading.main_thread() has the capability to lie to us # if somebody else edits the threading ident cache to replace the main # thread; causing threading.current_thread() to return a _DummyThread, # causing the C-c check to fail, and so on. # Trying to use signal out of the main thread will fail, so we can then # reliably check if this is the main thread without relying on a # potentially modified threading. def is_main_thread(): """Attempt to reliably check if we are in the main thread.""" try: signal.signal(signal.SIGINT, signal.getsignal(signal.SIGINT)) return True except ValueError: return False ###### # Call the function and get the coroutine object, while giving helpful # errors for common mistakes. Returns coroutine object. ###### def coroutine_or_error(async_fn, *args): def _return_value_looks_like_wrong_library(value): # Returned by legacy @asyncio.coroutine functions, which includes # a surprising proportion of asyncio builtins. if isinstance(value, collections.abc.Generator): return True # The protocol for detecting an asyncio Future-like object if getattr(value, "_asyncio_future_blocking", None) is not None: return True # This janky check catches tornado Futures and twisted Deferreds. # By the time we're calling this function, we already know # something has gone wrong, so a heuristic is pretty safe. if value.__class__.__name__ in ("Future", "Deferred"): return True return False try: coro = async_fn(*args) except TypeError: # Give good error for: nursery.start_soon(trio.sleep(1)) if isinstance(async_fn, collections.abc.Coroutine): # explicitly close coroutine to avoid RuntimeWarning async_fn.close() raise TypeError( "Trio was expecting an async function, but instead it got " "a coroutine object {async_fn!r}\n" "\n" "Probably you did something like:\n" "\n" " trio.run({async_fn.__name__}(...)) # incorrect!\n" " nursery.start_soon({async_fn.__name__}(...)) # incorrect!\n" "\n" "Instead, you want (notice the parentheses!):\n" "\n" " trio.run({async_fn.__name__}, ...) # correct!\n" " nursery.start_soon({async_fn.__name__}, ...) 
# correct!".format( async_fn=async_fn ) ) from None # Give good error for: nursery.start_soon(future) if _return_value_looks_like_wrong_library(async_fn): raise TypeError( "Trio was expecting an async function, but instead it got " "{!r} – are you trying to use a library written for " "asyncio/twisted/tornado or similar? That won't work " "without some sort of compatibility shim.".format(async_fn) ) from None raise # We can't check iscoroutinefunction(async_fn), because that will fail # for things like functools.partial objects wrapping an async # function. So we have to just call it and then check whether the # return value is a coroutine object. if not isinstance(coro, collections.abc.Coroutine): # Give good error for: nursery.start_soon(func_returning_future) if _return_value_looks_like_wrong_library(coro): raise TypeError( "Trio got unexpected {!r} – are you trying to use a " "library written for asyncio/twisted/tornado or similar? " "That won't work without some sort of compatibility shim.".format(coro) ) if isasyncgen(coro): raise TypeError( "start_soon expected an async function but got an async " "generator {!r}".format(coro) ) # Give good error for: nursery.start_soon(some_sync_fn) raise TypeError( "Trio expected an async function, but {!r} appears to be " "synchronous".format(getattr(async_fn, "__qualname__", async_fn)) ) return coro class ConflictDetector: """Detect when two tasks are about to perform operations that would conflict. Use as a synchronous context manager; if two tasks enter it at the same time then the second one raises an error. You can use it when there are two pieces of code that *would* collide and need a lock if they ever were called at the same time, but that should never happen. We use this in particular for things like, making sure that two different tasks don't call sendall simultaneously on the same stream. """ def __init__(self, msg): self._msg = msg self._held = False def __enter__(self): if self._held: raise trio.BusyResourceError(self._msg) else: self._held = True def __exit__(self, *args): self._held = False def async_wraps(cls, wrapped_cls, attr_name): """Similar to wraps, but for async wrappers of non-async functions.""" def decorator(func): func.__name__ = attr_name func.__qualname__ = ".".join((cls.__qualname__, attr_name)) func.__doc__ = """Like :meth:`~{}.{}.{}`, but async. """.format( wrapped_cls.__module__, wrapped_cls.__qualname__, attr_name ) return func return decorator def fixup_module_metadata(module_name, namespace): seen_ids = set() def fix_one(qualname, name, obj): # avoid infinite recursion (relevant when using # typing.Generic, for example) if id(obj) in seen_ids: return seen_ids.add(id(obj)) mod = getattr(obj, "__module__", None) if mod is not None and mod.startswith("trio."): obj.__module__ = module_name # Modules, unlike everything else in Python, put fully-qualitied # names into their __name__ attribute. We check for "." to avoid # rewriting these. if hasattr(obj, "__name__") and "." not in obj.__name__: obj.__name__ = name obj.__qualname__ = qualname if isinstance(obj, type): for attr_name, attr_value in obj.__dict__.items(): fix_one(objname + "." + attr_name, attr_name, attr_value) for objname, obj in namespace.items(): if not objname.startswith("_"): # ignore private attributes fix_one(objname, objname, obj) class generic_function: """Decorator that makes a function indexable, to communicate non-inferrable generic type parameters to a static type checker. 
If you write:: @generic_function def open_memory_channel(max_buffer_size: int) -> Tuple[ SendChannel[T], ReceiveChannel[T] ]: ... it is valid at runtime to say ``open_memory_channel[bytes](5)``. This behaves identically to ``open_memory_channel(5)`` at runtime, and currently won't type-check without a mypy plugin or clever stubs, but at least it becomes possible to write those. """ def __init__(self, fn): update_wrapper(self, fn) self._fn = fn def __call__(self, *args, **kwargs): return self._fn(*args, **kwargs) def __getitem__(self, _): return self # If a new class inherits from any ABC, then the new class's metaclass has to # inherit from ABCMeta. If a new class inherits from typing.Generic, and # you're using Python 3.6, then the new class's metaclass has to # inherit from typing.GenericMeta. Some of the classes that want to use Final # or NoPublicConstructor inherit from ABCs and generics, so Final has to # inherit from these metaclasses. Fortunately, GenericMeta inherits from # ABCMeta, so inheriting from GenericMeta alone is sufficient (when it # exists at all). if not t.TYPE_CHECKING and hasattr(t, "GenericMeta"): BaseMeta = t.GenericMeta else: BaseMeta = ABCMeta class Final(BaseMeta): """Metaclass that enforces a class to be final (i.e., subclass not allowed). If a class uses this metaclass like this:: class SomeClass(metaclass=Final): pass The metaclass will ensure that no sub class can be created. Raises ------ - TypeError if a sub class is created """ def __new__(cls, name, bases, cls_namespace): for base in bases: if isinstance(base, Final): raise TypeError( f"{base.__module__}.{base.__qualname__} does not support subclassing" ) return super().__new__(cls, name, bases, cls_namespace) T = t.TypeVar("T") class NoPublicConstructor(Final): """Metaclass that enforces a class to be final (i.e., subclass not allowed) and ensures a private constructor. If a class uses this metaclass like this:: class SomeClass(metaclass=NoPublicConstructor): pass The metaclass will ensure that no sub class can be created, and that no instance can be initialized. If you try to instantiate your class (SomeClass()), a TypeError will be thrown. Raises ------ - TypeError if a sub class or an instance is created. """ def __call__(cls, *args, **kwargs): raise TypeError( f"{cls.__module__}.{cls.__qualname__} has no public constructor" ) def _create(cls: t.Type[T], *args: t.Any, **kwargs: t.Any) -> T: return super().__call__(*args, **kwargs) # type: ignore def name_asyncgen(agen): """Return the fully-qualified name of the async generator function that produced the async generator iterator *agen*. """ if not hasattr(agen, "ag_code"): # pragma: no cover return repr(agen) try: module = agen.ag_frame.f_globals["__name__"] except (AttributeError, KeyError): module = "<{}>".format(agen.ag_code.co_filename) try: qualname = agen.__qualname__ except AttributeError: qualname = agen.ag_code.co_name return f"{module}.{qualname}" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751808.0 trio-0.19.0/trio/_version.py0000644000076500000000000000013200000000000015267 0ustar00quentinwheel# This file is imported from __init__.py and exec'd from setup.py __version__ = "0.19.0" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_wait_for_object.py0000644000076500000000000000372600000000000016756 0ustar00quentinwheelimport math from . 
import _timeouts import trio from ._core._windows_cffi import ( ffi, kernel32, ErrorCodes, raise_winerror, _handle, ) async def WaitForSingleObject(obj): """Async and cancellable variant of WaitForSingleObject. Windows only. Args: handle: A Win32 handle, as a Python integer. Raises: OSError: If the handle is invalid, e.g. when it is already closed. """ # Allow ints or whatever we can convert to a win handle handle = _handle(obj) # Quick check; we might not even need to spawn a thread. The zero # means a zero timeout; this call never blocks. We also exit here # if the handle is already closed for some reason. retcode = kernel32.WaitForSingleObject(handle, 0) if retcode == ErrorCodes.WAIT_FAILED: raise_winerror() elif retcode != ErrorCodes.WAIT_TIMEOUT: return # Wait for a thread that waits for two handles: the handle plus a handle # that we can use to cancel the thread. cancel_handle = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) try: await trio.to_thread.run_sync( WaitForMultipleObjects_sync, handle, cancel_handle, cancellable=True, limiter=trio.CapacityLimiter(math.inf), ) finally: # Clean up our cancel handle. In case we get here because this task was # cancelled, we also want to set the cancel_handle to stop the thread. kernel32.SetEvent(cancel_handle) kernel32.CloseHandle(cancel_handle) def WaitForMultipleObjects_sync(*handles): """Wait for any of the given Windows handles to be signaled.""" n = len(handles) handle_arr = ffi.new("HANDLE[{}]".format(n)) for i in range(n): handle_arr[i] = handles[i] timeout = 0xFFFFFFFF # INFINITE retcode = kernel32.WaitForMultipleObjects(n, handle_arr, False, timeout) # blocking if retcode == ErrorCodes.WAIT_FAILED: raise_winerror() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/_windows_pipes.py0000644000076500000000000001112100000000000016474 0ustar00quentinwheelimport sys from typing import TYPE_CHECKING from . import _core from ._abc import SendStream, ReceiveStream from ._util import ConflictDetector, Final from ._core._windows_cffi import _handle, raise_winerror, kernel32, ffi assert sys.platform == "win32" or not TYPE_CHECKING # XX TODO: don't just make this up based on nothing. DEFAULT_RECEIVE_SIZE = 65536 # See the comments on _unix_pipes._FdHolder for discussion of why we set the # handle to -1 when it's closed. class _HandleHolder: def __init__(self, handle: int) -> None: self.handle = -1 if not isinstance(handle, int): raise TypeError("handle must be an int") self.handle = handle _core.register_with_iocp(self.handle) @property def closed(self): return self.handle == -1 def _close(self): if self.closed: return handle = self.handle self.handle = -1 if not kernel32.CloseHandle(_handle(handle)): raise_winerror() async def aclose(self): self._close() await _core.checkpoint() def __del__(self): self._close() class PipeSendStream(SendStream, metaclass=Final): """Represents a send stream over a Windows named pipe that has been opened in OVERLAPPED mode. 
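    Sketch of intended use (``raw_handle`` is a hypothetical Win32 pipe
    handle that was opened with FILE_FLAG_OVERLAPPED)::

        stream = PipeSendStream(raw_handle)
        await stream.send_all(b"hello")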
""" def __init__(self, handle: int) -> None: self._handle_holder = _HandleHolder(handle) self._conflict_detector = ConflictDetector( "another task is currently using this pipe" ) async def send_all(self, data: bytes): with self._conflict_detector: if self._handle_holder.closed: raise _core.ClosedResourceError("this pipe is already closed") if not data: await _core.checkpoint() return try: written = await _core.write_overlapped(self._handle_holder.handle, data) except BrokenPipeError as ex: raise _core.BrokenResourceError from ex # By my reading of MSDN, this assert is guaranteed to pass so long # as the pipe isn't in nonblocking mode, but... let's just # double-check. assert written == len(data) async def wait_send_all_might_not_block(self) -> None: with self._conflict_detector: if self._handle_holder.closed: raise _core.ClosedResourceError("This pipe is already closed") # not implemented yet, and probably not needed await _core.checkpoint() async def aclose(self): await self._handle_holder.aclose() class PipeReceiveStream(ReceiveStream, metaclass=Final): """Represents a receive stream over an os.pipe object.""" def __init__(self, handle: int) -> None: self._handle_holder = _HandleHolder(handle) self._conflict_detector = ConflictDetector( "another task is currently using this pipe" ) async def receive_some(self, max_bytes=None) -> bytes: with self._conflict_detector: if self._handle_holder.closed: raise _core.ClosedResourceError("this pipe is already closed") if max_bytes is None: max_bytes = DEFAULT_RECEIVE_SIZE else: if not isinstance(max_bytes, int): raise TypeError("max_bytes must be integer >= 1") if max_bytes < 1: raise ValueError("max_bytes must be integer >= 1") buffer = bytearray(max_bytes) try: size = await _core.readinto_overlapped( self._handle_holder.handle, buffer ) except BrokenPipeError: if self._handle_holder.closed: raise _core.ClosedResourceError( "another task closed this pipe" ) from None # Windows raises BrokenPipeError on one end of a pipe # whenever the other end closes, regardless of direction. # Convert this to the Unix behavior of returning EOF to the # reader when the writer closes. # # And since we're not raising an exception, we have to # checkpoint. But readinto_overlapped did raise an exception, # so it might not have checkpointed for us. So we have to # checkpoint manually. await _core.checkpoint() return b"" else: del buffer[size:] return buffer async def aclose(self): await self._handle_holder.aclose() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/abc.py0000644000076500000000000000116000000000000014172 0ustar00quentinwheel# This is a public namespace, so we don't want to expose any non-underscored # attributes that aren't actually part of our public API. But it's very # annoying to carefully always use underscored names for module-level # temporaries, imports, etc. when implementing the module. So we put the # implementation in an underscored module, and then re-export the public parts # here. 
from ._abc import ( Clock, Instrument, AsyncResource, SendStream, ReceiveStream, Stream, HalfCloseableStream, SocketFactory, HostnameResolver, Listener, SendChannel, ReceiveChannel, Channel, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/from_thread.py0000644000076500000000000000040700000000000015742 0ustar00quentinwheel""" This namespace represents special functions that can call back into Trio from an external thread by means of a Trio Token present in Thread Local Storage """ from ._threads import from_thread_run as run from ._threads import from_thread_run_sync as run_sync ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/lowlevel.py0000644000076500000000000000350200000000000015300 0ustar00quentinwheel""" This namespace represents low-level functionality not intended for daily use, but useful for extending Trio's functionality. """ import select as _select import sys import typing as _t # This is the union of a subset of trio/_core/ and some things from trio/*.py. # See comments in trio/__init__.py for details. To make static analysis easier, # this lists all possible symbols from trio._core, and then we prune those that # aren't available on this system. After that we add some symbols from trio/*.py. # Generally available symbols from ._core import ( cancel_shielded_checkpoint, Abort, wait_task_rescheduled, enable_ki_protection, disable_ki_protection, currently_ki_protected, Task, checkpoint, current_task, ParkingLot, UnboundedQueue, RunVar, TrioToken, current_trio_token, temporarily_detach_coroutine_object, permanently_detach_coroutine_object, reattach_detached_coroutine_object, current_statistics, reschedule, remove_instrument, add_instrument, current_clock, current_root_task, checkpoint_if_cancelled, spawn_system_task, wait_readable, wait_writable, notify_closing, start_thread_soon, start_guest_run, ) if sys.platform == "win32": # Windows symbols from ._core import ( current_iocp, register_with_iocp, wait_overlapped, monitor_completion_key, readinto_overlapped, write_overlapped, ) from ._wait_for_object import WaitForSingleObject else: # Unix symbols from ._unix_pipes import FdStream # Kqueue-specific symbols if sys.platform != "linux" and (_t.TYPE_CHECKING or not hasattr(_select, "epoll")): from ._core import ( current_kqueue, monitor_kevent, wait_kevent, ) del sys ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/socket.py0000644000076500000000000002404600000000000014745 0ustar00quentinwheel# This is a public namespace, so we don't want to expose any non-underscored # attributes that aren't actually part of our public API. But it's very # annoying to carefully always use underscored names for module-level # temporaries, imports, etc. when implementing the module. So we put the # implementation in an underscored module, and then re-export the public parts # here. # We still have some underscore names though but only a few. from . import _socket import sys import typing as _t # The socket module exports a bunch of platform-specific constants. We want to # re-export them. Since the exact set of constants varies depending on Python # version, platform, the libc installed on the system where Python was built, # etc., we figure out which constants to re-export dynamically at runtime (see # below). But that confuses static analysis tools like jedi and mypy. 
So this # import statement statically lists every constant that *could* be # exported. It always fails at runtime, since no single Python build exports # all these constants, but it lets static analysis tools understand what's # going on. There's a test in test_exports.py to make sure that the list is # kept up to date. try: # fmt: off from socket import ( # type: ignore CMSG_LEN, CMSG_SPACE, CAPI, AF_UNSPEC, AF_INET, AF_UNIX, AF_IPX, AF_APPLETALK, AF_INET6, AF_ROUTE, AF_LINK, AF_SNA, PF_SYSTEM, AF_SYSTEM, SOCK_STREAM, SOCK_DGRAM, SOCK_RAW, SOCK_SEQPACKET, SOCK_RDM, SO_DEBUG, SO_ACCEPTCONN, SO_REUSEADDR, SO_KEEPALIVE, SO_DONTROUTE, SO_BROADCAST, SO_USELOOPBACK, SO_LINGER, SO_OOBINLINE, SO_REUSEPORT, SO_SNDBUF, SO_RCVBUF, SO_SNDLOWAT, SO_RCVLOWAT, SO_SNDTIMEO, SO_RCVTIMEO, SO_ERROR, SO_TYPE, LOCAL_PEERCRED, SOMAXCONN, SCM_RIGHTS, SCM_CREDS, MSG_OOB, MSG_PEEK, MSG_DONTROUTE, MSG_DONTWAIT, MSG_EOR, MSG_TRUNC, MSG_CTRUNC, MSG_WAITALL, MSG_EOF, SOL_SOCKET, SOL_IP, SOL_TCP, SOL_UDP, IPPROTO_IP, IPPROTO_HOPOPTS, IPPROTO_ICMP, IPPROTO_IGMP, IPPROTO_GGP, IPPROTO_IPV4, IPPROTO_IPIP, IPPROTO_TCP, IPPROTO_EGP, IPPROTO_PUP, IPPROTO_UDP, IPPROTO_IDP, IPPROTO_HELLO, IPPROTO_ND, IPPROTO_TP, IPPROTO_ROUTING, IPPROTO_FRAGMENT, IPPROTO_RSVP, IPPROTO_GRE, IPPROTO_ESP, IPPROTO_AH, IPPROTO_ICMPV6, IPPROTO_NONE, IPPROTO_DSTOPTS, IPPROTO_XTP, IPPROTO_EON, IPPROTO_PIM, IPPROTO_IPCOMP, IPPROTO_SCTP, IPPROTO_RAW, IPPROTO_MAX, SYSPROTO_CONTROL, IPPORT_RESERVED, IPPORT_USERRESERVED, INADDR_ANY, INADDR_BROADCAST, INADDR_LOOPBACK, INADDR_UNSPEC_GROUP, INADDR_ALLHOSTS_GROUP, INADDR_MAX_LOCAL_GROUP, INADDR_NONE, IP_OPTIONS, IP_HDRINCL, IP_TOS, IP_TTL, IP_RECVOPTS, IP_RECVRETOPTS, IP_RECVDSTADDR, IP_RETOPTS, IP_MULTICAST_IF, IP_MULTICAST_TTL, IP_MULTICAST_LOOP, IP_ADD_MEMBERSHIP, IP_DROP_MEMBERSHIP, IP_DEFAULT_MULTICAST_TTL, IP_DEFAULT_MULTICAST_LOOP, IP_MAX_MEMBERSHIPS, IPV6_JOIN_GROUP, IPV6_LEAVE_GROUP, IPV6_MULTICAST_HOPS, IPV6_MULTICAST_IF, IPV6_MULTICAST_LOOP, IPV6_UNICAST_HOPS, IPV6_V6ONLY, IPV6_CHECKSUM, IPV6_RECVTCLASS, IPV6_RTHDR_TYPE_0, IPV6_TCLASS, TCP_NODELAY, TCP_MAXSEG, TCP_KEEPINTVL, TCP_KEEPCNT, TCP_FASTOPEN, TCP_NOTSENT_LOWAT, EAI_ADDRFAMILY, EAI_AGAIN, EAI_BADFLAGS, EAI_FAIL, EAI_FAMILY, EAI_MEMORY, EAI_NODATA, EAI_NONAME, EAI_OVERFLOW, EAI_SERVICE, EAI_SOCKTYPE, EAI_SYSTEM, EAI_BADHINTS, EAI_PROTOCOL, EAI_MAX, AI_PASSIVE, AI_CANONNAME, AI_NUMERICHOST, AI_NUMERICSERV, AI_MASK, AI_ALL, AI_V4MAPPED_CFG, AI_ADDRCONFIG, AI_V4MAPPED, AI_DEFAULT, NI_MAXHOST, NI_MAXSERV, NI_NOFQDN, NI_NUMERICHOST, NI_NAMEREQD, NI_NUMERICSERV, NI_DGRAM, SHUT_RD, SHUT_WR, SHUT_RDWR, EBADF, EAGAIN, EWOULDBLOCK, AF_ASH, AF_ATMPVC, AF_ATMSVC, AF_AX25, AF_BLUETOOTH, AF_BRIDGE, AF_ECONET, AF_IRDA, AF_KEY, AF_LLC, AF_NETBEUI, AF_NETLINK, AF_NETROM, AF_PACKET, AF_PPPOX, AF_ROSE, AF_SECURITY, AF_WANPIPE, AF_X25, BDADDR_ANY, BDADDR_LOCAL, FD_SETSIZE, IPV6_DSTOPTS, IPV6_HOPLIMIT, IPV6_HOPOPTS, IPV6_NEXTHOP, IPV6_PKTINFO, IPV6_RECVDSTOPTS, IPV6_RECVHOPLIMIT, IPV6_RECVHOPOPTS, IPV6_RECVPKTINFO, IPV6_RECVRTHDR, IPV6_RTHDR, IPV6_RTHDRDSTOPTS, MSG_ERRQUEUE, NETLINK_DNRTMSG, NETLINK_FIREWALL, NETLINK_IP6_FW, NETLINK_NFLOG, NETLINK_ROUTE, NETLINK_USERSOCK, NETLINK_XFRM, PACKET_BROADCAST, PACKET_FASTROUTE, PACKET_HOST, PACKET_LOOPBACK, PACKET_MULTICAST, PACKET_OTHERHOST, PACKET_OUTGOING, POLLERR, POLLHUP, POLLIN, POLLMSG, POLLNVAL, POLLOUT, POLLPRI, POLLRDBAND, POLLRDNORM, POLLWRNORM, SIOCGIFINDEX, SIOCGIFNAME, SOCK_CLOEXEC, TCP_CORK, TCP_DEFER_ACCEPT, TCP_INFO, TCP_KEEPIDLE, TCP_LINGER2, TCP_QUICKACK, TCP_SYNCNT, TCP_WINDOW_CLAMP, AF_ALG, AF_CAN, 
AF_RDS, AF_TIPC, AF_VSOCK, ALG_OP_DECRYPT, ALG_OP_ENCRYPT, ALG_OP_SIGN, ALG_OP_VERIFY, ALG_SET_AEAD_ASSOCLEN, ALG_SET_AEAD_AUTHSIZE, ALG_SET_IV, ALG_SET_KEY, ALG_SET_OP, ALG_SET_PUBKEY, CAN_BCM, CAN_BCM_RX_CHANGED, CAN_BCM_RX_DELETE, CAN_BCM_RX_READ, CAN_BCM_RX_SETUP, CAN_BCM_RX_STATUS, CAN_BCM_RX_TIMEOUT, CAN_BCM_TX_DELETE, CAN_BCM_TX_EXPIRED, CAN_BCM_TX_READ, CAN_BCM_TX_SEND, CAN_BCM_TX_SETUP, CAN_BCM_TX_STATUS, CAN_EFF_FLAG, CAN_EFF_MASK, CAN_ERR_FLAG, CAN_ERR_MASK, CAN_ISOTP, CAN_RAW, CAN_RAW_ERR_FILTER, CAN_RAW_FD_FRAMES, CAN_RAW_FILTER, CAN_RAW_LOOPBACK, CAN_RAW_RECV_OWN_MSGS, CAN_RTR_FLAG, CAN_SFF_MASK, IOCTL_VM_SOCKETS_GET_LOCAL_CID, IPV6_DONTFRAG, IPV6_PATHMTU, IPV6_RECVPATHMTU, IP_TRANSPARENT, MSG_CMSG_CLOEXEC, MSG_CONFIRM, MSG_FASTOPEN, MSG_MORE, MSG_NOSIGNAL, NETLINK_CRYPTO, PF_CAN, PF_PACKET, PF_RDS, SCM_CREDENTIALS, SOCK_NONBLOCK, SOL_ALG, SOL_CAN_BASE, SOL_CAN_RAW, SOL_TIPC, SO_BINDTODEVICE, SO_DOMAIN, SO_MARK, SO_PASSCRED, SO_PASSSEC, SO_PEERCRED, SO_PEERSEC, SO_PRIORITY, SO_PROTOCOL, SO_VM_SOCKETS_BUFFER_MAX_SIZE, SO_VM_SOCKETS_BUFFER_MIN_SIZE, SO_VM_SOCKETS_BUFFER_SIZE, TCP_CONGESTION, TCP_USER_TIMEOUT, TIPC_ADDR_ID, TIPC_ADDR_NAME, TIPC_ADDR_NAMESEQ, TIPC_CFG_SRV, TIPC_CLUSTER_SCOPE, TIPC_CONN_TIMEOUT, TIPC_CRITICAL_IMPORTANCE, TIPC_DEST_DROPPABLE, TIPC_HIGH_IMPORTANCE, TIPC_IMPORTANCE, TIPC_LOW_IMPORTANCE, TIPC_MEDIUM_IMPORTANCE, TIPC_NODE_SCOPE, TIPC_PUBLISHED, TIPC_SRC_DROPPABLE, TIPC_SUBSCR_TIMEOUT, TIPC_SUB_CANCEL, TIPC_SUB_PORTS, TIPC_SUB_SERVICE, TIPC_TOP_SRV, TIPC_WAIT_FOREVER, TIPC_WITHDRAWN, TIPC_ZONE_SCOPE, VMADDR_CID_ANY, VMADDR_CID_HOST, VMADDR_PORT_ANY, VM_SOCKETS_INVALID_VERSION, MSG_BCAST, MSG_MCAST, RCVALL_MAX, RCVALL_OFF, RCVALL_ON, RCVALL_SOCKETLEVELONLY, SIO_KEEPALIVE_VALS, SIO_LOOPBACK_FAST_PATH, SIO_RCVALL, SO_EXCLUSIVEADDRUSE, HCI_FILTER, BTPROTO_SCO, BTPROTO_HCI, HCI_TIME_STAMP, SOL_RDS, BTPROTO_L2CAP, BTPROTO_RFCOMM, HCI_DATA_DIR, SOL_HCI, CAN_BCM_RX_ANNOUNCE_RESUME, CAN_BCM_RX_CHECK_DLC, CAN_BCM_RX_FILTER_ID, CAN_BCM_RX_NO_AUTOTIMER, CAN_BCM_RX_RTR_FRAME, CAN_BCM_SETTIMER, CAN_BCM_STARTTIMER, CAN_BCM_TX_ANNOUNCE, CAN_BCM_TX_COUNTEVT, CAN_BCM_TX_CP_CAN_ID, CAN_BCM_TX_RESET_MULTI_IDX, IPPROTO_CBT, IPPROTO_ICLFXBM, IPPROTO_IGP, IPPROTO_L2TP, IPPROTO_PGM, IPPROTO_RDP, IPPROTO_ST, AF_QIPCRTR, CAN_BCM_CAN_FD_FRAME, IPPROTO_MOBILE, IPV6_USE_MIN_MTU, MSG_NOTIFICATION, SO_SETFIB, CAN_J1939, CAN_RAW_JOIN_FILTERS, IPPROTO_UDPLITE, J1939_EE_INFO_NONE, J1939_EE_INFO_TX_ABORT, J1939_FILTER_MAX, J1939_IDLE_ADDR, J1939_MAX_UNICAST_ADDR, J1939_NLA_BYTES_ACKED, J1939_NLA_PAD, J1939_NO_ADDR, J1939_NO_NAME, J1939_NO_PGN, J1939_PGN_ADDRESS_CLAIMED, J1939_PGN_ADDRESS_COMMANDED, J1939_PGN_MAX, J1939_PGN_PDU1_MAX, J1939_PGN_REQUEST, SCM_J1939_DEST_ADDR, SCM_J1939_DEST_NAME, SCM_J1939_ERRQUEUE, SCM_J1939_PRIO, SO_J1939_ERRQUEUE, SO_J1939_FILTER, SO_J1939_PROMISC, SO_J1939_SEND_PRIO, UDPLITE_RECV_CSCOV, UDPLITE_SEND_CSCOV ) # fmt: on except ImportError: pass # Dynamically re-export whatever constants this particular Python happens to # have: import socket as _stdlib_socket _bad_symbols: _t.Set[str] = set() if sys.platform == "win32": # See https://github.com/python-trio/trio/issues/39 # Do not import for windows platform # (you can still get it from stdlib socket, of course, if you want it) _bad_symbols.add("SO_REUSEADDR") globals().update( { _name: getattr(_stdlib_socket, _name) for _name in _stdlib_socket.__all__ # type: ignore if _name.isupper() and _name not in _bad_symbols } ) # import the overwrites from ._socket import ( fromfd, from_stdlib_socket, getprotobyname, 
socketpair, getnameinfo, socket, getaddrinfo, set_custom_hostname_resolver, set_custom_socket_factory, SocketType, ) # not always available so expose only if if sys.platform == "win32" or not _t.TYPE_CHECKING: try: from ._socket import fromshare except ImportError: pass # expose these functions to trio.socket from socket import ( gaierror, herror, gethostname, ntohs, htonl, htons, inet_aton, inet_ntoa, inet_pton, inet_ntop, ) # not always available so expose only if if sys.platform != "win32" or not _t.TYPE_CHECKING: try: from socket import sethostname, if_nameindex, if_nametoindex, if_indextoname except ImportError: pass # get names used by Trio that we define on our own from ._socket import IPPROTO_IPV6 # Not defined in all python versions and platforms but sometimes needed if not _t.TYPE_CHECKING: try: TCP_NOTSENT_LOWAT except NameError: # Hopefully will show up in 3.7: # https://github.com/python/cpython/pull/477 if sys.platform == "darwin": TCP_NOTSENT_LOWAT = 0x201 elif sys.platform == "linux": TCP_NOTSENT_LOWAT = 25 if _t.TYPE_CHECKING: IP_BIND_ADDRESS_NO_PORT: int else: try: IP_BIND_ADDRESS_NO_PORT except NameError: if sys.platform == "linux": IP_BIND_ADDRESS_NO_PORT = 24 del sys ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1623751952.8305836 trio-0.19.0/trio/testing/0000755000076500000000000000000000000000000014552 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/testing/__init__.py0000644000076500000000000000137500000000000016671 0ustar00quentinwheelfrom .._core import wait_all_tasks_blocked, MockClock from ._trio_test import trio_test from ._checkpoints import assert_checkpoints, assert_no_checkpoints from ._sequencer import Sequencer from ._check_streams import ( check_one_way_stream, check_two_way_stream, check_half_closeable_stream, ) from ._memory_streams import ( MemorySendStream, MemoryReceiveStream, memory_stream_pump, memory_stream_one_way_pair, memory_stream_pair, lockstep_stream_one_way_pair, lockstep_stream_pair, ) from ._network import open_stream_to_socket_listener ################################################################ from .._util import fixup_module_metadata fixup_module_metadata(__name__, globals()) del fixup_module_metadata ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/testing/_check_streams.py0000644000076500000000000004716200000000000020110 0ustar00quentinwheel# Generic stream tests from contextlib import contextmanager import random from .. import _core from .._highlevel_generic import aclose_forcefully from .._abc import SendStream, ReceiveStream, Stream, HalfCloseableStream from ._checkpoints import assert_checkpoints class _ForceCloseBoth: def __init__(self, both): self._both = list(both) async def __aenter__(self): return self._both async def __aexit__(self, *args): try: await aclose_forcefully(self._both[0]) finally: await aclose_forcefully(self._both[1]) @contextmanager def _assert_raises(exc): __tracebackhide__ = True try: yield except exc: pass else: raise AssertionError("expected exception: {}".format(exc)) async def check_one_way_stream(stream_maker, clogged_stream_maker): """Perform a number of generic tests on a custom one-way stream implementation. Args: stream_maker: An async (!) function which returns a connected (:class:`~trio.abc.SendStream`, :class:`~trio.abc.ReceiveStream`) pair. 
clogged_stream_maker: Either None, or an async function similar to stream_maker, but with the extra property that the returned stream is in a state where ``send_all`` and ``wait_send_all_might_not_block`` will block until ``receive_some`` has been called. This allows for more thorough testing of some edge cases, especially around ``wait_send_all_might_not_block``. Raises: AssertionError: if a test fails. """ async with _ForceCloseBoth(await stream_maker()) as (s, r): assert isinstance(s, SendStream) assert isinstance(r, ReceiveStream) async def do_send_all(data): with assert_checkpoints(): assert await s.send_all(data) is None async def do_receive_some(*args): with assert_checkpoints(): return await r.receive_some(*args) async def checked_receive_1(expected): assert await do_receive_some(1) == expected async def do_aclose(resource): with assert_checkpoints(): await resource.aclose() # Simple sending/receiving async with _core.open_nursery() as nursery: nursery.start_soon(do_send_all, b"x") nursery.start_soon(checked_receive_1, b"x") async def send_empty_then_y(): # Streams should tolerate sending b"" without giving it any # special meaning. await do_send_all(b"") await do_send_all(b"y") async with _core.open_nursery() as nursery: nursery.start_soon(send_empty_then_y) nursery.start_soon(checked_receive_1, b"y") # ---- Checking various argument types ---- # send_all accepts bytearray and memoryview async with _core.open_nursery() as nursery: nursery.start_soon(do_send_all, bytearray(b"1")) nursery.start_soon(checked_receive_1, b"1") async with _core.open_nursery() as nursery: nursery.start_soon(do_send_all, memoryview(b"2")) nursery.start_soon(checked_receive_1, b"2") # max_bytes must be a positive integer with _assert_raises(ValueError): await r.receive_some(-1) with _assert_raises(ValueError): await r.receive_some(0) with _assert_raises(TypeError): await r.receive_some(1.5) # it can also be missing or None async with _core.open_nursery() as nursery: nursery.start_soon(do_send_all, b"x") assert await do_receive_some() == b"x" async with _core.open_nursery() as nursery: nursery.start_soon(do_send_all, b"x") assert await do_receive_some(None) == b"x" with _assert_raises(_core.BusyResourceError): async with _core.open_nursery() as nursery: nursery.start_soon(do_receive_some, 1) nursery.start_soon(do_receive_some, 1) # Method always has to exist, and an empty stream with a blocked # receive_some should *always* allow send_all. (Technically it's legal # for send_all to wait until receive_some is called to run, though; a # stream doesn't *have* to have any internal buffering. That's why we # start a concurrent receive_some call, then cancel it.) 
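        # (For reference, a minimal stream_maker satisfying this function's
        # contract can be built from trio.testing's in-memory helpers -- a
        # sketch, not something this module requires:
        #
        #     async def stream_maker():
        #         return memory_stream_one_way_pair()
        #
        # since memory_stream_one_way_pair() returns a connected
        # (MemorySendStream, MemoryReceiveStream) pair.)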
async def simple_check_wait_send_all_might_not_block(scope): with assert_checkpoints(): await s.wait_send_all_might_not_block() scope.cancel() async with _core.open_nursery() as nursery: nursery.start_soon( simple_check_wait_send_all_might_not_block, nursery.cancel_scope ) nursery.start_soon(do_receive_some, 1) # closing the r side leads to BrokenResourceError on the s side # (eventually) async def expect_broken_stream_on_send(): with _assert_raises(_core.BrokenResourceError): while True: await do_send_all(b"x" * 100) async with _core.open_nursery() as nursery: nursery.start_soon(expect_broken_stream_on_send) nursery.start_soon(do_aclose, r) # once detected, the stream stays broken with _assert_raises(_core.BrokenResourceError): await do_send_all(b"x" * 100) # r closed -> ClosedResourceError on the receive side with _assert_raises(_core.ClosedResourceError): await do_receive_some(4096) # we can close the same stream repeatedly, it's fine await do_aclose(r) await do_aclose(r) # closing the sender side await do_aclose(s) # now trying to send raises ClosedResourceError with _assert_raises(_core.ClosedResourceError): await do_send_all(b"x" * 100) # even if it's an empty send with _assert_raises(_core.ClosedResourceError): await do_send_all(b"") # ditto for wait_send_all_might_not_block with _assert_raises(_core.ClosedResourceError): with assert_checkpoints(): await s.wait_send_all_might_not_block() # and again, repeated closing is fine await do_aclose(s) await do_aclose(s) async with _ForceCloseBoth(await stream_maker()) as (s, r): # if send-then-graceful-close, receiver gets data then b"" async def send_then_close(): await do_send_all(b"y") await do_aclose(s) async def receive_send_then_close(): # We want to make sure that if the sender closes the stream before # we read anything, then we still get all the data. But some # streams might block on the do_send_all call. So we let the # sender get as far as it can, then we receive. await _core.wait_all_tasks_blocked() await checked_receive_1(b"y") await checked_receive_1(b"") await do_aclose(r) async with _core.open_nursery() as nursery: nursery.start_soon(send_then_close) nursery.start_soon(receive_send_then_close) async with _ForceCloseBoth(await stream_maker()) as (s, r): await aclose_forcefully(r) with _assert_raises(_core.BrokenResourceError): while True: await do_send_all(b"x" * 100) with _assert_raises(_core.ClosedResourceError): await do_receive_some(4096) async with _ForceCloseBoth(await stream_maker()) as (s, r): await aclose_forcefully(s) with _assert_raises(_core.ClosedResourceError): await do_send_all(b"123") # after the sender does a forceful close, the receiver might either # get BrokenResourceError or a clean b""; either is OK. Not OK would be # if it freezes, or returns data. try: await checked_receive_1(b"") except _core.BrokenResourceError: pass # cancelled aclose still closes async with _ForceCloseBoth(await stream_maker()) as (s, r): with _core.CancelScope() as scope: scope.cancel() await r.aclose() with _core.CancelScope() as scope: scope.cancel() await s.aclose() with _assert_raises(_core.ClosedResourceError): await do_send_all(b"123") with _assert_raises(_core.ClosedResourceError): await do_receive_some(4096) # Check that we can still gracefully close a stream after an operation has # been cancelled. This can be challenging if cancellation can leave the # stream internals in an inconsistent state, e.g. for # SSLStream. 
Unfortunately this test isn't very thorough; the really # challenging case for something like SSLStream is it gets cancelled # *while* it's sending data on the underlying, not before. But testing # that requires some special-case handling of the particular stream setup; # we can't do it here. Maybe we could do a bit better with # https://github.com/python-trio/trio/issues/77 async with _ForceCloseBoth(await stream_maker()) as (s, r): async def expect_cancelled(afn, *args): with _assert_raises(_core.Cancelled): await afn(*args) with _core.CancelScope() as scope: scope.cancel() async with _core.open_nursery() as nursery: nursery.start_soon(expect_cancelled, do_send_all, b"x") nursery.start_soon(expect_cancelled, do_receive_some, 1) async with _core.open_nursery() as nursery: nursery.start_soon(do_aclose, s) nursery.start_soon(do_aclose, r) # Check that if a task is blocked in receive_some, then closing the # receive stream causes it to wake up. async with _ForceCloseBoth(await stream_maker()) as (s, r): async def receive_expecting_closed(): with _assert_raises(_core.ClosedResourceError): await r.receive_some(10) async with _core.open_nursery() as nursery: nursery.start_soon(receive_expecting_closed) await _core.wait_all_tasks_blocked() await aclose_forcefully(r) # check wait_send_all_might_not_block, if we can if clogged_stream_maker is not None: async with _ForceCloseBoth(await clogged_stream_maker()) as (s, r): record = [] async def waiter(cancel_scope): record.append("waiter sleeping") with assert_checkpoints(): await s.wait_send_all_might_not_block() record.append("waiter wokeup") cancel_scope.cancel() async def receiver(): # give wait_send_all_might_not_block a chance to block await _core.wait_all_tasks_blocked() record.append("receiver starting") while True: await r.receive_some(16834) async with _core.open_nursery() as nursery: nursery.start_soon(waiter, nursery.cancel_scope) await _core.wait_all_tasks_blocked() nursery.start_soon(receiver) assert record == [ "waiter sleeping", "receiver starting", "waiter wokeup", ] async with _ForceCloseBoth(await clogged_stream_maker()) as (s, r): # simultaneous wait_send_all_might_not_block fails with _assert_raises(_core.BusyResourceError): async with _core.open_nursery() as nursery: nursery.start_soon(s.wait_send_all_might_not_block) nursery.start_soon(s.wait_send_all_might_not_block) # and simultaneous send_all and wait_send_all_might_not_block (NB # this test might destroy the stream b/c we end up cancelling # send_all and e.g. 
SSLStream can't handle that, so we have to # recreate afterwards) with _assert_raises(_core.BusyResourceError): async with _core.open_nursery() as nursery: nursery.start_soon(s.wait_send_all_might_not_block) nursery.start_soon(s.send_all, b"123") async with _ForceCloseBoth(await clogged_stream_maker()) as (s, r): # send_all and send_all blocked simultaneously should also raise # (but again this might destroy the stream) with _assert_raises(_core.BusyResourceError): async with _core.open_nursery() as nursery: nursery.start_soon(s.send_all, b"123") nursery.start_soon(s.send_all, b"123") # closing the receiver causes wait_send_all_might_not_block to return, # with or without an exception async with _ForceCloseBoth(await clogged_stream_maker()) as (s, r): async def sender(): try: with assert_checkpoints(): await s.wait_send_all_might_not_block() except _core.BrokenResourceError: # pragma: no cover pass async def receiver(): await _core.wait_all_tasks_blocked() await aclose_forcefully(r) async with _core.open_nursery() as nursery: nursery.start_soon(sender) nursery.start_soon(receiver) # and again with the call starting after the close async with _ForceCloseBoth(await clogged_stream_maker()) as (s, r): await aclose_forcefully(r) try: with assert_checkpoints(): await s.wait_send_all_might_not_block() except _core.BrokenResourceError: # pragma: no cover pass # Check that if a task is blocked in a send-side method, then closing # the send stream causes it to wake up. async def close_soon(s): await _core.wait_all_tasks_blocked() await aclose_forcefully(s) async with _ForceCloseBoth(await clogged_stream_maker()) as (s, r): async with _core.open_nursery() as nursery: nursery.start_soon(close_soon, s) with _assert_raises(_core.ClosedResourceError): await s.send_all(b"xyzzy") async with _ForceCloseBoth(await clogged_stream_maker()) as (s, r): async with _core.open_nursery() as nursery: nursery.start_soon(close_soon, s) with _assert_raises(_core.ClosedResourceError): await s.wait_send_all_might_not_block() async def check_two_way_stream(stream_maker, clogged_stream_maker): """Perform a number of generic tests on a custom two-way stream implementation. This is similar to :func:`check_one_way_stream`, except that the maker functions are expected to return objects implementing the :class:`~trio.abc.Stream` interface. This function tests a *superset* of what :func:`check_one_way_stream` checks – if you call this, then you don't need to also call :func:`check_one_way_stream`. 
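
    For example, a typical invocation using the in-memory bidirectional
    streams from this module (a sketch)::

        async def stream_maker():
            return memory_stream_pair()

        await check_two_way_stream(stream_maker, None)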
""" await check_one_way_stream(stream_maker, clogged_stream_maker) async def flipped_stream_maker(): return reversed(await stream_maker()) if clogged_stream_maker is not None: async def flipped_clogged_stream_maker(): return reversed(await clogged_stream_maker()) else: flipped_clogged_stream_maker = None await check_one_way_stream(flipped_stream_maker, flipped_clogged_stream_maker) async with _ForceCloseBoth(await stream_maker()) as (s1, s2): assert isinstance(s1, Stream) assert isinstance(s2, Stream) # Duplex can be a bit tricky, might as well check it as well DUPLEX_TEST_SIZE = 2 ** 20 CHUNK_SIZE_MAX = 2 ** 14 r = random.Random(0) i = r.getrandbits(8 * DUPLEX_TEST_SIZE) test_data = i.to_bytes(DUPLEX_TEST_SIZE, "little") async def sender(s, data, seed): r = random.Random(seed) m = memoryview(data) while m: chunk_size = r.randint(1, CHUNK_SIZE_MAX) await s.send_all(m[:chunk_size]) m = m[chunk_size:] async def receiver(s, data, seed): r = random.Random(seed) got = bytearray() while len(got) < len(data): chunk = await s.receive_some(r.randint(1, CHUNK_SIZE_MAX)) assert chunk got += chunk assert got == data async with _core.open_nursery() as nursery: nursery.start_soon(sender, s1, test_data, 0) nursery.start_soon(sender, s2, test_data[::-1], 1) nursery.start_soon(receiver, s1, test_data[::-1], 2) nursery.start_soon(receiver, s2, test_data, 3) async def expect_receive_some_empty(): assert await s2.receive_some(10) == b"" await s2.aclose() async with _core.open_nursery() as nursery: nursery.start_soon(expect_receive_some_empty) nursery.start_soon(s1.aclose) async def check_half_closeable_stream(stream_maker, clogged_stream_maker): """Perform a number of generic tests on a custom half-closeable stream implementation. This is similar to :func:`check_two_way_stream`, except that the maker functions are expected to return objects that implement the :class:`~trio.abc.HalfCloseableStream` interface. This function tests a *superset* of what :func:`check_two_way_stream` checks – if you call this, then you don't need to also call :func:`check_two_way_stream`. 
""" await check_two_way_stream(stream_maker, clogged_stream_maker) async with _ForceCloseBoth(await stream_maker()) as (s1, s2): assert isinstance(s1, HalfCloseableStream) assert isinstance(s2, HalfCloseableStream) async def send_x_then_eof(s): await s.send_all(b"x") with assert_checkpoints(): await s.send_eof() async def expect_x_then_eof(r): await _core.wait_all_tasks_blocked() assert await r.receive_some(10) == b"x" assert await r.receive_some(10) == b"" async with _core.open_nursery() as nursery: nursery.start_soon(send_x_then_eof, s1) nursery.start_soon(expect_x_then_eof, s2) # now sending is disallowed with _assert_raises(_core.ClosedResourceError): await s1.send_all(b"y") # but we can do send_eof again with assert_checkpoints(): await s1.send_eof() # and we can still send stuff back the other way async with _core.open_nursery() as nursery: nursery.start_soon(send_x_then_eof, s2) nursery.start_soon(expect_x_then_eof, s1) if clogged_stream_maker is not None: async with _ForceCloseBoth(await clogged_stream_maker()) as (s1, s2): # send_all and send_eof simultaneously is not ok with _assert_raises(_core.BusyResourceError): async with _core.open_nursery() as nursery: nursery.start_soon(s1.send_all, b"x") await _core.wait_all_tasks_blocked() nursery.start_soon(s1.send_eof) async with _ForceCloseBoth(await clogged_stream_maker()) as (s1, s2): # wait_send_all_might_not_block and send_eof simultaneously is not # ok either with _assert_raises(_core.BusyResourceError): async with _core.open_nursery() as nursery: nursery.start_soon(s1.wait_send_all_might_not_block) await _core.wait_all_tasks_blocked() nursery.start_soon(s1.send_eof) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/testing/_checkpoints.py0000644000076500000000000000343000000000000017575 0ustar00quentinwheelfrom contextlib import contextmanager from .. import _core @contextmanager def _assert_yields_or_not(expected): __tracebackhide__ = True task = _core.current_task() orig_cancel = task._cancel_points orig_schedule = task._schedule_points try: yield if expected and ( task._cancel_points == orig_cancel or task._schedule_points == orig_schedule ): raise AssertionError("assert_checkpoints block did not yield!") finally: if not expected and ( task._cancel_points != orig_cancel or task._schedule_points != orig_schedule ): raise AssertionError("assert_no_checkpoints block yielded!") def assert_checkpoints(): """Use as a context manager to check that the code inside the ``with`` block either exits with an exception or executes at least one :ref:`checkpoint `. Raises: AssertionError: if no checkpoint was executed. Example: Check that :func:`trio.sleep` is a checkpoint, even if it doesn't block:: with trio.testing.assert_checkpoints(): await trio.sleep(0) """ __tracebackhide__ = True return _assert_yields_or_not(True) def assert_no_checkpoints(): """Use as a context manager to check that the code inside the ``with`` block does not execute any :ref:`checkpoints `. Raises: AssertionError: if a checkpoint was executed. 
Example: Synchronous code never contains any checkpoints, but we can double-check that:: send_channel, receive_channel = trio.open_memory_channel(10) with trio.testing.assert_no_checkpoints(): send_channel.send_nowait(None) """ __tracebackhide__ = True return _assert_yields_or_not(False) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/testing/_memory_streams.py0000644000076500000000000005154600000000000020344 0ustar00quentinwheelimport operator from .. import _core from .._highlevel_generic import StapledStream from .. import _util from ..abc import SendStream, ReceiveStream ################################################################ # In-memory streams - Unbounded buffer version ################################################################ class _UnboundedByteQueue: def __init__(self): self._data = bytearray() self._closed = False self._lot = _core.ParkingLot() self._fetch_lock = _util.ConflictDetector( "another task is already fetching data" ) # This object treats "close" as being like closing the send side of a # channel: so after close(), calling put() raises ClosedResourceError, and # calling the get() variants drains the buffer and then returns an empty # bytearray. def close(self): self._closed = True self._lot.unpark_all() def close_and_wipe(self): self._data = bytearray() self.close() def put(self, data): if self._closed: raise _core.ClosedResourceError("virtual connection closed") self._data += data self._lot.unpark_all() def _check_max_bytes(self, max_bytes): if max_bytes is None: return max_bytes = operator.index(max_bytes) if max_bytes < 1: raise ValueError("max_bytes must be >= 1") def _get_impl(self, max_bytes): assert self._closed or self._data if max_bytes is None: max_bytes = len(self._data) if self._data: chunk = self._data[:max_bytes] del self._data[:max_bytes] assert chunk return chunk else: return bytearray() def get_nowait(self, max_bytes=None): with self._fetch_lock: self._check_max_bytes(max_bytes) if not self._closed and not self._data: raise _core.WouldBlock return self._get_impl(max_bytes) async def get(self, max_bytes=None): with self._fetch_lock: self._check_max_bytes(max_bytes) if not self._closed and not self._data: await self._lot.park() else: await _core.checkpoint() return self._get_impl(max_bytes) class MemorySendStream(SendStream, metaclass=_util.Final): """An in-memory :class:`~trio.abc.SendStream`. Args: send_all_hook: An async function, or None. Called from :meth:`send_all`. Can do whatever you like. wait_send_all_might_not_block_hook: An async function, or None. Called from :meth:`wait_send_all_might_not_block`. Can do whatever you like. close_hook: A synchronous function, or None. Called from :meth:`close` and :meth:`aclose`. Can do whatever you like. .. attribute:: send_all_hook wait_send_all_might_not_block_hook close_hook All of these hooks are also exposed as attributes on the object, and you can change them at any time. """ def __init__( self, send_all_hook=None, wait_send_all_might_not_block_hook=None, close_hook=None, ): self._conflict_detector = _util.ConflictDetector( "another task is using this stream" ) self._outgoing = _UnboundedByteQueue() self.send_all_hook = send_all_hook self.wait_send_all_might_not_block_hook = wait_send_all_might_not_block_hook self.close_hook = close_hook async def send_all(self, data): """Places the given data into the object's internal buffer, and then calls the :attr:`send_all_hook` (if any). 
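
        For example (a minimal sketch, with no hook installed)::

            stream = MemorySendStream()
            await stream.send_all(b"abc")
            assert stream.get_data_nowait() == b"abc"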
""" # Execute two checkpoints so we have more of a chance to detect # buggy user code that calls this twice at the same time. with self._conflict_detector: await _core.checkpoint() await _core.checkpoint() self._outgoing.put(data) if self.send_all_hook is not None: await self.send_all_hook() async def wait_send_all_might_not_block(self): """Calls the :attr:`wait_send_all_might_not_block_hook` (if any), and then returns immediately. """ # Execute two checkpoints so we have more of a chance to detect # buggy user code that calls this twice at the same time. with self._conflict_detector: await _core.checkpoint() await _core.checkpoint() # check for being closed: self._outgoing.put(b"") if self.wait_send_all_might_not_block_hook is not None: await self.wait_send_all_might_not_block_hook() def close(self): """Marks this stream as closed, and then calls the :attr:`close_hook` (if any). """ # XXX should this cancel any pending calls to the send_all_hook and # wait_send_all_might_not_block_hook? Those are the only places where # send_all and wait_send_all_might_not_block can be blocked. # # The way we set things up, send_all_hook is memory_stream_pump, and # wait_send_all_might_not_block_hook is unset. memory_stream_pump is # synchronous. So normally, send_all and wait_send_all_might_not_block # cannot block at all. self._outgoing.close() if self.close_hook is not None: self.close_hook() async def aclose(self): """Same as :meth:`close`, but async.""" self.close() await _core.checkpoint() async def get_data(self, max_bytes=None): """Retrieves data from the internal buffer, blocking if necessary. Args: max_bytes (int or None): The maximum amount of data to retrieve. None (the default) means to retrieve all the data that's present (but still blocks until at least one byte is available). Returns: If this stream has been closed, an empty bytearray. Otherwise, the requested data. """ return await self._outgoing.get(max_bytes) def get_data_nowait(self, max_bytes=None): """Retrieves data from the internal buffer, but doesn't block. See :meth:`get_data` for details. Raises: trio.WouldBlock: if no data is available to retrieve. """ return self._outgoing.get_nowait(max_bytes) class MemoryReceiveStream(ReceiveStream, metaclass=_util.Final): """An in-memory :class:`~trio.abc.ReceiveStream`. Args: receive_some_hook: An async function, or None. Called from :meth:`receive_some`. Can do whatever you like. close_hook: A synchronous function, or None. Called from :meth:`close` and :meth:`aclose`. Can do whatever you like. .. attribute:: receive_some_hook close_hook Both hooks are also exposed as attributes on the object, and you can change them at any time. """ def __init__(self, receive_some_hook=None, close_hook=None): self._conflict_detector = _util.ConflictDetector( "another task is using this stream" ) self._incoming = _UnboundedByteQueue() self._closed = False self.receive_some_hook = receive_some_hook self.close_hook = close_hook async def receive_some(self, max_bytes=None): """Calls the :attr:`receive_some_hook` (if any), and then retrieves data from the internal buffer, blocking if necessary. """ # Execute two checkpoints so we have more of a chance to detect # buggy user code that calls this twice at the same time. with self._conflict_detector: await _core.checkpoint() await _core.checkpoint() if self._closed: raise _core.ClosedResourceError if self.receive_some_hook is not None: await self.receive_some_hook() # self._incoming's closure state tracks whether we got an EOF. 
# self._closed tracks whether we, ourselves, are closed. # self.close() sends an EOF to wake us up and sets self._closed, # so after we wake up we have to check self._closed again. data = await self._incoming.get(max_bytes) if self._closed: raise _core.ClosedResourceError return data def close(self): """Discards any pending data from the internal buffer, and marks this stream as closed. """ self._closed = True self._incoming.close_and_wipe() if self.close_hook is not None: self.close_hook() async def aclose(self): """Same as :meth:`close`, but async.""" self.close() await _core.checkpoint() def put_data(self, data): """Appends the given data to the internal buffer.""" self._incoming.put(data) def put_eof(self): """Adds an end-of-file marker to the internal buffer.""" self._incoming.close() def memory_stream_pump(memory_send_stream, memory_receive_stream, *, max_bytes=None): """Take data out of the given :class:`MemorySendStream`'s internal buffer, and put it into the given :class:`MemoryReceiveStream`'s internal buffer. Args: memory_send_stream (MemorySendStream): The stream to get data from. memory_receive_stream (MemoryReceiveStream): The stream to put data into. max_bytes (int or None): The maximum amount of data to transfer in this call, or None to transfer all available data. Returns: True if it successfully transferred some data, or False if there was no data to transfer. This is used to implement :func:`memory_stream_one_way_pair` and :func:`memory_stream_pair`; see the latter's docstring for an example of how you might use it yourself. """ try: data = memory_send_stream.get_data_nowait(max_bytes) except _core.WouldBlock: return False try: if not data: memory_receive_stream.put_eof() else: memory_receive_stream.put_data(data) except _core.ClosedResourceError: raise _core.BrokenResourceError("MemoryReceiveStream was closed") return True def memory_stream_one_way_pair(): """Create a connected, pure-Python, unidirectional stream with infinite buffering and flexible configuration options. You can think of this as being a no-operating-system-involved Trio-streamsified version of :func:`os.pipe` (except that :func:`os.pipe` returns the streams in the wrong order – we follow the superior convention that data flows from left to right). Returns: A tuple (:class:`MemorySendStream`, :class:`MemoryReceiveStream`), where the :class:`MemorySendStream` has its hooks set up so that it calls :func:`memory_stream_pump` from its :attr:`~MemorySendStream.send_all_hook` and :attr:`~MemorySendStream.close_hook`. The end result is that data automatically flows from the :class:`MemorySendStream` to the :class:`MemoryReceiveStream`. But you're also free to rearrange things however you like. For example, you can temporarily set the :attr:`~MemorySendStream.send_all_hook` to None if you want to simulate a stall in data transmission. Or see :func:`memory_stream_pair` for a more elaborate example. 
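
    For example, the default wiring moves data automatically (a sketch)::

        send_stream, receive_stream = memory_stream_one_way_pair()
        await send_stream.send_all(b"123")
        assert await receive_stream.receive_some(10) == b"123"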
""" send_stream = MemorySendStream() recv_stream = MemoryReceiveStream() def pump_from_send_stream_to_recv_stream(): memory_stream_pump(send_stream, recv_stream) async def async_pump_from_send_stream_to_recv_stream(): pump_from_send_stream_to_recv_stream() send_stream.send_all_hook = async_pump_from_send_stream_to_recv_stream send_stream.close_hook = pump_from_send_stream_to_recv_stream return send_stream, recv_stream def _make_stapled_pair(one_way_pair): pipe1_send, pipe1_recv = one_way_pair() pipe2_send, pipe2_recv = one_way_pair() stream1 = StapledStream(pipe1_send, pipe2_recv) stream2 = StapledStream(pipe2_send, pipe1_recv) return stream1, stream2 def memory_stream_pair(): """Create a connected, pure-Python, bidirectional stream with infinite buffering and flexible configuration options. This is a convenience function that creates two one-way streams using :func:`memory_stream_one_way_pair`, and then uses :class:`~trio.StapledStream` to combine them into a single bidirectional stream. This is like a no-operating-system-involved, Trio-streamsified version of :func:`socket.socketpair`. Returns: A pair of :class:`~trio.StapledStream` objects that are connected so that data automatically flows from one to the other in both directions. After creating a stream pair, you can send data back and forth, which is enough for simple tests:: left, right = memory_stream_pair() await left.send_all(b"123") assert await right.receive_some() == b"123" await right.send_all(b"456") assert await left.receive_some() == b"456" But if you read the docs for :class:`~trio.StapledStream` and :func:`memory_stream_one_way_pair`, you'll see that all the pieces involved in wiring this up are public APIs, so you can adjust to suit the requirements of your tests. For example, here's how to tweak a stream so that data flowing from left to right trickles in one byte at a time (but data flowing from right to left proceeds at full speed):: left, right = memory_stream_pair() async def trickle(): # left is a StapledStream, and left.send_stream is a MemorySendStream # right is a StapledStream, and right.recv_stream is a MemoryReceiveStream while memory_stream_pump(left.send_stream, right.recv_stream, max_bytes=1): # Pause between each byte await trio.sleep(1) # Normally this send_all_hook calls memory_stream_pump directly without # passing in a max_bytes. We replace it with our custom version: left.send_stream.send_all_hook = trickle And here's a simple test using our modified stream objects:: async def sender(): await left.send_all(b"12345") await left.send_eof() async def receiver(): async for data in right: print(data) async with trio.open_nursery() as nursery: nursery.start_soon(sender) nursery.start_soon(receiver) By default, this will print ``b"12345"`` and then immediately exit; with our trickle stream it instead sleeps 1 second, then prints ``b"1"``, then sleeps 1 second, then prints ``b"2"``, etc. Pro-tip: you can insert sleep calls (like in our example above) to manipulate the flow of data across tasks... and then use :class:`MockClock` and its :attr:`~MockClock.autojump_threshold` functionality to keep your test suite running quickly. If you want to stress test a protocol implementation, one nice trick is to use the :mod:`random` module (preferably with a fixed seed) to move random numbers of bytes at a time, and insert random sleeps in between them. You can also set up a custom :attr:`~MemoryReceiveStream.receive_some_hook` if you want to manipulate things on the receiving side, and not just the sending side. 
""" return _make_stapled_pair(memory_stream_one_way_pair) ################################################################ # In-memory streams - Lockstep version ################################################################ class _LockstepByteQueue: def __init__(self): self._data = bytearray() self._sender_closed = False self._receiver_closed = False self._receiver_waiting = False self._waiters = _core.ParkingLot() self._send_conflict_detector = _util.ConflictDetector( "another task is already sending" ) self._receive_conflict_detector = _util.ConflictDetector( "another task is already receiving" ) def _something_happened(self): self._waiters.unpark_all() # Always wakes up when one side is closed, because everyone always reacts # to that. async def _wait_for(self, fn): while True: if fn(): break if self._sender_closed or self._receiver_closed: break await self._waiters.park() await _core.checkpoint() def close_sender(self): self._sender_closed = True self._something_happened() def close_receiver(self): self._receiver_closed = True self._something_happened() async def send_all(self, data): with self._send_conflict_detector: if self._sender_closed: raise _core.ClosedResourceError if self._receiver_closed: raise _core.BrokenResourceError assert not self._data self._data += data self._something_happened() await self._wait_for(lambda: not self._data) if self._sender_closed: raise _core.ClosedResourceError if self._data and self._receiver_closed: raise _core.BrokenResourceError async def wait_send_all_might_not_block(self): with self._send_conflict_detector: if self._sender_closed: raise _core.ClosedResourceError if self._receiver_closed: await _core.checkpoint() return await self._wait_for(lambda: self._receiver_waiting) if self._sender_closed: raise _core.ClosedResourceError async def receive_some(self, max_bytes=None): with self._receive_conflict_detector: # Argument validation if max_bytes is not None: max_bytes = operator.index(max_bytes) if max_bytes < 1: raise ValueError("max_bytes must be >= 1") # State validation if self._receiver_closed: raise _core.ClosedResourceError # Wake wait_send_all_might_not_block and wait for data self._receiver_waiting = True self._something_happened() try: await self._wait_for(lambda: self._data) finally: self._receiver_waiting = False if self._receiver_closed: raise _core.ClosedResourceError # Get data, possibly waking send_all if self._data: # Neat trick: if max_bytes is None, then obj[:max_bytes] is # the same as obj[:]. got = self._data[:max_bytes] del self._data[:max_bytes] self._something_happened() return got else: assert self._sender_closed return b"" class _LockstepSendStream(SendStream): def __init__(self, lbq): self._lbq = lbq def close(self): self._lbq.close_sender() async def aclose(self): self.close() await _core.checkpoint() async def send_all(self, data): await self._lbq.send_all(data) async def wait_send_all_might_not_block(self): await self._lbq.wait_send_all_might_not_block() class _LockstepReceiveStream(ReceiveStream): def __init__(self, lbq): self._lbq = lbq def close(self): self._lbq.close_receiver() async def aclose(self): self.close() await _core.checkpoint() async def receive_some(self, max_bytes=None): return await self._lbq.receive_some(max_bytes) def lockstep_stream_one_way_pair(): """Create a connected, pure Python, unidirectional stream where data flows in lockstep. Returns: A tuple (:class:`~trio.abc.SendStream`, :class:`~trio.abc.ReceiveStream`). This stream has *absolutely no* buffering. 
Each call to :meth:`~trio.abc.SendStream.send_all` will block until all the given data has been returned by a call to :meth:`~trio.abc.ReceiveStream.receive_some`. This can be useful for testing flow control mechanisms in an extreme case, or for setting up "clogged" streams to use with :func:`check_one_way_stream` and friends. In addition to fulfilling the :class:`~trio.abc.SendStream` and :class:`~trio.abc.ReceiveStream` interfaces, the return objects also have a synchronous ``close`` method. """ lbq = _LockstepByteQueue() return _LockstepSendStream(lbq), _LockstepReceiveStream(lbq) def lockstep_stream_pair(): """Create a connected, pure-Python, bidirectional stream where data flows in lockstep. Returns: A tuple (:class:`~trio.StapledStream`, :class:`~trio.StapledStream`). This is a convenience function that creates two one-way streams using :func:`lockstep_stream_one_way_pair`, and then uses :class:`~trio.StapledStream` to combine them into a single bidirectional stream. """ return _make_stapled_pair(lockstep_stream_one_way_pair) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/testing/_network.py0000644000076500000000000000213400000000000016754 0ustar00quentinwheelfrom .. import socket as tsocket from .._highlevel_socket import SocketStream async def open_stream_to_socket_listener(socket_listener): """Connect to the given :class:`~trio.SocketListener`. This is particularly useful in tests when you want to let a server pick its own port, and then connect to it:: listeners = await trio.open_tcp_listeners(0) client = await trio.testing.open_stream_to_socket_listener(listeners[0]) Args: socket_listener (~trio.SocketListener): The :class:`~trio.SocketListener` to connect to. Returns: SocketStream: a stream connected to the given listener. """ family = socket_listener.socket.family sockaddr = socket_listener.socket.getsockname() if family in (tsocket.AF_INET, tsocket.AF_INET6): sockaddr = list(sockaddr) if sockaddr[0] == "0.0.0.0": sockaddr[0] = "127.0.0.1" if sockaddr[0] == "::": sockaddr[0] = "::1" sockaddr = tuple(sockaddr) sock = tsocket.socket(family=family) await sock.connect(sockaddr) return SocketStream(sock) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/testing/_sequencer.py0000644000076500000000000000513200000000000017256 0ustar00quentinwheelfrom collections import defaultdict import attr from async_generator import asynccontextmanager from .. import _core from .. import _util from .. import Event if False: from typing import DefaultDict, Set @attr.s(eq=False, hash=False) class Sequencer(metaclass=_util.Final): """A convenience class for forcing code in different tasks to run in an explicit linear order. Instances of this class implement a ``__call__`` method which returns an async context manager. The idea is that you pass a sequence number to ``__call__`` to say where this block of code should go in the linear sequence. Block 0 starts immediately, and then block N doesn't start until block N-1 has finished. 
Example: An extremely elaborate way to print the numbers 0-5, in order:: async def worker1(seq): async with seq(0): print(0) async with seq(4): print(4) async def worker2(seq): async with seq(2): print(2) async with seq(5): print(5) async def worker3(seq): async with seq(1): print(1) async with seq(3): print(3) async def main(): seq = trio.testing.Sequencer() async with trio.open_nursery() as nursery: nursery.start_soon(worker1, seq) nursery.start_soon(worker2, seq) nursery.start_soon(worker3, seq) """ _sequence_points = attr.ib( factory=lambda: defaultdict(Event), init=False ) # type: DefaultDict[int, Event] _claimed = attr.ib(factory=set, init=False) # type: Set[int] _broken = attr.ib(default=False, init=False) @asynccontextmanager async def __call__(self, position: int): if position in self._claimed: raise RuntimeError("Attempted to re-use sequence point {}".format(position)) if self._broken: raise RuntimeError("sequence broken!") self._claimed.add(position) if position != 0: try: await self._sequence_points[position].wait() except _core.Cancelled: self._broken = True for event in self._sequence_points.values(): event.set() raise RuntimeError("Sequencer wait cancelled -- sequence broken") else: if self._broken: raise RuntimeError("sequence broken!") try: yield finally: self._sequence_points[position + 1].set() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/testing/_trio_test.py0000644000076500000000000000147400000000000017305 0ustar00quentinwheelfrom functools import wraps, partial from .. import _core from ..abc import Clock, Instrument # Use: # # @trio_test # async def test_whatever(): # await ... # # Also: if a pytest fixture is passed in that subclasses the Clock abc, then # that clock is passed to trio.run(). def trio_test(fn): @wraps(fn) def wrapper(**kwargs): __tracebackhide__ = True clocks = [c for c in kwargs.values() if isinstance(c, Clock)] if not clocks: clock = None elif len(clocks) == 1: clock = clocks[0] else: raise ValueError("too many clocks spoil the broth!") instruments = [i for i in kwargs.values() if isinstance(i, Instrument)] return _core.run(partial(fn, **kwargs), clock=clock, instruments=instruments) return wrapper ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1623751952.8713932 trio-0.19.0/trio/tests/0000755000076500000000000000000000000000000014237 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/__init__.py0000644000076500000000000000000000000000000016336 0ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/conftest.py0000644000076500000000000000212200000000000016433 0ustar00quentinwheel# XX this does not belong here -- b/c it's here, these things only apply to # the tests in trio/_core/tests, not in trio/tests. For now there's some # copy-paste... 
# # this stuff should become a proper pytest plugin import pytest import inspect from ..testing import trio_test, MockClock RUN_SLOW = True def pytest_addoption(parser): parser.addoption("--run-slow", action="store_true", help="run slow tests") def pytest_configure(config): global RUN_SLOW RUN_SLOW = config.getoption("--run-slow", True) @pytest.fixture def mock_clock(): return MockClock() @pytest.fixture def autojump_clock(): return MockClock(autojump_threshold=0) # FIXME: split off into a package (or just make part of Trio's public # interface?), with config file to enable? and I guess a mark option too; I # guess it's useful with the class- and file-level marking machinery (where # the raw @trio_test decorator isn't enough). @pytest.hookimpl(tryfirst=True) def pytest_pyfunc_call(pyfuncitem): if inspect.iscoroutinefunction(pyfuncitem.obj): pyfuncitem.obj = trio_test(pyfuncitem.obj) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/module_with_deprecations.py0000644000076500000000000000113100000000000021665 0ustar00quentinwheelregular = "hi" from .. import _deprecate _deprecate.enable_attribute_deprecations(__name__) # Make sure that we don't trigger infinite recursion when accessing module # attributes in between calling enable_attribute_deprecations and defining # __deprecated_attributes__: import sys this_mod = sys.modules[__name__] assert this_mod.regular == "hi" assert not hasattr(this_mod, "dep1") __deprecated_attributes__ = { "dep1": _deprecate.DeprecatedAttribute("value1", "1.1", issue=1), "dep2": _deprecate.DeprecatedAttribute( "value2", "1.2", issue=1, instead="instead-string" ), } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_abc.py0000644000076500000000000000243600000000000016402 0ustar00quentinwheelimport pytest import attr from ..testing import assert_checkpoints from .. import abc as tabc async def test_AsyncResource_defaults(): @attr.s class MyAR(tabc.AsyncResource): record = attr.ib(factory=list) async def aclose(self): self.record.append("ac") async with MyAR() as myar: assert isinstance(myar, MyAR) assert myar.record == [] assert myar.record == ["ac"] def test_abc_generics(): # Pythons below 3.5.2 had a typing.Generic that would throw # errors when instantiating or subclassing a parameterized # version of a class with any __slots__. This is why RunVar # (which has slots) is not generic. This tests that # the generic ABCs are fine, because while they are slotted # they don't actually define any slots. 
class SlottedChannel(tabc.SendChannel[tabc.Stream]): __slots__ = ("x",) def send_nowait(self, value): raise RuntimeError async def send(self, value): raise RuntimeError # pragma: no cover def clone(self): raise RuntimeError # pragma: no cover async def aclose(self): pass # pragma: no cover channel = SlottedChannel() with pytest.raises(RuntimeError): channel.send_nowait(None) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_channel.py0000644000076500000000000002765200000000000017274 0ustar00quentinwheelimport pytest from ..testing import wait_all_tasks_blocked, assert_checkpoints import trio from trio import open_memory_channel, EndOfChannel async def test_channel(): with pytest.raises(TypeError): open_memory_channel(1.0) with pytest.raises(ValueError): open_memory_channel(-1) s, r = open_memory_channel(2) repr(s) # smoke test repr(r) # smoke test s.send_nowait(1) with assert_checkpoints(): await s.send(2) with pytest.raises(trio.WouldBlock): s.send_nowait(None) with assert_checkpoints(): assert await r.receive() == 1 assert r.receive_nowait() == 2 with pytest.raises(trio.WouldBlock): r.receive_nowait() s.send_nowait("last") await s.aclose() with pytest.raises(trio.ClosedResourceError): await s.send("too late") with pytest.raises(trio.ClosedResourceError): s.send_nowait("too late") with pytest.raises(trio.ClosedResourceError): s.clone() await s.aclose() assert r.receive_nowait() == "last" with pytest.raises(EndOfChannel): await r.receive() await r.aclose() with pytest.raises(trio.ClosedResourceError): await r.receive() with pytest.raises(trio.ClosedResourceError): await r.receive_nowait() await r.aclose() async def test_553(autojump_clock): s, r = open_memory_channel(1) with trio.move_on_after(10) as timeout_scope: await r.receive() assert timeout_scope.cancelled_caught await s.send("Test for PR #553") async def test_channel_multiple_producers(): async def producer(send_channel, i): # We close our handle when we're done with it async with send_channel: for j in range(3 * i, 3 * (i + 1)): await send_channel.send(j) send_channel, receive_channel = open_memory_channel(0) async with trio.open_nursery() as nursery: # We hand out clones to all the new producers, and then close the # original. 
async with send_channel: for i in range(10): nursery.start_soon(producer, send_channel.clone(), i) got = [] async for value in receive_channel: got.append(value) got.sort() assert got == list(range(30)) async def test_channel_multiple_consumers(): successful_receivers = set() received = [] async def consumer(receive_channel, i): async for value in receive_channel: successful_receivers.add(i) received.append(value) async with trio.open_nursery() as nursery: send_channel, receive_channel = trio.open_memory_channel(1) async with send_channel: for i in range(5): nursery.start_soon(consumer, receive_channel, i) await wait_all_tasks_blocked() for i in range(10): await send_channel.send(i) assert successful_receivers == set(range(5)) assert len(received) == 10 assert set(received) == set(range(10)) async def test_close_basics(): async def send_block(s, expect): with pytest.raises(expect): await s.send(None) # closing send -> other send gets ClosedResourceError s, r = open_memory_channel(0) async with trio.open_nursery() as nursery: nursery.start_soon(send_block, s, trio.ClosedResourceError) await wait_all_tasks_blocked() await s.aclose() # and it's persistent with pytest.raises(trio.ClosedResourceError): s.send_nowait(None) with pytest.raises(trio.ClosedResourceError): await s.send(None) # and receive gets EndOfChannel with pytest.raises(EndOfChannel): r.receive_nowait() with pytest.raises(EndOfChannel): await r.receive() # closing receive -> send gets BrokenResourceError s, r = open_memory_channel(0) async with trio.open_nursery() as nursery: nursery.start_soon(send_block, s, trio.BrokenResourceError) await wait_all_tasks_blocked() await r.aclose() # and it's persistent with pytest.raises(trio.BrokenResourceError): s.send_nowait(None) with pytest.raises(trio.BrokenResourceError): await s.send(None) # closing receive -> other receive gets ClosedResourceError async def receive_block(r): with pytest.raises(trio.ClosedResourceError): await r.receive() s, r = open_memory_channel(0) async with trio.open_nursery() as nursery: nursery.start_soon(receive_block, r) await wait_all_tasks_blocked() await r.aclose() # and it's persistent with pytest.raises(trio.ClosedResourceError): r.receive_nowait() with pytest.raises(trio.ClosedResourceError): await r.receive() async def test_close_sync(): async def send_block(s, expect): with pytest.raises(expect): await s.send(None) # closing send -> other send gets ClosedResourceError s, r = open_memory_channel(0) async with trio.open_nursery() as nursery: nursery.start_soon(send_block, s, trio.ClosedResourceError) await wait_all_tasks_blocked() s.close() # and it's persistent with pytest.raises(trio.ClosedResourceError): s.send_nowait(None) with pytest.raises(trio.ClosedResourceError): await s.send(None) # and receive gets EndOfChannel with pytest.raises(EndOfChannel): r.receive_nowait() with pytest.raises(EndOfChannel): await r.receive() # closing receive -> send gets BrokenResourceError s, r = open_memory_channel(0) async with trio.open_nursery() as nursery: nursery.start_soon(send_block, s, trio.BrokenResourceError) await wait_all_tasks_blocked() r.close() # and it's persistent with pytest.raises(trio.BrokenResourceError): s.send_nowait(None) with pytest.raises(trio.BrokenResourceError): await s.send(None) # closing receive -> other receive gets ClosedResourceError async def receive_block(r): with pytest.raises(trio.ClosedResourceError): await r.receive() s, r = open_memory_channel(0) async with trio.open_nursery() as nursery: nursery.start_soon(receive_block, r) 
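        # wait_all_tasks_blocked() parks us until receive_block is actually
        # suspended inside receive(), so the synchronous close() below
        # provably wakes a *blocked* receiver rather than racing with one
        # that has not started waiting yet.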
await wait_all_tasks_blocked() r.close() # and it's persistent with pytest.raises(trio.ClosedResourceError): r.receive_nowait() with pytest.raises(trio.ClosedResourceError): await r.receive() async def test_receive_channel_clone_and_close(): s, r = open_memory_channel(10) r2 = r.clone() r3 = r.clone() s.send_nowait(None) await r.aclose() with r2: pass with pytest.raises(trio.ClosedResourceError): r.clone() with pytest.raises(trio.ClosedResourceError): r2.clone() # Can still send, r3 is still open s.send_nowait(None) await r3.aclose() # But now the receiver is really closed with pytest.raises(trio.BrokenResourceError): s.send_nowait(None) async def test_close_multiple_send_handles(): # With multiple send handles, closing one handle only wakes senders on # that handle, but others can continue just fine s1, r = open_memory_channel(0) s2 = s1.clone() async def send_will_close(): with pytest.raises(trio.ClosedResourceError): await s1.send("nope") async def send_will_succeed(): await s2.send("ok") async with trio.open_nursery() as nursery: nursery.start_soon(send_will_close) nursery.start_soon(send_will_succeed) await wait_all_tasks_blocked() await s1.aclose() assert await r.receive() == "ok" async def test_close_multiple_receive_handles(): # With multiple receive handles, closing one handle only wakes receivers on # that handle, but others can continue just fine s, r1 = open_memory_channel(0) r2 = r1.clone() async def receive_will_close(): with pytest.raises(trio.ClosedResourceError): await r1.receive() async def receive_will_succeed(): assert await r2.receive() == "ok" async with trio.open_nursery() as nursery: nursery.start_soon(receive_will_close) nursery.start_soon(receive_will_succeed) await wait_all_tasks_blocked() await r1.aclose() await s.send("ok") async def test_inf_capacity(): s, r = open_memory_channel(float("inf")) # It's accepted, and we can send all day without blocking with s: for i in range(10): s.send_nowait(i) got = [] async for i in r: got.append(i) assert got == list(range(10)) async def test_statistics(): s, r = open_memory_channel(2) assert s.statistics() == r.statistics() stats = s.statistics() assert stats.current_buffer_used == 0 assert stats.max_buffer_size == 2 assert stats.open_send_channels == 1 assert stats.open_receive_channels == 1 assert stats.tasks_waiting_send == 0 assert stats.tasks_waiting_receive == 0 s.send_nowait(None) assert s.statistics().current_buffer_used == 1 s2 = s.clone() assert s.statistics().open_send_channels == 2 await s.aclose() assert s2.statistics().open_send_channels == 1 r2 = r.clone() assert s2.statistics().open_receive_channels == 2 await r2.aclose() assert s2.statistics().open_receive_channels == 1 async with trio.open_nursery() as nursery: s2.send_nowait(None) # fill up the buffer assert s.statistics().current_buffer_used == 2 nursery.start_soon(s2.send, None) nursery.start_soon(s2.send, None) await wait_all_tasks_blocked() assert s.statistics().tasks_waiting_send == 2 nursery.cancel_scope.cancel() assert s.statistics().tasks_waiting_send == 0 # empty out the buffer again try: while True: r.receive_nowait() except trio.WouldBlock: pass async with trio.open_nursery() as nursery: nursery.start_soon(r.receive) await wait_all_tasks_blocked() assert s.statistics().tasks_waiting_receive == 1 nursery.cancel_scope.cancel() assert s.statistics().tasks_waiting_receive == 0 async def test_channel_fairness(): # We can remove an item we just sent, and send an item back in after, if # no-one else is waiting. 
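    # Put differently: the channel is fair. A task may immediately reclaim a
    # slot or item it just freed only when no other task is already queued
    # for it; the two halves of this test check the receive side and the
    # send side of that guarantee respectively.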
s, r = open_memory_channel(1) s.send_nowait(1) assert r.receive_nowait() == 1 s.send_nowait(2) assert r.receive_nowait() == 2 # But if someone else is waiting to receive, then they "own" the item we # send, so we can't receive it (even though we run first): result = None async def do_receive(r): nonlocal result result = await r.receive() async with trio.open_nursery() as nursery: nursery.start_soon(do_receive, r) await wait_all_tasks_blocked() s.send_nowait(2) with pytest.raises(trio.WouldBlock): r.receive_nowait() assert result == 2 # And the analogous situation for send: if we free up a space, we can't # immediately send something in it if someone is already waiting to do # that s, r = open_memory_channel(1) s.send_nowait(1) with pytest.raises(trio.WouldBlock): s.send_nowait(None) async with trio.open_nursery() as nursery: nursery.start_soon(s.send, 2) await wait_all_tasks_blocked() assert r.receive_nowait() == 1 with pytest.raises(trio.WouldBlock): s.send_nowait(3) assert (await r.receive()) == 2 async def test_unbuffered(): s, r = open_memory_channel(0) with pytest.raises(trio.WouldBlock): r.receive_nowait() with pytest.raises(trio.WouldBlock): s.send_nowait(1) async def do_send(s, v): with assert_checkpoints(): await s.send(v) async with trio.open_nursery() as nursery: nursery.start_soon(do_send, s, 1) with assert_checkpoints(): assert await r.receive() == 1 with pytest.raises(trio.WouldBlock): r.receive_nowait() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_deprecate.py0000644000076500000000000001454100000000000017611 0ustar00quentinwheelimport pytest import inspect import warnings from .._deprecate import ( TrioDeprecationWarning, warn_deprecated, deprecated, deprecated_alias, ) from . 
import module_with_deprecations @pytest.fixture def recwarn_always(recwarn): warnings.simplefilter("always") # ResourceWarnings about unclosed sockets can occur nondeterministically # (during GC) which throws off the tests in this file warnings.simplefilter("ignore", ResourceWarning) return recwarn def _here(): info = inspect.getframeinfo(inspect.currentframe().f_back) return (info.filename, info.lineno) def test_warn_deprecated(recwarn_always): def deprecated_thing(): warn_deprecated("ice", "1.2", issue=1, instead="water") deprecated_thing() filename, lineno = _here() assert len(recwarn_always) == 1 got = recwarn_always.pop(TrioDeprecationWarning) assert "ice is deprecated" in got.message.args[0] assert "Trio 1.2" in got.message.args[0] assert "water instead" in got.message.args[0] assert "/issues/1" in got.message.args[0] assert got.filename == filename assert got.lineno == lineno - 1 def test_warn_deprecated_no_instead_or_issue(recwarn_always): # Explicitly no instead or issue warn_deprecated("water", "1.3", issue=None, instead=None) assert len(recwarn_always) == 1 got = recwarn_always.pop(TrioDeprecationWarning) assert "water is deprecated" in got.message.args[0] assert "no replacement" in got.message.args[0] assert "Trio 1.3" in got.message.args[0] def test_warn_deprecated_stacklevel(recwarn_always): def nested1(): nested2() def nested2(): warn_deprecated("x", "1.3", issue=7, instead="y", stacklevel=3) filename, lineno = _here() nested1() got = recwarn_always.pop(TrioDeprecationWarning) assert got.filename == filename assert got.lineno == lineno + 1 def old(): # pragma: no cover pass def new(): # pragma: no cover pass def test_warn_deprecated_formatting(recwarn_always): warn_deprecated(old, "1.0", issue=1, instead=new) got = recwarn_always.pop(TrioDeprecationWarning) assert "test_deprecate.old is deprecated" in got.message.args[0] assert "test_deprecate.new instead" in got.message.args[0] @deprecated("1.5", issue=123, instead=new) def deprecated_old(): return 3 def test_deprecated_decorator(recwarn_always): assert deprecated_old() == 3 got = recwarn_always.pop(TrioDeprecationWarning) assert "test_deprecate.deprecated_old is deprecated" in got.message.args[0] assert "1.5" in got.message.args[0] assert "test_deprecate.new" in got.message.args[0] assert "issues/123" in got.message.args[0] class Foo: @deprecated("1.0", issue=123, instead="crying") def method(self): return 7 def test_deprecated_decorator_method(recwarn_always): f = Foo() assert f.method() == 7 got = recwarn_always.pop(TrioDeprecationWarning) assert "test_deprecate.Foo.method is deprecated" in got.message.args[0] @deprecated("1.2", thing="the thing", issue=None, instead=None) def deprecated_with_thing(): return 72 def test_deprecated_decorator_with_explicit_thing(recwarn_always): assert deprecated_with_thing() == 72 got = recwarn_always.pop(TrioDeprecationWarning) assert "the thing is deprecated" in got.message.args[0] def new_hotness(): return "new hotness" old_hotness = deprecated_alias("old_hotness", new_hotness, "1.23", issue=1) def test_deprecated_alias(recwarn_always): assert old_hotness() == "new hotness" got = recwarn_always.pop(TrioDeprecationWarning) assert "test_deprecate.old_hotness is deprecated" in got.message.args[0] assert "1.23" in got.message.args[0] assert "test_deprecate.new_hotness instead" in got.message.args[0] assert "issues/1" in got.message.args[0] assert ".. 
deprecated:: 1.23" in old_hotness.__doc__ assert "test_deprecate.new_hotness instead" in old_hotness.__doc__ assert "issues/1>`__" in old_hotness.__doc__ class Alias: def new_hotness_method(self): return "new hotness method" old_hotness_method = deprecated_alias( "Alias.old_hotness_method", new_hotness_method, "3.21", issue=1 ) def test_deprecated_alias_method(recwarn_always): obj = Alias() assert obj.old_hotness_method() == "new hotness method" got = recwarn_always.pop(TrioDeprecationWarning) msg = got.message.args[0] assert "test_deprecate.Alias.old_hotness_method is deprecated" in msg assert "test_deprecate.Alias.new_hotness_method instead" in msg @deprecated("2.1", issue=1, instead="hi") def docstring_test1(): # pragma: no cover """Hello!""" @deprecated("2.1", issue=None, instead="hi") def docstring_test2(): # pragma: no cover """Hello!""" @deprecated("2.1", issue=1, instead=None) def docstring_test3(): # pragma: no cover """Hello!""" @deprecated("2.1", issue=None, instead=None) def docstring_test4(): # pragma: no cover """Hello!""" def test_deprecated_docstring_munging(): assert ( docstring_test1.__doc__ == """Hello! .. deprecated:: 2.1 Use hi instead. For details, see `issue #1 `__. """ ) assert ( docstring_test2.__doc__ == """Hello! .. deprecated:: 2.1 Use hi instead. """ ) assert ( docstring_test3.__doc__ == """Hello! .. deprecated:: 2.1 For details, see `issue #1 `__. """ ) assert ( docstring_test4.__doc__ == """Hello! .. deprecated:: 2.1 """ ) def test_module_with_deprecations(recwarn_always): assert module_with_deprecations.regular == "hi" assert len(recwarn_always) == 0 filename, lineno = _here() assert module_with_deprecations.dep1 == "value1" got = recwarn_always.pop(TrioDeprecationWarning) assert got.filename == filename assert got.lineno == lineno + 1 assert "module_with_deprecations.dep1" in got.message.args[0] assert "Trio 1.1" in got.message.args[0] assert "/issues/1" in got.message.args[0] assert "value1 instead" in got.message.args[0] assert module_with_deprecations.dep2 == "value2" got = recwarn_always.pop(TrioDeprecationWarning) assert "instead-string instead" in got.message.args[0] with pytest.raises(AttributeError): module_with_deprecations.asdf ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_exports.py0000644000076500000000000001270600000000000017362 0ustar00quentinwheelimport re import sys import importlib import types import inspect import enum import pytest import trio import trio.testing from .. import _core from .. 
import _util def test_core_is_properly_reexported(): # Each export from _core should be re-exported by exactly one of these # three modules: sources = [trio, trio.lowlevel, trio.testing] for symbol in dir(_core): if symbol.startswith("_") or symbol == "tests": continue found = 0 for source in sources: if symbol in dir(source) and getattr(source, symbol) is getattr( _core, symbol ): found += 1 print(symbol, found) assert found == 1 def public_modules(module): yield module for name, class_ in module.__dict__.items(): if name.startswith("_"): # pragma: no cover continue if not isinstance(class_, types.ModuleType): continue if not class_.__name__.startswith(module.__name__): # pragma: no cover continue if class_ is module: continue # We should rename the trio.tests module (#274), but until then we use # a special-case hack: if class_.__name__ == "trio.tests": continue yield from public_modules(class_) PUBLIC_MODULES = list(public_modules(trio)) PUBLIC_MODULE_NAMES = [m.__name__ for m in PUBLIC_MODULES] # It doesn't make sense for downstream redistributors to run this test, since # they might be using a newer version of Python with additional symbols which # won't be reflected in trio.socket, and this shouldn't cause downstream test # runs to start failing. @pytest.mark.redistributors_should_skip # pylint/jedi often have trouble with alpha releases, where Python's internals # are in flux, grammar may not have settled down, etc. @pytest.mark.skipif( sys.version_info.releaselevel == "alpha", reason="skip static introspection tools on Python dev/alpha releases", ) @pytest.mark.filterwarnings( # https://github.com/PyCQA/astroid/issues/681 "ignore:the imp module is deprecated.*:DeprecationWarning" ) @pytest.mark.parametrize("modname", PUBLIC_MODULE_NAMES) @pytest.mark.parametrize("tool", ["pylint", "jedi"]) @pytest.mark.filterwarnings( "ignore:" + re.escape( "The distutils package is deprecated and slated for removal in Python 3.12. " "Use setuptools or check PEP 632 for potential alternatives" ) + ":DeprecationWarning", "ignore:" + re.escape("The distutils.sysconfig module is deprecated, use sysconfig instead") + ":DeprecationWarning", ) def test_static_tool_sees_all_symbols(tool, modname): module = importlib.import_module(modname) def no_underscores(symbols): return {symbol for symbol in symbols if not symbol.startswith("_")} runtime_names = no_underscores(dir(module)) # We should rename the trio.tests module (#274), but until then we use a # special-case hack: if modname == "trio": runtime_names.remove("tests") if tool == "pylint": from pylint.lint import PyLinter linter = PyLinter() ast = linter.get_ast(module.__file__, modname) static_names = no_underscores(ast) elif tool == "jedi": import jedi # Simulate typing "import trio; trio." script = jedi.Script("import {}; {}.".format(modname, modname)) completions = script.complete() static_names = no_underscores(c.name for c in completions) else: # pragma: no cover assert False # It's expected that the static set will contain more names than the # runtime set: # - static tools are sometimes sloppy and include deleted names # - some symbols are platform-specific at runtime, but always show up in # static analysis (e.g. in trio.socket or trio.lowlevel) # So we check that the runtime names are a subset of the static names. 
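    # (Equivalently: the invariant is runtime_names <= static_names, and the
    # set difference computed next lists any offenders that the static tool
    # failed to see.)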
missing_names = runtime_names - static_names if missing_names: # pragma: no cover print("{} can't see the following names in {}:".format(tool, modname)) print() for name in sorted(missing_names): print(" {}".format(name)) assert False def test_classes_are_final(): for module in PUBLIC_MODULES: for name, class_ in module.__dict__.items(): if not isinstance(class_, type): continue # Deprecated classes are exported with a leading underscore if name.startswith("_"): # pragma: no cover continue # Abstract classes can be subclassed, because that's the whole # point of ABCs if inspect.isabstract(class_): continue # Exceptions are allowed to be subclassed, because exception # subclassing isn't used to inherit behavior. if issubclass(class_, BaseException): continue # These are classes that are conceptually abstract, but # inspect.isabstract returns False for boring reasons. if class_ in {trio.abc.Instrument, trio.socket.SocketType}: continue # Enums have their own metaclass, so we can't use our metaclasses. # And I don't think there's a lot of risk from people subclassing # enums... if issubclass(class_, enum.Enum): continue # ... insert other special cases here ... assert isinstance(class_, _util.Final) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_file_io.py0000644000076500000000000001160200000000000017256 0ustar00quentinwheelimport io import os import pytest from unittest import mock from unittest.mock import sentinel import trio from trio import _core from trio._file_io import AsyncIOWrapper, _FILE_SYNC_ATTRS, _FILE_ASYNC_METHODS @pytest.fixture def path(tmpdir): return os.fspath(tmpdir.join("test")) @pytest.fixture def wrapped(): return mock.Mock(spec_set=io.StringIO) @pytest.fixture def async_file(wrapped): return trio.wrap_file(wrapped) def test_wrap_invalid(): with pytest.raises(TypeError): trio.wrap_file(str()) def test_wrap_non_iobase(): class FakeFile: def close(self): # pragma: no cover pass def write(self): # pragma: no cover pass wrapped = FakeFile() assert not isinstance(wrapped, io.IOBase) async_file = trio.wrap_file(wrapped) assert isinstance(async_file, AsyncIOWrapper) del FakeFile.write with pytest.raises(TypeError): trio.wrap_file(FakeFile()) def test_wrapped_property(async_file, wrapped): assert async_file.wrapped is wrapped def test_dir_matches_wrapped(async_file, wrapped): attrs = _FILE_SYNC_ATTRS.union(_FILE_ASYNC_METHODS) # all supported attrs in wrapped should be available in async_file assert all(attr in dir(async_file) for attr in attrs if attr in dir(wrapped)) # all supported attrs not in wrapped should not be available in async_file assert not any( attr in dir(async_file) for attr in attrs if attr not in dir(wrapped) ) def test_unsupported_not_forwarded(): class FakeFile(io.RawIOBase): def unsupported_attr(self): # pragma: no cover pass async_file = trio.wrap_file(FakeFile()) assert hasattr(async_file.wrapped, "unsupported_attr") with pytest.raises(AttributeError): getattr(async_file, "unsupported_attr") def test_sync_attrs_forwarded(async_file, wrapped): for attr_name in _FILE_SYNC_ATTRS: if attr_name not in dir(async_file): continue assert getattr(async_file, attr_name) is getattr(wrapped, attr_name) def test_sync_attrs_match_wrapper(async_file, wrapped): for attr_name in _FILE_SYNC_ATTRS: if attr_name in dir(async_file): continue with pytest.raises(AttributeError): getattr(async_file, attr_name) with pytest.raises(AttributeError): getattr(wrapped, attr_name) def 
test_async_methods_generated_once(async_file): for meth_name in _FILE_ASYNC_METHODS: if meth_name not in dir(async_file): continue assert getattr(async_file, meth_name) is getattr(async_file, meth_name) def test_async_methods_signature(async_file): # use read as a representative of all async methods assert async_file.read.__name__ == "read" assert async_file.read.__qualname__ == "AsyncIOWrapper.read" assert "io.StringIO.read" in async_file.read.__doc__ async def test_async_methods_wrap(async_file, wrapped): for meth_name in _FILE_ASYNC_METHODS: if meth_name not in dir(async_file): continue meth = getattr(async_file, meth_name) wrapped_meth = getattr(wrapped, meth_name) value = await meth(sentinel.argument, keyword=sentinel.keyword) wrapped_meth.assert_called_once_with( sentinel.argument, keyword=sentinel.keyword ) assert value == wrapped_meth() wrapped.reset_mock() async def test_async_methods_match_wrapper(async_file, wrapped): for meth_name in _FILE_ASYNC_METHODS: if meth_name in dir(async_file): continue with pytest.raises(AttributeError): getattr(async_file, meth_name) with pytest.raises(AttributeError): getattr(wrapped, meth_name) async def test_open(path): f = await trio.open_file(path, "w") assert isinstance(f, AsyncIOWrapper) await f.aclose() async def test_open_context_manager(path): async with await trio.open_file(path, "w") as f: assert isinstance(f, AsyncIOWrapper) assert not f.closed assert f.closed async def test_async_iter(): async_file = trio.wrap_file(io.StringIO("test\nfoo\nbar")) expected = list(async_file.wrapped) result = [] async_file.wrapped.seek(0) async for line in async_file: result.append(line) assert result == expected async def test_aclose_cancelled(path): with _core.CancelScope() as cscope: f = await trio.open_file(path, "w") cscope.cancel() with pytest.raises(_core.Cancelled): await f.write("a") with pytest.raises(_core.Cancelled): await f.aclose() assert f.closed async def test_detach_rewraps_asynciobase(): raw = io.BytesIO() buffered = io.BufferedReader(raw) async_file = trio.wrap_file(buffered) detached = await async_file.detach() assert isinstance(detached, AsyncIOWrapper) assert detached.wrapped is raw ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_highlevel_generic.py0000644000076500000000000000513300000000000021315 0ustar00quentinwheelimport pytest import attr from ..abc import SendStream, ReceiveStream from .._highlevel_generic import StapledStream @attr.s class RecordSendStream(SendStream): record = attr.ib(factory=list) async def send_all(self, data): self.record.append(("send_all", data)) async def wait_send_all_might_not_block(self): self.record.append("wait_send_all_might_not_block") async def aclose(self): self.record.append("aclose") @attr.s class RecordReceiveStream(ReceiveStream): record = attr.ib(factory=list) async def receive_some(self, max_bytes=None): self.record.append(("receive_some", max_bytes)) async def aclose(self): self.record.append("aclose") async def test_StapledStream(): send_stream = RecordSendStream() receive_stream = RecordReceiveStream() stapled = StapledStream(send_stream, receive_stream) assert stapled.send_stream is send_stream assert stapled.receive_stream is receive_stream await stapled.send_all(b"foo") await stapled.wait_send_all_might_not_block() assert send_stream.record == [ ("send_all", b"foo"), "wait_send_all_might_not_block", ] send_stream.record.clear() await stapled.send_eof() assert send_stream.record == ["aclose"] 
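    # StapledStream.send_eof() falls back to aclose() when the wrapped send
    # stream has no send_eof method -- hence the ["aclose"] record above.
    # Next, patch in a fake send_eof and check that it is preferred over the
    # fallback.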
send_stream.record.clear() async def fake_send_eof(): send_stream.record.append("send_eof") send_stream.send_eof = fake_send_eof await stapled.send_eof() assert send_stream.record == ["send_eof"] send_stream.record.clear() assert receive_stream.record == [] await stapled.receive_some(1234) assert receive_stream.record == [("receive_some", 1234)] assert send_stream.record == [] receive_stream.record.clear() await stapled.aclose() assert receive_stream.record == ["aclose"] assert send_stream.record == ["aclose"] async def test_StapledStream_with_erroring_close(): # Make sure that if one of the aclose methods errors out, then the other # one still gets called. class BrokenSendStream(RecordSendStream): async def aclose(self): await super().aclose() raise ValueError class BrokenReceiveStream(RecordReceiveStream): async def aclose(self): await super().aclose() raise ValueError stapled = StapledStream(BrokenSendStream(), BrokenReceiveStream()) with pytest.raises(ValueError) as excinfo: await stapled.aclose() assert isinstance(excinfo.value.__context__, ValueError) assert stapled.send_stream.record == ["aclose"] assert stapled.receive_stream.record == ["aclose"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_highlevel_open_tcp_listeners.py0000644000076500000000000002250600000000000023603 0ustar00quentinwheelimport pytest import socket as stdlib_socket import errno import attr import trio from trio import open_tcp_listeners, serve_tcp, SocketListener, open_tcp_stream from trio.testing import open_stream_to_socket_listener from .. import socket as tsocket from .._core.tests.tutil import slow, creates_ipv6, binds_ipv6 async def test_open_tcp_listeners_basic(): listeners = await open_tcp_listeners(0) assert isinstance(listeners, list) for obj in listeners: assert isinstance(obj, SocketListener) # Binds to wildcard address by default assert obj.socket.family in [tsocket.AF_INET, tsocket.AF_INET6] assert obj.socket.getsockname()[0] in ["0.0.0.0", "::"] listener = listeners[0] # Make sure the backlog is at least 2 c1 = await open_stream_to_socket_listener(listener) c2 = await open_stream_to_socket_listener(listener) s1 = await listener.accept() s2 = await listener.accept() # Note that we don't know which client stream is connected to which server # stream await s1.send_all(b"x") await s2.send_all(b"x") assert await c1.receive_some(1) == b"x" assert await c2.receive_some(1) == b"x" for resource in [c1, c2, s1, s2] + listeners: await resource.aclose() async def test_open_tcp_listeners_specific_port_specific_host(): # Pick a port sock = tsocket.socket() await sock.bind(("127.0.0.1", 0)) host, port = sock.getsockname() sock.close() (listener,) = await open_tcp_listeners(port, host=host) async with listener: assert listener.socket.getsockname() == (host, port) @binds_ipv6 async def test_open_tcp_listeners_ipv6_v6only(): # Check IPV6_V6ONLY is working properly (ipv6_listener,) = await open_tcp_listeners(0, host="::1") async with ipv6_listener: _, port, *_ = ipv6_listener.socket.getsockname() with pytest.raises(OSError): await open_tcp_stream("127.0.0.1", port) async def test_open_tcp_listeners_rebind(): (l1,) = await open_tcp_listeners(0, host="127.0.0.1") sockaddr1 = l1.socket.getsockname() # Plain old rebinding while it's still there should fail, even if we have # SO_REUSEADDR set with stdlib_socket.socket() as probe: probe.setsockopt(stdlib_socket.SOL_SOCKET, stdlib_socket.SO_REUSEADDR, 1) with pytest.raises(OSError): 
            probe.bind(sockaddr1)

    # Now use the first listener to set up some connections in various
    # states, and make sure that they don't create any obstacle to rebinding
    # a second listener after the first one is closed.
    c_established = await open_stream_to_socket_listener(l1)
    s_established = await l1.accept()

    c_time_wait = await open_stream_to_socket_listener(l1)
    s_time_wait = await l1.accept()
    # Server-initiated close leaves socket in TIME_WAIT
    await s_time_wait.aclose()

    await l1.aclose()
    (l2,) = await open_tcp_listeners(sockaddr1[1], host="127.0.0.1")
    sockaddr2 = l2.socket.getsockname()
    assert sockaddr1 == sockaddr2
    assert s_established.socket.getsockname() == sockaddr2
    assert c_time_wait.socket.getpeername() == sockaddr2

    for resource in [
        l1,
        l2,
        c_established,
        s_established,
        c_time_wait,
        s_time_wait,
    ]:
        await resource.aclose()


class FakeOSError(OSError):
    pass


@attr.s
class FakeSocket(tsocket.SocketType):
    family = attr.ib()
    type = attr.ib()
    proto = attr.ib()

    closed = attr.ib(default=False)
    poison_listen = attr.ib(default=False)
    backlog = attr.ib(default=None)

    def getsockopt(self, level, option):
        if (level, option) == (tsocket.SOL_SOCKET, tsocket.SO_ACCEPTCONN):
            return True
        assert False  # pragma: no cover

    def setsockopt(self, level, option, value):
        pass

    async def bind(self, sockaddr):
        pass

    def listen(self, backlog):
        assert self.backlog is None
        assert backlog is not None
        self.backlog = backlog
        if self.poison_listen:
            raise FakeOSError("whoops")

    def close(self):
        self.closed = True


@attr.s
class FakeSocketFactory:
    poison_after = attr.ib()
    sockets = attr.ib(factory=list)
    raise_on_family = attr.ib(factory=dict)  # family => errno

    def socket(self, family, type, proto):
        if family in self.raise_on_family:
            raise OSError(self.raise_on_family[family], "nope")
        sock = FakeSocket(family, type, proto)
        self.poison_after -= 1
        if self.poison_after == 0:
            sock.poison_listen = True
        self.sockets.append(sock)
        return sock


@attr.s
class FakeHostnameResolver:
    family_addr_pairs = attr.ib()

    async def getaddrinfo(self, host, port, family, type, proto, flags):
        return [
            (family, tsocket.SOCK_STREAM, 0, "", (addr, port))
            for family, addr in self.family_addr_pairs
        ]


async def test_open_tcp_listeners_multiple_host_cleanup_on_error():
    # If we were trying to bind to multiple hosts and one of them failed,
    # they all get cleaned up before returning
    fsf = FakeSocketFactory(3)
    tsocket.set_custom_socket_factory(fsf)
    tsocket.set_custom_hostname_resolver(
        FakeHostnameResolver(
            [
                (tsocket.AF_INET, "1.1.1.1"),
                (tsocket.AF_INET, "2.2.2.2"),
                (tsocket.AF_INET, "3.3.3.3"),
            ]
        )
    )

    with pytest.raises(FakeOSError):
        await open_tcp_listeners(80, host="example.org")

    assert len(fsf.sockets) == 3
    for sock in fsf.sockets:
        assert sock.closed


async def test_open_tcp_listeners_port_checking():
    for host in ["127.0.0.1", None]:
        with pytest.raises(TypeError):
            await open_tcp_listeners(None, host=host)
        with pytest.raises(TypeError):
            await open_tcp_listeners(b"80", host=host)
        with pytest.raises(TypeError):
            await open_tcp_listeners("http", host=host)


async def test_serve_tcp():
    async def handler(stream):
        await stream.send_all(b"x")

    async with trio.open_nursery() as nursery:
        listeners = await nursery.start(serve_tcp, handler, 0)
        stream = await open_stream_to_socket_listener(listeners[0])
        async with stream:
            assert await stream.receive_some(1) == b"x"
            nursery.cancel_scope.cancel()


@pytest.mark.parametrize(
    "try_families",
    [{tsocket.AF_INET}, {tsocket.AF_INET6}, {tsocket.AF_INET, tsocket.AF_INET6}],
)
@pytest.mark.parametrize(
    "fail_families",
    [{tsocket.AF_INET}, {tsocket.AF_INET6}, {tsocket.AF_INET, tsocket.AF_INET6}],
)
async def test_open_tcp_listeners_some_address_families_unavailable(
    try_families, fail_families
):
    fsf = FakeSocketFactory(
        10, raise_on_family={family: errno.EAFNOSUPPORT for family in fail_families}
    )
    tsocket.set_custom_socket_factory(fsf)
    tsocket.set_custom_hostname_resolver(
        FakeHostnameResolver([(family, "foo") for family in try_families])
    )

    should_succeed = try_families - fail_families
    if not should_succeed:
        with pytest.raises(OSError) as exc_info:
            await open_tcp_listeners(80, host="example.org")

        assert "This system doesn't support" in str(exc_info.value)
        if isinstance(exc_info.value.__cause__, trio.MultiError):
            for subexc in exc_info.value.__cause__.exceptions:
                assert "nope" in str(subexc)
        else:
            assert isinstance(exc_info.value.__cause__, OSError)
            assert "nope" in str(exc_info.value.__cause__)
    else:
        listeners = await open_tcp_listeners(80)
        for listener in listeners:
            should_succeed.remove(listener.socket.family)
        assert not should_succeed


async def test_open_tcp_listeners_socket_fails_not_afnosupport():
    fsf = FakeSocketFactory(
        10,
        raise_on_family={
            tsocket.AF_INET: errno.EAFNOSUPPORT,
            tsocket.AF_INET6: errno.EINVAL,
        },
    )
    tsocket.set_custom_socket_factory(fsf)
    tsocket.set_custom_hostname_resolver(
        FakeHostnameResolver([(tsocket.AF_INET, "foo"), (tsocket.AF_INET6, "bar")])
    )

    with pytest.raises(OSError) as exc_info:
        await open_tcp_listeners(80, host="example.org")
    assert exc_info.value.errno == errno.EINVAL
    assert exc_info.value.__cause__ is None
    assert "nope" in str(exc_info.value)


# We used to have an elaborate test that opened a real TCP listening socket
# and then tried to measure its backlog by making connections to it. And most
# of the time, it worked. But no matter what we tried, it was always fragile,
# because it had to do things like use timeouts to guess when the listening
# queue was full, sometimes the CI hosts go into SYN-cookie mode (where there
# effectively is no backlog), sometimes the host might not have enough
# resources to give us the full requested backlog... it was a mess. So now we
# just check that the backlog argument is passed through correctly.
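# Concretely, the mapping asserted below: backlog=None and absurdly large
# values are clamped to 0xFFFF, while small explicit values like 10 or 1 must
# reach socket.listen() unchanged.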
async def test_open_tcp_listeners_backlog(): fsf = FakeSocketFactory(99) tsocket.set_custom_socket_factory(fsf) for (given, expected) in [ (None, 0xFFFF), (99999999, 0xFFFF), (10, 10), (1, 1), ]: listeners = await open_tcp_listeners(0, backlog=given) assert listeners for listener in listeners: assert listener.socket.backlog == expected ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_highlevel_open_tcp_stream.py0000644000076500000000000004320600000000000023066 0ustar00quentinwheelimport pytest import sys import socket import attr import trio from trio.socket import AF_INET, AF_INET6, SOCK_STREAM, IPPROTO_TCP from trio._highlevel_open_tcp_stream import ( reorder_for_rfc_6555_section_5_4, close_all, open_tcp_stream, format_host_port, ) def test_close_all(): class CloseMe: closed = False def close(self): self.closed = True class CloseKiller: def close(self): raise OSError c = CloseMe() with close_all() as to_close: to_close.add(c) assert c.closed c = CloseMe() with pytest.raises(RuntimeError): with close_all() as to_close: to_close.add(c) raise RuntimeError assert c.closed c = CloseMe() with pytest.raises(OSError): with close_all() as to_close: to_close.add(CloseKiller()) to_close.add(c) assert c.closed def test_reorder_for_rfc_6555_section_5_4(): def fake4(i): return ( AF_INET, SOCK_STREAM, IPPROTO_TCP, "", ("10.0.0.{}".format(i), 80), ) def fake6(i): return (AF_INET6, SOCK_STREAM, IPPROTO_TCP, "", ("::{}".format(i), 80)) for fake in fake4, fake6: # No effect on homogeneous lists targets = [fake(0), fake(1), fake(2)] reorder_for_rfc_6555_section_5_4(targets) assert targets == [fake(0), fake(1), fake(2)] # Single item lists also OK targets = [fake(0)] reorder_for_rfc_6555_section_5_4(targets) assert targets == [fake(0)] # If the list starts out with different families in positions 0 and 1, # then it's left alone orig = [fake4(0), fake6(0), fake4(1), fake6(1)] targets = list(orig) reorder_for_rfc_6555_section_5_4(targets) assert targets == orig # If not, it's reordered targets = [fake4(0), fake4(1), fake4(2), fake6(0), fake6(1)] reorder_for_rfc_6555_section_5_4(targets) assert targets == [fake4(0), fake6(0), fake4(1), fake4(2), fake6(1)] def test_format_host_port(): assert format_host_port("127.0.0.1", 80) == "127.0.0.1:80" assert format_host_port(b"127.0.0.1", 80) == "127.0.0.1:80" assert format_host_port("example.com", 443) == "example.com:443" assert format_host_port(b"example.com", 443) == "example.com:443" assert format_host_port("::1", "http") == "[::1]:http" assert format_host_port(b"::1", "http") == "[::1]:http" # Make sure we can connect to localhost using real kernel sockets async def test_open_tcp_stream_real_socket_smoketest(): listen_sock = trio.socket.socket() await listen_sock.bind(("127.0.0.1", 0)) _, listen_port = listen_sock.getsockname() listen_sock.listen(1) client_stream = await open_tcp_stream("127.0.0.1", listen_port) server_sock, _ = await listen_sock.accept() await client_stream.send_all(b"x") assert await server_sock.recv(1) == b"x" await client_stream.aclose() server_sock.close() listen_sock.close() async def test_open_tcp_stream_input_validation(): with pytest.raises(ValueError): await open_tcp_stream(None, 80) with pytest.raises(TypeError): await open_tcp_stream("127.0.0.1", b"80") def can_bind_127_0_0_2(): with socket.socket() as s: try: s.bind(("127.0.0.2", 0)) except OSError: return False return s.getsockname()[0] == "127.0.0.2" async def test_local_address_real(): with trio.socket.socket() 
as listener: await listener.bind(("127.0.0.1", 0)) listener.listen() # It's hard to test local_address properly, because you need multiple # local addresses that you can bind to. Fortunately, on most Linux # systems, you can bind to any 127.*.*.* address, and they all go # through the loopback interface. So we can use a non-standard # loopback address. On other systems, the only address we know for # certain we have is 127.0.0.1, so we can't really test local_address= # properly -- passing local_address=127.0.0.1 is indistinguishable # from not passing local_address= at all. But, we can still do a smoke # test to make sure the local_address= code doesn't crash. if can_bind_127_0_0_2(): local_address = "127.0.0.2" else: local_address = "127.0.0.1" async with await open_tcp_stream( *listener.getsockname(), local_address=local_address ) as client_stream: assert client_stream.socket.getsockname()[0] == local_address if hasattr(trio.socket, "IP_BIND_ADDRESS_NO_PORT"): assert client_stream.socket.getsockopt( trio.socket.IPPROTO_IP, trio.socket.IP_BIND_ADDRESS_NO_PORT ) server_sock, remote_addr = await listener.accept() await client_stream.aclose() server_sock.close() assert remote_addr[0] == local_address # Trying to connect to an ipv4 address with the ipv6 wildcard # local_address should fail with pytest.raises(OSError): await open_tcp_stream(*listener.getsockname(), local_address="::") # But the ipv4 wildcard address should work async with await open_tcp_stream( *listener.getsockname(), local_address="0.0.0.0" ) as client_stream: server_sock, remote_addr = await listener.accept() server_sock.close() assert remote_addr == client_stream.socket.getsockname() # Now, thorough tests using fake sockets @attr.s(eq=False) class FakeSocket(trio.socket.SocketType): scenario = attr.ib() family = attr.ib() type = attr.ib() proto = attr.ib() ip = attr.ib(default=None) port = attr.ib(default=None) succeeded = attr.ib(default=False) closed = attr.ib(default=False) failing = attr.ib(default=False) async def connect(self, sockaddr): self.ip = sockaddr[0] self.port = sockaddr[1] assert self.ip not in self.scenario.sockets self.scenario.sockets[self.ip] = self self.scenario.connect_times[self.ip] = trio.current_time() delay, result = self.scenario.ip_dict[self.ip] await trio.sleep(delay) if result == "error": raise OSError("sorry") if result == "postconnect_fail": self.failing = True self.succeeded = True def close(self): self.closed = True # called when SocketStream is constructed def setsockopt(self, *args, **kwargs): if self.failing: # raise something that isn't OSError as SocketStream # ignores those raise KeyboardInterrupt class Scenario(trio.abc.SocketFactory, trio.abc.HostnameResolver): def __init__(self, port, ip_list, supported_families): # ip_list have to be unique ip_order = [ip for (ip, _, _) in ip_list] assert len(set(ip_order)) == len(ip_list) ip_dict = {} for ip, delay, result in ip_list: assert 0 <= delay assert result in ["error", "success", "postconnect_fail"] ip_dict[ip] = (delay, result) self.port = port self.ip_order = ip_order self.ip_dict = ip_dict self.supported_families = supported_families self.socket_count = 0 self.sockets = {} self.connect_times = {} def socket(self, family, type, proto): if family not in self.supported_families: raise OSError("pretending not to support this family") self.socket_count += 1 return FakeSocket(self, family, type, proto) def _ip_to_gai_entry(self, ip): if ":" in ip: family = trio.socket.AF_INET6 sockaddr = (ip, self.port, 0, 0) else: family = 
trio.socket.AF_INET sockaddr = (ip, self.port) return (family, SOCK_STREAM, IPPROTO_TCP, "", sockaddr) async def getaddrinfo(self, host, port, family, type, proto, flags): assert host == b"test.example.com" assert port == self.port assert family == trio.socket.AF_UNSPEC assert type == trio.socket.SOCK_STREAM assert proto == 0 assert flags == 0 return [self._ip_to_gai_entry(ip) for ip in self.ip_order] async def getnameinfo(self, sockaddr, flags): # pragma: no cover raise NotImplementedError def check(self, succeeded): # sockets only go into self.sockets when connect is called; make sure # all the sockets that were created did in fact go in there. assert self.socket_count == len(self.sockets) for ip, socket in self.sockets.items(): assert ip in self.ip_dict if socket is not succeeded: assert socket.closed assert socket.port == self.port async def run_scenario( # The port to connect to port, # A list of # (ip, delay, result) # tuples, where delay is in seconds and result is "success" or "error" # The ip's will be returned from getaddrinfo in this order, and then # connect() calls to them will have the given result. ip_list, *, # If False, AF_INET4/6 sockets error out on creation, before connect is # even called. ipv4_supported=True, ipv6_supported=True, # Normally, we return (winning_sock, scenario object) # If this is True, we require there to be an exception, and return # (exception, scenario object) expect_error=(), **kwargs, ): supported_families = set() if ipv4_supported: supported_families.add(trio.socket.AF_INET) if ipv6_supported: supported_families.add(trio.socket.AF_INET6) scenario = Scenario(port, ip_list, supported_families) trio.socket.set_custom_hostname_resolver(scenario) trio.socket.set_custom_socket_factory(scenario) try: stream = await open_tcp_stream("test.example.com", port, **kwargs) assert expect_error == () scenario.check(stream.socket) return (stream.socket, scenario) except AssertionError: # pragma: no cover raise except expect_error as exc: scenario.check(None) return (exc, scenario) async def test_one_host_quick_success(autojump_clock): sock, scenario = await run_scenario(80, [("1.2.3.4", 0.123, "success")]) assert sock.ip == "1.2.3.4" assert trio.current_time() == 0.123 async def test_one_host_slow_success(autojump_clock): sock, scenario = await run_scenario(81, [("1.2.3.4", 100, "success")]) assert sock.ip == "1.2.3.4" assert trio.current_time() == 100 async def test_one_host_quick_fail(autojump_clock): exc, scenario = await run_scenario( 82, [("1.2.3.4", 0.123, "error")], expect_error=OSError ) assert isinstance(exc, OSError) assert trio.current_time() == 0.123 async def test_one_host_slow_fail(autojump_clock): exc, scenario = await run_scenario( 83, [("1.2.3.4", 100, "error")], expect_error=OSError ) assert isinstance(exc, OSError) assert trio.current_time() == 100 async def test_one_host_failed_after_connect(autojump_clock): exc, scenario = await run_scenario( 83, [("1.2.3.4", 1, "postconnect_fail")], expect_error=KeyboardInterrupt ) assert isinstance(exc, KeyboardInterrupt) # With the default 0.250 second delay, the third attempt will win async def test_basic_fallthrough(autojump_clock): sock, scenario = await run_scenario( 80, [ ("1.1.1.1", 1, "success"), ("2.2.2.2", 1, "success"), ("3.3.3.3", 0.2, "success"), ], ) assert sock.ip == "3.3.3.3" # current time is default time + default time + connection time assert trio.current_time() == (0.250 + 0.250 + 0.2) assert scenario.connect_times == { "1.1.1.1": 0, "2.2.2.2": 0.250, "3.3.3.3": 0.500, } async def 
test_early_success(autojump_clock): sock, scenario = await run_scenario( 80, [ ("1.1.1.1", 1, "success"), ("2.2.2.2", 0.1, "success"), ("3.3.3.3", 0.2, "success"), ], ) assert sock.ip == "2.2.2.2" assert trio.current_time() == (0.250 + 0.1) assert scenario.connect_times == { "1.1.1.1": 0, "2.2.2.2": 0.250, # 3.3.3.3 was never even started } # With a 0.450 second delay, the first attempt will win async def test_custom_delay(autojump_clock): sock, scenario = await run_scenario( 80, [ ("1.1.1.1", 1, "success"), ("2.2.2.2", 1, "success"), ("3.3.3.3", 0.2, "success"), ], happy_eyeballs_delay=0.450, ) assert sock.ip == "1.1.1.1" assert trio.current_time() == 1 assert scenario.connect_times == { "1.1.1.1": 0, "2.2.2.2": 0.450, "3.3.3.3": 0.900, } async def test_custom_errors_expedite(autojump_clock): sock, scenario = await run_scenario( 80, [ ("1.1.1.1", 0.1, "error"), ("2.2.2.2", 0.2, "error"), ("3.3.3.3", 10, "success"), # .25 is the default timeout ("4.4.4.4", 0.25, "success"), ], ) assert sock.ip == "4.4.4.4" assert trio.current_time() == (0.1 + 0.2 + 0.25 + 0.25) assert scenario.connect_times == { "1.1.1.1": 0, "2.2.2.2": 0.1, "3.3.3.3": 0.1 + 0.2, "4.4.4.4": 0.1 + 0.2 + 0.25, } async def test_all_fail(autojump_clock): exc, scenario = await run_scenario( 80, [ ("1.1.1.1", 0.1, "error"), ("2.2.2.2", 0.2, "error"), ("3.3.3.3", 10, "error"), ("4.4.4.4", 0.250, "error"), ], expect_error=OSError, ) assert isinstance(exc, OSError) assert isinstance(exc.__cause__, trio.MultiError) assert len(exc.__cause__.exceptions) == 4 assert trio.current_time() == (0.1 + 0.2 + 10) assert scenario.connect_times == { "1.1.1.1": 0, "2.2.2.2": 0.1, "3.3.3.3": 0.1 + 0.2, "4.4.4.4": 0.1 + 0.2 + 0.25, } async def test_multi_success(autojump_clock): sock, scenario = await run_scenario( 80, [ ("1.1.1.1", 0.5, "error"), ("2.2.2.2", 10, "success"), ("3.3.3.3", 10 - 1, "success"), ("4.4.4.4", 10 - 2, "success"), ("5.5.5.5", 0.5, "error"), ], happy_eyeballs_delay=1, ) assert not scenario.sockets["1.1.1.1"].succeeded assert ( scenario.sockets["2.2.2.2"].succeeded or scenario.sockets["3.3.3.3"].succeeded or scenario.sockets["4.4.4.4"].succeeded ) assert not scenario.sockets["5.5.5.5"].succeeded assert sock.ip in ["2.2.2.2", "3.3.3.3", "4.4.4.4"] assert trio.current_time() == (0.5 + 10) assert scenario.connect_times == { "1.1.1.1": 0, "2.2.2.2": 0.5, "3.3.3.3": 1.5, "4.4.4.4": 2.5, "5.5.5.5": 3.5, } async def test_does_reorder(autojump_clock): sock, scenario = await run_scenario( 80, [ ("1.1.1.1", 10, "error"), # This would win if we tried it first... 
("2.2.2.2", 1, "success"), # But in fact we try this first, because of section 5.4 ("::3", 0.5, "success"), ], happy_eyeballs_delay=1, ) assert sock.ip == "::3" assert trio.current_time() == 1 + 0.5 assert scenario.connect_times == { "1.1.1.1": 0, "::3": 1, } async def test_handles_no_ipv4(autojump_clock): sock, scenario = await run_scenario( 80, # Here the ipv6 addresses fail at socket creation time, so the connect # configuration doesn't matter [ ("::1", 10, "success"), ("2.2.2.2", 0, "success"), ("::3", 0.1, "success"), ("4.4.4.4", 0, "success"), ], happy_eyeballs_delay=1, ipv4_supported=False, ) assert sock.ip == "::3" assert trio.current_time() == 1 + 0.1 assert scenario.connect_times == { "::1": 0, "::3": 1.0, } async def test_handles_no_ipv6(autojump_clock): sock, scenario = await run_scenario( 80, # Here the ipv6 addresses fail at socket creation time, so the connect # configuration doesn't matter [ ("::1", 0, "success"), ("2.2.2.2", 10, "success"), ("::3", 0, "success"), ("4.4.4.4", 0.1, "success"), ], happy_eyeballs_delay=1, ipv6_supported=False, ) assert sock.ip == "4.4.4.4" assert trio.current_time() == 1 + 0.1 assert scenario.connect_times == { "2.2.2.2": 0, "4.4.4.4": 1.0, } async def test_no_hosts(autojump_clock): exc, scenario = await run_scenario(80, [], expect_error=OSError) assert "no results found" in str(exc) async def test_cancel(autojump_clock): with trio.move_on_after(5) as cancel_scope: exc, scenario = await run_scenario( 80, [ ("1.1.1.1", 10, "success"), ("2.2.2.2", 10, "success"), ("3.3.3.3", 10, "success"), ("4.4.4.4", 10, "success"), ], expect_error=trio.MultiError, ) # What comes out should be 1 or more Cancelled errors that all belong # to this cancel_scope; this is the easiest way to check that raise exc assert cancel_scope.cancelled_caught assert trio.current_time() == 5 # This should have been called already, but just to make sure, since the # exception-handling logic in run_scenario is a bit complicated and the # main thing we care about here is that all the sockets were cleaned up. 
scenario.check(succeeded=False) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_highlevel_open_unix_stream.py0000644000076500000000000000355100000000000023262 0ustar00quentinwheelimport os import socket import tempfile import pytest from trio import open_unix_socket, Path from trio._highlevel_open_unix_stream import close_on_error if not hasattr(socket, "AF_UNIX"): pytestmark = pytest.mark.skip("Needs unix socket support") def test_close_on_error(): class CloseMe: closed = False def close(self): self.closed = True with close_on_error(CloseMe()) as c: pass assert not c.closed with pytest.raises(RuntimeError): with close_on_error(CloseMe()) as c: raise RuntimeError assert c.closed @pytest.mark.parametrize("filename", [4, 4.5]) async def test_open_with_bad_filename_type(filename): with pytest.raises(TypeError): await open_unix_socket(filename) async def test_open_bad_socket(): # mktemp is marked as insecure, but that's okay, we don't want the file to # exist name = tempfile.mktemp() with pytest.raises(FileNotFoundError): await open_unix_socket(name) async def test_open_unix_socket(): for name_type in [Path, str]: name = tempfile.mktemp() serv_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) with serv_sock: serv_sock.bind(name) try: serv_sock.listen(1) # The actual function we're testing unix_socket = await open_unix_socket(name_type(name)) async with unix_socket: client, _ = serv_sock.accept() with client: await unix_socket.send_all(b"test") assert client.recv(2048) == b"test" client.sendall(b"response") received = await unix_socket.receive_some(2048) assert received == b"response" finally: os.unlink(name) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_highlevel_serve_listeners.py0000644000076500000000000001052500000000000023116 0ustar00quentinwheelimport pytest from functools import partial import errno import attr import trio from trio.testing import memory_stream_pair, wait_all_tasks_blocked @attr.s(hash=False, eq=False) class MemoryListener(trio.abc.Listener): closed = attr.ib(default=False) accepted_streams = attr.ib(factory=list) queued_streams = attr.ib(factory=(lambda: trio.open_memory_channel(1))) accept_hook = attr.ib(default=None) async def connect(self): assert not self.closed client, server = memory_stream_pair() await self.queued_streams[0].send(server) return client async def accept(self): await trio.lowlevel.checkpoint() assert not self.closed if self.accept_hook is not None: await self.accept_hook() stream = await self.queued_streams[1].receive() self.accepted_streams.append(stream) return stream async def aclose(self): self.closed = True await trio.lowlevel.checkpoint() async def test_serve_listeners_basic(): listeners = [MemoryListener(), MemoryListener()] record = [] def close_hook(): # Make sure this is a forceful close assert trio.current_effective_deadline() == float("-inf") record.append("closed") async def handler(stream): await stream.send_all(b"123") assert await stream.receive_some(10) == b"456" stream.send_stream.close_hook = close_hook stream.receive_stream.close_hook = close_hook async def client(listener): s = await listener.connect() assert await s.receive_some(10) == b"123" await s.send_all(b"456") async def do_tests(parent_nursery): async with trio.open_nursery() as nursery: for listener in listeners: for _ in range(3): nursery.start_soon(client, listener) await wait_all_tasks_blocked() # verifies that 
all 6 streams x 2 directions each were closed ok assert len(record) == 12 parent_nursery.cancel_scope.cancel() async with trio.open_nursery() as nursery: l2 = await nursery.start(trio.serve_listeners, handler, listeners) assert l2 == listeners # This is just split into another function because gh-136 isn't # implemented yet nursery.start_soon(do_tests, nursery) for listener in listeners: assert listener.closed async def test_serve_listeners_accept_unrecognized_error(): for error in [KeyError(), OSError(errno.ECONNABORTED, "ECONNABORTED")]: listener = MemoryListener() async def raise_error(): raise error listener.accept_hook = raise_error with pytest.raises(type(error)) as excinfo: await trio.serve_listeners(None, [listener]) assert excinfo.value is error async def test_serve_listeners_accept_capacity_error(autojump_clock, caplog): listener = MemoryListener() async def raise_EMFILE(): raise OSError(errno.EMFILE, "out of file descriptors") listener.accept_hook = raise_EMFILE # It retries every 100 ms, so in 950 ms it will retry at 0, 100, ..., 900 # = 10 times total with trio.move_on_after(0.950): await trio.serve_listeners(None, [listener]) assert len(caplog.records) == 10 for record in caplog.records: assert "retrying" in record.msg assert record.exc_info[1].errno == errno.EMFILE async def test_serve_listeners_connection_nursery(autojump_clock): listener = MemoryListener() async def handler(stream): await trio.sleep(1) class Done(Exception): pass async def connection_watcher(*, task_status=trio.TASK_STATUS_IGNORED): async with trio.open_nursery() as nursery: task_status.started(nursery) await wait_all_tasks_blocked() assert len(nursery.child_tasks) == 10 raise Done with pytest.raises(Done): async with trio.open_nursery() as nursery: handler_nursery = await nursery.start(connection_watcher) await nursery.start( partial( trio.serve_listeners, handler, [listener], handler_nursery=handler_nursery, ) ) for _ in range(10): nursery.start_soon(listener.connect) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_highlevel_socket.py0000644000076500000000000002133100000000000021167 0ustar00quentinwheelimport pytest import sys import socket as stdlib_socket import errno from .. import _core from ..testing import ( check_half_closeable_stream, wait_all_tasks_blocked, assert_checkpoints, ) from .._highlevel_socket import * from .. 
import socket as tsocket async def test_SocketStream_basics(): # stdlib socket bad (even if connected) a, b = stdlib_socket.socketpair() with a, b: with pytest.raises(TypeError): SocketStream(a) # DGRAM socket bad with tsocket.socket(type=tsocket.SOCK_DGRAM) as sock: with pytest.raises(ValueError): SocketStream(sock) a, b = tsocket.socketpair() with a, b: s = SocketStream(a) assert s.socket is a # Use a real, connected socket to test socket options, because # socketpair() might give us a unix socket that doesn't support any of # these options with tsocket.socket() as listen_sock: await listen_sock.bind(("127.0.0.1", 0)) listen_sock.listen(1) with tsocket.socket() as client_sock: await client_sock.connect(listen_sock.getsockname()) s = SocketStream(client_sock) # TCP_NODELAY enabled by default assert s.getsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NODELAY) # We can disable it though s.setsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NODELAY, False) assert not s.getsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NODELAY) b = s.getsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NODELAY, 1) assert isinstance(b, bytes) async def test_SocketStream_send_all(): BIG = 10000000 a_sock, b_sock = tsocket.socketpair() with a_sock, b_sock: a = SocketStream(a_sock) b = SocketStream(b_sock) # Check a send_all that has to be split into multiple parts (on most # platforms... on Windows every send() either succeeds or fails as a # whole) async def sender(): data = bytearray(BIG) await a.send_all(data) # send_all uses memoryviews internally, which temporarily "lock" # the object they view. If it doesn't clean them up properly, then # some bytearray operations might raise an error afterwards, which # would be a pretty weird and annoying side-effect to spring on # users. So test that this doesn't happen, by forcing the # bytearray's underlying buffer to be realloc'ed: data += bytes(BIG) # (Note: the above line of code doesn't do a very good job at # testing anything, because: # - on CPython, the refcount GC generally cleans up memoryviews # for us even if we're sloppy. # - on PyPy3, at least as of 5.7.0, the memoryview code and the # bytearray code conspire so that resizing never fails – if # resizing forces the bytearray's internal buffer to move, then # all memoryview references are automagically updated (!!). # See: # https://gist.github.com/njsmith/0ffd38ec05ad8e34004f34a7dc492227 # But I'm leaving the test here in hopes that if this ever changes # and we break our implementation of send_all, then we'll get some # early warning...) async def receiver(): # Make sure the sender fills up the kernel buffers and blocks await wait_all_tasks_blocked() nbytes = 0 while nbytes < BIG: nbytes += len(await b.receive_some(BIG)) assert nbytes == BIG async with _core.open_nursery() as nursery: nursery.start_soon(sender) nursery.start_soon(receiver) # We know that we received BIG bytes of NULs so far. Make sure that # was all the data in there. 
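        # The single "e" byte is a sentinel: the receiver has already drained
        # exactly BIG bytes, so reading back just b"e" -- and then b"" after
        # send_eof -- shows no stray data was left behind in the stream.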
await a.send_all(b"e") assert await b.receive_some(10) == b"e" await a.send_eof() assert await b.receive_some(10) == b"" async def fill_stream(s): async def sender(): while True: await s.send_all(b"x" * 10000) async def waiter(nursery): await wait_all_tasks_blocked() nursery.cancel_scope.cancel() async with _core.open_nursery() as nursery: nursery.start_soon(sender) nursery.start_soon(waiter, nursery) async def test_SocketStream_generic(): async def stream_maker(): left, right = tsocket.socketpair() return SocketStream(left), SocketStream(right) async def clogged_stream_maker(): left, right = await stream_maker() await fill_stream(left) await fill_stream(right) return left, right await check_half_closeable_stream(stream_maker, clogged_stream_maker) async def test_SocketListener(): # Not a Trio socket with stdlib_socket.socket() as s: s.bind(("127.0.0.1", 0)) s.listen(10) with pytest.raises(TypeError): SocketListener(s) # Not a SOCK_STREAM with tsocket.socket(type=tsocket.SOCK_DGRAM) as s: await s.bind(("127.0.0.1", 0)) with pytest.raises(ValueError) as excinfo: SocketListener(s) excinfo.match(r".*SOCK_STREAM") # Didn't call .listen() # macOS has no way to check for this, so skip testing it there. if sys.platform != "darwin": with tsocket.socket() as s: await s.bind(("127.0.0.1", 0)) with pytest.raises(ValueError) as excinfo: SocketListener(s) excinfo.match(r".*listen") listen_sock = tsocket.socket() await listen_sock.bind(("127.0.0.1", 0)) listen_sock.listen(10) listener = SocketListener(listen_sock) assert listener.socket is listen_sock client_sock = tsocket.socket() await client_sock.connect(listen_sock.getsockname()) with assert_checkpoints(): server_stream = await listener.accept() assert isinstance(server_stream, SocketStream) assert server_stream.socket.getsockname() == listen_sock.getsockname() assert server_stream.socket.getpeername() == client_sock.getsockname() with assert_checkpoints(): await listener.aclose() with assert_checkpoints(): await listener.aclose() with assert_checkpoints(): with pytest.raises(_core.ClosedResourceError): await listener.accept() client_sock.close() await server_stream.aclose() async def test_SocketListener_socket_closed_underfoot(): listen_sock = tsocket.socket() await listen_sock.bind(("127.0.0.1", 0)) listen_sock.listen(10) listener = SocketListener(listen_sock) # Close the socket, not the listener listen_sock.close() # SocketListener gives correct error with assert_checkpoints(): with pytest.raises(_core.ClosedResourceError): await listener.accept() async def test_SocketListener_accept_errors(): class FakeSocket(tsocket.SocketType): def __init__(self, events): self._events = iter(events) type = tsocket.SOCK_STREAM # Fool the check for SO_ACCEPTCONN in SocketListener.__init__ def getsockopt(self, level, opt): return True def setsockopt(self, level, opt, value): pass async def accept(self): await _core.checkpoint() event = next(self._events) if isinstance(event, BaseException): raise event else: return event, None fake_server_sock = FakeSocket([]) fake_listen_sock = FakeSocket( [ OSError(errno.ECONNABORTED, "Connection aborted"), OSError(errno.EPERM, "Permission denied"), OSError(errno.EPROTO, "Bad protocol"), fake_server_sock, OSError(errno.EMFILE, "Out of file descriptors"), OSError(errno.EFAULT, "attempt to write to read-only memory"), OSError(errno.ENOBUFS, "out of buffers"), fake_server_sock, ] ) l = SocketListener(fake_listen_sock) with assert_checkpoints(): s = await l.accept() assert s.socket is fake_server_sock for code in [errno.EMFILE, 
errno.EFAULT, errno.ENOBUFS]:
        with assert_checkpoints():
            with pytest.raises(OSError) as excinfo:
                await l.accept()
            assert excinfo.value.errno == code

    with assert_checkpoints():
        s = await l.accept()
    assert s.socket is fake_server_sock


async def test_socket_stream_works_when_peer_has_already_closed():
    sock_a, sock_b = tsocket.socketpair()
    with sock_a, sock_b:
        await sock_b.send(b"x")
        sock_b.close()
        stream = SocketStream(sock_a)
        assert await stream.receive_some(1) == b"x"
        assert await stream.receive_some(1) == b""


# ==== file: trio-0.19.0/trio/tests/test_highlevel_ssl_helpers.py ====

import pytest
from functools import partial

import attr

import trio
from trio.socket import AF_INET, SOCK_STREAM, IPPROTO_TCP
import trio.testing
from .test_ssl import client_ctx, SERVER_CTX

from .._highlevel_ssl_helpers import (
    open_ssl_over_tcp_stream,
    open_ssl_over_tcp_listeners,
    serve_ssl_over_tcp,
)


async def echo_handler(stream):
    async with stream:
        try:
            while True:
                data = await stream.receive_some(10000)
                if not data:
                    break
                await stream.send_all(data)
        except trio.BrokenResourceError:
            pass


# Resolver that always returns the given sockaddr, no matter what host/port
# you ask for.
@attr.s
class FakeHostnameResolver(trio.abc.HostnameResolver):
    sockaddr = attr.ib()

    async def getaddrinfo(self, *args):
        return [(AF_INET, SOCK_STREAM, IPPROTO_TCP, "", self.sockaddr)]

    async def getnameinfo(self, *args):  # pragma: no cover
        raise NotImplementedError


# This uses serve_ssl_over_tcp, which uses open_ssl_over_tcp_listeners...
# noqa is needed because flake8 doesn't understand how pytest fixtures work.
async def test_open_ssl_over_tcp_stream_and_everything_else(client_ctx):  # noqa: F811
    async with trio.open_nursery() as nursery:
        (listener,) = await nursery.start(
            partial(serve_ssl_over_tcp, echo_handler, 0, SERVER_CTX, host="127.0.0.1")
        )
        async with listener:
            sockaddr = listener.transport_listener.socket.getsockname()
            hostname_resolver = FakeHostnameResolver(sockaddr)
            trio.socket.set_custom_hostname_resolver(hostname_resolver)

            # We don't have the right trust set up
            # (checks that ssl_context=None is doing some validation)
            stream = await open_ssl_over_tcp_stream("trio-test-1.example.org", 80)
            async with stream:
                with pytest.raises(trio.BrokenResourceError):
                    await stream.do_handshake()

            # We have the trust but not the hostname
            # (checks custom ssl_context + hostname checking)
            stream = await open_ssl_over_tcp_stream(
                "xyzzy.example.org", 80, ssl_context=client_ctx
            )
            async with stream:
                with pytest.raises(trio.BrokenResourceError):
                    await stream.do_handshake()
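            # (Editor's aside, not part of the original test: if you need to
            # see *why* a handshake failed, the underlying ssl error is
            # chained onto the trio exception -- test_ssl_client_basics in
            # test_ssl.py checks this chaining explicitly:)
            #
            #     try:
            #         await stream.do_handshake()
            #     except trio.BrokenResourceError as exc:
            #         assert isinstance(exc.__cause__, ssl.SSLError)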
            # This one should work!
            stream = await open_ssl_over_tcp_stream(
                "trio-test-1.example.org", 80, ssl_context=client_ctx
            )
            async with stream:
                assert isinstance(stream, trio.SSLStream)
                assert stream.server_hostname == "trio-test-1.example.org"
                await stream.send_all(b"x")
                assert await stream.receive_some(1) == b"x"

            # Check https_compatible settings are being passed through
            assert not stream._https_compatible
            stream = await open_ssl_over_tcp_stream(
                "trio-test-1.example.org",
                80,
                ssl_context=client_ctx,
                https_compatible=True,
                # also, smoke test happy_eyeballs_delay
                happy_eyeballs_delay=1,
            )
            async with stream:
                assert stream._https_compatible

        # Stop the echo server
        nursery.cancel_scope.cancel()


async def test_open_ssl_over_tcp_listeners():
    (listener,) = await open_ssl_over_tcp_listeners(0, SERVER_CTX, host="127.0.0.1")
    async with listener:
        assert isinstance(listener, trio.SSLListener)
        tl = listener.transport_listener
        assert isinstance(tl, trio.SocketListener)
        assert tl.socket.getsockname()[0] == "127.0.0.1"

        assert not listener._https_compatible

    (listener,) = await open_ssl_over_tcp_listeners(
        0, SERVER_CTX, host="127.0.0.1", https_compatible=True
    )
    async with listener:
        assert listener._https_compatible


# ==== file: trio-0.19.0/trio/tests/test_path.py ====

import os
import pathlib

import pytest

import trio
from trio._path import AsyncAutoWrapperType as Type
from trio._file_io import AsyncIOWrapper


@pytest.fixture
def path(tmpdir):
    p = str(tmpdir.join("test"))
    return trio.Path(p)


def method_pair(path, method_name):
    path = pathlib.Path(path)
    async_path = trio.Path(path)
    return getattr(path, method_name), getattr(async_path, method_name)


async def test_open_is_async_context_manager(path):
    async with await path.open("w") as f:
        assert isinstance(f, AsyncIOWrapper)

    assert f.closed


async def test_magic():
    path = trio.Path("test")

    assert str(path) == "test"
    assert bytes(path) == b"test"


cls_pairs = [
    (trio.Path, pathlib.Path),
    (pathlib.Path, trio.Path),
    (trio.Path, trio.Path),
]


@pytest.mark.parametrize("cls_a,cls_b", cls_pairs)
async def test_cmp_magic(cls_a, cls_b):
    a, b = cls_a(""), cls_b("")
    assert a == b
    assert not a != b

    a, b = cls_a("a"), cls_b("b")
    assert a < b
    assert b > a

    # this is intentionally testing equivalence with none, due to the
    # other=sentinel logic in _forward_magic
    assert not a == None  # noqa
    assert not b == None  # noqa


# upstream python3.8 bug: we should also test (pathlib.Path, trio.Path), but
# __*div__ does not properly raise NotImplementedError like the other comparison
# magic, so trio.Path's implementation does not get dispatched
cls_pairs = [
    (trio.Path, pathlib.Path),
    (trio.Path, trio.Path),
    (trio.Path, str),
    (str, trio.Path),
]


@pytest.mark.parametrize("cls_a,cls_b", cls_pairs)
async def test_div_magic(cls_a, cls_b):
    a, b = cls_a("a"), cls_b("b")

    result = a / b
    assert isinstance(result, trio.Path)
    assert str(result) == os.path.join("a", "b")


@pytest.mark.parametrize(
    "cls_a,cls_b", [(trio.Path, pathlib.Path), (trio.Path, trio.Path)]
)
@pytest.mark.parametrize("path", ["foo", "foo/bar/baz", "./foo"])
async def test_hash_magic(cls_a, cls_b, path):
    a, b = cls_a(path), cls_b(path)
    assert hash(a) == hash(b)


async def test_forwarded_properties(path):
    # use `name` as a representative of forwarded properties
    assert "name" in dir(path)
    assert path.name == "test"


async def test_async_method_signature(path):
    # use `resolve` as a representative of wrapped methods
assert path.resolve.__name__ == "resolve" assert path.resolve.__qualname__ == "Path.resolve" assert "pathlib.Path.resolve" in path.resolve.__doc__ @pytest.mark.parametrize("method_name", ["is_dir", "is_file"]) async def test_compare_async_stat_methods(method_name): method, async_method = method_pair(".", method_name) result = method() async_result = await async_method() assert result == async_result async def test_invalid_name_not_wrapped(path): with pytest.raises(AttributeError): getattr(path, "invalid_fake_attr") @pytest.mark.parametrize("method_name", ["absolute", "resolve"]) async def test_async_methods_rewrap(method_name): method, async_method = method_pair(".", method_name) result = method() async_result = await async_method() assert isinstance(async_result, trio.Path) assert str(result) == str(async_result) async def test_forward_methods_rewrap(path, tmpdir): with_name = path.with_name("foo") with_suffix = path.with_suffix(".py") assert isinstance(with_name, trio.Path) assert with_name == tmpdir.join("foo") assert isinstance(with_suffix, trio.Path) assert with_suffix == tmpdir.join("test.py") async def test_forward_properties_rewrap(path): assert isinstance(path.parent, trio.Path) async def test_forward_methods_without_rewrap(path, tmpdir): path = await path.parent.resolve() assert path.as_uri().startswith("file:///") async def test_repr(): path = trio.Path(".") assert repr(path) == "trio.Path('.')" class MockWrapped: unsupported = "unsupported" _private = "private" class MockWrapper: _forwards = MockWrapped _wraps = MockWrapped async def test_type_forwards_unsupported(): with pytest.raises(TypeError): Type.generate_forwards(MockWrapper, {}) async def test_type_wraps_unsupported(): with pytest.raises(TypeError): Type.generate_wraps(MockWrapper, {}) async def test_type_forwards_private(): Type.generate_forwards(MockWrapper, {"unsupported": None}) assert not hasattr(MockWrapper, "_private") async def test_type_wraps_private(): Type.generate_wraps(MockWrapper, {"unsupported": None}) assert not hasattr(MockWrapper, "_private") @pytest.mark.parametrize("meth", [trio.Path.__init__, trio.Path.joinpath]) async def test_path_wraps_path(path, meth): wrapped = await path.absolute() result = meth(path, wrapped) if result is None: result = path assert wrapped == result async def test_path_nonpath(): with pytest.raises(TypeError): trio.Path(1) async def test_open_file_can_open_path(path): async with await trio.open_file(path, "w") as f: assert f.name == os.fspath(path) async def test_globmethods(path): # Populate a directory tree await path.mkdir() await (path / "foo").mkdir() await (path / "foo" / "_bar.txt").write_bytes(b"") await (path / "bar.txt").write_bytes(b"") await (path / "bar.dat").write_bytes(b"") # Path.glob for _pattern, _results in { "*.txt": {"bar.txt"}, "**/*.txt": {"_bar.txt", "bar.txt"}, }.items(): entries = set() for entry in await path.glob(_pattern): assert isinstance(entry, trio.Path) entries.add(entry.name) assert entries == _results # Path.rglob entries = set() for entry in await path.rglob("*.txt"): assert isinstance(entry, trio.Path) entries.add(entry.name) assert entries == {"_bar.txt", "bar.txt"} async def test_iterdir(path): # Populate a directory await path.mkdir() await (path / "foo").mkdir() await (path / "bar.txt").write_bytes(b"") entries = set() for entry in await path.iterdir(): assert isinstance(entry, trio.Path) entries.add(entry.name) assert entries == {"bar.txt", "foo"} async def test_classmethods(): assert isinstance(await trio.Path.home(), trio.Path) # 
pathlib.Path has only two classmethods
    assert str(await trio.Path.home()) == os.path.expanduser("~")
    assert str(await trio.Path.cwd()) == os.getcwd()

    # Wrapped method has docstring
    assert trio.Path.home.__doc__


# ==== file: trio-0.19.0/trio/tests/test_scheduler_determinism.py ====

import trio


async def scheduler_trace():
    """Returns a scheduler-dependent value we can use to check determinism."""
    trace = []

    async def tracer(name):
        for i in range(50):
            trace.append((name, i))
            await trio.sleep(0)

    async with trio.open_nursery() as nursery:
        for i in range(5):
            nursery.start_soon(tracer, i)

    return tuple(trace)


def test_the_trio_scheduler_is_not_deterministic():
    # At least, not yet. See https://github.com/python-trio/trio/issues/32
    traces = []
    for _ in range(10):
        traces.append(trio.run(scheduler_trace))
    assert len(set(traces)) == len(traces)


def test_the_trio_scheduler_is_deterministic_if_seeded(monkeypatch):
    monkeypatch.setattr(trio._core._run, "_ALLOW_DETERMINISTIC_SCHEDULING", True)
    traces = []
    for _ in range(10):
        state = trio._core._run._r.getstate()
        try:
            trio._core._run._r.seed(0)
            traces.append(trio.run(scheduler_trace))
        finally:
            trio._core._run._r.setstate(state)

    assert len(traces) == 10
    assert len(set(traces)) == 1


# ==== file: trio-0.19.0/trio/tests/test_signals.py ====

import signal

import pytest

import trio
from .. import _core
from .._util import signal_raise
from .._signals import open_signal_receiver, _signal_handler


async def test_open_signal_receiver():
    orig = signal.getsignal(signal.SIGILL)
    with open_signal_receiver(signal.SIGILL) as receiver:
        # Raise it a few times, to exercise signal coalescing, both at the
        # call_soon level and at the SignalQueue level
        signal_raise(signal.SIGILL)
        signal_raise(signal.SIGILL)
        await _core.wait_all_tasks_blocked()
        signal_raise(signal.SIGILL)
        await _core.wait_all_tasks_blocked()
        async for signum in receiver:  # pragma: no branch
            assert signum == signal.SIGILL
            break
        assert receiver._pending_signal_count() == 0
        signal_raise(signal.SIGILL)
        async for signum in receiver:  # pragma: no branch
            assert signum == signal.SIGILL
            break
        assert receiver._pending_signal_count() == 0
    with pytest.raises(RuntimeError):
        await receiver.__anext__()
    assert signal.getsignal(signal.SIGILL) is orig


async def test_open_signal_receiver_restore_handler_after_one_bad_signal():
    orig = signal.getsignal(signal.SIGILL)
    with pytest.raises(ValueError):
        with open_signal_receiver(signal.SIGILL, 1234567):
            pass  # pragma: no cover
    # Still restored even if we errored out
    assert signal.getsignal(signal.SIGILL) is orig


async def test_open_signal_receiver_empty_fail():
    with pytest.raises(TypeError, match="No signals were provided"):
        with open_signal_receiver():
            pass


async def test_open_signal_receiver_restore_handler_after_duplicate_signal():
    orig = signal.getsignal(signal.SIGILL)
    with open_signal_receiver(signal.SIGILL, signal.SIGILL):
        pass
    # Still restored correctly
    assert signal.getsignal(signal.SIGILL) is orig


async def test_catch_signals_wrong_thread():
    async def naughty():
        with open_signal_receiver(signal.SIGINT):
            pass  # pragma: no cover

    with pytest.raises(RuntimeError):
        await trio.to_thread.run_sync(trio.run, naughty)


async def test_open_signal_receiver_conflict():
    with
pytest.raises(trio.BusyResourceError): with open_signal_receiver(signal.SIGILL) as receiver: async with trio.open_nursery() as nursery: nursery.start_soon(receiver.__anext__) nursery.start_soon(receiver.__anext__) # Blocks until all previous calls to run_sync_soon(idempotent=True) have been # processed. async def wait_run_sync_soon_idempotent_queue_barrier(): ev = trio.Event() token = _core.current_trio_token() token.run_sync_soon(ev.set, idempotent=True) await ev.wait() async def test_open_signal_receiver_no_starvation(): # Set up a situation where there are always 2 pending signals available to # report, and make sure that instead of getting the same signal reported # over and over, it alternates between reporting both of them. with open_signal_receiver(signal.SIGILL, signal.SIGFPE) as receiver: try: print(signal.getsignal(signal.SIGILL)) previous = None for _ in range(10): signal_raise(signal.SIGILL) signal_raise(signal.SIGFPE) await wait_run_sync_soon_idempotent_queue_barrier() if previous is None: previous = await receiver.__anext__() else: got = await receiver.__anext__() assert got in [signal.SIGILL, signal.SIGFPE] assert got != previous previous = got # Clear out the last signal so it doesn't get redelivered while receiver._pending_signal_count() != 0: await receiver.__anext__() except: # pragma: no cover # If there's an unhandled exception above, then exiting the # open_signal_receiver block might cause the signal to be # redelivered and give us a core dump instead of a traceback... import traceback traceback.print_exc() async def test_catch_signals_race_condition_on_exit(): delivered_directly = set() def direct_handler(signo, frame): delivered_directly.add(signo) print(1) # Test the version where the call_soon *doesn't* have a chance to run # before we exit the with block: with _signal_handler({signal.SIGILL, signal.SIGFPE}, direct_handler): with open_signal_receiver(signal.SIGILL, signal.SIGFPE) as receiver: signal_raise(signal.SIGILL) signal_raise(signal.SIGFPE) await wait_run_sync_soon_idempotent_queue_barrier() assert delivered_directly == {signal.SIGILL, signal.SIGFPE} delivered_directly.clear() print(2) # Test the version where the call_soon *does* have a chance to run before # we exit the with block: with _signal_handler({signal.SIGILL, signal.SIGFPE}, direct_handler): with open_signal_receiver(signal.SIGILL, signal.SIGFPE) as receiver: signal_raise(signal.SIGILL) signal_raise(signal.SIGFPE) await wait_run_sync_soon_idempotent_queue_barrier() assert receiver._pending_signal_count() == 2 assert delivered_directly == {signal.SIGILL, signal.SIGFPE} delivered_directly.clear() # Again, but with a SIG_IGN signal: print(3) with _signal_handler({signal.SIGILL}, signal.SIG_IGN): with open_signal_receiver(signal.SIGILL) as receiver: signal_raise(signal.SIGILL) await wait_run_sync_soon_idempotent_queue_barrier() # test passes if the process reaches this point without dying print(4) with _signal_handler({signal.SIGILL}, signal.SIG_IGN): with open_signal_receiver(signal.SIGILL) as receiver: signal_raise(signal.SIGILL) await wait_run_sync_soon_idempotent_queue_barrier() assert receiver._pending_signal_count() == 1 # test passes if the process reaches this point without dying # Check exception chaining if there are multiple exception-raising # handlers def raise_handler(signum, _): raise RuntimeError(signum) with _signal_handler({signal.SIGILL, signal.SIGFPE}, raise_handler): with pytest.raises(RuntimeError) as excinfo: with open_signal_receiver(signal.SIGILL, signal.SIGFPE) as receiver: 
                signal_raise(signal.SIGILL)
                signal_raise(signal.SIGFPE)
                await wait_run_sync_soon_idempotent_queue_barrier()
                assert receiver._pending_signal_count() == 2
        exc = excinfo.value
        signums = {exc.args[0]}
        assert isinstance(exc.__context__, RuntimeError)
        signums.add(exc.__context__.args[0])
        assert signums == {signal.SIGILL, signal.SIGFPE}


# ==== file: trio-0.19.0/trio/tests/test_socket.py ====

import errno

import pytest
import attr

import os
import socket as stdlib_socket
import inspect
import tempfile
import sys as _sys

from .._core.tests.tutil import creates_ipv6, binds_ipv6
from .. import _core
from .. import _socket as _tsocket
from .. import socket as tsocket
from .._socket import _NUMERIC_ONLY, _try_sync
from ..testing import assert_checkpoints, wait_all_tasks_blocked

################################################################
# utils
################################################################


class MonkeypatchedGAI:
    def __init__(self, orig_getaddrinfo):
        self._orig_getaddrinfo = orig_getaddrinfo
        self._responses = {}
        self.record = []

    # get a normalized getaddrinfo argument tuple
    def _frozenbind(self, *args, **kwargs):
        sig = inspect.signature(self._orig_getaddrinfo)
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        frozenbound = bound.args
        assert not bound.kwargs
        return frozenbound

    def set(self, response, *args, **kwargs):
        self._responses[self._frozenbind(*args, **kwargs)] = response

    def getaddrinfo(self, *args, **kwargs):
        bound = self._frozenbind(*args, **kwargs)
        self.record.append(bound)
        if bound in self._responses:
            return self._responses[bound]
        elif bound[-1] & stdlib_socket.AI_NUMERICHOST:
            return self._orig_getaddrinfo(*args, **kwargs)
        else:
            raise RuntimeError("gai called with unexpected arguments {}".format(bound))


@pytest.fixture
def monkeygai(monkeypatch):
    controller = MonkeypatchedGAI(stdlib_socket.getaddrinfo)
    monkeypatch.setattr(stdlib_socket, "getaddrinfo", controller.getaddrinfo)
    return controller


async def test__try_sync():
    with assert_checkpoints():
        async with _try_sync():
            pass

    with assert_checkpoints():
        with pytest.raises(KeyError):
            async with _try_sync():
                raise KeyError

    async with _try_sync():
        raise BlockingIOError

    def _is_ValueError(exc):
        return isinstance(exc, ValueError)

    async with _try_sync(_is_ValueError):
        raise ValueError

    with assert_checkpoints():
        with pytest.raises(BlockingIOError):
            async with _try_sync(_is_ValueError):
                raise BlockingIOError


################################################################
# basic re-exports
################################################################


def test_socket_has_some_reexports():
    assert tsocket.SOL_SOCKET == stdlib_socket.SOL_SOCKET
    assert tsocket.TCP_NODELAY == stdlib_socket.TCP_NODELAY
    assert tsocket.gaierror == stdlib_socket.gaierror
    assert tsocket.ntohs == stdlib_socket.ntohs


################################################################
# name resolution
################################################################


async def test_getaddrinfo(monkeygai):
    def check(got, expected):
        # win32 returns 0 for the proto field
        # musl and glibc have inconsistent handling of the canonical name
        # field (https://github.com/python-trio/trio/issues/1499)
        # Neither field gets used much and there isn't much opportunity for us
        # to mess them up, so we don't bother checking them here
        def interesting_fields(gai_tup):
            # (family, type, proto, canonname, sockaddr)
            family, type,
proto, canonname, sockaddr = gai_tup return (family, type, sockaddr) def filtered(gai_list): return [interesting_fields(gai_tup) for gai_tup in gai_list] assert filtered(got) == filtered(expected) # Simple non-blocking non-error cases, ipv4 and ipv6: with assert_checkpoints(): res = await tsocket.getaddrinfo("127.0.0.1", "12345", type=tsocket.SOCK_STREAM) check( res, [ ( tsocket.AF_INET, # 127.0.0.1 is ipv4 tsocket.SOCK_STREAM, tsocket.IPPROTO_TCP, "", ("127.0.0.1", 12345), ), ], ) with assert_checkpoints(): res = await tsocket.getaddrinfo("::1", "12345", type=tsocket.SOCK_DGRAM) check( res, [ ( tsocket.AF_INET6, tsocket.SOCK_DGRAM, tsocket.IPPROTO_UDP, "", ("::1", 12345, 0, 0), ), ], ) monkeygai.set("x", b"host", "port", family=0, type=0, proto=0, flags=0) with assert_checkpoints(): res = await tsocket.getaddrinfo("host", "port") assert res == "x" assert monkeygai.record[-1] == (b"host", "port", 0, 0, 0, 0) # check raising an error from a non-blocking getaddrinfo with assert_checkpoints(): with pytest.raises(tsocket.gaierror) as excinfo: await tsocket.getaddrinfo("::1", "12345", type=-1) # Linux + glibc, Windows expected_errnos = {tsocket.EAI_SOCKTYPE} # Linux + musl expected_errnos.add(tsocket.EAI_SERVICE) # macOS if hasattr(tsocket, "EAI_BADHINTS"): expected_errnos.add(tsocket.EAI_BADHINTS) assert excinfo.value.errno in expected_errnos # check raising an error from a blocking getaddrinfo (exploits the fact # that monkeygai raises if it gets a non-numeric request it hasn't been # given an answer for) with assert_checkpoints(): with pytest.raises(RuntimeError): await tsocket.getaddrinfo("asdf", "12345") async def test_getnameinfo(): # Trivial test: ni_numeric = stdlib_socket.NI_NUMERICHOST | stdlib_socket.NI_NUMERICSERV with assert_checkpoints(): got = await tsocket.getnameinfo(("127.0.0.1", 1234), ni_numeric) assert got == ("127.0.0.1", "1234") # getnameinfo requires a numeric address as input: with assert_checkpoints(): with pytest.raises(tsocket.gaierror): await tsocket.getnameinfo(("google.com", 80), 0) with assert_checkpoints(): with pytest.raises(tsocket.gaierror): await tsocket.getnameinfo(("localhost", 80), 0) # Blocking call to get expected values: host, service = stdlib_socket.getnameinfo(("127.0.0.1", 80), 0) # Some working calls: got = await tsocket.getnameinfo(("127.0.0.1", 80), 0) assert got == (host, service) got = await tsocket.getnameinfo(("127.0.0.1", 80), tsocket.NI_NUMERICHOST) assert got == ("127.0.0.1", service) got = await tsocket.getnameinfo(("127.0.0.1", 80), tsocket.NI_NUMERICSERV) assert got == (host, "80") ################################################################ # constructors ################################################################ async def test_from_stdlib_socket(): sa, sb = stdlib_socket.socketpair() assert not isinstance(sa, tsocket.SocketType) with sa, sb: ta = tsocket.from_stdlib_socket(sa) assert isinstance(ta, tsocket.SocketType) assert sa.fileno() == ta.fileno() await ta.send(b"x") assert sb.recv(1) == b"x" # rejects other types with pytest.raises(TypeError): tsocket.from_stdlib_socket(1) class MySocket(stdlib_socket.socket): pass with MySocket() as mysock: with pytest.raises(TypeError): tsocket.from_stdlib_socket(mysock) async def test_from_fd(): sa, sb = stdlib_socket.socketpair() ta = tsocket.fromfd(sa.fileno(), sa.family, sa.type, sa.proto) with sa, sb, ta: assert ta.fileno() != sa.fileno() await ta.send(b"x") assert sb.recv(3) == b"x" async def test_socketpair_simple(): async def child(sock): print("sending hello") await 
sock.send(b"h") assert await sock.recv(1) == b"h" a, b = tsocket.socketpair() with a, b: async with _core.open_nursery() as nursery: nursery.start_soon(child, a) nursery.start_soon(child, b) @pytest.mark.skipif(not hasattr(tsocket, "fromshare"), reason="windows only") async def test_fromshare(): a, b = tsocket.socketpair() with a, b: # share with ourselves shared = a.share(os.getpid()) a2 = tsocket.fromshare(shared) with a2: assert a.fileno() != a2.fileno() await a2.send(b"x") assert await b.recv(1) == b"x" async def test_socket(): with tsocket.socket() as s: assert isinstance(s, tsocket.SocketType) assert s.family == tsocket.AF_INET @creates_ipv6 async def test_socket_v6(): with tsocket.socket(tsocket.AF_INET6, tsocket.SOCK_DGRAM) as s: assert isinstance(s, tsocket.SocketType) assert s.family == tsocket.AF_INET6 @pytest.mark.skipif(not _sys.platform == "linux", reason="linux only") async def test_sniff_sockopts(): from socket import AF_INET, AF_INET6, SOCK_DGRAM, SOCK_STREAM # generate the combinations of families/types we're testing: sockets = [] for family in [AF_INET, AF_INET6]: for type in [SOCK_DGRAM, SOCK_STREAM]: sockets.append(stdlib_socket.socket(family, type)) for socket in sockets: # regular Trio socket constructor tsocket_socket = tsocket.socket(fileno=socket.fileno()) # check family / type for correctness: assert tsocket_socket.family == socket.family assert tsocket_socket.type == socket.type tsocket_socket.detach() # fromfd constructor tsocket_from_fd = tsocket.fromfd(socket.fileno(), AF_INET, SOCK_STREAM) # check family / type for correctness: assert tsocket_from_fd.family == socket.family assert tsocket_from_fd.type == socket.type tsocket_from_fd.close() socket.close() ################################################################ # _SocketType ################################################################ async def test_SocketType_basics(): sock = tsocket.socket() with sock as cm_enter_value: assert cm_enter_value is sock assert isinstance(sock.fileno(), int) assert not sock.get_inheritable() sock.set_inheritable(True) assert sock.get_inheritable() sock.setsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NODELAY, False) assert not sock.getsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NODELAY) sock.setsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NODELAY, True) assert sock.getsockopt(tsocket.IPPROTO_TCP, tsocket.TCP_NODELAY) # closed sockets have fileno() == -1 assert sock.fileno() == -1 # smoke test repr(sock) # detach with tsocket.socket() as sock: fd = sock.fileno() assert sock.detach() == fd assert sock.fileno() == -1 # close sock = tsocket.socket() assert sock.fileno() >= 0 sock.close() assert sock.fileno() == -1 # share was tested above together with fromshare # check __dir__ assert "family" in dir(sock) assert "recv" in dir(sock) assert "setsockopt" in dir(sock) # our __getattr__ handles unknown names with pytest.raises(AttributeError): sock.asdf # type family proto stdlib_sock = stdlib_socket.socket() sock = tsocket.from_stdlib_socket(stdlib_sock) assert sock.type == _tsocket.real_socket_type(stdlib_sock.type) assert sock.family == stdlib_sock.family assert sock.proto == stdlib_sock.proto sock.close() async def test_SocketType_dup(): a, b = tsocket.socketpair() with a, b: a2 = a.dup() with a2: assert isinstance(a2, tsocket.SocketType) assert a2.fileno() != a.fileno() a.close() await a2.send(b"x") assert await b.recv(1) == b"x" async def test_SocketType_shutdown(): a, b = tsocket.socketpair() with a, b: await a.send(b"x") assert await b.recv(1) == b"x" assert not 
a.did_shutdown_SHUT_WR assert not b.did_shutdown_SHUT_WR a.shutdown(tsocket.SHUT_WR) assert a.did_shutdown_SHUT_WR assert not b.did_shutdown_SHUT_WR assert await b.recv(1) == b"" await b.send(b"y") assert await a.recv(1) == b"y" a, b = tsocket.socketpair() with a, b: assert not a.did_shutdown_SHUT_WR a.shutdown(tsocket.SHUT_RD) assert not a.did_shutdown_SHUT_WR a, b = tsocket.socketpair() with a, b: assert not a.did_shutdown_SHUT_WR a.shutdown(tsocket.SHUT_RDWR) assert a.did_shutdown_SHUT_WR @pytest.mark.parametrize( "address, socket_type", [ ("127.0.0.1", tsocket.AF_INET), pytest.param("::1", tsocket.AF_INET6, marks=binds_ipv6), ], ) async def test_SocketType_simple_server(address, socket_type): # listen, bind, accept, connect, getpeername, getsockname listener = tsocket.socket(socket_type) client = tsocket.socket(socket_type) with listener, client: await listener.bind((address, 0)) listener.listen(20) addr = listener.getsockname()[:2] async with _core.open_nursery() as nursery: nursery.start_soon(client.connect, addr) server, client_addr = await listener.accept() with server: assert client_addr == server.getpeername() == client.getsockname() await server.send(b"x") assert await client.recv(1) == b"x" async def test_SocketType_is_readable(): a, b = tsocket.socketpair() with a, b: assert not a.is_readable() await b.send(b"x") await _core.wait_readable(a) assert a.is_readable() assert await a.recv(1) == b"x" assert not a.is_readable() # On some macOS systems, getaddrinfo likes to return V4-mapped addresses even # when we *don't* pass AI_V4MAPPED. # https://github.com/python-trio/trio/issues/580 def gai_without_v4mapped_is_buggy(): # pragma: no cover try: stdlib_socket.getaddrinfo("1.2.3.4", 0, family=stdlib_socket.AF_INET6) except stdlib_socket.gaierror: return False else: return True @attr.s class Addresses: bind_all = attr.ib() localhost = attr.ib() arbitrary = attr.ib() broadcast = attr.ib() # Direct thorough tests of the implicit resolver helpers @pytest.mark.parametrize( "socket_type, addrs", [ ( tsocket.AF_INET, Addresses( bind_all="0.0.0.0", localhost="127.0.0.1", arbitrary="1.2.3.4", broadcast="255.255.255.255", ), ), pytest.param( tsocket.AF_INET6, Addresses( bind_all="::", localhost="::1", arbitrary="1::2", broadcast="::ffff:255.255.255.255", ), marks=creates_ipv6, ), ], ) async def test_SocketType_resolve(socket_type, addrs): v6 = socket_type == tsocket.AF_INET6 def pad(addr): if v6: while len(addr) < 4: addr += (0,) return addr def assert_eq(actual, expected): assert pad(expected) == pad(actual) with tsocket.socket(family=socket_type) as sock: # For some reason the stdlib special-cases "" to pass NULL to # getaddrinfo They also error out on None, but whatever, None is much # more consistent, so we accept it too. 
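        # (Editor's illustration of that stdlib behavior, hedged -- the exact
        # return values vary by platform, but on a typical IPv4 host:)
        #
        #     import socket
        #     socket.getaddrinfo(None, 80, family=socket.AF_INET,
        #                        type=socket.SOCK_STREAM,
        #                        flags=socket.AI_PASSIVE)
        #     # -> [(AF_INET, SOCK_STREAM, 6, '', ('0.0.0.0', 80))]
        #     # without AI_PASSIVE, a None host resolves to localhost instead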
for null in [None, ""]: got = await sock._resolve_local_address_nocp((null, 80)) assert_eq(got, (addrs.bind_all, 80)) got = await sock._resolve_remote_address_nocp((null, 80)) assert_eq(got, (addrs.localhost, 80)) # AI_PASSIVE only affects the wildcard address, so for everything else # _resolve_local_address_nocp and _resolve_remote_address_nocp should # work the same: for resolver in ["_resolve_local_address_nocp", "_resolve_remote_address_nocp"]: async def res(*args): return await getattr(sock, resolver)(*args) assert_eq(await res((addrs.arbitrary, "http")), (addrs.arbitrary, 80)) if v6: # Check handling of different length ipv6 address tuples assert_eq(await res(("1::2", 80)), ("1::2", 80, 0, 0)) assert_eq(await res(("1::2", 80, 0)), ("1::2", 80, 0, 0)) assert_eq(await res(("1::2", 80, 0, 0)), ("1::2", 80, 0, 0)) # Non-zero flowinfo/scopeid get passed through assert_eq(await res(("1::2", 80, 1)), ("1::2", 80, 1, 0)) assert_eq(await res(("1::2", 80, 1, 2)), ("1::2", 80, 1, 2)) # And again with a string port, as a trick to avoid the # already-resolved address fastpath and make sure we call # getaddrinfo assert_eq(await res(("1::2", "80")), ("1::2", 80, 0, 0)) assert_eq(await res(("1::2", "80", 0)), ("1::2", 80, 0, 0)) assert_eq(await res(("1::2", "80", 0, 0)), ("1::2", 80, 0, 0)) assert_eq(await res(("1::2", "80", 1)), ("1::2", 80, 1, 0)) assert_eq(await res(("1::2", "80", 1, 2)), ("1::2", 80, 1, 2)) # V4 mapped addresses resolved if V6ONLY is False sock.setsockopt(tsocket.IPPROTO_IPV6, tsocket.IPV6_V6ONLY, False) assert_eq(await res(("1.2.3.4", "http")), ("::ffff:1.2.3.4", 80)) # Check the special case, because why not assert_eq(await res(("", 123)), (addrs.broadcast, 123)) # But not if it's true (at least on systems where getaddrinfo works # correctly) if v6 and not gai_without_v4mapped_is_buggy(): sock.setsockopt(tsocket.IPPROTO_IPV6, tsocket.IPV6_V6ONLY, True) with pytest.raises(tsocket.gaierror) as excinfo: await res(("1.2.3.4", 80)) # Windows, macOS expected_errnos = {tsocket.EAI_NONAME} # Linux if hasattr(tsocket, "EAI_ADDRFAMILY"): expected_errnos.add(tsocket.EAI_ADDRFAMILY) assert excinfo.value.errno in expected_errnos # A family where we know nothing about the addresses, so should just # pass them through. This should work on Linux, which is enough to # smoke test the basic functionality... try: netlink_sock = tsocket.socket( family=tsocket.AF_NETLINK, type=tsocket.SOCK_DGRAM ) except (AttributeError, OSError): pass else: assert await getattr(netlink_sock, resolver)("asdf") == "asdf" netlink_sock.close() with pytest.raises(ValueError): await res("1.2.3.4") with pytest.raises(ValueError): await res(("1.2.3.4",)) with pytest.raises(ValueError): if v6: await res(("1.2.3.4", 80, 0, 0, 0)) else: await res(("1.2.3.4", 80, 0, 0)) async def test_SocketType_unresolved_names(): with tsocket.socket() as sock: await sock.bind(("localhost", 0)) assert sock.getsockname()[0] == "127.0.0.1" sock.listen(10) with tsocket.socket() as sock2: await sock2.connect(("localhost", sock.getsockname()[1])) assert sock2.getpeername() == sock.getsockname() # check gaierror propagates out with tsocket.socket() as sock: with pytest.raises(tsocket.gaierror): # definitely not a valid request await sock.bind(("1.2:3", -1)) # This tests all the complicated paths through _nonblocking_helper, using recv # as a stand-in for all the methods that use _nonblocking_helper. 
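# (Editor's sketch, an assumption about shape rather than trio's literal
# internals: the retry loop that _nonblocking_helper implements is roughly the
# classic pattern below, which is why the test needs a "woken up, but another
# task stole the data, go back to sleep" path:)
#
#     while True:
#         try:
#             return sock.recv(bufsize)            # the raw non-blocking call
#         except BlockingIOError:
#             await _core.wait_readable(sock)      # park until readable, retry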
async def test_SocketType_non_blocking_paths(): a, b = stdlib_socket.socketpair() with a, b: ta = tsocket.from_stdlib_socket(a) b.setblocking(False) # cancel before even calling b.send(b"1") with _core.CancelScope() as cscope: cscope.cancel() with assert_checkpoints(): with pytest.raises(_core.Cancelled): await ta.recv(10) # immediate success (also checks that the previous attempt didn't # actually read anything) with assert_checkpoints(): await ta.recv(10) == b"1" # immediate failure with assert_checkpoints(): with pytest.raises(TypeError): await ta.recv("haha") # block then succeed async def do_successful_blocking_recv(): with assert_checkpoints(): assert await ta.recv(10) == b"2" async with _core.open_nursery() as nursery: nursery.start_soon(do_successful_blocking_recv) await wait_all_tasks_blocked() b.send(b"2") # block then cancelled async def do_cancelled_blocking_recv(): with assert_checkpoints(): with pytest.raises(_core.Cancelled): await ta.recv(10) async with _core.open_nursery() as nursery: nursery.start_soon(do_cancelled_blocking_recv) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() # Okay, here's the trickiest one: we want to exercise the path where # the task is signaled to wake, goes to recv, but then the recv fails, # so it has to go back to sleep and try again. Strategy: have two # tasks waiting on two sockets (to work around the rule against having # two tasks waiting on the same socket), wake them both up at the same # time, and whichever one runs first "steals" the data from the # other: tb = tsocket.from_stdlib_socket(b) async def t1(): with assert_checkpoints(): assert await ta.recv(1) == b"a" with assert_checkpoints(): assert await tb.recv(1) == b"b" async def t2(): with assert_checkpoints(): assert await tb.recv(1) == b"b" with assert_checkpoints(): assert await ta.recv(1) == b"a" async with _core.open_nursery() as nursery: nursery.start_soon(t1) nursery.start_soon(t2) await wait_all_tasks_blocked() a.send(b"b") b.send(b"a") await wait_all_tasks_blocked() a.send(b"b") b.send(b"a") # This tests the complicated paths through connect async def test_SocketType_connect_paths(): with tsocket.socket() as sock: with pytest.raises(ValueError): # Should be a tuple await sock.connect("localhost") # cancelled before we start with tsocket.socket() as sock: with _core.CancelScope() as cancel_scope: cancel_scope.cancel() with pytest.raises(_core.Cancelled): await sock.connect(("127.0.0.1", 80)) # Cancelled in between the connect() call and the connect completing with _core.CancelScope() as cancel_scope: with tsocket.socket() as sock, tsocket.socket() as listener: await listener.bind(("127.0.0.1", 0)) listener.listen() # Swap in our weird subclass under the trio.socket._SocketType's # nose -- and then swap it back out again before we hit # wait_socket_writable, which insists on a real socket. class CancelSocket(stdlib_socket.socket): def connect(self, *args, **kwargs): cancel_scope.cancel() sock._sock = stdlib_socket.fromfd( self.detach(), self.family, self.type ) sock._sock.connect(*args, **kwargs) # If connect *doesn't* raise, then pretend it did raise BlockingIOError # pragma: no cover sock._sock.close() sock._sock = CancelSocket() with assert_checkpoints(): with pytest.raises(_core.Cancelled): await sock.connect(listener.getsockname()) assert sock.fileno() == -1 # Failed connect (hopefully after raising BlockingIOError) with tsocket.socket() as sock: with pytest.raises(OSError): # TCP port 2 is not assigned. Pretty sure nothing will be # listening there. 
(We used to bind a port and then *not* call # listen() to ensure nothing was listening there, but it turns # out on macOS if you do this it takes 30 seconds for the # connect to fail. Really. Also if you use a non-routable # address. This way fails instantly though. As long as nothing # is listening on port 2.) await sock.connect(("127.0.0.1", 2)) async def test_resolve_remote_address_exception_closes_socket(): # Here we are testing issue 247, any cancellation will leave the socket closed with _core.CancelScope() as cancel_scope: with tsocket.socket() as sock: async def _resolve_remote_address_nocp(self, *args, **kwargs): cancel_scope.cancel() await _core.checkpoint() sock._resolve_remote_address_nocp = _resolve_remote_address_nocp with assert_checkpoints(): with pytest.raises(_core.Cancelled): await sock.connect("") assert sock.fileno() == -1 async def test_send_recv_variants(): a, b = tsocket.socketpair() with a, b: # recv, including with flags assert await a.send(b"x") == 1 assert await b.recv(10, tsocket.MSG_PEEK) == b"x" assert await b.recv(10) == b"x" # recv_into await a.send(b"x") buf = bytearray(10) await b.recv_into(buf) assert buf == b"x" + b"\x00" * 9 if hasattr(a, "sendmsg"): assert await a.sendmsg([b"xxx"], []) == 3 assert await b.recv(10) == b"xxx" a = tsocket.socket(type=tsocket.SOCK_DGRAM) b = tsocket.socket(type=tsocket.SOCK_DGRAM) with a, b: await a.bind(("127.0.0.1", 0)) await b.bind(("127.0.0.1", 0)) targets = [b.getsockname(), ("localhost", b.getsockname()[1])] # recvfrom + sendto, with and without names for target in targets: assert await a.sendto(b"xxx", target) == 3 (data, addr) = await b.recvfrom(10) assert data == b"xxx" assert addr == a.getsockname() # sendto + flags # # I can't find any flags that send() accepts... on Linux at least # passing MSG_MORE to send_some on a connected UDP socket seems to # just be ignored. # # But there's no MSG_MORE on Windows or macOS. I guess send_some flags # are really not very useful, but at least this tests them a bit. 
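        # (Editor's note: the flags-in-the-middle convention used below mirrors
        # the stdlib, where sendto() accepts either form:)
        #
        #     sock.sendto(data, address)
        #     sock.sendto(data, flags, address)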
if hasattr(tsocket, "MSG_MORE"): await a.sendto(b"xxx", tsocket.MSG_MORE, b.getsockname()) await a.sendto(b"yyy", tsocket.MSG_MORE, b.getsockname()) await a.sendto(b"zzz", b.getsockname()) (data, addr) = await b.recvfrom(10) assert data == b"xxxyyyzzz" assert addr == a.getsockname() # recvfrom_into assert await a.sendto(b"xxx", b.getsockname()) == 3 buf = bytearray(10) (nbytes, addr) = await b.recvfrom_into(buf) assert nbytes == 3 assert buf == b"xxx" + b"\x00" * 7 assert addr == a.getsockname() if hasattr(b, "recvmsg"): assert await a.sendto(b"xxx", b.getsockname()) == 3 (data, ancdata, msg_flags, addr) = await b.recvmsg(10) assert data == b"xxx" assert ancdata == [] assert msg_flags == 0 assert addr == a.getsockname() if hasattr(b, "recvmsg_into"): assert await a.sendto(b"xyzw", b.getsockname()) == 4 buf1 = bytearray(2) buf2 = bytearray(3) ret = await b.recvmsg_into([buf1, buf2]) (nbytes, ancdata, msg_flags, addr) = ret assert nbytes == 4 assert buf1 == b"xy" assert buf2 == b"zw" + b"\x00" assert ancdata == [] assert msg_flags == 0 assert addr == a.getsockname() if hasattr(a, "sendmsg"): for target in targets: assert await a.sendmsg([b"x", b"yz"], [], 0, target) == 3 assert await b.recvfrom(10) == (b"xyz", a.getsockname()) a = tsocket.socket(type=tsocket.SOCK_DGRAM) b = tsocket.socket(type=tsocket.SOCK_DGRAM) with a, b: await b.bind(("127.0.0.1", 0)) await a.connect(b.getsockname()) # send on a connected udp socket; each call creates a separate # datagram await a.send(b"xxx") await a.send(b"yyy") assert await b.recv(10) == b"xxx" assert await b.recv(10) == b"yyy" async def test_idna(monkeygai): # This is the encoding for "faß.de", which uses one of the characters that # IDNA 2003 handles incorrectly: monkeygai.set("ok faß.de", b"xn--fa-hia.de", 80) monkeygai.set("ok ::1", "::1", 80, flags=_NUMERIC_ONLY) monkeygai.set("ok ::1", b"::1", 80, flags=_NUMERIC_ONLY) # Some things that should not reach the underlying socket.getaddrinfo: monkeygai.set("bad", "fass.de", 80) # We always call socket.getaddrinfo with bytes objects: monkeygai.set("bad", "xn--fa-hia.de", 80) assert "ok ::1" == await tsocket.getaddrinfo("::1", 80) assert "ok ::1" == await tsocket.getaddrinfo(b"::1", 80) assert "ok faß.de" == await tsocket.getaddrinfo("faß.de", 80) assert "ok faß.de" == await tsocket.getaddrinfo("xn--fa-hia.de", 80) assert "ok faß.de" == await tsocket.getaddrinfo(b"xn--fa-hia.de", 80) async def test_getprotobyname(): # These are the constants used in IP header fields, so the numeric values # had *better* be stable across systems... assert await tsocket.getprotobyname("udp") == 17 assert await tsocket.getprotobyname("tcp") == 6 async def test_custom_hostname_resolver(monkeygai): class CustomResolver: async def getaddrinfo(self, host, port, family, type, proto, flags): return ("custom_gai", host, port, family, type, proto, flags) async def getnameinfo(self, sockaddr, flags): return ("custom_gni", sockaddr, flags) cr = CustomResolver() assert tsocket.set_custom_hostname_resolver(cr) is None # Check that the arguments are all getting passed through. # We have to use valid calls to avoid making the underlying system # getaddrinfo cranky when it's used for NUMERIC checks. 
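    # (Editor's sketch of the protocol being exercised: a custom resolver is
    # any object with these two async methods -- compare
    # trio.abc.HostnameResolver. The class below is illustrative only:)
    #
    #     class NullResolver:
    #         async def getaddrinfo(self, host, port, family=0, type=0,
    #                               proto=0, flags=0):
    #             return []
    #
    #         async def getnameinfo(self, sockaddr, flags):
    #             raise NotImplementedError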
for vals in [ (tsocket.AF_INET, 0, 0, 0), (0, tsocket.SOCK_STREAM, 0, 0), (0, 0, tsocket.IPPROTO_TCP, 0), (0, 0, 0, tsocket.AI_CANONNAME), ]: assert await tsocket.getaddrinfo("localhost", "foo", *vals) == ( "custom_gai", b"localhost", "foo", *vals, ) # IDNA encoding is handled before calling the special object got = await tsocket.getaddrinfo("föö", "foo") expected = ("custom_gai", b"xn--f-1gaa", "foo", 0, 0, 0, 0) assert got == expected assert await tsocket.getnameinfo("a", 0) == ("custom_gni", "a", 0) # We can set it back to None assert tsocket.set_custom_hostname_resolver(None) is cr # And now Trio switches back to calling socket.getaddrinfo (specifically # our monkeypatched version of socket.getaddrinfo) monkeygai.set("x", b"host", "port", family=0, type=0, proto=0, flags=0) assert await tsocket.getaddrinfo("host", "port") == "x" async def test_custom_socket_factory(): class CustomSocketFactory: def socket(self, family, type, proto): return ("hi", family, type, proto) csf = CustomSocketFactory() assert tsocket.set_custom_socket_factory(csf) is None assert tsocket.socket() == ("hi", tsocket.AF_INET, tsocket.SOCK_STREAM, 0) assert tsocket.socket(1, 2, 3) == ("hi", 1, 2, 3) # socket with fileno= doesn't call our custom method fd = stdlib_socket.socket().detach() wrapped = tsocket.socket(fileno=fd) assert hasattr(wrapped, "bind") wrapped.close() # Likewise for socketpair a, b = tsocket.socketpair() with a, b: assert hasattr(a, "bind") assert hasattr(b, "bind") assert tsocket.set_custom_socket_factory(None) is csf async def test_SocketType_is_abstract(): with pytest.raises(TypeError): tsocket.SocketType() @pytest.mark.skipif(not hasattr(tsocket, "AF_UNIX"), reason="no unix domain sockets") async def test_unix_domain_socket(): # Bind has a special branch to use a thread, since it has to do filesystem # traversal. Maybe connect should too? Not sure. async def check_AF_UNIX(path): with tsocket.socket(family=tsocket.AF_UNIX) as lsock: await lsock.bind(path) lsock.listen(10) with tsocket.socket(family=tsocket.AF_UNIX) as csock: await csock.connect(path) ssock, _ = await lsock.accept() with ssock: await csock.send(b"x") assert await ssock.recv(1) == b"x" # Can't use tmpdir fixture, because we can exceed the maximum AF_UNIX path # length on macOS. 
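    # (Editor's note with the concrete numbers: the limit in question is
    # sun_path in struct sockaddr_un, roughly 104 bytes on macOS/BSD and 108
    # on Linux, so a deeply nested tmpdir can push the path over it:)
    #
    #     assert len(os.fsencode(path)) <= 104  # roughly what bind() requires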
    with tempfile.TemporaryDirectory() as tmpdir:
        path = "{}/sock".format(tmpdir)
        await check_AF_UNIX(path)

    try:
        cookie = os.urandom(20).hex().encode("ascii")
        await check_AF_UNIX(b"\x00trio-test-" + cookie)
    except FileNotFoundError:
        # macOS doesn't support abstract filenames with the leading NUL byte
        pass


async def test_interrupted_by_close():
    a_stdlib, b_stdlib = stdlib_socket.socketpair()
    with a_stdlib, b_stdlib:
        a_stdlib.setblocking(False)

        data = b"x" * 99999

        try:
            while True:
                a_stdlib.send(data)
        except BlockingIOError:
            pass

        a = tsocket.from_stdlib_socket(a_stdlib)

        async def sender():
            with pytest.raises(_core.ClosedResourceError):
                await a.send(data)

        async def receiver():
            with pytest.raises(_core.ClosedResourceError):
                await a.recv(1)

        async with _core.open_nursery() as nursery:
            nursery.start_soon(sender)
            nursery.start_soon(receiver)
            await wait_all_tasks_blocked()
            a.close()


async def test_many_sockets():
    total = 5000  # Must be more than MAX_AFD_GROUP_SIZE
    sockets = []
    for x in range(total // 2):
        try:
            a, b = stdlib_socket.socketpair()
        except OSError as e:  # pragma: no cover
            assert e.errno in (errno.EMFILE, errno.ENFILE)
            break
        sockets += [a, b]
    async with _core.open_nursery() as nursery:
        for s in sockets:
            nursery.start_soon(_core.wait_readable, s)
        await _core.wait_all_tasks_blocked()
        nursery.cancel_scope.cancel()
    for sock in sockets:
        sock.close()
    if x != total // 2 - 1:  # pragma: no cover
        print(f"Unable to open more than {(x-1)*2} sockets.")


# ==== file: trio-0.19.0/trio/tests/test_ssl.py ====

import re
import sys

import pytest

import threading
import socket as stdlib_socket
import ssl
from contextlib import contextmanager
from functools import partial

from OpenSSL import SSL
import trustme
from async_generator import asynccontextmanager

import trio
from .. import _core
from .._highlevel_socket import SocketStream, SocketListener
from .._highlevel_generic import aclose_forcefully
from .._core import ClosedResourceError, BrokenResourceError
from .._highlevel_open_tcp_stream import open_tcp_stream
from .. import socket as tsocket
from .._ssl import SSLStream, SSLListener, NeedHandshakeError
from .._util import ConflictDetector
from .._core.tests.tutil import slow

from ..testing import (
    assert_checkpoints,
    Sequencer,
    memory_stream_pair,
    lockstep_stream_pair,
    check_two_way_stream,
)

# We have two different kinds of echo server fixtures we use for testing. The
# first is a real server written using the stdlib ssl module and blocking
# sockets. It runs in a thread and we talk to it over a real socketpair(), to
# validate interoperability in a semi-realistic setting.
#
# The second is a very weird virtual echo server that lives inside a custom
# Stream class. It lives entirely inside the Python object space; there are no
# operating system calls in it at all. No threads, no I/O, nothing. Its
# 'send_all' call takes encrypted data from a client and feeds it directly
# into the server-side TLS state engine to decrypt, then takes that data,
# feeds it back through to get the encrypted response, and returns it from
# 'receive_some'. This gives us full control and reproducibility. This server
# is written using PyOpenSSL, so that we can trigger renegotiations on demand.
# It also allows us to insert random (virtual) delays, to really exercise all
# the weird paths in SSLStream's state engine.
#
# Both present a certificate for "trio-test-1.example.org".
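# (Editor's illustration, not part of the original module: how the trustme
# pieces below fit together. `_demo_pki` is a hypothetical helper and is never
# called by the tests.)
def _demo_pki():
    demo_ca = trustme.CA()  # throwaway in-memory CA
    demo_cert = demo_ca.issue_server_cert("demo.example.org")
    server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    demo_cert.configure_cert(server_ctx)  # server presents the leaf cert
    client_ctx = ssl.create_default_context()
    demo_ca.configure_trust(client_ctx)  # client trusts only this CA
    return server_ctx, client_ctx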
TRIO_TEST_CA = trustme.CA()
TRIO_TEST_1_CERT = TRIO_TEST_CA.issue_server_cert("trio-test-1.example.org")

SERVER_CTX = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
TRIO_TEST_1_CERT.configure_cert(SERVER_CTX)

# TLS 1.3 has a lot of changes from previous versions. So we want to run tests
# with both TLS 1.3, and TLS 1.2.
if hasattr(ssl, "OP_NO_TLSv1_3"):
    # "tls13" means that we're willing to negotiate TLS 1.3. Usually that's
    # what will happen, but the renegotiation tests explicitly force a
    # downgrade on the server side. "tls12" means we refuse to negotiate TLS
    # 1.3, so we'll almost certainly use TLS 1.2.
    client_ctx_params = ["tls13", "tls12"]
else:
    # We can't control whether we use TLS 1.3, so we just have to accept
    # whatever openssl wants to use. This might be TLS 1.2 (if openssl is
    # old), or it might be TLS 1.3 (if openssl is new, but our python version
    # is too old to expose the configuration knobs).
    client_ctx_params = ["default"]


@pytest.fixture(scope="module", params=client_ctx_params)
def client_ctx(request):
    ctx = ssl.create_default_context()
    TRIO_TEST_CA.configure_trust(ctx)
    if request.param in ["default", "tls13"]:
        return ctx
    elif request.param == "tls12":
        ctx.options |= ssl.OP_NO_TLSv1_3
        return ctx
    else:  # pragma: no cover
        assert False


# The blocking socket server.
def ssl_echo_serve_sync(sock, *, expect_fail=False):
    try:
        wrapped = SERVER_CTX.wrap_socket(
            sock, server_side=True, suppress_ragged_eofs=False
        )
        with wrapped:
            wrapped.do_handshake()
            while True:
                data = wrapped.recv(4096)
                if not data:
                    # other side has initiated a graceful shutdown; we try to
                    # respond in kind but it's legal for them to have already
                    # gone away.
                    exceptions = (BrokenPipeError, ssl.SSLZeroReturnError)
                    try:
                        wrapped.unwrap()
                    except exceptions:
                        pass
                    except ssl.SSLWantWriteError:  # pragma: no cover
                        # Under unclear conditions, CPython sometimes raises
                        # SSLWantWriteError here. This is a bug (bpo-32219),
                        # but it's not our bug. Christian Heimes thinks
                        # it's fixed in 'recent' CPython versions so we fail
                        # the test for those and ignore it for earlier
                        # versions.
                        if (
                            sys.implementation.name != "cpython"
                            or sys.version_info >= (3, 8)
                        ):
                            pytest.fail(
                                "still an issue on recent python versions "
                                "add a comment to "
                                "https://bugs.python.org/issue32219"
                            )
                    return
                wrapped.sendall(data)
    # This is an obscure workaround for an openssl bug. In server mode, in
    # some versions, openssl sends some extra data at the end of do_handshake
    # that it shouldn't send. Normally this is harmless, but, if the other
    # side shuts down the connection before it reads that data, it might cause
    # the OS to report an ECONNRESET or even ECONNABORTED (which is just wrong,
    # since ECONNABORTED is supposed to mean that connect() failed, but what
    # can you do). In this case the other side did nothing wrong, but there's
    # no way to recover, so we let it pass, and just cross our fingers it's not
    # hiding any (other) real bugs. For more details see:
    #
    #   https://github.com/python-trio/trio/issues/1293
    #
    # Also, this happens frequently but non-deterministically, so we have to
    # 'no cover' it to avoid coverage flapping.
except (ConnectionResetError, ConnectionAbortedError): # pragma: no cover return except Exception as exc: if expect_fail: print("ssl_echo_serve_sync got error as expected:", exc) else: # pragma: no cover print("ssl_echo_serve_sync got unexpected error:", exc) raise else: if expect_fail: # pragma: no cover raise RuntimeError("failed to fail?") finally: sock.close() # Fixture that gives a raw socket connected to a trio-test-1 echo server # (running in a thread). Useful for testing making connections with different # SSLContexts. @asynccontextmanager async def ssl_echo_server_raw(**kwargs): a, b = stdlib_socket.socketpair() async with trio.open_nursery() as nursery: # Exiting the 'with a, b' context manager closes the sockets, which # causes the thread to exit (possibly with an error), which allows the # nursery context manager to exit too. with a, b: nursery.start_soon( trio.to_thread.run_sync, partial(ssl_echo_serve_sync, b, **kwargs) ) yield SocketStream(tsocket.from_stdlib_socket(a)) # Fixture that gives a properly set up SSLStream connected to a trio-test-1 # echo server (running in a thread) @asynccontextmanager async def ssl_echo_server(client_ctx, **kwargs): async with ssl_echo_server_raw(**kwargs) as sock: yield SSLStream(sock, client_ctx, server_hostname="trio-test-1.example.org") # The weird in-memory server ... thing. # Doesn't inherit from Stream because I left out the methods that we don't # actually need. class PyOpenSSLEchoStream: def __init__(self, sleeper=None): ctx = SSL.Context(SSL.SSLv23_METHOD) # TLS 1.3 removes renegotiation support. Which is great for them, but # we still have to support versions before that, and that means we # need to test renegotiation support, which means we need to force this # to use a lower version where this test server can trigger # renegotiations. Of course TLS 1.3 support isn't released yet, but # I'm told that this will work once it is. (And once it is we can # remove the pragma: no cover too.) Alternatively, we could switch to # using TLSv1_2_METHOD. # # Discussion: https://github.com/pyca/pyopenssl/issues/624 # This is the right way, but we can't use it until this PR is in a # released: # https://github.com/pyca/pyopenssl/pull/861 # # if hasattr(SSL, "OP_NO_TLSv1_3"): # ctx.set_options(SSL.OP_NO_TLSv1_3) # # Fortunately pyopenssl uses cryptography under the hood, so we can be # confident that they're using the same version of openssl from cryptography.hazmat.bindings.openssl.binding import Binding b = Binding() if hasattr(b.lib, "SSL_OP_NO_TLSv1_3"): ctx.set_options(b.lib.SSL_OP_NO_TLSv1_3) # Unfortunately there's currently no way to say "use 1.3 or worse", we # can only disable specific versions. And if the two sides start # negotiating 1.4 at some point in the future, it *might* mean that # our tests silently stop working properly. 
So the next line is a # tripwire to remind us we need to revisit this stuff in 5 years or # whatever when the next TLS version is released: assert not hasattr(SSL, "OP_NO_TLSv1_4") TRIO_TEST_1_CERT.configure_cert(ctx) self._conn = SSL.Connection(ctx, None) self._conn.set_accept_state() self._lot = _core.ParkingLot() self._pending_cleartext = bytearray() self._send_all_conflict_detector = ConflictDetector( "simultaneous calls to PyOpenSSLEchoStream.send_all" ) self._receive_some_conflict_detector = ConflictDetector( "simultaneous calls to PyOpenSSLEchoStream.receive_some" ) if sleeper is None: async def no_op_sleeper(_): return self.sleeper = no_op_sleeper else: self.sleeper = sleeper async def aclose(self): self._conn.bio_shutdown() def renegotiate_pending(self): return self._conn.renegotiate_pending() def renegotiate(self): # Returns false if a renegotiation is already in progress, meaning # nothing happens. assert self._conn.renegotiate() async def wait_send_all_might_not_block(self): with self._send_all_conflict_detector: await _core.checkpoint() await _core.checkpoint() await self.sleeper("wait_send_all_might_not_block") async def send_all(self, data): print(" --> transport_stream.send_all") with self._send_all_conflict_detector: await _core.checkpoint() await _core.checkpoint() await self.sleeper("send_all") self._conn.bio_write(data) while True: await self.sleeper("send_all") try: data = self._conn.recv(1) except SSL.ZeroReturnError: self._conn.shutdown() print("renegotiations:", self._conn.total_renegotiations()) break except SSL.WantReadError: break else: self._pending_cleartext += data self._lot.unpark_all() await self.sleeper("send_all") print(" <-- transport_stream.send_all finished") async def receive_some(self, nbytes=None): print(" --> transport_stream.receive_some") if nbytes is None: nbytes = 65536 # arbitrary with self._receive_some_conflict_detector: try: await _core.checkpoint() await _core.checkpoint() while True: await self.sleeper("receive_some") try: return self._conn.bio_read(nbytes) except SSL.WantReadError: # No data in our ciphertext buffer; try to generate # some. if self._pending_cleartext: # We have some cleartext; maybe we can encrypt it # and then return it. print(" trying", self._pending_cleartext) try: # PyOpenSSL bug: doesn't accept bytearray # https://github.com/pyca/pyopenssl/issues/621 next_byte = self._pending_cleartext[0:1] self._conn.send(bytes(next_byte)) # Apparently this next bit never gets hit in the # test suite, but it's not an interesting omission # so let's pragma it. except SSL.WantReadError: # pragma: no cover # We didn't manage to send the cleartext (and # in particular we better leave it there to # try again, due to openssl's retry # semantics), but it's possible we pushed a # renegotiation forward and *now* we have data # to send. try: return self._conn.bio_read(nbytes) except SSL.WantReadError: # Nope. We're just going to have to wait # for someone to call send_all() to give # use more data. print("parking (a)") await self._lot.park() else: # We successfully sent that byte, so we don't # have to again. 
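# ---------------------------------------------------------------------------
# (illustrative aside, not part of the original test file)
# The ciphertext-pumping dance PyOpenSSLEchoStream does by hand is the same
# pattern the stdlib supports directly: ssl.SSLObject over a pair of
# ssl.MemoryBIO objects gives you an in-memory TLS engine whose bytes you
# shuttle yourself. A minimal sketch of a two-engine handshake, reusing this
# module's TRIO_TEST_CA / SERVER_CTX globals:
def _memory_bio_handshake_sketch():
    client_ctx = ssl.create_default_context()
    TRIO_TEST_CA.configure_trust(client_ctx)

    c_in, c_out = ssl.MemoryBIO(), ssl.MemoryBIO()
    s_in, s_out = ssl.MemoryBIO(), ssl.MemoryBIO()
    client = client_ctx.wrap_bio(
        c_in, c_out, server_hostname="trio-test-1.example.org"
    )
    server = SERVER_CTX.wrap_bio(s_in, s_out, server_side=True)

    sides = [
        # (engine, the peer's incoming BIO, our own outgoing BIO)
        (client, s_in, c_out),
        (server, c_in, s_out),
    ]
    finished = [False, False]
    while not all(finished):
        for i, (obj, peer_in, own_out) in enumerate(sides):
            if not finished[i]:
                try:
                    obj.do_handshake()
                    finished[i] = True
                except ssl.SSLWantReadError:
                    # This side needs more bytes from its peer first.
                    pass
            # Shuttle whatever ciphertext this side produced to its peer.
            data = own_out.read()
            if data:
                peer_in.write(data)
    return client, server
# ---------------------------------------------------------------------------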
del self._pending_cleartext[0:1] else: # no pending cleartext; nothing to do but wait for # someone to call send_all print("parking (b)") await self._lot.park() finally: await self.sleeper("receive_some") print(" <-- transport_stream.receive_some finished") async def test_PyOpenSSLEchoStream_gives_resource_busy_errors(): # Make sure that PyOpenSSLEchoStream complains if two tasks call send_all # at the same time, or ditto for receive_some. The tricky cases where SSLStream # might accidentally do this are during renegotiation, which we test using # PyOpenSSLEchoStream, so this makes sure that if we do have a bug then # PyOpenSSLEchoStream will notice and complain. s = PyOpenSSLEchoStream() with pytest.raises(_core.BusyResourceError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(s.send_all, b"x") nursery.start_soon(s.send_all, b"x") assert "simultaneous" in str(excinfo.value) s = PyOpenSSLEchoStream() with pytest.raises(_core.BusyResourceError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(s.send_all, b"x") nursery.start_soon(s.wait_send_all_might_not_block) assert "simultaneous" in str(excinfo.value) s = PyOpenSSLEchoStream() with pytest.raises(_core.BusyResourceError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(s.wait_send_all_might_not_block) nursery.start_soon(s.wait_send_all_might_not_block) assert "simultaneous" in str(excinfo.value) s = PyOpenSSLEchoStream() with pytest.raises(_core.BusyResourceError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(s.receive_some, 1) nursery.start_soon(s.receive_some, 1) assert "simultaneous" in str(excinfo.value) @contextmanager def virtual_ssl_echo_server(client_ctx, **kwargs): fakesock = PyOpenSSLEchoStream(**kwargs) yield SSLStream(fakesock, client_ctx, server_hostname="trio-test-1.example.org") def ssl_wrap_pair( client_ctx, client_transport, server_transport, *, client_kwargs={}, server_kwargs={}, ): client_ssl = SSLStream( client_transport, client_ctx, server_hostname="trio-test-1.example.org", **client_kwargs, ) server_ssl = SSLStream( server_transport, SERVER_CTX, server_side=True, **server_kwargs ) return client_ssl, server_ssl def ssl_memory_stream_pair(client_ctx, **kwargs): client_transport, server_transport = memory_stream_pair() return ssl_wrap_pair(client_ctx, client_transport, server_transport, **kwargs) def ssl_lockstep_stream_pair(client_ctx, **kwargs): client_transport, server_transport = lockstep_stream_pair() return ssl_wrap_pair(client_ctx, client_transport, server_transport, **kwargs) # Simple smoke test for handshake/send/receive/shutdown talking to a # synchronous server, plus make sure that we do the bare minimum of # certificate checking (even though this is really Python's responsibility) async def test_ssl_client_basics(client_ctx): # Everything OK async with ssl_echo_server(client_ctx) as s: assert not s.server_side await s.send_all(b"x") assert await s.receive_some(1) == b"x" await s.aclose() # Didn't configure the CA file, should fail async with ssl_echo_server_raw(expect_fail=True) as sock: bad_client_ctx = ssl.create_default_context() s = SSLStream(sock, bad_client_ctx, server_hostname="trio-test-1.example.org") assert not s.server_side with pytest.raises(BrokenResourceError) as excinfo: await s.send_all(b"x") assert isinstance(excinfo.value.__cause__, ssl.SSLError) # Trusted CA, but wrong host name async with ssl_echo_server_raw(expect_fail=True) as sock: s = SSLStream(sock, client_ctx, 
server_hostname="trio-test-2.example.org") assert not s.server_side with pytest.raises(BrokenResourceError) as excinfo: await s.send_all(b"x") assert isinstance(excinfo.value.__cause__, ssl.CertificateError) async def test_ssl_server_basics(client_ctx): a, b = stdlib_socket.socketpair() with a, b: server_sock = tsocket.from_stdlib_socket(b) server_transport = SSLStream( SocketStream(server_sock), SERVER_CTX, server_side=True ) assert server_transport.server_side def client(): with client_ctx.wrap_socket( a, server_hostname="trio-test-1.example.org" ) as client_sock: client_sock.sendall(b"x") assert client_sock.recv(1) == b"y" client_sock.sendall(b"z") client_sock.unwrap() t = threading.Thread(target=client) t.start() assert await server_transport.receive_some(1) == b"x" await server_transport.send_all(b"y") assert await server_transport.receive_some(1) == b"z" assert await server_transport.receive_some(1) == b"" await server_transport.aclose() t.join() async def test_attributes(client_ctx): async with ssl_echo_server_raw(expect_fail=True) as sock: good_ctx = client_ctx bad_ctx = ssl.create_default_context() s = SSLStream(sock, good_ctx, server_hostname="trio-test-1.example.org") assert s.transport_stream is sock # Forwarded attribute getting assert s.context is good_ctx assert s.server_side == False # noqa assert s.server_hostname == "trio-test-1.example.org" with pytest.raises(AttributeError): s.asfdasdfsa # __dir__ assert "transport_stream" in dir(s) assert "context" in dir(s) # Setting the attribute goes through to the underlying object # most attributes on SSLObject are read-only with pytest.raises(AttributeError): s.server_side = True with pytest.raises(AttributeError): s.server_hostname = "asdf" # but .context is *not*. Check that we forward attribute setting by # making sure that after we set the bad context our handshake indeed # fails: s.context = bad_ctx assert s.context is bad_ctx with pytest.raises(BrokenResourceError) as excinfo: await s.do_handshake() assert isinstance(excinfo.value.__cause__, ssl.SSLError) # Note: this test fails horribly if we force TLS 1.2 and trigger a # renegotiation at the beginning (e.g. by switching to the pyopenssl # server). Usually the client crashes in SSLObject.write with "UNEXPECTED # RECORD"; sometimes we get something more exotic like a SyscallError. This is # odd because openssl isn't doing any syscalls, but so it goes. After lots of # websearching I'm pretty sure this is due to a bug in OpenSSL, where it just # can't reliably handle full-duplex communication combined with # renegotiation. Nice, eh? # # https://rt.openssl.org/Ticket/Display.html?id=3712 # https://rt.openssl.org/Ticket/Display.html?id=2481 # http://openssl.6102.n7.nabble.com/TLS-renegotiation-failure-on-receiving-application-data-during-handshake-td48127.html # https://stackoverflow.com/questions/18728355/ssl-renegotiation-with-full-duplex-socket-communication # # In some variants of this test (maybe only against the java server?) I've # also seen cases where our send_all blocks waiting to write, and then our receive_some # also blocks waiting to write, and they never wake up again. It looks like # some kind of deadlock. 
I suspect there may be an issue where we've filled up # the send buffers, and the remote side is trying to handle the renegotiation # from inside a write() call, so it has a problem: there's all this application # data clogging up the pipe, but it can't process and return it to the # application because it's in write(), and it doesn't want to buffer infinite # amounts of data, and... actually I guess those are the only two choices. # # NSS even documents that you shouldn't try to do a renegotiation except when # the connection is idle: # # https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/SSL_functions/sslfnc.html#1061582 # # I begin to see why HTTP/2 forbids renegotiation and TLS 1.3 removes it... async def test_full_duplex_basics(client_ctx): CHUNKS = 30 CHUNK_SIZE = 32768 EXPECTED = CHUNKS * CHUNK_SIZE sent = bytearray() received = bytearray() async def sender(s): nonlocal sent for i in range(CHUNKS): print(i) chunk = bytes([i] * CHUNK_SIZE) sent += chunk await s.send_all(chunk) async def receiver(s): nonlocal received while len(received) < EXPECTED: chunk = await s.receive_some(CHUNK_SIZE // 2) received += chunk async with ssl_echo_server(client_ctx) as s: async with _core.open_nursery() as nursery: nursery.start_soon(sender, s) nursery.start_soon(receiver, s) # And let's have some doing handshakes too, everyone # simultaneously nursery.start_soon(s.do_handshake) nursery.start_soon(s.do_handshake) await s.aclose() assert len(sent) == len(received) == EXPECTED assert sent == received async def test_renegotiation_simple(client_ctx): with virtual_ssl_echo_server(client_ctx) as s: await s.do_handshake() s.transport_stream.renegotiate() await s.send_all(b"a") assert await s.receive_some(1) == b"a" # Have to send some more data back and forth to make sure the # renegotiation is finished before shutting down the # connection... otherwise openssl raises an error. I think this is a # bug in openssl but what can ya do. await s.send_all(b"b") assert await s.receive_some(1) == b"b" await s.aclose() @slow async def test_renegotiation_randomized(mock_clock, client_ctx): # The only blocking things in this function are our random sleeps, so 0 is # a good threshold. 
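# ---------------------------------------------------------------------------
# (illustrative aside, not part of the original test file)
# autojump_threshold=0 tells trio's MockClock to jump virtual time forward
# as soon as every task is blocked, so the random multi-second sleeps below
# cost no wall-clock time. A self-contained demonstration of the knob:
def _demo_autojump_clock():
    import trio
    import trio.testing

    async def main():
        start = trio.current_time()
        await trio.sleep(3600)  # returns "immediately" under the mock clock
        assert trio.current_time() - start >= 3600

    trio.run(main, clock=trio.testing.MockClock(autojump_threshold=0))
# ---------------------------------------------------------------------------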
mock_clock.autojump_threshold = 0 import random r = random.Random(0) async def sleeper(_): await trio.sleep(r.uniform(0, 10)) async def clear(): while s.transport_stream.renegotiate_pending(): with assert_checkpoints(): await send(b"-") with assert_checkpoints(): await expect(b"-") print("-- clear --") async def send(byte): await s.transport_stream.sleeper("outer send") print("calling SSLStream.send_all", byte) with assert_checkpoints(): await s.send_all(byte) async def expect(expected): await s.transport_stream.sleeper("expect") print("calling SSLStream.receive_some, expecting", expected) assert len(expected) == 1 with assert_checkpoints(): assert await s.receive_some(1) == expected with virtual_ssl_echo_server(client_ctx, sleeper=sleeper) as s: await s.do_handshake() await send(b"a") s.transport_stream.renegotiate() await expect(b"a") await clear() for i in range(100): b1 = bytes([i % 0xFF]) b2 = bytes([(2 * i) % 0xFF]) s.transport_stream.renegotiate() async with _core.open_nursery() as nursery: nursery.start_soon(send, b1) nursery.start_soon(expect, b1) async with _core.open_nursery() as nursery: nursery.start_soon(expect, b2) nursery.start_soon(send, b2) await clear() for i in range(100): b1 = bytes([i % 0xFF]) b2 = bytes([(2 * i) % 0xFF]) await send(b1) s.transport_stream.renegotiate() await expect(b1) async with _core.open_nursery() as nursery: nursery.start_soon(expect, b2) nursery.start_soon(send, b2) await clear() # Checking that wait_send_all_might_not_block and receive_some don't # conflict: # 1) Set up a situation where expect (receive_some) is blocked sending, # and wait_send_all_might_not_block comes in. # Our receive_some() call will get stuck when it hits send_all async def sleeper_with_slow_send_all(method): if method == "send_all": await trio.sleep(100000) # And our wait_send_all_might_not_block call will give it time to get # stuck, and then start async def sleep_then_wait_writable(): await trio.sleep(1000) await s.wait_send_all_might_not_block() with virtual_ssl_echo_server(client_ctx, sleeper=sleeper_with_slow_send_all) as s: await send(b"x") s.transport_stream.renegotiate() async with _core.open_nursery() as nursery: nursery.start_soon(expect, b"x") nursery.start_soon(sleep_then_wait_writable) await clear() await s.aclose() # 2) Same, but now wait_send_all_might_not_block is stuck when # receive_some tries to send. 
async def sleeper_with_slow_wait_writable_and_expect(method): if method == "wait_send_all_might_not_block": await trio.sleep(100000) elif method == "expect": await trio.sleep(1000) with virtual_ssl_echo_server( client_ctx, sleeper=sleeper_with_slow_wait_writable_and_expect ) as s: await send(b"x") s.transport_stream.renegotiate() async with _core.open_nursery() as nursery: nursery.start_soon(expect, b"x") nursery.start_soon(s.wait_send_all_might_not_block) await clear() await s.aclose() async def test_resource_busy_errors(client_ctx): async def do_send_all(): with assert_checkpoints(): await s.send_all(b"x") async def do_receive_some(): with assert_checkpoints(): await s.receive_some(1) async def do_wait_send_all_might_not_block(): with assert_checkpoints(): await s.wait_send_all_might_not_block() s, _ = ssl_lockstep_stream_pair(client_ctx) with pytest.raises(_core.BusyResourceError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(do_send_all) nursery.start_soon(do_send_all) assert "another task" in str(excinfo.value) s, _ = ssl_lockstep_stream_pair(client_ctx) with pytest.raises(_core.BusyResourceError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(do_receive_some) nursery.start_soon(do_receive_some) assert "another task" in str(excinfo.value) s, _ = ssl_lockstep_stream_pair(client_ctx) with pytest.raises(_core.BusyResourceError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(do_send_all) nursery.start_soon(do_wait_send_all_might_not_block) assert "another task" in str(excinfo.value) s, _ = ssl_lockstep_stream_pair(client_ctx) with pytest.raises(_core.BusyResourceError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(do_wait_send_all_might_not_block) nursery.start_soon(do_wait_send_all_might_not_block) assert "another task" in str(excinfo.value) async def test_wait_writable_calls_underlying_wait_writable(): record = [] class NotAStream: async def wait_send_all_might_not_block(self): record.append("ok") ctx = ssl.create_default_context() s = SSLStream(NotAStream(), ctx, server_hostname="x") await s.wait_send_all_might_not_block() assert record == ["ok"] async def test_checkpoints(client_ctx): async with ssl_echo_server(client_ctx) as s: with assert_checkpoints(): await s.do_handshake() with assert_checkpoints(): await s.do_handshake() with assert_checkpoints(): await s.wait_send_all_might_not_block() with assert_checkpoints(): await s.send_all(b"xxx") with assert_checkpoints(): await s.receive_some(1) # These receive_some's in theory could return immediately, because the # "xxx" was sent in a single record and after the first # receive_some(1) the rest are sitting inside the SSLObject's internal # buffers. with assert_checkpoints(): await s.receive_some(1) with assert_checkpoints(): await s.receive_some(1) with assert_checkpoints(): await s.unwrap() async with ssl_echo_server(client_ctx) as s: await s.do_handshake() with assert_checkpoints(): await s.aclose() async def test_send_all_empty_string(client_ctx): async with ssl_echo_server(client_ctx) as s: await s.do_handshake() # underlying SSLObject interprets writing b"" as indicating an EOF, # for some reason. Make sure we don't inherit this. 
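# ---------------------------------------------------------------------------
# (illustrative aside, not part of the original test file)
# The wrapper-level fix for that quirk is tiny: a send_all in the style of
# SSLStream can execute a bare checkpoint for b"" and never hand the empty
# buffer to the underlying SSLObject. A hypothetical sketch of the guard,
# not trio's actual implementation:
async def _send_all_with_empty_guard(ssl_object, data):
    await _core.checkpoint()  # stay a checkpoint even for b""
    if not data:
        return  # don't let SSLObject.write(b"") look like an EOF marker
    ssl_object.write(data)
# ---------------------------------------------------------------------------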
with assert_checkpoints(): await s.send_all(b"") with assert_checkpoints(): await s.send_all(b"") await s.send_all(b"x") assert await s.receive_some(1) == b"x" await s.aclose() @pytest.mark.parametrize("https_compatible", [False, True]) async def test_SSLStream_generic(client_ctx, https_compatible): async def stream_maker(): return ssl_memory_stream_pair( client_ctx, client_kwargs={"https_compatible": https_compatible}, server_kwargs={"https_compatible": https_compatible}, ) async def clogged_stream_maker(): client, server = ssl_lockstep_stream_pair(client_ctx) # If we don't do handshakes up front, then we run into a problem in # the following situation: # - server does wait_send_all_might_not_block # - client does receive_some to unclog it # Then the client's receive_some will actually send some data to start # the handshake, and itself get stuck. async with _core.open_nursery() as nursery: nursery.start_soon(client.do_handshake) nursery.start_soon(server.do_handshake) return client, server await check_two_way_stream(stream_maker, clogged_stream_maker) async def test_unwrap(client_ctx): client_ssl, server_ssl = ssl_memory_stream_pair(client_ctx) client_transport = client_ssl.transport_stream server_transport = server_ssl.transport_stream seq = Sequencer() async def client(): await client_ssl.do_handshake() await client_ssl.send_all(b"x") assert await client_ssl.receive_some(1) == b"y" await client_ssl.send_all(b"z") # After sending that, disable outgoing data from our end, to make # sure the server doesn't see our EOF until after we've sent some # trailing data async with seq(0): send_all_hook = client_transport.send_stream.send_all_hook client_transport.send_stream.send_all_hook = None assert await client_ssl.receive_some(1) == b"" assert client_ssl.transport_stream is client_transport # We just received EOF. Unwrap the connection and send some more. raw, trailing = await client_ssl.unwrap() assert raw is client_transport assert trailing == b"" assert client_ssl.transport_stream is None await raw.send_all(b"trailing") # Reconnect the streams. Now the server will receive both our shutdown # acknowledgement + the trailing data in a single lump. 
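# ---------------------------------------------------------------------------
# (illustrative aside, not part of the original test file)
# The hook juggling below leans on trio.testing.MemorySendStream's pluggable
# send_all_hook, which runs inside every send_all call. A minimal
# self-contained demonstration of the mechanism on its own:
def _demo_send_all_hook():
    import trio
    import trio.testing

    async def main():
        stream = trio.testing.MemorySendStream()
        calls = []

        async def hook():
            calls.append("hook ran")

        stream.send_all_hook = hook
        await stream.send_all(b"x")
        assert calls == ["hook ran"]
        assert stream.get_data_nowait() == b"x"

    trio.run(main)
# ---------------------------------------------------------------------------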
        client_transport.send_stream.send_all_hook = send_all_hook
        await client_transport.send_stream.send_all_hook()

    async def server():
        await server_ssl.do_handshake()
        assert await server_ssl.receive_some(1) == b"x"
        await server_ssl.send_all(b"y")
        assert await server_ssl.receive_some(1) == b"z"
        # Now client is blocked waiting for us to send something, but
        # instead we close the TLS connection (with sequencer to make sure
        # that the client won't see and automatically respond before we've
        # had a chance to disable the client->server transport)
        async with seq(1):
            raw, trailing = await server_ssl.unwrap()
            assert raw is server_transport
            assert trailing == b"trailing"
            assert server_ssl.transport_stream is None

    async with _core.open_nursery() as nursery:
        nursery.start_soon(client)
        nursery.start_soon(server)


async def test_closing_nice_case(client_ctx):
    # the nice case: graceful closes all around
    client_ssl, server_ssl = ssl_memory_stream_pair(client_ctx)
    client_transport = client_ssl.transport_stream

    # Both the handshake and the close require back-and-forth discussion,
    # so we need to run them concurrently
    async def client_closer():
        with assert_checkpoints():
            await client_ssl.aclose()

    async def server_closer():
        assert await server_ssl.receive_some(10) == b""
        assert await server_ssl.receive_some(10) == b""
        with assert_checkpoints():
            await server_ssl.aclose()

    async with _core.open_nursery() as nursery:
        nursery.start_soon(client_closer)
        nursery.start_soon(server_closer)

    # closing the SSLStream also closes its transport
    with pytest.raises(ClosedResourceError):
        await client_transport.send_all(b"123")

    # once closed, it's OK to close again
    with assert_checkpoints():
        await client_ssl.aclose()
    with assert_checkpoints():
        await client_ssl.aclose()

    # Trying to send more data does not work
    with pytest.raises(ClosedResourceError):
        await server_ssl.send_all(b"123")

    # And once the connection has been closed *locally*, then instead of
    # getting empty bytestrings we get a proper error
    with pytest.raises(ClosedResourceError):
        await client_ssl.receive_some(10) == b""

    with pytest.raises(ClosedResourceError):
        await client_ssl.unwrap()

    with pytest.raises(ClosedResourceError):
        await client_ssl.do_handshake()

    # Check that a graceful close *before* handshaking gives a clean EOF on
    # the other side
    client_ssl, server_ssl = ssl_memory_stream_pair(client_ctx)

    async def expect_eof_server():
        with assert_checkpoints():
            assert await server_ssl.receive_some(10) == b""
        with assert_checkpoints():
            await server_ssl.aclose()

    async with _core.open_nursery() as nursery:
        nursery.start_soon(client_ssl.aclose)
        nursery.start_soon(expect_eof_server)


async def test_send_all_fails_in_the_middle(client_ctx):
    client, server = ssl_memory_stream_pair(client_ctx)

    async with _core.open_nursery() as nursery:
        nursery.start_soon(client.do_handshake)
        nursery.start_soon(server.do_handshake)

    async def bad_hook():
        raise KeyError

    client.transport_stream.send_stream.send_all_hook = bad_hook
    with pytest.raises(KeyError):
        await client.send_all(b"x")

    with pytest.raises(BrokenResourceError):
        await client.wait_send_all_might_not_block()

    closed = 0

    def close_hook():
        nonlocal closed
        closed += 1

    client.transport_stream.send_stream.close_hook = close_hook
    client.transport_stream.receive_stream.close_hook = close_hook
    await client.aclose()
    assert closed == 2


async def test_ssl_over_ssl(client_ctx):
    client_0, server_0 = memory_stream_pair()

    client_1 = SSLStream(
        client_0, client_ctx, server_hostname="trio-test-1.example.org"
    )
    server_1 = SSLStream(server_0, SERVER_CTX,
server_side=True) client_2 = SSLStream( client_1, client_ctx, server_hostname="trio-test-1.example.org" ) server_2 = SSLStream(server_1, SERVER_CTX, server_side=True) async def client(): await client_2.send_all(b"hi") assert await client_2.receive_some(10) == b"bye" async def server(): assert await server_2.receive_some(10) == b"hi" await server_2.send_all(b"bye") async with _core.open_nursery() as nursery: nursery.start_soon(client) nursery.start_soon(server) async def test_ssl_bad_shutdown(client_ctx): client, server = ssl_memory_stream_pair(client_ctx) async with _core.open_nursery() as nursery: nursery.start_soon(client.do_handshake) nursery.start_soon(server.do_handshake) await trio.aclose_forcefully(client) # now the server sees a broken stream with pytest.raises(BrokenResourceError): await server.receive_some(10) with pytest.raises(BrokenResourceError): await server.send_all(b"x" * 10) await server.aclose() async def test_ssl_bad_shutdown_but_its_ok(client_ctx): client, server = ssl_memory_stream_pair( client_ctx, server_kwargs={"https_compatible": True}, client_kwargs={"https_compatible": True}, ) async with _core.open_nursery() as nursery: nursery.start_soon(client.do_handshake) nursery.start_soon(server.do_handshake) await trio.aclose_forcefully(client) # the server sees that as a clean shutdown assert await server.receive_some(10) == b"" with pytest.raises(BrokenResourceError): await server.send_all(b"x" * 10) await server.aclose() async def test_ssl_handshake_failure_during_aclose(): # Weird scenario: aclose() triggers an automatic handshake, and this # fails. This also exercises a bit of code in aclose() that was otherwise # uncovered, for re-raising exceptions after calling aclose_forcefully on # the underlying transport. async with ssl_echo_server_raw(expect_fail=True) as sock: # Don't configure trust correctly client_ctx = ssl.create_default_context() s = SSLStream(sock, client_ctx, server_hostname="trio-test-1.example.org") # It's a little unclear here whether aclose should swallow the error # or let it escape. We *do* swallow the error if it arrives when we're # sending close_notify, because both sides closing the connection # simultaneously is allowed. But I guess when https_compatible=False # then it's bad if we can get through a whole connection with a peer # that has no valid certificate, and never raise an error. with pytest.raises(BrokenResourceError): await s.aclose() async def test_ssl_only_closes_stream_once(client_ctx): # We used to have a bug where if transport_stream.aclose() raised an # error, we would call it again. This checks that that's fixed. 
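# ---------------------------------------------------------------------------
# (illustrative aside, not part of the original test file)
# The invariant under test is the classic "close at most once, even when
# closing fails" rule. A hypothetical sketch of the guard an SSLStream-style
# wrapper needs, not trio's actual implementation:
class _ClosesTransportOnce:
    def __init__(self, transport):
        self._transport = transport
        self._closed = False

    async def aclose(self):
        if self._closed:
            return
        # Flip the flag *before* awaiting: if the transport's aclose()
        # raises, a retried aclose() must not call it a second time.
        self._closed = True
        await self._transport.aclose()
# ---------------------------------------------------------------------------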
client, server = ssl_memory_stream_pair(client_ctx) async with _core.open_nursery() as nursery: nursery.start_soon(client.do_handshake) nursery.start_soon(server.do_handshake) client_orig_close_hook = client.transport_stream.send_stream.close_hook transport_close_count = 0 def close_hook(): nonlocal transport_close_count client_orig_close_hook() transport_close_count += 1 raise KeyError client.transport_stream.send_stream.close_hook = close_hook with pytest.raises(KeyError): await client.aclose() assert transport_close_count == 1 async def test_ssl_https_compatibility_disagreement(client_ctx): client, server = ssl_memory_stream_pair( client_ctx, server_kwargs={"https_compatible": False}, client_kwargs={"https_compatible": True}, ) async with _core.open_nursery() as nursery: nursery.start_soon(client.do_handshake) nursery.start_soon(server.do_handshake) # client is in HTTPS-mode, server is not # so client doing graceful_shutdown causes an error on server async def receive_and_expect_error(): with pytest.raises(BrokenResourceError) as excinfo: await server.receive_some(10) assert isinstance(excinfo.value.__cause__, ssl.SSLEOFError) async with _core.open_nursery() as nursery: nursery.start_soon(client.aclose) nursery.start_soon(receive_and_expect_error) async def test_https_mode_eof_before_handshake(client_ctx): client, server = ssl_memory_stream_pair( client_ctx, server_kwargs={"https_compatible": True}, client_kwargs={"https_compatible": True}, ) async def server_expect_clean_eof(): assert await server.receive_some(10) == b"" async with _core.open_nursery() as nursery: nursery.start_soon(client.aclose) nursery.start_soon(server_expect_clean_eof) async def test_send_error_during_handshake(client_ctx): client, server = ssl_memory_stream_pair(client_ctx) async def bad_hook(): raise KeyError client.transport_stream.send_stream.send_all_hook = bad_hook with pytest.raises(KeyError): with assert_checkpoints(): await client.do_handshake() with pytest.raises(BrokenResourceError): with assert_checkpoints(): await client.do_handshake() async def test_receive_error_during_handshake(client_ctx): client, server = ssl_memory_stream_pair(client_ctx) async def bad_hook(): raise KeyError client.transport_stream.receive_stream.receive_some_hook = bad_hook async def client_side(cancel_scope): with pytest.raises(KeyError): with assert_checkpoints(): await client.do_handshake() cancel_scope.cancel() async with _core.open_nursery() as nursery: nursery.start_soon(client_side, nursery.cancel_scope) nursery.start_soon(server.do_handshake) with pytest.raises(BrokenResourceError): with assert_checkpoints(): await client.do_handshake() async def test_selected_alpn_protocol_before_handshake(client_ctx): client, server = ssl_memory_stream_pair(client_ctx) with pytest.raises(NeedHandshakeError): client.selected_alpn_protocol() with pytest.raises(NeedHandshakeError): server.selected_alpn_protocol() async def test_selected_alpn_protocol_when_not_set(client_ctx): # ALPN protocol still returns None when it's not set, # instead of raising an exception client, server = ssl_memory_stream_pair(client_ctx) async with _core.open_nursery() as nursery: nursery.start_soon(client.do_handshake) nursery.start_soon(server.do_handshake) assert client.selected_alpn_protocol() is None assert server.selected_alpn_protocol() is None assert client.selected_alpn_protocol() == server.selected_alpn_protocol() async def test_selected_npn_protocol_before_handshake(client_ctx): client, server = ssl_memory_stream_pair(client_ctx) with 
pytest.raises(NeedHandshakeError): client.selected_npn_protocol() with pytest.raises(NeedHandshakeError): server.selected_npn_protocol() @pytest.mark.filterwarnings( r"ignore: ssl module. NPN is deprecated, use ALPN instead:UserWarning" ) async def test_selected_npn_protocol_when_not_set(client_ctx): # NPN protocol still returns None when it's not set, # instead of raising an exception client, server = ssl_memory_stream_pair(client_ctx) async with _core.open_nursery() as nursery: nursery.start_soon(client.do_handshake) nursery.start_soon(server.do_handshake) assert client.selected_npn_protocol() is None assert server.selected_npn_protocol() is None assert client.selected_npn_protocol() == server.selected_npn_protocol() async def test_get_channel_binding_before_handshake(client_ctx): client, server = ssl_memory_stream_pair(client_ctx) with pytest.raises(NeedHandshakeError): client.get_channel_binding() with pytest.raises(NeedHandshakeError): server.get_channel_binding() async def test_get_channel_binding_after_handshake(client_ctx): client, server = ssl_memory_stream_pair(client_ctx) async with _core.open_nursery() as nursery: nursery.start_soon(client.do_handshake) nursery.start_soon(server.do_handshake) assert client.get_channel_binding() is not None assert server.get_channel_binding() is not None assert client.get_channel_binding() == server.get_channel_binding() async def test_getpeercert(client_ctx): # Make sure we're not affected by https://bugs.python.org/issue29334 client, server = ssl_memory_stream_pair(client_ctx) async with _core.open_nursery() as nursery: nursery.start_soon(client.do_handshake) nursery.start_soon(server.do_handshake) assert server.getpeercert() is None print(client.getpeercert()) assert ("DNS", "trio-test-1.example.org") in client.getpeercert()["subjectAltName"] async def test_SSLListener(client_ctx): async def setup(**kwargs): listen_sock = tsocket.socket() await listen_sock.bind(("127.0.0.1", 0)) listen_sock.listen(1) socket_listener = SocketListener(listen_sock) ssl_listener = SSLListener(socket_listener, SERVER_CTX, **kwargs) transport_client = await open_tcp_stream(*listen_sock.getsockname()) ssl_client = SSLStream( transport_client, client_ctx, server_hostname="trio-test-1.example.org" ) return listen_sock, ssl_listener, ssl_client listen_sock, ssl_listener, ssl_client = await setup() async with ssl_client: ssl_server = await ssl_listener.accept() async with ssl_server: assert not ssl_server._https_compatible # Make sure the connection works async with _core.open_nursery() as nursery: nursery.start_soon(ssl_client.do_handshake) nursery.start_soon(ssl_server.do_handshake) # Test SSLListener.aclose await ssl_listener.aclose() assert listen_sock.fileno() == -1 ################ # Test https_compatible _, ssl_listener, ssl_client = await setup(https_compatible=True) ssl_server = await ssl_listener.accept() assert ssl_server._https_compatible await aclose_forcefully(ssl_listener) await aclose_forcefully(ssl_client) await aclose_forcefully(ssl_server) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_subprocess.py0000644000076500000000000003737400000000000020056 0ustar00quentinwheelimport os import signal import subprocess import sys import pytest import random from functools import partial from .. 
import (
    _core,
    move_on_after,
    fail_after,
    sleep,
    sleep_forever,
    Process,
    open_process,
    run_process,
    TrioDeprecationWarning,
)
from .._core.tests.tutil import slow, skip_if_fbsd_pipes_broken
from ..testing import wait_all_tasks_blocked

posix = os.name == "posix"
if posix:
    from signal import SIGKILL, SIGTERM, SIGUSR1
else:
    SIGKILL, SIGTERM, SIGUSR1 = None, None, None


# Since Windows has very few command-line utilities generally available,
# all of our subprocesses are Python processes running short bits of
# (mostly) cross-platform code.
def python(code):
    return [sys.executable, "-u", "-c", "import sys; " + code]


EXIT_TRUE = python("sys.exit(0)")
EXIT_FALSE = python("sys.exit(1)")
CAT = python("sys.stdout.buffer.write(sys.stdin.buffer.read())")

SLEEP = lambda seconds: python("import time; time.sleep({})".format(seconds))


def got_signal(proc, sig):
    if posix:
        return proc.returncode == -sig
    else:
        return proc.returncode != 0


async def test_basic():
    async with await open_process(EXIT_TRUE) as proc:
        pass
    assert isinstance(proc, Process)
    assert proc._pidfd is None
    assert proc.returncode == 0
    assert repr(proc) == f"<trio.Process {EXIT_TRUE}: exited with status 0>"

    async with await open_process(EXIT_FALSE) as proc:
        pass
    assert proc.returncode == 1
    assert repr(proc) == "<trio.Process {!r}: {}>".format(
        EXIT_FALSE, "exited with status 1"
    )


async def test_auto_update_returncode():
    p = await open_process(SLEEP(9999))
    assert p.returncode is None
    assert "running" in repr(p)
    p.kill()
    p._proc.wait()
    assert p.returncode is not None
    assert "exited" in repr(p)
    assert p._pidfd is None
    assert p.returncode is not None


async def test_multi_wait():
    async with await open_process(SLEEP(10)) as proc:
        # Check that wait (including multi-wait) tolerates being cancelled
        async with _core.open_nursery() as nursery:
            nursery.start_soon(proc.wait)
            nursery.start_soon(proc.wait)
            nursery.start_soon(proc.wait)
            await wait_all_tasks_blocked()
            nursery.cancel_scope.cancel()

        # Now try waiting for real
        async with _core.open_nursery() as nursery:
            nursery.start_soon(proc.wait)
            nursery.start_soon(proc.wait)
            nursery.start_soon(proc.wait)
            await wait_all_tasks_blocked()
            proc.kill()


async def test_kill_when_context_cancelled():
    with move_on_after(100) as scope:
        async with await open_process(SLEEP(10)) as proc:
            assert proc.poll() is None
            scope.cancel()
            await sleep_forever()
    assert scope.cancelled_caught
    assert got_signal(proc, SIGKILL)
    assert repr(proc) == "<trio.Process {!r}: {}>".format(
        SLEEP(10), "exited with signal 9" if posix else "exited with status 1"
    )


COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR = python(
    "data = sys.stdin.buffer.read(); "
    "sys.stdout.buffer.write(data); "
    "sys.stderr.buffer.write(data[::-1])"
)


async def test_pipes():
    async with await open_process(
        COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    ) as proc:
        msg = b"the quick brown fox jumps over the lazy dog"

        async def feed_input():
            await proc.stdin.send_all(msg)
            await proc.stdin.aclose()

        async def check_output(stream, expected):
            seen = bytearray()
            async for chunk in stream:
                seen += chunk
            assert seen == expected

        async with _core.open_nursery() as nursery:
            # fail eventually if something is broken
            nursery.cancel_scope.deadline = _core.current_time() + 30.0
            nursery.start_soon(feed_input)
            nursery.start_soon(check_output, proc.stdout, msg)
            nursery.start_soon(check_output, proc.stderr, msg[::-1])

        assert not nursery.cancel_scope.cancelled_caught
        assert 0 == await proc.wait()


async def test_interactive():
    # Test some back-and-forth with a subprocess.
This one works like so: # in: 32\n # out: 0000...0000\n (32 zeroes) # err: 1111...1111\n (64 ones) # in: 10\n # out: 2222222222\n (10 twos) # err: 3333....3333\n (20 threes) # in: EOF # out: EOF # err: EOF async with await open_process( python( "idx = 0\n" "while True:\n" " line = sys.stdin.readline()\n" " if line == '': break\n" " request = int(line.strip())\n" " print(str(idx * 2) * request)\n" " print(str(idx * 2 + 1) * request * 2, file=sys.stderr)\n" " idx += 1\n" ), stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) as proc: newline = b"\n" if posix else b"\r\n" async def expect(idx, request): async with _core.open_nursery() as nursery: async def drain_one(stream, count, digit): while count > 0: result = await stream.receive_some(count) assert result == ( "{}".format(digit).encode("utf-8") * len(result) ) count -= len(result) assert count == 0 assert await stream.receive_some(len(newline)) == newline nursery.start_soon(drain_one, proc.stdout, request, idx * 2) nursery.start_soon(drain_one, proc.stderr, request * 2, idx * 2 + 1) with fail_after(5): await proc.stdin.send_all(b"12") await sleep(0.1) await proc.stdin.send_all(b"345" + newline) await expect(0, 12345) await proc.stdin.send_all(b"100" + newline + b"200" + newline) await expect(1, 100) await expect(2, 200) await proc.stdin.send_all(b"0" + newline) await expect(3, 0) await proc.stdin.send_all(b"999999") with move_on_after(0.1) as scope: await expect(4, 0) assert scope.cancelled_caught await proc.stdin.send_all(newline) await expect(4, 999999) await proc.stdin.aclose() assert await proc.stdout.receive_some(1) == b"" assert await proc.stderr.receive_some(1) == b"" assert proc.returncode == 0 async def test_run(): data = bytes(random.randint(0, 255) for _ in range(2 ** 18)) result = await run_process( CAT, stdin=data, capture_stdout=True, capture_stderr=True ) assert result.args == CAT assert result.returncode == 0 assert result.stdout == data assert result.stderr == b"" result = await run_process(CAT, capture_stdout=True) assert result.args == CAT assert result.returncode == 0 assert result.stdout == b"" assert result.stderr is None result = await run_process( COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR, stdin=data, capture_stdout=True, capture_stderr=True, ) assert result.args == COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR assert result.returncode == 0 assert result.stdout == data assert result.stderr == data[::-1] # invalid combinations with pytest.raises(UnicodeError): await run_process(CAT, stdin="oh no, it's text") with pytest.raises(ValueError): await run_process(CAT, stdin=subprocess.PIPE) with pytest.raises(ValueError): await run_process(CAT, capture_stdout=True, stdout=subprocess.DEVNULL) with pytest.raises(ValueError): await run_process(CAT, capture_stderr=True, stderr=None) async def test_run_check(): cmd = python("sys.stderr.buffer.write(b'test\\n'); sys.exit(1)") with pytest.raises(subprocess.CalledProcessError) as excinfo: await run_process(cmd, stdin=subprocess.DEVNULL, capture_stderr=True) assert excinfo.value.cmd == cmd assert excinfo.value.returncode == 1 assert excinfo.value.stderr == b"test\n" assert excinfo.value.stdout is None result = await run_process( cmd, capture_stdout=True, capture_stderr=True, check=False ) assert result.args == cmd assert result.stdout == b"" assert result.stderr == b"test\n" assert result.returncode == 1 @skip_if_fbsd_pipes_broken async def test_run_with_broken_pipe(): result = await run_process( [sys.executable, "-c", "import sys; sys.stdin.close()"], 
stdin=b"x" * 131072 ) assert result.returncode == 0 assert result.stdout is result.stderr is None async def test_stderr_stdout(): async with await open_process( COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, ) as proc: assert proc.stdout is not None assert proc.stderr is None await proc.stdio.send_all(b"1234") await proc.stdio.send_eof() output = [] while True: chunk = await proc.stdio.receive_some(16) if chunk == b"": break output.append(chunk) assert b"".join(output) == b"12344321" assert proc.returncode == 0 # equivalent test with run_process() result = await run_process( COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR, stdin=b"1234", capture_stdout=True, stderr=subprocess.STDOUT, ) assert result.returncode == 0 assert result.stdout == b"12344321" assert result.stderr is None # this one hits the branch where stderr=STDOUT but stdout # is not redirected async with await open_process( CAT, stdin=subprocess.PIPE, stderr=subprocess.STDOUT ) as proc: assert proc.stdout is None assert proc.stderr is None await proc.stdin.aclose() assert proc.returncode == 0 if posix: try: r, w = os.pipe() async with await open_process( COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR, stdin=subprocess.PIPE, stdout=w, stderr=subprocess.STDOUT, ) as proc: os.close(w) assert proc.stdio is None assert proc.stdout is None assert proc.stderr is None await proc.stdin.send_all(b"1234") await proc.stdin.aclose() assert await proc.wait() == 0 assert os.read(r, 4096) == b"12344321" assert os.read(r, 4096) == b"" finally: os.close(r) async def test_errors(): with pytest.raises(TypeError) as excinfo: await open_process(["ls"], encoding="utf-8") assert "unbuffered byte streams" in str(excinfo.value) assert "the 'encoding' option is not supported" in str(excinfo.value) if posix: with pytest.raises(TypeError) as excinfo: await open_process(["ls"], shell=True) with pytest.raises(TypeError) as excinfo: await open_process("ls", shell=False) async def test_signals(): async def test_one_signal(send_it, signum): with move_on_after(1.0) as scope: async with await open_process(SLEEP(3600)) as proc: send_it(proc) assert not scope.cancelled_caught if posix: assert proc.returncode == -signum else: assert proc.returncode != 0 await test_one_signal(Process.kill, SIGKILL) await test_one_signal(Process.terminate, SIGTERM) # Test that we can send arbitrary signals. # # We used to use SIGINT here, but it turns out that the Python interpreter # has race conditions that can cause it to explode in weird ways if it # tries to handle SIGINT during startup. SIGUSR1's default disposition is # to terminate the target process, and Python doesn't try to do anything # clever to handle it. if posix: await test_one_signal(lambda proc: proc.send_signal(SIGUSR1), SIGUSR1) @pytest.mark.skipif(not posix, reason="POSIX specific") async def test_wait_reapable_fails(): old_sigchld = signal.signal(signal.SIGCHLD, signal.SIG_IGN) try: # With SIGCHLD disabled, the wait() syscall will wait for the # process to exit but then fail with ECHILD. Make sure we # support this case as the stdlib subprocess module does. async with await open_process(SLEEP(3600)) as proc: async with _core.open_nursery() as nursery: nursery.start_soon(proc.wait) await wait_all_tasks_blocked() proc.kill() nursery.cancel_scope.deadline = _core.current_time() + 1.0 assert not nursery.cancel_scope.cancelled_caught assert proc.returncode == 0 # exit status unknowable, so... 
finally: signal.signal(signal.SIGCHLD, old_sigchld) @slow def test_waitid_eintr(): # This only matters on PyPy (where we're coding EINTR handling # ourselves) but the test works on all waitid platforms. from .._subprocess_platform import wait_child_exiting if not wait_child_exiting.__module__.endswith("waitid"): pytest.skip("waitid only") from .._subprocess_platform.waitid import sync_wait_reapable got_alarm = False sleeper = subprocess.Popen(["sleep", "3600"]) def on_alarm(sig, frame): nonlocal got_alarm got_alarm = True sleeper.kill() old_sigalrm = signal.signal(signal.SIGALRM, on_alarm) try: signal.alarm(1) sync_wait_reapable(sleeper.pid) assert sleeper.wait(timeout=1) == -9 finally: if sleeper.returncode is None: # pragma: no cover # We only get here if something fails in the above; # if the test passes, wait() will reap the process sleeper.kill() sleeper.wait() signal.signal(signal.SIGALRM, old_sigalrm) async def test_custom_deliver_cancel(): custom_deliver_cancel_called = False async def custom_deliver_cancel(proc): nonlocal custom_deliver_cancel_called custom_deliver_cancel_called = True proc.terminate() # Make sure this does get cancelled when the process exits, and that # the process really exited. try: await sleep_forever() finally: assert proc.returncode is not None async with _core.open_nursery() as nursery: nursery.start_soon( partial(run_process, SLEEP(9999), deliver_cancel=custom_deliver_cancel) ) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() assert custom_deliver_cancel_called async def test_warn_on_failed_cancel_terminate(monkeypatch): original_terminate = Process.terminate def broken_terminate(self): original_terminate(self) raise OSError("whoops") monkeypatch.setattr(Process, "terminate", broken_terminate) with pytest.warns(RuntimeWarning, match=".*whoops.*"): async with _core.open_nursery() as nursery: nursery.start_soon(run_process, SLEEP(9999)) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() @pytest.mark.skipif(os.name != "posix", reason="posix only") async def test_warn_on_cancel_SIGKILL_escalation(autojump_clock, monkeypatch): monkeypatch.setattr(Process, "terminate", lambda *args: None) with pytest.warns(RuntimeWarning, match=".*ignored SIGTERM.*"): async with _core.open_nursery() as nursery: nursery.start_soon(run_process, SLEEP(9999)) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_sync.py0000644000076500000000000003733600000000000016640 0ustar00quentinwheelimport pytest import weakref from ..testing import wait_all_tasks_blocked, assert_checkpoints from .. import _core from .. 
import _timeouts from .._timeouts import sleep_forever, move_on_after from .._sync import * async def test_Event(): e = Event() assert not e.is_set() assert e.statistics().tasks_waiting == 0 e.set() assert e.is_set() with assert_checkpoints(): await e.wait() e = Event() record = [] async def child(): record.append("sleeping") await e.wait() record.append("woken") async with _core.open_nursery() as nursery: nursery.start_soon(child) nursery.start_soon(child) await wait_all_tasks_blocked() assert record == ["sleeping", "sleeping"] assert e.statistics().tasks_waiting == 2 e.set() await wait_all_tasks_blocked() assert record == ["sleeping", "sleeping", "woken", "woken"] async def test_CapacityLimiter(): with pytest.raises(TypeError): CapacityLimiter(1.0) with pytest.raises(ValueError): CapacityLimiter(-1) c = CapacityLimiter(2) repr(c) # smoke test assert c.total_tokens == 2 assert c.borrowed_tokens == 0 assert c.available_tokens == 2 with pytest.raises(RuntimeError): c.release() assert c.borrowed_tokens == 0 c.acquire_nowait() assert c.borrowed_tokens == 1 assert c.available_tokens == 1 stats = c.statistics() assert stats.borrowed_tokens == 1 assert stats.total_tokens == 2 assert stats.borrowers == [_core.current_task()] assert stats.tasks_waiting == 0 # Can't re-acquire when we already have it with pytest.raises(RuntimeError): c.acquire_nowait() assert c.borrowed_tokens == 1 with pytest.raises(RuntimeError): await c.acquire() assert c.borrowed_tokens == 1 # We can acquire on behalf of someone else though with assert_checkpoints(): await c.acquire_on_behalf_of("someone") # But then we've run out of capacity assert c.borrowed_tokens == 2 with pytest.raises(_core.WouldBlock): c.acquire_on_behalf_of_nowait("third party") assert set(c.statistics().borrowers) == {_core.current_task(), "someone"} # Until we release one c.release_on_behalf_of(_core.current_task()) assert c.statistics().borrowers == ["someone"] c.release_on_behalf_of("someone") assert c.borrowed_tokens == 0 with assert_checkpoints(): async with c: assert c.borrowed_tokens == 1 async with _core.open_nursery() as nursery: await c.acquire_on_behalf_of("value 1") await c.acquire_on_behalf_of("value 2") nursery.start_soon(c.acquire_on_behalf_of, "value 3") await wait_all_tasks_blocked() assert c.borrowed_tokens == 2 assert c.statistics().tasks_waiting == 1 c.release_on_behalf_of("value 2") # Fairness: assert c.borrowed_tokens == 2 with pytest.raises(_core.WouldBlock): c.acquire_nowait() c.release_on_behalf_of("value 3") c.release_on_behalf_of("value 1") async def test_CapacityLimiter_inf(): from math import inf c = CapacityLimiter(inf) repr(c) # smoke test assert c.total_tokens == inf assert c.borrowed_tokens == 0 assert c.available_tokens == inf with pytest.raises(RuntimeError): c.release() assert c.borrowed_tokens == 0 c.acquire_nowait() assert c.borrowed_tokens == 1 assert c.available_tokens == inf async def test_CapacityLimiter_change_total_tokens(): c = CapacityLimiter(2) with pytest.raises(TypeError): c.total_tokens = 1.0 with pytest.raises(ValueError): c.total_tokens = 0 with pytest.raises(ValueError): c.total_tokens = -10 assert c.total_tokens == 2 async with _core.open_nursery() as nursery: for i in range(5): nursery.start_soon(c.acquire_on_behalf_of, i) await wait_all_tasks_blocked() assert set(c.statistics().borrowers) == {0, 1} assert c.statistics().tasks_waiting == 3 c.total_tokens += 2 assert set(c.statistics().borrowers) == {0, 1, 2, 3} assert c.statistics().tasks_waiting == 1 c.total_tokens -= 3 assert c.borrowed_tokens == 
4 assert c.total_tokens == 1 c.release_on_behalf_of(0) c.release_on_behalf_of(1) c.release_on_behalf_of(2) assert set(c.statistics().borrowers) == {3} assert c.statistics().tasks_waiting == 1 c.release_on_behalf_of(3) assert set(c.statistics().borrowers) == {4} assert c.statistics().tasks_waiting == 0 # regression test for issue #548 async def test_CapacityLimiter_memleak_548(): limiter = CapacityLimiter(total_tokens=1) await limiter.acquire() async with _core.open_nursery() as n: n.start_soon(limiter.acquire) await wait_all_tasks_blocked() # give it a chance to run the task n.cancel_scope.cancel() # if this is 1, the acquire call (despite being killed) is still there in the task, and will # leak memory all the while the limiter is active assert len(limiter._pending_borrowers) == 0 async def test_Semaphore(): with pytest.raises(TypeError): Semaphore(1.0) with pytest.raises(ValueError): Semaphore(-1) s = Semaphore(1) repr(s) # smoke test assert s.value == 1 assert s.max_value is None s.release() assert s.value == 2 assert s.statistics().tasks_waiting == 0 s.acquire_nowait() assert s.value == 1 with assert_checkpoints(): await s.acquire() assert s.value == 0 with pytest.raises(_core.WouldBlock): s.acquire_nowait() s.release() assert s.value == 1 with assert_checkpoints(): async with s: assert s.value == 0 assert s.value == 1 s.acquire_nowait() record = [] async def do_acquire(s): record.append("started") await s.acquire() record.append("finished") async with _core.open_nursery() as nursery: nursery.start_soon(do_acquire, s) await wait_all_tasks_blocked() assert record == ["started"] assert s.value == 0 s.release() # Fairness: assert s.value == 0 with pytest.raises(_core.WouldBlock): s.acquire_nowait() assert record == ["started", "finished"] async def test_Semaphore_bounded(): with pytest.raises(TypeError): Semaphore(1, max_value=1.0) with pytest.raises(ValueError): Semaphore(2, max_value=1) bs = Semaphore(1, max_value=1) assert bs.max_value == 1 repr(bs) # smoke test with pytest.raises(ValueError): bs.release() assert bs.value == 1 bs.acquire_nowait() assert bs.value == 0 bs.release() assert bs.value == 1 @pytest.mark.parametrize("lockcls", [Lock, StrictFIFOLock], ids=lambda fn: fn.__name__) async def test_Lock_and_StrictFIFOLock(lockcls): l = lockcls() # noqa assert not l.locked() # make sure locks can be weakref'ed (gh-331) r = weakref.ref(l) assert r() is l repr(l) # smoke test # make sure repr uses the right name for subclasses assert lockcls.__name__ in repr(l) with assert_checkpoints(): async with l: assert l.locked() repr(l) # smoke test (repr branches on locked/unlocked) assert not l.locked() l.acquire_nowait() assert l.locked() l.release() assert not l.locked() with assert_checkpoints(): await l.acquire() assert l.locked() l.release() assert not l.locked() l.acquire_nowait() with pytest.raises(RuntimeError): # Error out if we already own the lock l.acquire_nowait() l.release() with pytest.raises(RuntimeError): # Error out if we don't own the lock l.release() holder_task = None async def holder(): nonlocal holder_task holder_task = _core.current_task() async with l: await sleep_forever() async with _core.open_nursery() as nursery: assert not l.locked() nursery.start_soon(holder) await wait_all_tasks_blocked() assert l.locked() # WouldBlock if someone else holds the lock with pytest.raises(_core.WouldBlock): l.acquire_nowait() # Can't release a lock someone else holds with pytest.raises(RuntimeError): l.release() statistics = l.statistics() print(statistics) assert statistics.locked 
assert statistics.owner is holder_task assert statistics.tasks_waiting == 0 nursery.start_soon(holder) await wait_all_tasks_blocked() statistics = l.statistics() print(statistics) assert statistics.tasks_waiting == 1 nursery.cancel_scope.cancel() statistics = l.statistics() assert not statistics.locked assert statistics.owner is None assert statistics.tasks_waiting == 0 async def test_Condition(): with pytest.raises(TypeError): Condition(Semaphore(1)) with pytest.raises(TypeError): Condition(StrictFIFOLock) l = Lock() # noqa c = Condition(l) assert not l.locked() assert not c.locked() with assert_checkpoints(): await c.acquire() assert l.locked() assert c.locked() c = Condition() assert not c.locked() c.acquire_nowait() assert c.locked() with pytest.raises(RuntimeError): c.acquire_nowait() c.release() with pytest.raises(RuntimeError): # Can't wait without holding the lock await c.wait() with pytest.raises(RuntimeError): # Can't notify without holding the lock c.notify() with pytest.raises(RuntimeError): # Can't notify without holding the lock c.notify_all() finished_waiters = set() async def waiter(i): async with c: await c.wait() finished_waiters.add(i) async with _core.open_nursery() as nursery: for i in range(3): nursery.start_soon(waiter, i) await wait_all_tasks_blocked() async with c: c.notify() assert c.locked() await wait_all_tasks_blocked() assert finished_waiters == {0} async with c: c.notify_all() await wait_all_tasks_blocked() assert finished_waiters == {0, 1, 2} finished_waiters = set() async with _core.open_nursery() as nursery: for i in range(3): nursery.start_soon(waiter, i) await wait_all_tasks_blocked() async with c: c.notify(2) statistics = c.statistics() print(statistics) assert statistics.tasks_waiting == 1 assert statistics.lock_statistics.tasks_waiting == 2 # exiting the context manager hands off the lock to the first task assert c.statistics().lock_statistics.tasks_waiting == 1 await wait_all_tasks_blocked() assert finished_waiters == {0, 1} async with c: c.notify_all() # After being cancelled still hold the lock (!) # (Note that c.__aexit__ checks that we hold the lock as well) with _core.CancelScope() as scope: async with c: scope.cancel() try: await c.wait() finally: assert c.locked() from .._sync import async_cm from .._channel import open_memory_channel # Three ways of implementing a Lock in terms of a channel. Used to let us put # the channel through the generic lock tests. @async_cm class ChannelLock1: def __init__(self, capacity): self.s, self.r = open_memory_channel(capacity) for _ in range(capacity - 1): self.s.send_nowait(None) def acquire_nowait(self): self.s.send_nowait(None) async def acquire(self): await self.s.send(None) def release(self): self.r.receive_nowait() @async_cm class ChannelLock2: def __init__(self): self.s, self.r = open_memory_channel(10) self.s.send_nowait(None) def acquire_nowait(self): self.r.receive_nowait() async def acquire(self): await self.r.receive() def release(self): self.s.send_nowait(None) @async_cm class ChannelLock3: def __init__(self): self.s, self.r = open_memory_channel(0) # self.acquired is true when one task acquires the lock and # only becomes false when it's released and no tasks are # waiting to acquire. 
self.acquired = False def acquire_nowait(self): assert not self.acquired self.acquired = True async def acquire(self): if self.acquired: await self.s.send(None) else: self.acquired = True await _core.checkpoint() def release(self): try: self.r.receive_nowait() except _core.WouldBlock: assert self.acquired self.acquired = False lock_factories = [ lambda: CapacityLimiter(1), lambda: Semaphore(1), Lock, StrictFIFOLock, lambda: ChannelLock1(10), lambda: ChannelLock1(1), ChannelLock2, ChannelLock3, ] lock_factory_names = [ "CapacityLimiter(1)", "Semaphore(1)", "Lock", "StrictFIFOLock", "ChannelLock1(10)", "ChannelLock1(1)", "ChannelLock2", "ChannelLock3", ] generic_lock_test = pytest.mark.parametrize( "lock_factory", lock_factories, ids=lock_factory_names ) # Spawn a bunch of workers that take a lock and then yield; make sure that # only one worker is ever in the critical section at a time. @generic_lock_test async def test_generic_lock_exclusion(lock_factory): LOOPS = 10 WORKERS = 5 in_critical_section = False acquires = 0 async def worker(lock_like): nonlocal in_critical_section, acquires for _ in range(LOOPS): async with lock_like: acquires += 1 assert not in_critical_section in_critical_section = True await _core.checkpoint() await _core.checkpoint() assert in_critical_section in_critical_section = False async with _core.open_nursery() as nursery: lock_like = lock_factory() for _ in range(WORKERS): nursery.start_soon(worker, lock_like) assert not in_critical_section assert acquires == LOOPS * WORKERS # Several workers queue on the same lock; make sure they each get it, in # order. @generic_lock_test async def test_generic_lock_fifo_fairness(lock_factory): initial_order = [] record = [] LOOPS = 5 async def loopy(name, lock_like): # Record the order each task was initially scheduled in initial_order.append(name) for _ in range(LOOPS): async with lock_like: record.append(name) lock_like = lock_factory() async with _core.open_nursery() as nursery: nursery.start_soon(loopy, 1, lock_like) nursery.start_soon(loopy, 2, lock_like) nursery.start_soon(loopy, 3, lock_like) # The first three could be in any order due to scheduling randomness, # but after that they should repeat in the same order for i in range(LOOPS): assert record[3 * i : 3 * (i + 1)] == initial_order @generic_lock_test async def test_generic_lock_acquire_nowait_blocks_acquire(lock_factory): lock_like = lock_factory() record = [] async def lock_taker(): record.append("started") async with lock_like: pass record.append("finished") async with _core.open_nursery() as nursery: lock_like.acquire_nowait() nursery.start_soon(lock_taker) await wait_all_tasks_blocked() assert record == ["started"] lock_like.release() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_testing.py0000644000076500000000000004521600000000000017335 0ustar00quentinwheel# XX this should get broken up, like testing.py did import tempfile import pytest from .._core.tests.tutil import can_bind_ipv6 from .. import sleep from .. import _core from .._highlevel_generic import aclose_forcefully from ..testing import * from ..testing._check_streams import _assert_raises from ..testing._memory_streams import _UnboundedByteQueue from .. 
import socket as tsocket from .._highlevel_socket import SocketListener async def test_wait_all_tasks_blocked(): record = [] async def busy_bee(): for _ in range(10): await _core.checkpoint() record.append("busy bee exhausted") async def waiting_for_bee_to_leave(): await wait_all_tasks_blocked() record.append("quiet at last!") async with _core.open_nursery() as nursery: nursery.start_soon(busy_bee) nursery.start_soon(waiting_for_bee_to_leave) nursery.start_soon(waiting_for_bee_to_leave) # check cancellation record = [] async def cancelled_while_waiting(): try: await wait_all_tasks_blocked() except _core.Cancelled: record.append("ok") async with _core.open_nursery() as nursery: nursery.start_soon(cancelled_while_waiting) nursery.cancel_scope.cancel() assert record == ["ok"] async def test_wait_all_tasks_blocked_with_timeouts(mock_clock): record = [] async def timeout_task(): record.append("tt start") await sleep(5) record.append("tt finished") async with _core.open_nursery() as nursery: nursery.start_soon(timeout_task) await wait_all_tasks_blocked() assert record == ["tt start"] mock_clock.jump(10) await wait_all_tasks_blocked() assert record == ["tt start", "tt finished"] async def test_wait_all_tasks_blocked_with_cushion(): record = [] async def blink(): record.append("blink start") await sleep(0.01) await sleep(0.01) await sleep(0.01) record.append("blink end") async def wait_no_cushion(): await wait_all_tasks_blocked() record.append("wait_no_cushion end") async def wait_small_cushion(): await wait_all_tasks_blocked(0.02) record.append("wait_small_cushion end") async def wait_big_cushion(): await wait_all_tasks_blocked(0.03) record.append("wait_big_cushion end") async with _core.open_nursery() as nursery: nursery.start_soon(blink) nursery.start_soon(wait_no_cushion) nursery.start_soon(wait_small_cushion) nursery.start_soon(wait_small_cushion) nursery.start_soon(wait_big_cushion) assert record == [ "blink start", "wait_no_cushion end", "blink end", "wait_small_cushion end", "wait_small_cushion end", "wait_big_cushion end", ] ################################################################ async def test_assert_checkpoints(recwarn): with assert_checkpoints(): await _core.checkpoint() with pytest.raises(AssertionError): with assert_checkpoints(): 1 + 1 # partial yield cases # if you have a schedule point but not a cancel point, or vice-versa, then # that's not a checkpoint. for partial_yield in [ _core.checkpoint_if_cancelled, _core.cancel_shielded_checkpoint, ]: print(partial_yield) with pytest.raises(AssertionError): with assert_checkpoints(): await partial_yield() # But both together count as a checkpoint with assert_checkpoints(): await _core.checkpoint_if_cancelled() await _core.cancel_shielded_checkpoint() async def test_assert_no_checkpoints(recwarn): with assert_no_checkpoints(): 1 + 1 with pytest.raises(AssertionError): with assert_no_checkpoints(): await _core.checkpoint() # partial yield cases # if you have a schedule point but not a cancel point, or vice-versa, then # that doesn't make *either* version of assert_{no_,}yields happy. 
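# A standalone sketch of what "checkpoint" means in the helpers tested here:
# trio.testing.assert_checkpoints() fails unless its body executes both a
# schedule point and a cancel point, and trio.sleep(0) is the canonical full
# checkpoint. Illustrative only; not part of the test suite.
import trio
from trio.testing import assert_checkpoints, assert_no_checkpoints

async def demo_checkpoints():
    with assert_checkpoints():
        await trio.sleep(0)  # a full checkpoint: schedule point + cancel point
    with assert_no_checkpoints():
        1 + 1                # pure sync code never checkpoints

if __name__ == "__main__":
    trio.run(demo_checkpoints)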
for partial_yield in [ _core.checkpoint_if_cancelled, _core.cancel_shielded_checkpoint, ]: print(partial_yield) with pytest.raises(AssertionError): with assert_no_checkpoints(): await partial_yield() # And both together also count as a checkpoint with pytest.raises(AssertionError): with assert_no_checkpoints(): await _core.checkpoint_if_cancelled() await _core.cancel_shielded_checkpoint() ################################################################ async def test_Sequencer(): record = [] def t(val): print(val) record.append(val) async def f1(seq): async with seq(1): t(("f1", 1)) async with seq(3): t(("f1", 3)) async with seq(4): t(("f1", 4)) async def f2(seq): async with seq(0): t(("f2", 0)) async with seq(2): t(("f2", 2)) seq = Sequencer() async with _core.open_nursery() as nursery: nursery.start_soon(f1, seq) nursery.start_soon(f2, seq) async with seq(5): await wait_all_tasks_blocked() assert record == [("f2", 0), ("f1", 1), ("f2", 2), ("f1", 3), ("f1", 4)] seq = Sequencer() # Catches us if we try to re-use a sequence point: async with seq(0): pass with pytest.raises(RuntimeError): async with seq(0): pass # pragma: no cover async def test_Sequencer_cancel(): # Killing a blocked task makes everything blow up record = [] seq = Sequencer() async def child(i): with _core.CancelScope() as scope: if i == 1: scope.cancel() try: async with seq(i): pass # pragma: no cover except RuntimeError: record.append("seq({}) RuntimeError".format(i)) async with _core.open_nursery() as nursery: nursery.start_soon(child, 1) nursery.start_soon(child, 2) async with seq(0): pass # pragma: no cover assert record == ["seq(1) RuntimeError", "seq(2) RuntimeError"] # Late arrivals also get errors with pytest.raises(RuntimeError): async with seq(3): pass # pragma: no cover ################################################################ async def test__assert_raises(): with pytest.raises(AssertionError): with _assert_raises(RuntimeError): 1 + 1 with pytest.raises(TypeError): with _assert_raises(RuntimeError): "foo" + 1 with _assert_raises(RuntimeError): raise RuntimeError # This is a private implementation detail, but it's complex enough to be worth # testing directly async def test__UnboundedByteQueue(): ubq = _UnboundedByteQueue() ubq.put(b"123") ubq.put(b"456") assert ubq.get_nowait(1) == b"1" assert ubq.get_nowait(10) == b"23456" ubq.put(b"789") assert ubq.get_nowait() == b"789" with pytest.raises(_core.WouldBlock): ubq.get_nowait(10) with pytest.raises(_core.WouldBlock): ubq.get_nowait() with pytest.raises(TypeError): ubq.put("string") ubq.put(b"abc") with assert_checkpoints(): assert await ubq.get(10) == b"abc" ubq.put(b"def") ubq.put(b"ghi") with assert_checkpoints(): assert await ubq.get(1) == b"d" with assert_checkpoints(): assert await ubq.get() == b"efghi" async def putter(data): await wait_all_tasks_blocked() ubq.put(data) async def getter(expect): with assert_checkpoints(): assert await ubq.get() == expect async with _core.open_nursery() as nursery: nursery.start_soon(getter, b"xyz") nursery.start_soon(putter, b"xyz") # Two gets at the same time -> BusyResourceError with pytest.raises(_core.BusyResourceError): async with _core.open_nursery() as nursery: nursery.start_soon(getter, b"asdf") nursery.start_soon(getter, b"asdf") # Closing ubq.close() with pytest.raises(_core.ClosedResourceError): ubq.put(b"---") assert ubq.get_nowait(10) == b"" assert ubq.get_nowait() == b"" assert await ubq.get(10) == b"" assert await ubq.get() == b"" # close is idempotent ubq.close() # close wakes up blocked getters ubq2
= _UnboundedByteQueue() async def closer(): await wait_all_tasks_blocked() ubq2.close() async with _core.open_nursery() as nursery: nursery.start_soon(getter, b"") nursery.start_soon(closer) async def test_MemorySendStream(): mss = MemorySendStream() async def do_send_all(data): with assert_checkpoints(): await mss.send_all(data) await do_send_all(b"123") assert mss.get_data_nowait(1) == b"1" assert mss.get_data_nowait() == b"23" with assert_checkpoints(): await mss.wait_send_all_might_not_block() with pytest.raises(_core.WouldBlock): mss.get_data_nowait() with pytest.raises(_core.WouldBlock): mss.get_data_nowait(10) await do_send_all(b"456") with assert_checkpoints(): assert await mss.get_data() == b"456" # Call send_all twice at once; one should get BusyResourceError and one # should succeed. But we can't let the error propagate, because it might # cause the other to be cancelled before it can finish doing its thing, # and we don't know which one will get the error. resource_busy_count = 0 async def do_send_all_count_resourcebusy(): nonlocal resource_busy_count try: await do_send_all(b"xxx") except _core.BusyResourceError: resource_busy_count += 1 async with _core.open_nursery() as nursery: nursery.start_soon(do_send_all_count_resourcebusy) nursery.start_soon(do_send_all_count_resourcebusy) assert resource_busy_count == 1 with assert_checkpoints(): await mss.aclose() assert await mss.get_data() == b"xxx" assert await mss.get_data() == b"" with pytest.raises(_core.ClosedResourceError): await do_send_all(b"---") # hooks assert mss.send_all_hook is None assert mss.wait_send_all_might_not_block_hook is None assert mss.close_hook is None record = [] async def send_all_hook(): # hook runs after send_all does its work (can pull data out) assert mss2.get_data_nowait() == b"abc" record.append("send_all_hook") async def wait_send_all_might_not_block_hook(): record.append("wait_send_all_might_not_block_hook") def close_hook(): record.append("close_hook") mss2 = MemorySendStream( send_all_hook, wait_send_all_might_not_block_hook, close_hook ) assert mss2.send_all_hook is send_all_hook assert mss2.wait_send_all_might_not_block_hook is wait_send_all_might_not_block_hook assert mss2.close_hook is close_hook await mss2.send_all(b"abc") await mss2.wait_send_all_might_not_block() await aclose_forcefully(mss2) mss2.close() assert record == [ "send_all_hook", "wait_send_all_might_not_block_hook", "close_hook", "close_hook", ] async def test_MemoryReceiveStream(): mrs = MemoryReceiveStream() async def do_receive_some(max_bytes): with assert_checkpoints(): return await mrs.receive_some(max_bytes) mrs.put_data(b"abc") assert await do_receive_some(1) == b"a" assert await do_receive_some(10) == b"bc" mrs.put_data(b"abc") assert await do_receive_some(None) == b"abc" with pytest.raises(_core.BusyResourceError): async with _core.open_nursery() as nursery: nursery.start_soon(do_receive_some, 10) nursery.start_soon(do_receive_some, 10) assert mrs.receive_some_hook is None mrs.put_data(b"def") mrs.put_eof() mrs.put_eof() assert await do_receive_some(10) == b"def" assert await do_receive_some(10) == b"" assert await do_receive_some(10) == b"" with pytest.raises(_core.ClosedResourceError): mrs.put_data(b"---") async def receive_some_hook(): mrs2.put_data(b"xxx") record = [] def close_hook(): record.append("closed") mrs2 = MemoryReceiveStream(receive_some_hook, close_hook) assert mrs2.receive_some_hook is receive_some_hook assert mrs2.close_hook is close_hook mrs2.put_data(b"yyy") assert await mrs2.receive_some(10) == 
b"yyyxxx" assert await mrs2.receive_some(10) == b"xxx" assert await mrs2.receive_some(10) == b"xxx" mrs2.put_data(b"zzz") mrs2.receive_some_hook = None assert await mrs2.receive_some(10) == b"zzz" mrs2.put_data(b"lost on close") with assert_checkpoints(): await mrs2.aclose() assert record == ["closed"] with pytest.raises(_core.ClosedResourceError): await mrs2.receive_some(10) async def test_MemoryRecvStream_closing(): mrs = MemoryReceiveStream() # close with no pending data mrs.close() with pytest.raises(_core.ClosedResourceError): assert await mrs.receive_some(10) == b"" # repeated closes ok mrs.close() # put_data now fails with pytest.raises(_core.ClosedResourceError): mrs.put_data(b"123") mrs2 = MemoryReceiveStream() # close with pending data mrs2.put_data(b"xyz") mrs2.close() with pytest.raises(_core.ClosedResourceError): await mrs2.receive_some(10) async def test_memory_stream_pump(): mss = MemorySendStream() mrs = MemoryReceiveStream() # no-op if no data present memory_stream_pump(mss, mrs) await mss.send_all(b"123") memory_stream_pump(mss, mrs) assert await mrs.receive_some(10) == b"123" await mss.send_all(b"456") assert memory_stream_pump(mss, mrs, max_bytes=1) assert await mrs.receive_some(10) == b"4" assert memory_stream_pump(mss, mrs, max_bytes=1) assert memory_stream_pump(mss, mrs, max_bytes=1) assert not memory_stream_pump(mss, mrs, max_bytes=1) assert await mrs.receive_some(10) == b"56" mss.close() memory_stream_pump(mss, mrs) assert await mrs.receive_some(10) == b"" async def test_memory_stream_one_way_pair(): s, r = memory_stream_one_way_pair() assert s.send_all_hook is not None assert s.wait_send_all_might_not_block_hook is None assert s.close_hook is not None assert r.receive_some_hook is None await s.send_all(b"123") assert await r.receive_some(10) == b"123" async def receiver(expected): assert await r.receive_some(10) == expected # This fails if we pump on r.receive_some_hook; we need to pump on s.send_all_hook async with _core.open_nursery() as nursery: nursery.start_soon(receiver, b"abc") await wait_all_tasks_blocked() await s.send_all(b"abc") # And this fails if we don't pump from close_hook async with _core.open_nursery() as nursery: nursery.start_soon(receiver, b"") await wait_all_tasks_blocked() await s.aclose() s, r = memory_stream_one_way_pair() async with _core.open_nursery() as nursery: nursery.start_soon(receiver, b"") await wait_all_tasks_blocked() s.close() s, r = memory_stream_one_way_pair() old = s.send_all_hook s.send_all_hook = None await s.send_all(b"456") async def cancel_after_idle(nursery): await wait_all_tasks_blocked() nursery.cancel_scope.cancel() async def check_for_cancel(): with pytest.raises(_core.Cancelled): # This should block forever... or until cancelled. Even though we # sent some data on the send stream. 
await r.receive_some(10) async with _core.open_nursery() as nursery: nursery.start_soon(cancel_after_idle, nursery) nursery.start_soon(check_for_cancel) s.send_all_hook = old await s.send_all(b"789") assert await r.receive_some(10) == b"456789" async def test_memory_stream_pair(): a, b = memory_stream_pair() await a.send_all(b"123") await b.send_all(b"abc") assert await b.receive_some(10) == b"123" assert await a.receive_some(10) == b"abc" await a.send_eof() assert await b.receive_some(10) == b"" async def sender(): await wait_all_tasks_blocked() await b.send_all(b"xyz") async def receiver(): assert await a.receive_some(10) == b"xyz" async with _core.open_nursery() as nursery: nursery.start_soon(receiver) nursery.start_soon(sender) async def test_memory_streams_with_generic_tests(): async def one_way_stream_maker(): return memory_stream_one_way_pair() await check_one_way_stream(one_way_stream_maker, None) async def half_closeable_stream_maker(): return memory_stream_pair() await check_half_closeable_stream(half_closeable_stream_maker, None) async def test_lockstep_streams_with_generic_tests(): async def one_way_stream_maker(): return lockstep_stream_one_way_pair() await check_one_way_stream(one_way_stream_maker, one_way_stream_maker) async def two_way_stream_maker(): return lockstep_stream_pair() await check_two_way_stream(two_way_stream_maker, two_way_stream_maker) async def test_open_stream_to_socket_listener(): async def check(listener): async with listener: client_stream = await open_stream_to_socket_listener(listener) async with client_stream: server_stream = await listener.accept() async with server_stream: await client_stream.send_all(b"x") assert await server_stream.receive_some(1) == b"x" # Listener bound to localhost sock = tsocket.socket() await sock.bind(("127.0.0.1", 0)) sock.listen(10) await check(SocketListener(sock)) # Listener bound to IPv4 wildcard (needs special handling) sock = tsocket.socket() await sock.bind(("0.0.0.0", 0)) sock.listen(10) await check(SocketListener(sock)) if can_bind_ipv6: # Listener bound to IPv6 wildcard (needs special handling) sock = tsocket.socket(family=tsocket.AF_INET6) await sock.bind(("::", 0)) sock.listen(10) await check(SocketListener(sock)) if hasattr(tsocket, "AF_UNIX"): # Listener bound to Unix-domain socket sock = tsocket.socket(family=tsocket.AF_UNIX) # can't use pytest's tmpdir; if we try then macOS says "OSError: # AF_UNIX path too long" with tempfile.TemporaryDirectory() as tmpdir: path = "{}/sock".format(tmpdir) await sock.bind(path) sock.listen(10) await check(SocketListener(sock)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_threads.py0000644000076500000000000004542500000000000017304 0ustar00quentinwheelimport threading import queue as stdlib_queue import time import weakref import pytest from trio._core import TrioToken, current_trio_token from .. import _core from ..
import Event, CapacityLimiter, sleep from ..testing import wait_all_tasks_blocked from .._core.tests.tutil import buggy_pypy_asyncgens from .._threads import ( to_thread_run_sync, current_default_thread_limiter, from_thread_run, from_thread_run_sync, ) from .._core.tests.test_ki import ki_self async def test_do_in_trio_thread(): trio_thread = threading.current_thread() async def check_case(do_in_trio_thread, fn, expected, trio_token=None): record = [] def threadfn(): try: record.append(("start", threading.current_thread())) x = do_in_trio_thread(fn, record, trio_token=trio_token) record.append(("got", x)) except BaseException as exc: print(exc) record.append(("error", type(exc))) child_thread = threading.Thread(target=threadfn, daemon=True) child_thread.start() while child_thread.is_alive(): print("yawn") await sleep(0.01) assert record == [("start", child_thread), ("f", trio_thread), expected] token = _core.current_trio_token() def f(record): assert not _core.currently_ki_protected() record.append(("f", threading.current_thread())) return 2 await check_case(from_thread_run_sync, f, ("got", 2), trio_token=token) def f(record): assert not _core.currently_ki_protected() record.append(("f", threading.current_thread())) raise ValueError await check_case(from_thread_run_sync, f, ("error", ValueError), trio_token=token) async def f(record): assert not _core.currently_ki_protected() await _core.checkpoint() record.append(("f", threading.current_thread())) return 3 await check_case(from_thread_run, f, ("got", 3), trio_token=token) async def f(record): assert not _core.currently_ki_protected() await _core.checkpoint() record.append(("f", threading.current_thread())) raise KeyError await check_case(from_thread_run, f, ("error", KeyError), trio_token=token) async def test_do_in_trio_thread_from_trio_thread(): with pytest.raises(RuntimeError): from_thread_run_sync(lambda: None) # pragma: no branch async def foo(): # pragma: no cover pass with pytest.raises(RuntimeError): from_thread_run(foo) def test_run_in_trio_thread_ki(): # if we get a control-C during a run_in_trio_thread, then it propagates # back to the caller (slick!) 
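# A standalone sketch of the thread round-trip these tests rely on:
# trio.to_thread.run_sync pushes a sync function into a worker thread, and
# inside that thread trio.from_thread.run_sync re-enters the Trio loop (the
# necessary token is injected automatically). Illustrative only; not part of
# the test suite.
import threading
import trio

async def demo_thread_round_trip():
    trio_thread = threading.current_thread()

    def in_worker_thread():
        assert threading.current_thread() is not trio_thread
        # Hop back into the Trio thread for one synchronous call:
        return trio.from_thread.run_sync(trio.current_time)

    now = await trio.to_thread.run_sync(in_worker_thread)
    assert isinstance(now, float)

if __name__ == "__main__":
    trio.run(demo_thread_round_trip)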
record = set() async def check_run_in_trio_thread(): token = _core.current_trio_token() def trio_thread_fn(): print("in Trio thread") assert not _core.currently_ki_protected() print("ki_self") try: ki_self() finally: import sys print("finally", sys.exc_info()) async def trio_thread_afn(): trio_thread_fn() def external_thread_fn(): try: print("running") from_thread_run_sync(trio_thread_fn, trio_token=token) except KeyboardInterrupt: print("ok1") record.add("ok1") try: from_thread_run(trio_thread_afn, trio_token=token) except KeyboardInterrupt: print("ok2") record.add("ok2") thread = threading.Thread(target=external_thread_fn) thread.start() print("waiting") while thread.is_alive(): await sleep(0.01) print("waited, joining") thread.join() print("done") _core.run(check_run_in_trio_thread) assert record == {"ok1", "ok2"} def test_await_in_trio_thread_while_main_exits(): record = [] ev = Event() async def trio_fn(): record.append("sleeping") ev.set() await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED) def thread_fn(token): try: from_thread_run(trio_fn, trio_token=token) except _core.Cancelled: record.append("cancelled") async def main(): token = _core.current_trio_token() thread = threading.Thread(target=thread_fn, args=(token,)) thread.start() await ev.wait() assert record == ["sleeping"] return thread thread = _core.run(main) thread.join() assert record == ["sleeping", "cancelled"] async def test_run_in_worker_thread(): trio_thread = threading.current_thread() def f(x): return (x, threading.current_thread()) x, child_thread = await to_thread_run_sync(f, 1) assert x == 1 assert child_thread != trio_thread def g(): raise ValueError(threading.current_thread()) with pytest.raises(ValueError) as excinfo: await to_thread_run_sync(g) print(excinfo.value.args) assert excinfo.value.args[0] != trio_thread async def test_run_in_worker_thread_cancellation(): register = [None] def f(q): # Make the thread block for a controlled amount of time register[0] = "blocking" q.get() register[0] = "finished" async def child(q, cancellable): record.append("start") try: return await to_thread_run_sync(f, q, cancellable=cancellable) finally: record.append("exit") record = [] q = stdlib_queue.Queue() async with _core.open_nursery() as nursery: nursery.start_soon(child, q, True) # Give it a chance to get started. (This is important because # to_thread_run_sync does a checkpoint_if_cancelled before # blocking on the thread, and we don't want to trigger this.) await wait_all_tasks_blocked() assert record == ["start"] # Then cancel it. nursery.cancel_scope.cancel() # The task exited, but the thread didn't: assert register[0] != "finished" # Put the thread out of its misery: q.put(None) while register[0] != "finished": time.sleep(0.01) # This one can't be cancelled record = [] register[0] = None async with _core.open_nursery() as nursery: nursery.start_soon(child, q, False) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() with _core.CancelScope(shield=True): for _ in range(10): await _core.checkpoint() # It's still running assert record == ["start"] q.put(None) # Now it exits # But if we cancel *before* it enters, the entry is itself a cancellation # point with _core.CancelScope() as scope: scope.cancel() await child(q, False) assert scope.cancelled_caught # Make sure that if trio.run exits, and then the thread finishes, then that's # handled gracefully. (Requires that the thread result machinery be prepared # for call_soon to raise RunFinishedError.) 
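# A sketch of the cancellation semantics pinned down by the tests around
# here: by default a task blocked in to_thread.run_sync cannot be cancelled
# (the thread must finish first), but with cancellable=True the task is
# cancelled immediately and the thread is abandoned to run to completion in
# the background. Illustrative only.
import time
import trio

async def demo_cancellable_thread():
    with trio.move_on_after(0.1) as scope:
        # With the default cancellable=False this would block the full second.
        await trio.to_thread.run_sync(time.sleep, 1.0, cancellable=True)
    assert scope.cancelled_caught

if __name__ == "__main__":
    trio.run(demo_cancellable_thread)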
def test_run_in_worker_thread_abandoned(capfd, monkeypatch): monkeypatch.setattr(_core._thread_cache, "IDLE_TIMEOUT", 0.01) q1 = stdlib_queue.Queue() q2 = stdlib_queue.Queue() def thread_fn(): q1.get() q2.put(threading.current_thread()) async def main(): async def child(): await to_thread_run_sync(thread_fn, cancellable=True) async with _core.open_nursery() as nursery: nursery.start_soon(child) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() _core.run(main) q1.put(None) # This makes sure: # - the thread actually ran # - that thread has finished before we check for its output thread = q2.get() while thread.is_alive(): time.sleep(0.01) # pragma: no cover # Make sure we don't have an "Exception in thread ..." dump to the console: out, err = capfd.readouterr() assert "Exception in thread" not in out assert "Exception in thread" not in err @pytest.mark.parametrize("MAX", [3, 5, 10]) @pytest.mark.parametrize("cancel", [False, True]) @pytest.mark.parametrize("use_default_limiter", [False, True]) async def test_run_in_worker_thread_limiter(MAX, cancel, use_default_limiter): # This test is a bit tricky. The goal is to make sure that if we set # limiter=CapacityLimiter(MAX), then in fact only MAX threads are ever # running at a time, even if there are more concurrent calls to # to_thread_run_sync, and even if some of those are cancelled. And # also to make sure that the default limiter actually limits. COUNT = 2 * MAX gate = threading.Event() lock = threading.Lock() if use_default_limiter: c = current_default_thread_limiter() orig_total_tokens = c.total_tokens c.total_tokens = MAX limiter_arg = None else: c = CapacityLimiter(MAX) orig_total_tokens = MAX limiter_arg = c try: # We used to use regular variables and 'nonlocal' here, but it turns # out that it's not safe to assign to closed-over variables that are # visible in multiple threads, at least as of CPython 3.6 and PyPy # 5.8: # # https://bugs.python.org/issue30744 # https://bitbucket.org/pypy/pypy/issues/2591/ # # Mutating them in-place is OK though (as long as you use proper # locking etc.). class state: pass state.ran = 0 state.high_water = 0 state.running = 0 state.parked = 0 token = _core.current_trio_token() def thread_fn(cancel_scope): print("thread_fn start") from_thread_run_sync(cancel_scope.cancel, trio_token=token) with lock: state.ran += 1 state.running += 1 state.high_water = max(state.high_water, state.running) # The Trio thread below watches this value and uses it as a # signal that all the stats calculations have finished. state.parked += 1 gate.wait() with lock: state.parked -= 1 state.running -= 1 print("thread_fn exiting") async def run_thread(event): with _core.CancelScope() as cancel_scope: await to_thread_run_sync( thread_fn, cancel_scope, limiter=limiter_arg, cancellable=cancel ) print("run_thread finished, cancelled:", cancel_scope.cancelled_caught) event.set() async with _core.open_nursery() as nursery: print("spawning") events = [] for i in range(COUNT): events.append(Event()) nursery.start_soon(run_thread, events[-1]) await wait_all_tasks_blocked() # In the cancel case, we in particular want to make sure that the # cancelled tasks don't release the semaphore.
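# A compact sketch of the invariant this limiter test checks: a
# CapacityLimiter passed to to_thread.run_sync caps how many worker threads
# run at once, no matter how many tasks request one. Illustrative standalone
# code; all names are local to the sketch.
import threading
import time
import trio

async def demo_thread_limiter():
    limiter = trio.CapacityLimiter(2)
    lock = threading.Lock()
    state = {"running": 0, "high_water": 0}

    def thread_fn():
        with lock:
            state["running"] += 1
            state["high_water"] = max(state["high_water"], state["running"])
        time.sleep(0.05)  # simulate real work
        with lock:
            state["running"] -= 1

    async def run_one():
        await trio.to_thread.run_sync(thread_fn, limiter=limiter)

    async with trio.open_nursery() as nursery:
        for _ in range(6):
            nursery.start_soon(run_one)
    assert state["high_water"] <= 2

if __name__ == "__main__":
    trio.run(demo_thread_limiter)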
So let's wait until # at least one of them has exited, and that everything has had a # chance to settle down from this, before we check that everyone # who's supposed to be waiting is waiting: if cancel: print("waiting for first cancellation to clear") await events[0].wait() await wait_all_tasks_blocked() # Then wait until the first MAX threads are parked in gate.wait(), # and the next MAX threads are parked on the semaphore, to make # sure no-one is sneaking past, and to make sure the high_water # check below won't fail due to scheduling issues. (It could still # fail if too many threads are let through here.) while state.parked != MAX or c.statistics().tasks_waiting != MAX: await sleep(0.01) # pragma: no cover # Then release the threads gate.set() assert state.high_water == MAX if cancel: # Some threads might still be running; need to wait for them to # finish before checking that all threads ran. We can do this # using the CapacityLimiter. while c.borrowed_tokens > 0: await sleep(0.01) # pragma: no cover assert state.ran == COUNT assert state.running == 0 finally: c.total_tokens = orig_total_tokens async def test_run_in_worker_thread_custom_limiter(): # Basically just checking that we only call acquire_on_behalf_of and # release_on_behalf_of, since that's part of our documented API. record = [] class CustomLimiter: async def acquire_on_behalf_of(self, borrower): record.append("acquire") self._borrower = borrower def release_on_behalf_of(self, borrower): record.append("release") assert borrower == self._borrower await to_thread_run_sync(lambda: None, limiter=CustomLimiter()) assert record == ["acquire", "release"] async def test_run_in_worker_thread_limiter_error(): record = [] class BadCapacityLimiter: async def acquire_on_behalf_of(self, borrower): record.append("acquire") def release_on_behalf_of(self, borrower): record.append("release") raise ValueError bs = BadCapacityLimiter() with pytest.raises(ValueError) as excinfo: await to_thread_run_sync(lambda: None, limiter=bs) assert excinfo.value.__context__ is None assert record == ["acquire", "release"] record = [] # If the original function raised an error, then the semaphore error # chains with it d = {} with pytest.raises(ValueError) as excinfo: await to_thread_run_sync(lambda: d["x"], limiter=bs) assert isinstance(excinfo.value.__context__, KeyError) assert record == ["acquire", "release"] async def test_run_in_worker_thread_fail_to_spawn(monkeypatch): # Test the unlikely but possible case where trying to spawn a thread fails def bad_start(self, *args): raise RuntimeError("the engines canna take it captain") monkeypatch.setattr(_core._thread_cache.ThreadCache, "start_thread_soon", bad_start) limiter = current_default_thread_limiter() assert limiter.borrowed_tokens == 0 # We get an appropriate error, and the limiter is cleanly released with pytest.raises(RuntimeError) as excinfo: await to_thread_run_sync(lambda: None) # pragma: no cover assert "engines" in str(excinfo.value) assert limiter.borrowed_tokens == 0 async def test_trio_to_thread_run_sync_token(): # Test that to_thread_run_sync automatically injects the current trio token # into a spawned thread def thread_fn(): callee_token = from_thread_run_sync(_core.current_trio_token) return callee_token caller_token = _core.current_trio_token() callee_token = await to_thread_run_sync(thread_fn) assert callee_token == caller_token async def test_trio_to_thread_run_sync_expected_error(): # Test correct error when passed async function async def async_fn(): # pragma: no cover pass with
pytest.raises(TypeError, match="expected a sync function"): await to_thread_run_sync(async_fn) async def test_trio_from_thread_run_sync(): # Test that to_thread_run_sync correctly "hands off" the trio token to # trio.from_thread.run_sync() def thread_fn(): trio_time = from_thread_run_sync(_core.current_time) return trio_time trio_time = await to_thread_run_sync(thread_fn) assert isinstance(trio_time, float) # Test correct error when passed async function async def async_fn(): # pragma: no cover pass def thread_fn(): from_thread_run_sync(async_fn) with pytest.raises(TypeError, match="expected a sync function"): await to_thread_run_sync(thread_fn) async def test_trio_from_thread_run(): # Test that to_thread_run_sync correctly "hands off" the trio token to # trio.from_thread.run() record = [] async def back_in_trio_fn(): _core.current_time() # implicitly checks that we're in trio record.append("back in trio") def thread_fn(): record.append("in thread") from_thread_run(back_in_trio_fn) await to_thread_run_sync(thread_fn) assert record == ["in thread", "back in trio"] # Test correct error when passed sync function def sync_fn(): # pragma: no cover pass with pytest.raises(TypeError, match="appears to be synchronous"): await to_thread_run_sync(from_thread_run, sync_fn) async def test_trio_from_thread_token(): # Test that to_thread_run_sync and spawned trio.from_thread.run_sync() # share the same Trio token def thread_fn(): callee_token = from_thread_run_sync(_core.current_trio_token) return callee_token caller_token = _core.current_trio_token() callee_token = await to_thread_run_sync(thread_fn) assert callee_token == caller_token async def test_trio_from_thread_token_kwarg(): # Test that to_thread_run_sync and spawned trio.from_thread.run_sync() can # use an explicitly defined token def thread_fn(token): callee_token = from_thread_run_sync(_core.current_trio_token, trio_token=token) return callee_token caller_token = _core.current_trio_token() callee_token = await to_thread_run_sync(thread_fn, caller_token) assert callee_token == caller_token async def test_from_thread_no_token(): # Test that a "raw call" to trio.from_thread.run() fails because no token # has been provided with pytest.raises(RuntimeError): from_thread_run_sync(_core.current_time) def test_run_fn_as_system_task_catches_badly_typed_token(): with pytest.raises(RuntimeError): from_thread_run_sync(_core.current_time, trio_token="Not TrioTokentype") async def test_from_thread_inside_trio_thread(): def not_called(): # pragma: no cover assert False trio_token = _core.current_trio_token() with pytest.raises(RuntimeError): from_thread_run_sync(not_called, trio_token=trio_token) @pytest.mark.skipif(buggy_pypy_asyncgens, reason="pypy 7.2.0 is buggy") def test_from_thread_run_during_shutdown(): save = [] record = [] async def agen(): try: yield finally: with pytest.raises(_core.RunFinishedError), _core.CancelScope(shield=True): await to_thread_run_sync(from_thread_run, sleep, 0) record.append("ok") async def main(): save.append(agen()) await save[-1].asend(None) _core.run(main) assert record == ["ok"] async def test_trio_token_weak_referenceable(): token = current_trio_token() assert isinstance(token, TrioToken) weak_reference = weakref.ref(token) assert token is weak_reference() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_timeouts.py0000644000076500000000000000640700000000000017530 0ustar00quentinwheelimport outcome import pytest import time from
.._core.tests.tutil import slow from .. import _core from ..testing import assert_checkpoints from .._timeouts import * async def check_takes_about(f, expected_dur): start = time.perf_counter() result = await outcome.acapture(f) dur = time.perf_counter() - start print(dur / expected_dur) # 1.5 is an arbitrary fudge factor because there's always some delay # between when we become eligible to wake up and when we actually do. We # used to sleep for 0.05, and regularly observed overruns of 1.6x on # Appveyor, and then started seeing overruns of 2.3x on Travis's macOS, so # now we bumped up the sleep to 1 second, marked the tests as slow, and # hopefully now the proportional error will be less huge. # # We also allow for durations that are a hair shorter than expected. For # example, here's a run on Windows where a 1.0 second sleep was measured # to take 0.9999999999999858 seconds: # https://ci.appveyor.com/project/njsmith/trio/build/1.0.768/job/3lbdyxl63q3h9s21 # I believe that what happened here is that Windows's low clock resolution # meant that our calls to time.monotonic() returned exactly the same # values as the calls inside the actual run loop, but the two subtractions # returned slightly different values because the run loop's clock adds a # random floating point offset to both times, which should cancel out, but # lol floating point we got slightly different rounding errors. (That # value above is exactly 128 ULPs below 1.0, which would make sense if it # started as a 1 ULP error at a different dynamic range.) assert (1 - 1e-8) <= (dur / expected_dur) < 1.5 return result.unwrap() # How long to (attempt to) sleep for when testing. Smaller numbers make the # test suite go faster. TARGET = 1.0 @slow async def test_sleep(): async def sleep_1(): await sleep_until(_core.current_time() + TARGET) await check_takes_about(sleep_1, TARGET) async def sleep_2(): await sleep(TARGET) await check_takes_about(sleep_2, TARGET) with pytest.raises(ValueError): await sleep(-1) with assert_checkpoints(): await sleep(0) # This also serves as a test of the trivial move_on_at with move_on_at(_core.current_time()): with pytest.raises(_core.Cancelled): await sleep(0) @slow async def test_move_on_after(): with pytest.raises(ValueError): with move_on_after(-1): pass # pragma: no cover async def sleep_3(): with move_on_after(TARGET): await sleep(100) await check_takes_about(sleep_3, TARGET) @slow async def test_fail(): async def sleep_4(): with fail_at(_core.current_time() + TARGET): await sleep(100) with pytest.raises(TooSlowError): await check_takes_about(sleep_4, TARGET) with fail_at(_core.current_time() + 100): await sleep(0) async def sleep_5(): with fail_after(TARGET): await sleep(100) with pytest.raises(TooSlowError): await check_takes_about(sleep_5, TARGET) with fail_after(100): await sleep(0) with pytest.raises(ValueError): with fail_after(-1): pass # pragma: no cover ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_unix_pipes.py0000644000076500000000000002130300000000000020032 0ustar00quentinwheelimport errno import select import os import tempfile import sys import pytest from .._core.tests.tutil import gc_collect_harder, skip_if_fbsd_pipes_broken from ..
import _core, move_on_after from ..testing import wait_all_tasks_blocked, check_one_way_stream posix = os.name == "posix" pytestmark = pytest.mark.skipif(not posix, reason="posix only") if posix: from .._unix_pipes import FdStream else: with pytest.raises(ImportError): from .._unix_pipes import FdStream # Have to use quoted types so import doesn't crash on windows async def make_pipe() -> "Tuple[FdStream, FdStream]": """Makes a new pair of pipes.""" (r, w) = os.pipe() return FdStream(w), FdStream(r) async def make_clogged_pipe(): s, r = await make_pipe() try: while True: # We want to totally fill up the pipe buffer. # This requires working around a weird feature that POSIX pipes # have. # If you do a write of <= PIPE_BUF bytes, then it's guaranteed # to either complete entirely, or not at all. So if we tried to # write PIPE_BUF bytes, and the buffer's free space is only # PIPE_BUF/2, then the write will raise BlockingIOError... even # though a smaller write could still succeed! To avoid this, # make sure to write >PIPE_BUF bytes each time, which disables # the special behavior. # For details, search for PIPE_BUF here: # http://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html # for the getattr: # https://bitbucket.org/pypy/pypy/issues/2876/selectpipe_buf-is-missing-on-pypy3 buf_size = getattr(select, "PIPE_BUF", 8192) os.write(s.fileno(), b"x" * buf_size * 2) except BlockingIOError: pass return s, r async def test_send_pipe(): r, w = os.pipe() async with FdStream(w) as send: assert send.fileno() == w await send.send_all(b"123") assert (os.read(r, 8)) == b"123" os.close(r) async def test_receive_pipe(): r, w = os.pipe() async with FdStream(r) as recv: assert (recv.fileno()) == r os.write(w, b"123") assert (await recv.receive_some(8)) == b"123" os.close(w) async def test_pipes_combined(): write, read = await make_pipe() count = 2 ** 20 async def sender(): big = bytearray(count) await write.send_all(big) async def reader(): await wait_all_tasks_blocked() received = 0 while received < count: received += len(await read.receive_some(4096)) assert received == count async with _core.open_nursery() as n: n.start_soon(sender) n.start_soon(reader) await read.aclose() await write.aclose() async def test_pipe_errors(): with pytest.raises(TypeError): FdStream(None) r, w = os.pipe() os.close(w) async with FdStream(r) as s: with pytest.raises(ValueError): await s.receive_some(0) async def test_del(): w, r = await make_pipe() f1, f2 = w.fileno(), r.fileno() del w, r gc_collect_harder() with pytest.raises(OSError) as excinfo: os.close(f1) assert excinfo.value.errno == errno.EBADF with pytest.raises(OSError) as excinfo: os.close(f2) assert excinfo.value.errno == errno.EBADF async def test_async_with(): w, r = await make_pipe() async with w, r: pass assert w.fileno() == -1 assert r.fileno() == -1 with pytest.raises(OSError) as excinfo: os.close(w.fileno()) assert excinfo.value.errno == errno.EBADF with pytest.raises(OSError) as excinfo: os.close(r.fileno()) assert excinfo.value.errno == errno.EBADF async def test_misdirected_aclose_regression(): # https://github.com/python-trio/trio/issues/661#issuecomment-456582356 w, r = await make_pipe() old_r_fd = r.fileno() # Close the original objects await w.aclose() await r.aclose() # Do a little dance to get a new pipe whose receive handle matches the old # receive handle. 
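# A POSIX-only sketch of the PIPE_BUF subtlety that make_clogged_pipe() above
# works around: writes of <= PIPE_BUF bytes are all-or-nothing, so a nearly
# full pipe rejects them outright, while larger writes are allowed to succeed
# partially -- which is what lets a loop fill the kernel buffer completely.
# Illustrative only; the function and fd names are local to this sketch.
import os
import select

def fill_pipe_completely(write_fd):
    os.set_blocking(write_fd, False)
    # Always write more than PIPE_BUF to disable the atomicity guarantee:
    chunk = b"x" * (getattr(select, "PIPE_BUF", 8192) * 2)
    written = 0
    while True:
        try:
            written += os.write(write_fd, chunk)
        except BlockingIOError:
            return written  # the kernel buffer is now completely full

if __name__ == "__main__":
    r, w = os.pipe()
    print("pipe capacity:", fill_pipe_completely(w), "bytes")
    os.close(r)
    os.close(w)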
r2_fd, w2_fd = os.pipe() if r2_fd != old_r_fd: # pragma: no cover os.dup2(r2_fd, old_r_fd) os.close(r2_fd) async with FdStream(old_r_fd) as r2: assert r2.fileno() == old_r_fd # And now set up a background task that's working on the new receive # handle async def expect_eof(): assert await r2.receive_some(10) == b"" async with _core.open_nursery() as nursery: nursery.start_soon(expect_eof) await wait_all_tasks_blocked() # Here's the key test: does calling aclose() again on the *old* # handle, cause the task blocked on the *new* handle to raise # ClosedResourceError? await r.aclose() await wait_all_tasks_blocked() # Guess we survived! Close the new write handle so that the task # gets an EOF and can exit cleanly. os.close(w2_fd) async def test_close_at_bad_time_for_receive_some(monkeypatch): # We used to have race conditions where if one task was using the pipe, # and another closed it at *just* the wrong moment, it would give an # unexpected error instead of ClosedResourceError: # https://github.com/python-trio/trio/issues/661 # # This tests what happens if the pipe gets closed in the moment *between* # when receive_some wakes up, and when it tries to call os.read async def expect_closedresourceerror(): with pytest.raises(_core.ClosedResourceError): await r.receive_some(10) orig_wait_readable = _core._run.TheIOManager.wait_readable async def patched_wait_readable(*args, **kwargs): await orig_wait_readable(*args, **kwargs) await r.aclose() monkeypatch.setattr(_core._run.TheIOManager, "wait_readable", patched_wait_readable) s, r = await make_pipe() async with s, r: async with _core.open_nursery() as nursery: nursery.start_soon(expect_closedresourceerror) await wait_all_tasks_blocked() # Trigger everything by waking up the receiver await s.send_all(b"x") async def test_close_at_bad_time_for_send_all(monkeypatch): # We used to have race conditions where if one task was using the pipe, # and another closed it at *just* the wrong moment, it would give an # unexpected error instead of ClosedResourceError: # https://github.com/python-trio/trio/issues/661 # # This tests what happens if the pipe gets closed in the moment *between* # when send_all wakes up, and when it tries to call os.write async def expect_closedresourceerror(): with pytest.raises(_core.ClosedResourceError): await s.send_all(b"x" * 100) orig_wait_writable = _core._run.TheIOManager.wait_writable async def patched_wait_writable(*args, **kwargs): await orig_wait_writable(*args, **kwargs) await s.aclose() monkeypatch.setattr(_core._run.TheIOManager, "wait_writable", patched_wait_writable) s, r = await make_clogged_pipe() async with s, r: async with _core.open_nursery() as nursery: nursery.start_soon(expect_closedresourceerror) await wait_all_tasks_blocked() # Trigger everything by waking up the sender await r.receive_some(10000) # On FreeBSD, directories are readable, and we haven't found any other trick # for making an unreadable fd, so there's no way to run this test. Fortunately # the logic this is testing doesn't depend on the platform, so testing on # other platforms is probably good enough. @pytest.mark.skipif( sys.platform.startswith("freebsd"), reason="no way to make read() return a bizarro error on FreeBSD", ) async def test_bizarro_OSError_from_receive(): # Make sure that if the read syscall returns some bizarro error, then we # get a BrokenResourceError. 
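# A POSIX-only sketch of the invariant the two regression tests above pin
# down: closing a stream while another task is blocked on it must surface as
# ClosedResourceError in that task, never as some arbitrary OSError.
# trio.lowlevel.FdStream is the public wrapper assumed here; illustrative
# standalone code, not part of the suite.
import os
import trio
from trio.testing import wait_all_tasks_blocked

async def demo_close_while_blocked():
    read_fd, write_fd = os.pipe()
    stream = trio.lowlevel.FdStream(read_fd)  # takes ownership of read_fd

    async def reader():
        try:
            await stream.receive_some(10)  # blocks: nothing has been written
        except trio.ClosedResourceError:
            print("reader saw ClosedResourceError, as required")

    async with trio.open_nursery() as nursery:
        nursery.start_soon(reader)
        await wait_all_tasks_blocked()
        await stream.aclose()  # wakes the blocked reader with the right error
    os.close(write_fd)

if __name__ == "__main__":
    trio.run(demo_close_while_blocked)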
This is incredibly unlikely; there's almost # no way to trigger a failure here intentionally (except for EBADF, but we # exploit that to detect file closure, so it takes a different path). So # we set up a strange scenario where the pipe fd somehow transmutes into a # directory fd, causing os.read to raise IsADirectoryError (yes, that's a # real built-in exception type). s, r = await make_pipe() async with s, r: dir_fd = os.open("/", os.O_DIRECTORY, 0) try: os.dup2(dir_fd, r.fileno()) with pytest.raises(_core.BrokenResourceError): await r.receive_some(10) finally: os.close(dir_fd) @skip_if_fbsd_pipes_broken async def test_pipe_fully(): await check_one_way_stream(make_pipe, make_clogged_pipe) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_util.py0000644000076500000000000001253200000000000016630 0ustar00quentinwheelimport signal import pytest import trio from .. import _core from .._core.tests.tutil import ( ignore_coroutine_never_awaited_warnings, create_asyncio_future_in_new_loop, ) from .._util import ( signal_raise, ConflictDetector, is_main_thread, coroutine_or_error, generic_function, Final, NoPublicConstructor, ) from ..testing import wait_all_tasks_blocked def test_signal_raise(): record = [] def handler(signum, _): record.append(signum) old = signal.signal(signal.SIGFPE, handler) try: signal_raise(signal.SIGFPE) finally: signal.signal(signal.SIGFPE, old) assert record == [signal.SIGFPE] async def test_ConflictDetector(): ul1 = ConflictDetector("ul1") ul2 = ConflictDetector("ul2") with ul1: with ul2: print("ok") with pytest.raises(_core.BusyResourceError) as excinfo: with ul1: with ul1: pass # pragma: no cover assert "ul1" in str(excinfo.value) async def wait_with_ul1(): with ul1: await wait_all_tasks_blocked() with pytest.raises(_core.BusyResourceError) as excinfo: async with _core.open_nursery() as nursery: nursery.start_soon(wait_with_ul1) nursery.start_soon(wait_with_ul1) assert "ul1" in str(excinfo.value) def test_module_metadata_is_fixed_up(): import trio import trio.testing assert trio.Cancelled.__module__ == "trio" assert trio.open_nursery.__module__ == "trio" assert trio.abc.Stream.__module__ == "trio.abc" assert trio.lowlevel.wait_task_rescheduled.__module__ == "trio.lowlevel" assert trio.testing.trio_test.__module__ == "trio.testing" # Also check methods assert trio.lowlevel.ParkingLot.__init__.__module__ == "trio.lowlevel" assert trio.abc.Stream.send_all.__module__ == "trio.abc" # And names assert trio.Cancelled.__name__ == "Cancelled" assert trio.Cancelled.__qualname__ == "Cancelled" assert trio.abc.SendStream.send_all.__name__ == "send_all" assert trio.abc.SendStream.send_all.__qualname__ == "SendStream.send_all" assert trio.to_thread.__name__ == "trio.to_thread" assert trio.to_thread.run_sync.__name__ == "run_sync" assert trio.to_thread.run_sync.__qualname__ == "run_sync" async def test_is_main_thread(): assert is_main_thread() def not_main_thread(): assert not is_main_thread() await trio.to_thread.run_sync(not_main_thread) # @coroutine is deprecated since python 3.8, which is fine with us. 
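# A sketch of the conflict-detection behaviour that test_ConflictDetector
# above exercises through a private helper: trio treats two tasks using the
# same end of a resource at the same time as a bug, and raises
# BusyResourceError immediately instead of interleaving the operations.
# Illustrative only, using public trio.testing streams.
import trio
from trio.testing import memory_stream_pair, wait_all_tasks_blocked

async def demo_busy_resource():
    left, right = memory_stream_pair()

    async def first_reader():
        await right.receive_some(10)  # parks here, waiting for data

    async with trio.open_nursery() as nursery:
        nursery.start_soon(first_reader)
        await wait_all_tasks_blocked()
        try:
            await right.receive_some(10)  # second concurrent reader: a bug
        except trio.BusyResourceError as exc:
            print("as expected:", exc)
        nursery.cancel_scope.cancel()

if __name__ == "__main__":
    trio.run(demo_busy_resource)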
@pytest.mark.filterwarnings("ignore:.*@coroutine.*:DeprecationWarning") def test_coroutine_or_error(): class Deferred: "Just kidding" with ignore_coroutine_never_awaited_warnings(): async def f(): # pragma: no cover pass with pytest.raises(TypeError) as excinfo: coroutine_or_error(f()) assert "expecting an async function" in str(excinfo.value) import asyncio @asyncio.coroutine def generator_based_coro(): # pragma: no cover yield from asyncio.sleep(1) with pytest.raises(TypeError) as excinfo: coroutine_or_error(generator_based_coro()) assert "asyncio" in str(excinfo.value) with pytest.raises(TypeError) as excinfo: coroutine_or_error(create_asyncio_future_in_new_loop()) assert "asyncio" in str(excinfo.value) with pytest.raises(TypeError) as excinfo: coroutine_or_error(create_asyncio_future_in_new_loop) assert "asyncio" in str(excinfo.value) with pytest.raises(TypeError) as excinfo: coroutine_or_error(Deferred()) assert "twisted" in str(excinfo.value) with pytest.raises(TypeError) as excinfo: coroutine_or_error(lambda: Deferred()) assert "twisted" in str(excinfo.value) with pytest.raises(TypeError) as excinfo: coroutine_or_error(len, [[1, 2, 3]]) assert "appears to be synchronous" in str(excinfo.value) async def async_gen(arg): # pragma: no cover yield with pytest.raises(TypeError) as excinfo: coroutine_or_error(async_gen, [0]) msg = "expected an async function but got an async generator" assert msg in str(excinfo.value) # Make sure no references are kept around to keep anything alive del excinfo def test_generic_function(): @generic_function def test_func(arg): """Look, a docstring!""" return arg assert test_func is test_func[int] is test_func[int, str] assert test_func(42) == test_func[int](42) == 42 assert test_func.__doc__ == "Look, a docstring!" assert test_func.__qualname__ == "test_generic_function..test_func" assert test_func.__name__ == "test_func" assert test_func.__module__ == __name__ def test_final_metaclass(): class FinalClass(metaclass=Final): pass with pytest.raises(TypeError): class SubClass(FinalClass): pass def test_no_public_constructor_metaclass(): class SpecialClass(metaclass=NoPublicConstructor): pass with pytest.raises(TypeError): SpecialClass() with pytest.raises(TypeError): class SubClass(SpecialClass): pass # Private constructor should not raise assert isinstance(SpecialClass._create(), SpecialClass) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_wait_for_object.py0000644000076500000000000001756200000000000021023 0ustar00quentinwheelimport os import pytest on_windows = os.name == "nt" # Mark all the tests in this file as being windows-only pytestmark = pytest.mark.skipif(not on_windows, reason="windows only") from .._core.tests.tutil import slow import trio from .. import _core from .. import _timeouts if on_windows: from .._core._windows_cffi import ffi, kernel32 from .._wait_for_object import ( WaitForSingleObject, WaitForMultipleObjects_sync, ) async def test_WaitForMultipleObjects_sync(): # This does a series of tests where we set/close the handle before # initiating the waiting for it. # # Note that closing the handle (not signaling) will cause the # *initiation* of a wait to return immediately. But closing a handle # that is already being waited on will not stop whatever is waiting # for it. 
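# A self-contained sketch of the "final class" metaclass pattern that
# test_final_metaclass above checks. trio._util.Final is private, so this is
# the general recipe rather than trio's exact implementation: any attempt to
# subclass fails at class-creation time.
class Final(type):
    def __new__(mcls, name, bases, namespace):
        for base in bases:
            if isinstance(base, Final):
                raise TypeError(
                    "{} is final and cannot be subclassed".format(base.__name__)
                )
        return super().__new__(mcls, name, bases, namespace)

class Sealed(metaclass=Final):
    pass

try:
    class Oops(Sealed):
        pass
except TypeError as exc:
    print(exc)  # "Sealed is final and cannot be subclassed"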
# One handle handle1 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) kernel32.SetEvent(handle1) WaitForMultipleObjects_sync(handle1) kernel32.CloseHandle(handle1) print("test_WaitForMultipleObjects_sync one OK") # Two handles, signal first handle1 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) handle2 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) kernel32.SetEvent(handle1) WaitForMultipleObjects_sync(handle1, handle2) kernel32.CloseHandle(handle1) kernel32.CloseHandle(handle2) print("test_WaitForMultipleObjects_sync set first OK") # Two handles, signal second handle1 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) handle2 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) kernel32.SetEvent(handle2) WaitForMultipleObjects_sync(handle1, handle2) kernel32.CloseHandle(handle1) kernel32.CloseHandle(handle2) print("test_WaitForMultipleObjects_sync set second OK") # Two handles, close first handle1 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) handle2 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) kernel32.CloseHandle(handle1) with pytest.raises(OSError): WaitForMultipleObjects_sync(handle1, handle2) kernel32.CloseHandle(handle2) print("test_WaitForMultipleObjects_sync close first OK") # Two handles, close second handle1 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) handle2 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) kernel32.CloseHandle(handle2) with pytest.raises(OSError): WaitForMultipleObjects_sync(handle1, handle2) kernel32.CloseHandle(handle1) print("test_WaitForMultipleObjects_sync close second OK") @slow async def test_WaitForMultipleObjects_sync_slow(): # This does a series of test in which the main thread sync-waits for # handles, while we spawn a thread to set the handles after a short while. 
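# A cross-platform sketch of the pattern these Windows-only tests drive:
# park a blocking OS-level wait in a worker thread via to_thread.run_sync,
# then unblock it from the Trio side by signalling the object.
# threading.Event stands in for the kernel32 event handle; illustrative only.
import threading
import trio

async def demo_blocking_wait_in_thread():
    event = threading.Event()
    async with trio.open_nursery() as nursery:
        # event.wait blocks its thread until event.set() is called.
        nursery.start_soon(trio.to_thread.run_sync, event.wait)
        await trio.sleep(0.1)
        event.set()  # without this, the nursery would never exit

if __name__ == "__main__":
    trio.run(demo_blocking_wait_in_thread)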
TIMEOUT = 0.3 # One handle handle1 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) t0 = _core.current_time() async with _core.open_nursery() as nursery: nursery.start_soon( trio.to_thread.run_sync, WaitForMultipleObjects_sync, handle1 ) await _timeouts.sleep(TIMEOUT) # If we were to comment out the line below, the above thread would be stuck, # and Trio would never exit this scope kernel32.SetEvent(handle1) t1 = _core.current_time() assert TIMEOUT <= (t1 - t0) < 2.0 * TIMEOUT kernel32.CloseHandle(handle1) print("test_WaitForMultipleObjects_sync_slow one OK") # Two handles, signal first handle1 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) handle2 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) t0 = _core.current_time() async with _core.open_nursery() as nursery: nursery.start_soon( trio.to_thread.run_sync, WaitForMultipleObjects_sync, handle1, handle2 ) await _timeouts.sleep(TIMEOUT) kernel32.SetEvent(handle1) t1 = _core.current_time() assert TIMEOUT <= (t1 - t0) < 2.0 * TIMEOUT kernel32.CloseHandle(handle1) kernel32.CloseHandle(handle2) print("test_WaitForMultipleObjects_sync_slow thread-set first OK") # Two handles, signal second handle1 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) handle2 = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) t0 = _core.current_time() async with _core.open_nursery() as nursery: nursery.start_soon( trio.to_thread.run_sync, WaitForMultipleObjects_sync, handle1, handle2 ) await _timeouts.sleep(TIMEOUT) kernel32.SetEvent(handle2) t1 = _core.current_time() assert TIMEOUT <= (t1 - t0) < 2.0 * TIMEOUT kernel32.CloseHandle(handle1) kernel32.CloseHandle(handle2) print("test_WaitForMultipleObjects_sync_slow thread-set second OK") async def test_WaitForSingleObject(): # This does a series of tests for setting/closing the handle before # initiating the wait. # Test already set handle = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) kernel32.SetEvent(handle) await WaitForSingleObject(handle) # should return at once kernel32.CloseHandle(handle) print("test_WaitForSingleObject already set OK") # Test already set, as int handle = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) handle_int = int(ffi.cast("intptr_t", handle)) kernel32.SetEvent(handle) await WaitForSingleObject(handle_int) # should return at once kernel32.CloseHandle(handle) print("test_WaitForSingleObject already set OK") # Test already closed handle = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) kernel32.CloseHandle(handle) with pytest.raises(OSError): await WaitForSingleObject(handle) # should return at once print("test_WaitForSingleObject already closed OK") # Not a handle with pytest.raises(TypeError): await WaitForSingleObject("not a handle") # Wrong type # with pytest.raises(OSError): # await WaitForSingleObject(99) # If you're unlucky, it actually IS a handle :( print("test_WaitForSingleObject not a handle OK") @slow async def test_WaitForSingleObject_slow(): # This does a series of tests for setting the handle in another task, # and cancelling the wait task. # Set the timeout used in the tests. We test the waiting time against # the timeout with a certain margin.
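# These Windows tests assert real elapsed time against TIMEOUT with a fudge
# margin. For contrast, a sketch of the virtual-clock alternative used
# elsewhere in the suite: trio.testing.MockClock with autojump makes timing
# assertions exact and instantaneous. Illustrative only.
import trio
from trio.testing import MockClock

async def timed_main():
    start = trio.current_time()
    await trio.sleep(5)  # returns "immediately": the clock jumps instead
    assert trio.current_time() - start == 5

if __name__ == "__main__":
    trio.run(timed_main, clock=MockClock(autojump_threshold=0))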
TIMEOUT = 0.3 async def signal_soon_async(handle): await _timeouts.sleep(TIMEOUT) kernel32.SetEvent(handle) # Test handle is SET after TIMEOUT in separate coroutine handle = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) t0 = _core.current_time() async with _core.open_nursery() as nursery: nursery.start_soon(WaitForSingleObject, handle) nursery.start_soon(signal_soon_async, handle) kernel32.CloseHandle(handle) t1 = _core.current_time() assert TIMEOUT <= (t1 - t0) < 2.0 * TIMEOUT print("test_WaitForSingleObject_slow set from task OK") # Test handle is SET after TIMEOUT in separate coroutine, as int handle = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) handle_int = int(ffi.cast("intptr_t", handle)) t0 = _core.current_time() async with _core.open_nursery() as nursery: nursery.start_soon(WaitForSingleObject, handle_int) nursery.start_soon(signal_soon_async, handle) kernel32.CloseHandle(handle) t1 = _core.current_time() assert TIMEOUT <= (t1 - t0) < 2.0 * TIMEOUT print("test_WaitForSingleObject_slow set from task as int OK") # Test handle is CLOSED after 1 sec - NOPE see comment above # Test cancellation handle = kernel32.CreateEventA(ffi.NULL, True, False, ffi.NULL) t0 = _core.current_time() with _timeouts.move_on_after(TIMEOUT): await WaitForSingleObject(handle) kernel32.CloseHandle(handle) t1 = _core.current_time() assert TIMEOUT <= (t1 - t0) < 2.0 * TIMEOUT print("test_WaitForSingleObject_slow cancellation OK") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/test_windows_pipes.py0000644000076500000000000000603400000000000020545 0ustar00quentinwheelimport errno import select import os import sys import pytest from .._core.tests.tutil import gc_collect_harder from .. 
import _core, move_on_after from ..testing import wait_all_tasks_blocked, check_one_way_stream if sys.platform == "win32": from .._windows_pipes import PipeSendStream, PipeReceiveStream from .._core._windows_cffi import _handle, kernel32 from asyncio.windows_utils import pipe else: pytestmark = pytest.mark.skip(reason="windows only") pipe = None # type: Any PipeSendStream = None # type: Any PipeReceiveStream = None # type: Any async def make_pipe() -> "Tuple[PipeSendStream, PipeReceiveStream]": """Makes a new pair of pipes.""" (r, w) = pipe() return PipeSendStream(w), PipeReceiveStream(r) async def test_pipe_typecheck(): with pytest.raises(TypeError): PipeSendStream(1.0) with pytest.raises(TypeError): PipeReceiveStream(None) async def test_pipe_error_on_close(): # Make sure we correctly handle a failure from kernel32.CloseHandle r, w = pipe() send_stream = PipeSendStream(w) receive_stream = PipeReceiveStream(r) assert kernel32.CloseHandle(_handle(r)) assert kernel32.CloseHandle(_handle(w)) with pytest.raises(OSError): await send_stream.aclose() with pytest.raises(OSError): await receive_stream.aclose() async def test_pipes_combined(): write, read = await make_pipe() count = 2 ** 20 replicas = 3 async def sender(): async with write: big = bytearray(count) for _ in range(replicas): await write.send_all(big) async def reader(): async with read: await wait_all_tasks_blocked() total_received = 0 while True: # 5000 is chosen because it doesn't evenly divide 2**20 received = len(await read.receive_some(5000)) if not received: break total_received += received assert total_received == count * replicas async with _core.open_nursery() as n: n.start_soon(sender) n.start_soon(reader) async def test_async_with(): w, r = await make_pipe() async with w, r: pass with pytest.raises(_core.ClosedResourceError): await w.send_all(b"") with pytest.raises(_core.ClosedResourceError): await r.receive_some(10) async def test_close_during_write(): w, r = await make_pipe() async with _core.open_nursery() as nursery: async def write_forever(): with pytest.raises(_core.ClosedResourceError) as excinfo: while True: await w.send_all(b"x" * 4096) assert "another task" in str(excinfo.value) nursery.start_soon(write_forever) await wait_all_tasks_blocked(0.1) await w.aclose() async def test_pipe_fully(): # passing make_clogged_pipe tests wait_send_all_might_not_block, and we # can't implement that on Windows await check_one_way_stream(make_pipe, None) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1623751952.872929 trio-0.19.0/trio/tests/tools/0000755000076500000000000000000000000000000015377 5ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/tools/__init__.py0000644000076500000000000000000000000000000017476 0ustar00quentinwheel././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0 trio-0.19.0/trio/tests/tools/test_gen_exports.py0000644000076500000000000000363100000000000021350 0ustar00quentinwheelimport ast import astor import pytest import os import sys from shutil import copyfile from trio._tools.gen_exports import ( get_public_methods, create_passthrough_args, process, ) SOURCE = '''from _run import _public class Test: @_public def public_func(self): """With doc string""" @ignore_this @_public @another_decorator async def public_async_func(self): pass # no doc string def not_public(self): pass async def not_public_async(self): pass ''' def test_get_public_methods(): 
    methods = list(get_public_methods(ast.parse(SOURCE)))
    assert {m.name for m in methods} == {"public_func", "public_async_func"}


def test_create_pass_through_args():
    testcases = [
        ("def f()", "()"),
        ("def f(one)", "(one)"),
        ("def f(one, two)", "(one, two)"),
        ("def f(one, *args)", "(one, *args)"),
        (
            "def f(one, *args, kw1, kw2=None, **kwargs)",
            "(one, *args, kw1=kw1, kw2=kw2, **kwargs)",
        ),
    ]

    for (funcdef, expected) in testcases:
        func_node = ast.parse(funcdef + ":\n pass").body[0]
        assert isinstance(func_node, ast.FunctionDef)
        assert create_passthrough_args(func_node) == expected


def test_process(tmp_path):
    modpath = tmp_path / "_module.py"
    genpath = tmp_path / "_generated_module.py"
    modpath.write_text(SOURCE, encoding="utf-8")
    assert not genpath.exists()
    with pytest.raises(SystemExit) as excinfo:
        process([(str(modpath), "runner")], do_test=True)
    assert excinfo.value.code == 1
    process([(str(modpath), "runner")], do_test=False)
    assert genpath.exists()
    process([(str(modpath), "runner")], do_test=True)
    # But if we change the lookup path it notices
    with pytest.raises(SystemExit) as excinfo:
        process([(str(modpath), "runner.io_manager")], do_test=True)
    assert excinfo.value.code == 1
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751797.0
trio-0.19.0/trio/to_thread.py0000644000076500000000000000015200000000000015416 0ustar00quentinwheel
from ._threads import to_thread_run_sync as run_sync
from ._threads import current_default_thread_limiter
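# Usage sketch (an illustration, not part of the original module; the
# blocking_read helper and the "example.bin" path are invented for this
# example). run_sync hands a blocking call off to a worker thread so the
# Trio event loop stays responsive:
#
#     import trio
#
#     def blocking_read(path):
#         with open(path, "rb") as f:
#             return f.read()
#
#     async def main():
#         data = await trio.to_thread.run_sync(blocking_read, "example.bin")
#         print(len(data), "bytes read without blocking the event loop")
#
#     trio.run(main)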
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1623751952.701758
trio-0.19.0/trio.egg-info/0000755000076500000000000000000000000000000014567 5ustar00quentinwheel
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751951.0
trio-0.19.0/trio.egg-info/PKG-INFO0000644000076500000000000001020600000000000015663 0ustar00quentinwheel
Metadata-Version: 2.1
Name: trio
Version: 0.19.0
Summary: A friendly Python library for async concurrency and I/O
Home-page: https://github.com/python-trio/trio
Author: Nathaniel J. Smith
Author-email: njs@pobox.com
License: MIT -or- Apache License 2.0
Keywords: async,io,networking,trio
Platform: UNKNOWN
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: POSIX :: BSD
Classifier: Operating System :: Microsoft :: Windows
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Topic :: System :: Networking
Classifier: Framework :: Trio
Requires-Python: >=3.6
License-File: LICENSE.APACHE2
License-File: LICENSE.MIT
License-File: LICENSE

.. image:: https://raw.githubusercontent.com/python-trio/trio/9b0bec646a31e0d0f67b8b6ecc6939726faf3e17/logo/logo-with-background.svg
   :width: 200px
   :align: right

The Trio project's goal is to produce a production-quality,
`permissively licensed `__, async/await-native I/O library for Python.
Like all async libraries, its main purpose is to help you write programs
that do **multiple things at the same time** with **parallelized I/O**. A web
spider that wants to fetch lots of pages in parallel, a web server that
needs to juggle lots of downloads and websocket connections at the same
time, a process supervisor monitoring multiple subprocesses... that sort
of thing. Compared to other libraries, Trio attempts to distinguish itself
with an obsessive focus on **usability** and **correctness**. Concurrency is
complicated; we try to make it *easy* to get things *right*.

Trio was built from the ground up to take advantage of the `latest Python
features `__, and draws inspiration from `many sources `__, in particular
Dave Beazley's `Curio `__. The resulting design is radically simpler than
older competitors like `asyncio `__ and `Twisted `__, yet just as capable.
Trio is the Python I/O library I always wanted; I find it makes building
I/O-oriented programs easier, less error-prone, and just plain more fun.
`Perhaps you'll find the same `__.

This project is young and still somewhat experimental: the overall design
is solid, and the existing features are fully tested and documented, but you
may encounter missing functionality or rough edges. We *do* encourage you to
use it, but you should `read and subscribe to issue #1 `__ to get a warning
and a chance to give feedback about any compatibility-breaking changes.

Vital statistics:

* Supported environments: Linux, macOS, or Windows running some kind of
  Python 3.6-or-better (either CPython or PyPy3 is fine). \*BSD and illumos
  likely work too, but are not tested.

* Install: ``python3 -m pip install -U trio`` (or on Windows, maybe
  ``py -3 -m pip install -U trio``). No compiler needed.

* Tutorial and reference manual: https://trio.readthedocs.io

* Bug tracker and source code: https://github.com/python-trio/trio

* Real-time chat: https://gitter.im/python-trio/general

* Discussion forum: https://trio.discourse.group

* License: MIT or Apache 2, your choice

* Contributor guide: https://trio.readthedocs.io/en/latest/contributing.html

* Code of conduct: Contributors are requested to follow our `code of
  conduct `_ in all project spaces.
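To give a flavor of the API, here is a minimal sketch of a program that runs
two tasks concurrently (the task names and delays are invented for
illustration; the tutorial linked above has real, worked examples)::

    import trio

    async def child(name, seconds):
        # Sleeping in one task doesn't block the other.
        await trio.sleep(seconds)
        print(name, "done")

    async def main():
        # The nursery waits for every task it started before exiting.
        async with trio.open_nursery() as nursery:
            nursery.start_soon(child, "task1", 1)
            nursery.start_soon(child, "task2", 2)

    trio.run(main)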
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751951.0
trio-0.19.0/trio.egg-info/SOURCES.txt0000644000076500000000000001233400000000000016456 0ustar00quentinwheel
CODE_OF_CONDUCT.md
CONTRIBUTING.md
LICENSE
LICENSE.APACHE2
LICENSE.MIT
MANIFEST.in
README.rst
pyproject.toml
setup.py
test-requirements.txt
docs/Makefile
docs/make.bat
docs/notes.txt
docs/source/awesome-trio-libraries.rst
docs/source/code-of-conduct.rst
docs/source/conf.py
docs/source/contributing.rst
docs/source/design.rst
docs/source/glossary.rst
docs/source/history.rst
docs/source/index.rst
docs/source/local_customization.py
docs/source/reference-core.rst
docs/source/reference-io.rst
docs/source/reference-lowlevel.rst
docs/source/reference-testing.rst
docs/source/releasing.rst
docs/source/tutorial.rst
docs/source/_static/.gitkeep
docs/source/_static/favicon-32.png
docs/source/_static/favicon.svg
docs/source/_static/hackrtd.css
docs/source/_static/ornament.svg
docs/source/_templates/.gitkeep
docs/source/_templates/layout.html
docs/source/reference-core/channels-backpressure.py
docs/source/reference-core/channels-mpmc-broken.py
docs/source/reference-core/channels-mpmc-fixed.py
docs/source/reference-core/channels-shutdown.py
docs/source/reference-core/channels-simple.py
docs/source/reference-core/contextvar-example.py
docs/source/reference-core/from-thread-example.py
docs/source/reference-testing/across-realtime.out
docs/source/reference-testing/across-realtime.py
docs/source/tutorial/echo-client.py
docs/source/tutorial/echo-server.py
docs/source/tutorial/tasks-intro.py
docs/source/tutorial/tasks-with-trace.py
trio/__init__.py
trio/_abc.py
trio/_channel.py
trio/_deprecate.py
trio/_file_io.py
trio/_highlevel_generic.py
trio/_highlevel_open_tcp_listeners.py
trio/_highlevel_open_tcp_stream.py
trio/_highlevel_open_unix_stream.py
trio/_highlevel_serve_listeners.py
trio/_highlevel_socket.py
trio/_highlevel_ssl_helpers.py
trio/_path.py
trio/_signals.py
trio/_socket.py
trio/_ssl.py
trio/_subprocess.py
trio/_sync.py
trio/_threads.py
trio/_timeouts.py
trio/_unix_pipes.py
trio/_util.py
trio/_version.py
trio/_wait_for_object.py
trio/_windows_pipes.py
trio/abc.py
trio/from_thread.py
trio/lowlevel.py
trio/socket.py
trio/to_thread.py
trio.egg-info/PKG-INFO
trio.egg-info/SOURCES.txt
trio.egg-info/dependency_links.txt
trio.egg-info/requires.txt
trio.egg-info/top_level.txt
trio/_core/__init__.py
trio/_core/_asyncgens.py
trio/_core/_entry_queue.py
trio/_core/_exceptions.py
trio/_core/_generated_instrumentation.py
trio/_core/_generated_io_epoll.py
trio/_core/_generated_io_kqueue.py
trio/_core/_generated_io_windows.py
trio/_core/_generated_run.py
trio/_core/_instrumentation.py
trio/_core/_io_common.py
trio/_core/_io_epoll.py
trio/_core/_io_kqueue.py
trio/_core/_io_windows.py
trio/_core/_ki.py
trio/_core/_local.py
trio/_core/_mock_clock.py
trio/_core/_multierror.py
trio/_core/_parking_lot.py
trio/_core/_run.py
trio/_core/_thread_cache.py
trio/_core/_traps.py
trio/_core/_unbounded_queue.py
trio/_core/_wakeup_socketpair.py
trio/_core/_windows_cffi.py
trio/_core/tests/__init__.py
trio/_core/tests/conftest.py
trio/_core/tests/test_asyncgen.py
trio/_core/tests/test_guest_mode.py
trio/_core/tests/test_instrumentation.py
trio/_core/tests/test_io.py
trio/_core/tests/test_ki.py
trio/_core/tests/test_local.py
trio/_core/tests/test_mock_clock.py
trio/_core/tests/test_multierror.py
trio/_core/tests/test_parking_lot.py
trio/_core/tests/test_run.py
trio/_core/tests/test_thread_cache.py
trio/_core/tests/test_tutil.py
trio/_core/tests/test_unbounded_queue.py
trio/_core/tests/test_util.py
trio/_core/tests/test_windows.py
trio/_core/tests/tutil.py
trio/_core/tests/test_multierror_scripts/__init__.py
trio/_core/tests/test_multierror_scripts/_common.py
trio/_core/tests/test_multierror_scripts/apport_excepthook.py
trio/_core/tests/test_multierror_scripts/custom_excepthook.py
trio/_core/tests/test_multierror_scripts/ipython_custom_exc.py
trio/_core/tests/test_multierror_scripts/simple_excepthook.py
trio/_core/tests/test_multierror_scripts/simple_excepthook_IPython.py
trio/_core/tests/test_multierror_scripts/simple_excepthook_partial.py
trio/_subprocess_platform/__init__.py
trio/_subprocess_platform/kqueue.py
trio/_subprocess_platform/waitid.py
trio/_subprocess_platform/windows.py
trio/_tools/__init__.py
trio/_tools/gen_exports.py
trio/testing/__init__.py
trio/testing/_check_streams.py
trio/testing/_checkpoints.py
trio/testing/_memory_streams.py
trio/testing/_network.py
trio/testing/_sequencer.py
trio/testing/_trio_test.py
trio/tests/__init__.py
trio/tests/conftest.py
trio/tests/module_with_deprecations.py
trio/tests/test_abc.py
trio/tests/test_channel.py
trio/tests/test_deprecate.py
trio/tests/test_exports.py
trio/tests/test_file_io.py
trio/tests/test_highlevel_generic.py
trio/tests/test_highlevel_open_tcp_listeners.py
trio/tests/test_highlevel_open_tcp_stream.py
trio/tests/test_highlevel_open_unix_stream.py
trio/tests/test_highlevel_serve_listeners.py
trio/tests/test_highlevel_socket.py
trio/tests/test_highlevel_ssl_helpers.py
trio/tests/test_path.py
trio/tests/test_scheduler_determinism.py
trio/tests/test_signals.py
trio/tests/test_socket.py
trio/tests/test_ssl.py
trio/tests/test_subprocess.py
trio/tests/test_sync.py
trio/tests/test_testing.py
trio/tests/test_threads.py
trio/tests/test_timeouts.py
trio/tests/test_unix_pipes.py
trio/tests/test_util.py
trio/tests/test_wait_for_object.py
trio/tests/test_windows_pipes.py
trio/tests/tools/__init__.py
trio/tests/tools/test_gen_exports.py
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751951.0
trio-0.19.0/trio.egg-info/dependency_links.txt0000644000076500000000000000000100000000000020635 0ustar00quentinwheel

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751951.0
trio-0.19.0/trio.egg-info/requires.txt0000644000076500000000000000026600000000000017173 0ustar00quentinwheel
attrs>=19.2.0
sortedcontainers
async_generator>=1.9
idna
outcome
sniffio

[:os_name == "nt" and implementation_name != "pypy"]
cffi>=1.14

[:python_version < "3.7"]
contextvars>=2.1
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1623751951.0
trio-0.19.0/trio.egg-info/top_level.txt0000644000076500000000000000000500000000000017314 0ustar00quentinwheel
trio