===== pytest-xdist-3.4.0/.readthedocs.yaml =====

version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

sphinx:
  configuration: docs/conf.py

python:
  install:
    - path: .
    - requirements: docs/requirements.txt

===== pytest-xdist-3.4.0/CHANGELOG.rst =====

pytest-xdist 3.4.0 (2023-11-11)
===============================

Features
--------

- `#963 <https://github.com/pytest-dev/pytest-xdist/issues/963>`_: Wait for workers to finish reporting when test run stops early. This makes sure that the results of in-progress tests are displayed. Previously these reports were being discarded, losing information about the test run.

- `#965 <https://github.com/pytest-dev/pytest-xdist/issues/965>`_: Added support for Python 3.12.


pytest-xdist 3.3.1 (2023-05-19)
===============================

Bug Fixes
---------

- `#907 <https://github.com/pytest-dev/pytest-xdist/issues/907>`_: Avoid remote calls during startup, as ``execnet`` by default does not ensure remote affinity with the main thread and might accidentally schedule the pytest worker into a non-main thread, which breaks numerous frameworks, for example ``asyncio``, ``anyio``, ``PyQt/PySide``, etc.

  A safer correction will require thread affinity in ``execnet`` (`pytest-dev/execnet#96 <https://github.com/pytest-dev/execnet/issues/96>`__).


pytest-xdist 3.3.0 (2023-05-12)
===============================

Features
--------

- `#555 <https://github.com/pytest-dev/pytest-xdist/issues/555>`_: Improved progress output when collecting nodes to be less verbose.


pytest-xdist 3.2.1 (2023-03-12)
===============================

Bug Fixes
---------

- `#884 <https://github.com/pytest-dev/pytest-xdist/issues/884>`_: Fixed hang in ``worksteal`` scheduler.


pytest-xdist 3.2.0 (2023-02-07)
===============================

Improved Documentation
----------------------

- `#863 <https://github.com/pytest-dev/pytest-xdist/issues/863>`_: Document limitations for debugging due to standard I/O of workers not being forwarded. Also, mention remote debugging as a possible workaround.

Features
--------

- `#855 <https://github.com/pytest-dev/pytest-xdist/issues/855>`_: Users can now configure ``load`` scheduling precision using the ``--maxschedchunk`` command line option.

- `#858 <https://github.com/pytest-dev/pytest-xdist/issues/858>`_: New ``worksteal`` scheduler, based on the idea of `work stealing <https://en.wikipedia.org/wiki/Work_stealing>`_. It's similar to the ``load`` scheduler, but it should handle tests with significantly differing duration better, and, at the same time, it should provide similar or better reuse of fixtures.

Trivial Changes
---------------

- `#870 <https://github.com/pytest-dev/pytest-xdist/issues/870>`_: Make the tests pass even when ``$PYTEST_XDIST_AUTO_NUM_WORKERS`` is set.


pytest-xdist 3.1.0 (2022-12-01)
===============================

Features
--------

- `#789 <https://github.com/pytest-dev/pytest-xdist/issues/789>`_: Users can now set a default distribution mode in their configuration file:

  .. code-block:: ini

      [pytest]
      addopts = --dist loadscope

- `#842 <https://github.com/pytest-dev/pytest-xdist/issues/842>`_: Python 3.11 is now officially supported.

Removals
--------

- `#842 <https://github.com/pytest-dev/pytest-xdist/issues/842>`_: Python 3.6 is no longer supported.


pytest-xdist 3.0.2 (2022-10-25)
===============================

Bug Fixes
---------

- `#813 <https://github.com/pytest-dev/pytest-xdist/issues/813>`_: Cancel shutdown when a crashed worker is restarted.

Deprecations
------------

- `#825 <https://github.com/pytest-dev/pytest-xdist/issues/825>`_: The ``--rsyncdir`` command line argument and ``rsyncdirs`` config variable are deprecated.

  The rsync feature will be removed in pytest-xdist 4.0.

- `#826 <https://github.com/pytest-dev/pytest-xdist/issues/826>`_: The ``--looponfail`` command line argument and ``looponfailroots`` config variable are deprecated.

  The loop-on-fail feature will be removed in pytest-xdist 4.0.
Improved Documentation
----------------------

- `#791 <https://github.com/pytest-dev/pytest-xdist/issues/791>`_: Document the ``pytest_xdist_auto_num_workers`` hook.

- `#796 <https://github.com/pytest-dev/pytest-xdist/issues/796>`_: Added known limitations section to documentation.

- `#829 <https://github.com/pytest-dev/pytest-xdist/issues/829>`_: Document the ``-n logical`` option.

Features
--------

- `#792 <https://github.com/pytest-dev/pytest-xdist/issues/792>`_: The environment variable ``PYTEST_XDIST_AUTO_NUM_WORKERS`` can now be used to specify the default for ``-n auto`` and ``-n logical``.

- `#812 <https://github.com/pytest-dev/pytest-xdist/issues/812>`_: Partially restore old initial batch distribution algorithm in ``LoadScheduling``.

  pytest orders tests for optimal sequential execution, i.e. avoiding unnecessary setup and teardown of fixtures. So executing tests in consecutive chunks is important for optimal performance.

  In v1.14, initial test distribution in ``LoadScheduling`` was changed to round-robin, optimized for the corner case when the number of tests is less than ``2 * number of nodes``. At the same time, it became worse for all other cases.

  For example: if some tests use some "heavy" fixture, and these tests fit into the initial batch, with round-robin distribution the fixture will be created ``min(n_tests, n_workers)`` times, no matter how many other tests there are.

  With the old algorithm (before v1.14), if there are enough tests not using the fixture, the fixture was created only once.

  So restore the old behavior for typical cases where the number of tests is much greater than the number of workers (or, strictly speaking, when there are at least 2 tests for every node).

Removals
--------

- `#468 <https://github.com/pytest-dev/pytest-xdist/issues/468>`_: The ``--boxed`` command-line option has been removed. If you still need this functionality, install `pytest-forked <https://github.com/pytest-dev/pytest-forked>`__ separately.

Trivial Changes
---------------

- `#468 <https://github.com/pytest-dev/pytest-xdist/issues/468>`_: The ``py`` dependency has been dropped.

- `#822 <https://github.com/pytest-dev/pytest-xdist/issues/822>`_: Replace internal usage of ``py.log`` with a custom solution (but with the same interface).

- `#823 <https://github.com/pytest-dev/pytest-xdist/issues/823>`_: Remove usage of ``py._pydir`` as an rsync candidate.

- `#824 <https://github.com/pytest-dev/pytest-xdist/issues/824>`_: Replace internal usages of ``py.path.local`` by ``pathlib.Path``.


pytest-xdist 2.5.0 (2021-12-10)
===============================

Deprecations and Removals
-------------------------

- `#468 <https://github.com/pytest-dev/pytest-xdist/issues/468>`_: The ``--boxed`` command line argument is deprecated. Install `pytest-forked <https://github.com/pytest-dev/pytest-forked>`__ and use ``--forked`` instead. pytest-xdist 3.0.0 will remove the ``--boxed`` argument and ``pytest-forked`` dependency.

Features
--------

- `#722 <https://github.com/pytest-dev/pytest-xdist/issues/722>`_: Full compatibility with pytest 7 - no deprecation warnings or use of legacy features.

- `#733 <https://github.com/pytest-dev/pytest-xdist/issues/733>`_: New ``--dist=loadgroup`` option, which ensures all tests marked with ``@pytest.mark.xdist_group`` run in the same session/worker. Other tests run distributed as in ``--dist=load``.

Trivial Changes
---------------

- `#708 <https://github.com/pytest-dev/pytest-xdist/issues/708>`_: Use ``@pytest.hookspec`` decorator to declare hook options in ``newhooks.py`` to avoid warnings in ``pytest 7.0``.

- `#719 <https://github.com/pytest-dev/pytest-xdist/issues/719>`_: Use up-to-date ``setup.cfg``/``pyproject.toml`` packaging setup.

- `#720 <https://github.com/pytest-dev/pytest-xdist/issues/720>`_: Require pytest>=6.2.0.

- `#721 <https://github.com/pytest-dev/pytest-xdist/issues/721>`_: Started using type annotations and mypy checking internally. The types are incomplete and not published.


pytest-xdist 2.4.0 (2021-09-20)
===============================

Features
--------

- `#696 <https://github.com/pytest-dev/pytest-xdist/issues/696>`_: On Linux, the process title now changes to indicate the current worker state (running/idle).

  Depends on the `setproctitle <https://pypi.org/project/setproctitle/>`__ package, which can be installed with ``pip install pytest-xdist[setproctitle]``.

- `#704 <https://github.com/pytest-dev/pytest-xdist/issues/704>`_: Add support for Python 3.10.


pytest-xdist 2.3.0 (2021-06-16)
===============================

Deprecations and Removals
-------------------------

- `#654 <https://github.com/pytest-dev/pytest-xdist/issues/654>`_: Python 3.5 is no longer supported.
Features
--------

- `#646 <https://github.com/pytest-dev/pytest-xdist/issues/646>`_: Add ``--numprocesses=logical`` flag, which automatically uses the number of logical CPUs available, instead of physical CPUs with ``auto``. This is very useful for test suites which are not CPU-bound.

- `#650 <https://github.com/pytest-dev/pytest-xdist/issues/650>`_: Added new ``pytest_handlecrashitem`` hook to allow handling and rescheduling crashed items.

Bug Fixes
---------

- `#421 <https://github.com/pytest-dev/pytest-xdist/issues/421>`_: Copy the parent process sys.path into local workers, to work around execnet's ``python -c`` adding the current directory to sys.path.

- `#638 <https://github.com/pytest-dev/pytest-xdist/issues/638>`_: Fix issue caused by changing the branch name of the pytest repository.

Trivial Changes
---------------

- `#592 <https://github.com/pytest-dev/pytest-xdist/issues/592>`_: Replace master with controller wherever possible.

- `#643 <https://github.com/pytest-dev/pytest-xdist/issues/643>`_: Use 'main' to refer to pytest default branch in tox env names.


pytest-xdist 2.2.1 (2021-02-09)
===============================

Bug Fixes
---------

- `#623 <https://github.com/pytest-dev/pytest-xdist/issues/623>`_: Gracefully handle the pending deprecation of Node.fspath by using config.rootpath for topdir.


pytest-xdist 2.2.0 (2020-12-14)
===============================

Features
--------

- `#608 <https://github.com/pytest-dev/pytest-xdist/issues/608>`_: Internal errors in workers are now propagated to the master node.


pytest-xdist 2.1.0 (2020-08-25)
===============================

Features
--------

- `#585 <https://github.com/pytest-dev/pytest-xdist/issues/585>`_: New ``pytest_xdist_auto_num_workers`` hook can be implemented by plugins or ``conftest.py`` files to control the number of workers when ``--numprocesses=auto`` is given in the command-line.

Trivial Changes
---------------

- `#585 <https://github.com/pytest-dev/pytest-xdist/issues/585>`_: ``psutil`` has proven to make ``pytest-xdist`` installation in certain platforms and containers problematic, so to use it for automatic number of CPUs detection users need to install the ``psutil`` extra::

      pip install pytest-xdist[psutil]


pytest-xdist 2.0.0 (2020-08-12)
===============================

Deprecations and Removals
-------------------------

- `#541 <https://github.com/pytest-dev/pytest-xdist/issues/541>`_: Drop backward-compatibility "slave" aliases related to worker nodes. We deliberately moved away from this terminology years ago, and it seems like the right time to finish the deprecation and removal process.

- `#569 <https://github.com/pytest-dev/pytest-xdist/issues/569>`_: ``pytest-xdist`` no longer supports Python 2.7.

Features
--------

- `#504 <https://github.com/pytest-dev/pytest-xdist/issues/504>`_: New functions ``xdist.is_xdist_worker``, ``xdist.is_xdist_master``, ``xdist.get_xdist_worker_id``, to easily identify the current node.

Bug Fixes
---------

- `#471 <https://github.com/pytest-dev/pytest-xdist/issues/471>`_: Fix issue with Rsync reporting in quiet mode.

- `#553 <https://github.com/pytest-dev/pytest-xdist/issues/553>`_: When using ``-n auto``, count the number of physical CPU cores instead of logical ones.

Trivial Changes
---------------

- `#541 <https://github.com/pytest-dev/pytest-xdist/issues/541>`_: ``pytest-xdist`` now requires ``pytest>=6.0``.


pytest-xdist 1.34.0 (2020-07-27)
================================

Features
--------

- `#549 <https://github.com/pytest-dev/pytest-xdist/issues/549>`_: Make ``--pdb`` imply ``--dist no``, as the two options cannot really work together at the moment.

Bug Fixes
---------

- `#478 <https://github.com/pytest-dev/pytest-xdist/issues/478>`_: Fix regression with duplicated arguments via $PYTEST_ADDOPTS in 1.30.0.

- `#558 <https://github.com/pytest-dev/pytest-xdist/issues/558>`_: Fix ``rsyncdirs`` usage with pytest 6.0.

- `#562 <https://github.com/pytest-dev/pytest-xdist/issues/562>`_: Do not trigger the deprecated ``pytest_warning_captured`` in pytest 6.0+.


pytest-xdist 1.33.0 (2020-07-09)
================================

Features
--------

- `#554 <https://github.com/pytest-dev/pytest-xdist/issues/554>`_: Fix warnings support for upcoming pytest 6.0.

Trivial Changes
---------------

- `#548 <https://github.com/pytest-dev/pytest-xdist/issues/548>`_: SCM and CI files are no longer included in the source distribution.


pytest-xdist 1.32.0 (2020-05-03)
================================

Deprecations and Removals
-------------------------

- `#475 <https://github.com/pytest-dev/pytest-xdist/issues/475>`_: Drop support for EOL Python 3.4.

Features
--------

- `#524 <https://github.com/pytest-dev/pytest-xdist/issues/524>`_: Add ``testrun_uid`` fixture. This is a shared value that uniquely identifies a test run among all workers.
  This also adds a ``PYTEST_XDIST_TESTRUNUID`` environment variable that is accessible within a test, as well as a command line option ``--testrunuid`` to manually set the value from outside.


pytest-xdist 1.31.0 (2019-12-19)
================================

Features
--------

- `#486 <https://github.com/pytest-dev/pytest-xdist/issues/486>`_: Add support for Python 3.8.

Bug Fixes
---------

- `#491 <https://github.com/pytest-dev/pytest-xdist/issues/491>`_: Fix regression that caused custom plugin command-line arguments to be discarded when using ``--tx`` mode.


pytest-xdist 1.30.0 (2019-10-01)
================================

Features
--------

- `#448 <https://github.com/pytest-dev/pytest-xdist/issues/448>`_: Initialization between workers and master nodes is now more consistent, which fixes a number of long-standing issues related to startup with the ``-c`` option.

  Issues:

  * `#6 <https://github.com/pytest-dev/pytest-xdist/issues/6>`__: Poor interaction between ``-n#`` and ``-c X.cfg``

  * `#445 <https://github.com/pytest-dev/pytest-xdist/issues/445>`__: pytest-xdist is not reporting the same nodeid as pytest does

  This however only works with **pytest 5.1 or later**, as it required changes in pytest itself.

Bug Fixes
---------

- `#467 <https://github.com/pytest-dev/pytest-xdist/issues/467>`_: Fix crash issues related to running xdist with the terminal plugin disabled.


pytest-xdist 1.29.0 (2019-06-14)
================================

Features
--------

- `#226 <https://github.com/pytest-dev/pytest-xdist/issues/226>`_: ``--max-worker-restart`` now assumes a more reasonable value (4 times the number of nodes) when not given explicitly. This prevents test suites from running forever when the suite crashes during collection.

- `#435 <https://github.com/pytest-dev/pytest-xdist/issues/435>`_: When the test session is interrupted due to running out of workers, the reason is shown in the test summary for easier viewing.

- `#442 <https://github.com/pytest-dev/pytest-xdist/issues/442>`_: Compatibility fix for upcoming pytest 5.0: ``session.exitstatus`` is now an ``IntEnum`` object.

Bug Fixes
---------

- `#435 <https://github.com/pytest-dev/pytest-xdist/issues/435>`_: No longer show an internal error when we run out of workers due to crashes.


pytest-xdist 1.28.0 (2019-04-02)
================================

Features
--------

- `#426 <https://github.com/pytest-dev/pytest-xdist/issues/426>`_: ``pytest-xdist`` now uses the new ``pytest_report_to_serializable`` and ``pytest_report_from_serializable`` hooks from ``pytest 4.4`` (still experimental). This will make report serialization more reliable and extensible.

  This also means that ``pytest-xdist`` now requires ``pytest>=4.4``.


pytest-xdist 1.27.0 (2019-02-15)
================================

Features
--------

- `#374 <https://github.com/pytest-dev/pytest-xdist/issues/374>`_: The new ``pytest_xdist_getremotemodule`` hook allows overriding the module run on remote nodes.

- `#415 <https://github.com/pytest-dev/pytest-xdist/issues/415>`_: Improve behavior of ``--numprocesses=auto`` to work well with the ``--pdb`` option.


pytest-xdist 1.26.1 (2019-01-28)
================================

Bug Fixes
---------

- `#406 <https://github.com/pytest-dev/pytest-xdist/issues/406>`_: Do not implement the deprecated ``pytest_logwarning`` hook in pytest versions where it is deprecated.


pytest-xdist 1.26.0 (2019-01-11)
================================

Features
--------

- `#376 <https://github.com/pytest-dev/pytest-xdist/issues/376>`_: The current directory is no longer added to ``sys.path`` for local workers, only for remote connections.

  The previous behavior was surprising because it made xdist runs and non-xdist runs potentially behave differently.

Bug Fixes
---------

- `#379 <https://github.com/pytest-dev/pytest-xdist/issues/379>`_: Warning attributes are checked to make sure they can be dumped prior to serializing the warning for submission to the master node.


pytest-xdist 1.25.0 (2018-12-12)
================================

Deprecations and Removals
-------------------------

- `#372 <https://github.com/pytest-dev/pytest-xdist/issues/372>`_: Pytest versions older than 3.6 are no longer supported.

Features
--------

- `#373 <https://github.com/pytest-dev/pytest-xdist/issues/373>`_: Node setup information is hidden when pytest is run in quiet mode to reduce noise on many-core machines.

- `#388 <https://github.com/pytest-dev/pytest-xdist/issues/388>`_: ``mainargv`` is made available in ``workerinput`` from the host's ``sys.argv``.

  This can be used via ``request.config.workerinput["mainargv"]``.
Bug Fixes
---------

- `#332 <https://github.com/pytest-dev/pytest-xdist/issues/332>`_: Fix report of module-level skips (``pytest.skip(reason, allow_module_level=True)``).

- `#378 <https://github.com/pytest-dev/pytest-xdist/issues/378>`_: Fix support for gevent monkeypatching.

- `#384 <https://github.com/pytest-dev/pytest-xdist/issues/384>`_: pytest 4.1 support: ``ExceptionInfo`` API changes.

- `#390 <https://github.com/pytest-dev/pytest-xdist/issues/390>`_: pytest 4.1 support: ``pytest_logwarning`` hook removed.


pytest-xdist 1.24.1 (2018-11-09)
================================

Bug Fixes
---------

- `#349 <https://github.com/pytest-dev/pytest-xdist/issues/349>`_: Correctly handle warnings created with arguments that can't be serialized during the transfer from workers to master node.


pytest-xdist 1.24.0 (2018-10-18)
================================

Features
--------

- `#337 <https://github.com/pytest-dev/pytest-xdist/issues/337>`_: New ``--maxprocesses`` command-line option that limits the maximum number of workers when using ``--numprocesses=auto``.

Bug Fixes
---------

- `#351 <https://github.com/pytest-dev/pytest-xdist/issues/351>`_: Fix scheduling deadlock in case of inter-test locking.


pytest-xdist 1.23.2 (2018-09-28)
================================

Bug Fixes
---------

- `#344 <https://github.com/pytest-dev/pytest-xdist/issues/344>`_: Fix issue where Warnings could cause pytest to fail if they do not set the args attribute correctly.


pytest-xdist 1.23.1 (2018-09-25)
================================

Bug Fixes
---------

- `#341 <https://github.com/pytest-dev/pytest-xdist/issues/341>`_: Fix warnings transfer between workers and master node with pytest >= 3.8.


pytest-xdist 1.23.0 (2018-08-23)
================================

Features
--------

- `#330 <https://github.com/pytest-dev/pytest-xdist/issues/330>`_: Improve collection performance by reducing the number of events sent to the ``master`` node.


pytest-xdist 1.22.5 (2018-07-27)
================================

Bug Fixes
---------

- `#321 <https://github.com/pytest-dev/pytest-xdist/issues/321>`_: Revert the change that dropped support for ``pytest<3.4`` and required ``six``.

  This change caused problems in some installations, and was a mistake in the first place as we should not change version requirements in bug-fix releases unless they fix an actual bug.


pytest-xdist 1.22.4 (2018-07-27)
================================

Bug Fixes
---------

- `#305 <https://github.com/pytest-dev/pytest-xdist/issues/305>`_: Remove last references to obsolete ``py.code``. Remove some unnecessary references to ``py.builtin``.

- `#316 <https://github.com/pytest-dev/pytest-xdist/issues/316>`_: Workaround cpu detection on Travis CI.


pytest-xdist 1.22.3 (2018-07-23)
================================

Bug Fixes
---------

- Fix issue of virtualized or containerized environments not reporting the number of CPUs correctly. (`#9 <https://github.com/pytest-dev/pytest-xdist/issues/9>`_)

Trivial Changes
---------------

- Make all classes subclass from ``object`` and fix ``super()`` call in ``LoadFileScheduling``. (`#297 <https://github.com/pytest-dev/pytest-xdist/issues/297>`_)


pytest-xdist 1.22.2 (2018-02-26)
================================

Bug Fixes
---------

- Add backward compatibility for ``slaveoutput`` attribute to ``WorkerController`` instances. (`#285 <https://github.com/pytest-dev/pytest-xdist/issues/285>`_)


pytest-xdist 1.22.1 (2018-02-19)
================================

Bug Fixes
---------

- Fix issue when using ``loadscope`` or ``loadfile`` where tests would fail to start if the first scope had only one test. (`#257 <https://github.com/pytest-dev/pytest-xdist/issues/257>`_)

Trivial Changes
---------------

- Change terminology used by ``pytest-xdist`` to *master* and *worker* in arguments and messages (for example ``--max-worker-restart``). (`#234 <https://github.com/pytest-dev/pytest-xdist/issues/234>`_)


pytest-xdist 1.22.0 (2018-01-11)
================================

Features
--------

- Add support for the ``pytest_runtest_logfinish`` hook which will be released in pytest 3.4. (`#266 <https://github.com/pytest-dev/pytest-xdist/issues/266>`_)


pytest-xdist 1.21.0 (2017-12-22)
================================

Deprecations and Removals
-------------------------

- Drop support for EOL Python 2.6. (`#259 <https://github.com/pytest-dev/pytest-xdist/issues/259>`_)

Features
--------

- New ``--dist=loadfile`` option which load-distributes tests to workers grouped by the file the tests live in. (`#242 <https://github.com/pytest-dev/pytest-xdist/issues/242>`_)

Bug Fixes
---------

- Fix accidental mutation of test report during serialization causing longrepr string-ification to break.
  (`#241 <https://github.com/pytest-dev/pytest-xdist/issues/241>`_)


pytest-xdist 1.20.1 (2017-10-05)
================================

Bug Fixes
---------

- Fix hang when all worker nodes crash and restart limit is reached. (`#45 <https://github.com/pytest-dev/pytest-xdist/issues/45>`_)

- Fix issue where the -n option would still run distributed tests when pytest was run with the --collect-only option. (`#5 <https://github.com/pytest-dev/pytest-xdist/issues/5>`_)


pytest-xdist 1.20.0 (2017-08-17)
================================

Features
--------

- ``xdist`` now supports tests to log results multiple times, improving integration with plugins which require it like `pytest-rerunfailures <https://github.com/pytest-dev/pytest-rerunfailures>`_ and `flaky <https://github.com/box/flaky>`_. (`#206 <https://github.com/pytest-dev/pytest-xdist/issues/206>`_)

Bug Fixes
---------

- Fix issue where tests were being incorrectly identified if a worker crashed during the ``teardown`` stage of the test. (`#124 <https://github.com/pytest-dev/pytest-xdist/issues/124>`_)


pytest-xdist 1.19.1 (2017-08-10)
================================

Bug Fixes
---------

- Fix crash when transferring internal pytest warnings from workers to the master node. (`#214 <https://github.com/pytest-dev/pytest-xdist/issues/214>`_)


pytest-xdist 1.19.0 (2017-08-09)
================================

Deprecations and Removals
-------------------------

- ``--boxed`` functionality has been moved to a separate plugin, `pytest-forked <https://github.com/pytest-dev/pytest-forked>`_. This release now depends on ``pytest-forked`` and provides ``--boxed`` as a backward compatibility option. (`#1 <https://github.com/pytest-dev/pytest-xdist/issues/1>`_)

Features
--------

- New ``--dist=loadscope`` option: sends groups of related tests to the same worker. Tests are grouped by module for test functions and by class for test methods. See ``README.rst`` for more information. (`#191 <https://github.com/pytest-dev/pytest-xdist/issues/191>`_)

- Warnings are now properly transferred from workers to the master node. (`#92 <https://github.com/pytest-dev/pytest-xdist/issues/92>`_)

Bug Fixes
---------

- Fix serialization of native tracebacks (``--tb=native``). (`#196 <https://github.com/pytest-dev/pytest-xdist/issues/196>`_)


pytest-xdist 1.18.2 (2017-07-28)
================================

Bug Fixes
---------

- Removal of unnecessary dependency on incorrect version of py. (`#105 <https://github.com/pytest-dev/pytest-xdist/issues/105>`_)

- Fix bug in internal event-loop error handler in the master node. This bug would shadow the original errors, making it extremely hard or impossible for users to diagnose the problem properly. (`#175 <https://github.com/pytest-dev/pytest-xdist/issues/175>`_)


pytest-xdist 1.18.1 (2017-07-05)
================================

Bug Fixes
---------

- Fixed serialization of ``longrepr.sections`` during error reporting from workers. (`#171 <https://github.com/pytest-dev/pytest-xdist/issues/171>`_)

- Fix ``ReprLocal`` not being unserialized, breaking ``--showlocals`` usages. (`#176 <https://github.com/pytest-dev/pytest-xdist/issues/176>`_)


pytest-xdist 1.18.0 (2017-06-26)
================================

- ``pytest-xdist`` now requires ``pytest>=3.0.0``.

Features
--------

- Add long option ``--numprocesses`` as alternative for ``-n``. (`#168 <https://github.com/pytest-dev/pytest-xdist/issues/168>`_)

Bug Fixes
---------

- Fix serialization and deserialization dropping longrepr details. (`#133 <https://github.com/pytest-dev/pytest-xdist/issues/133>`_)


pytest-xdist 1.17.1 (2017-06-10)
================================

Bug Fixes
---------

- Hot fix release reverting the change introduced by #124; unfortunately it broke a number of test suites, so we are reversing this change while we investigate the problem. (`#157 <https://github.com/pytest-dev/pytest-xdist/issues/157>`_)

Improved Documentation
----------------------

- Introduced ``towncrier`` for ``CHANGELOG`` management. (`#154 <https://github.com/pytest-dev/pytest-xdist/issues/154>`_)

- Added ``HOWTORELEASE`` documentation. (`#155 <https://github.com/pytest-dev/pytest-xdist/issues/155>`_)


1.17.0
------

- fix #124: xdist would mark a test as complete after the 'call' step. As a result, xdist could identify the wrong test as failing when a test crashes at teardown. To address this issue, xdist now marks the test as complete at teardown.


1.16.0
------

- ``pytest-xdist`` now requires pytest 2.7 or later.

- Add ``worker_id`` attribute in the TestReport.

- new hook: ``pytest_xdist_make_scheduler(config, log)``, can return a custom test item distribution logic implementation. You can take a look at the built-in ``LoadScheduling`` and ``EachScheduling`` implementations.
  Note that the public API required of scheduler classes may change in future ``pytest-xdist`` versions.


1.15.0
------

- new ``worker_id`` fixture, returns the id of the worker in a test or fixture. Thanks Jared Hellman for the PR.

- display progress during collection only when in a terminal, similar to pytest #1397 issue. Thanks Bruno Oliveira for the PR.

- fix internal error message when ``--maxfail`` is used (#62, #65). Thanks Collin RM Stocks and Bryan A. Jones for reports and Bruno Oliveira for the PR.


1.14
----

- new hook: ``pytest_xdist_node_collection_finished(node, ids)``, called when a worker has finished collection. Thanks Omer Katz for the request and Bruno Oliveira for the PR.

- fix README display on pypi

- fix #22: xdist now works if the internal tmpdir plugin is disabled. Thanks Bruno Oliveira for the PR.

- fix #32: xdist now works if looponfail or boxed are disabled. Thanks Bruno Oliveira for the PR.


1.13.1
------

- fix a regression: ``-n 0`` now disables xdist again


1.13
-------------------------

- extended the tox matrix with the supported py.test versions

- split up the plugin into 3 plugins to prepare the departure of boxed and looponfail: looponfail will become part of core, and boxed will be replaced with a more reliable primitive based on xdist

- conforming with new pytest-2.8 behavior of returning non-zero when all tests were skipped or deselected.

- new "--max-slave-restart" option that can be used to control the maximum number of times pytest-xdist can restart slaves due to crashes. Thanks to Anatoly Bubenkov for the report and Bruno Oliveira for the PR.

- release as wheel

- the "-n" option now can be set to "auto" for automatic detection of the number of cpus in the host system. Thanks Suloev Dmitry for the PR.


1.12
-------------------------

- fix issue594: properly report errors when the test collection is random. Thanks Bruno Oliveira.

- some internal test suite adaptation (to become forward compatible with the upcoming pytest-2.8)


1.11
-------------------------

- fix pytest/xdist issue485 (also depends on py-1.4.22): attach stdout/stderr on --boxed processes that die.

- fix pytest/xdist issue503: make sure that a node usually has two items to execute, to avoid scoped fixtures being torn down prematurely (fixture teardown/setup is "nextitem" sensitive). Thanks to Andreas Pelme for bug analysis and failing test.

- restart crashed nodes by internally refactoring setup handling of nodes. Also includes better code documentation. Many thanks to Floris Bruynooghe for the complete PR.


1.10
-------------------------

- add glob support for rsyncignores, add command line option to pass additional rsyncignores. Thanks Anatoly Bubenkov.

- fix pytest issue382 - produce "pytest_runtest_logstart" event again in master. Thanks Aron Curzon.

- fix pytest issue419 by sending/receiving indices into the test collection instead of node ids (which are not necessarily unique for functions parametrized with duplicate values)

- send multiple "to test" indices in one network message to a slave and improve heuristics for sending chunks where the chunksize depends on the number of remaining tests rather than fixed numbers. This reduces the number of master -> node messages (but not the reverse direction)


1.9
-------------------------

- changed LICENSE to MIT

- fix duplicate reported test ids with --looponfailing (thanks Jeremy Thurgood)

- fix pytest issue41: re-run tests on all file changes, not just randomly selected ones like .py/.c.
- fix pytest issue347: slaves running on top of Python3.2 will set PYTHONDONTWRITEBYTECODE to 1 to avoid import concurrency bugs.


1.8
-------------------------

- fix pytest-issue93 - use the refined pytest-2.2.1 runtestprotocol interface to perform eager teardowns for test items.


1.7
-------------------------

- fix incompatibilities with pytest-2.2.0 (allow multiple pytest_runtest_logreport reports for a test item)


1.6
-------------------------

- terser collection reporting

- fix issue34 - distributed testing with -p plugin now works correctly

- fix race condition in looponfail mode where a concurrent file removal could cause a crash


1.5
-------------------------

- adapt to and require pytest-2.0 changes; rsyncdirs and rsyncignore can now only be specified in [pytest] sections of ini files, see "py.test -h" for details.

- major internal refactoring to match the pytest-2.0 event refactoring

- perform test collection always at slave side instead of at the master

- make python2/python3 bridging work, remove usage of pickling

- improve initial reporting by using line-rewriting

- remove all trailing whitespace from source


1.4
-------------------------

- perform distributed-testing related reporting in the plugin rather than having dist-related code in the generic py.test distribution

- depend on execnet-1.0.7 which adds "env1:NAME=value" keys to gateway specification strings.

- show detailed gateway setup and platform information only when "-v" or "--verbose" is specified.


1.3
-------------------------

- fix --looponfailing - it would not actually run against the fully changed source tree when initial conftest files load application state.

- adapt for py-1.3.1's new --maxfailure option


1.2
-------------------------

- fix issue79: sessionfinish/teardown hooks are now called systematically on the slave side

- introduce a new data input/output mechanism to allow the master side to send and receive data from a slave.

- fix race condition in underlying pickling/unpickling handling

- use and require new register hooks facility of py.test>=1.3.0

- require improved execnet>=1.0.6 because of various race conditions that can arise in xdist testing modes.

- fix some python3 related pickling race conditions

- fix PyPI description


1.1
-------------------------

- fix an indefinite hang which would wait for events although no events are pending - this happened if items arrived very quickly while the "reschedule-event" tried unconditionally avoiding a busy-loop and did not schedule new work.


1.0
-------------------------

- moved code out of py-1.1.1 into its own plugin

- use a new, faster and more sensible model to do load-balancing of tests - now no magic "MAXITEMSPERHOST" is needed and load-testing works effectively even with very few tests.
- cleaned up termination handling

- make -x cause hard killing of test nodes to decrease wait time until the traceback shows up on first failure

===== pytest-xdist-3.4.0/LICENSE =====

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

===== pytest-xdist-3.4.0/MANIFEST.in =====

exclude .appveyor.yml
exclude .gitignore
exclude .pre-commit-config.yaml
exclude .travis.yml
prune .github

===== pytest-xdist-3.4.0/PKG-INFO =====

Metadata-Version: 2.1
Name: pytest-xdist
Version: 3.4.0
Summary: pytest xdist plugin for distributed testing, most importantly across multiple CPUs
Home-page: https://github.com/pytest-dev/pytest-xdist
Author: holger krekel and contributors
Author-email: pytest-dev@python.org,holger@merlinux.eu
License: MIT
Project-URL: Documentation, https://pytest-xdist.readthedocs.io/en/latest
Project-URL: Changelog, https://pytest-xdist.readthedocs.io/en/latest/changelog.html
Project-URL: Source, https://github.com/pytest-dev/pytest-xdist
Project-URL: Tracker, https://github.com/pytest-dev/pytest-xdist/issues
Platform: linux
Platform: osx
Platform: win32
Classifier: Development Status :: 5 - Production/Stable
Classifier: Framework :: Pytest
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Utilities
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.7
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: execnet>=1.1
Requires-Dist: pytest>=6.2.0
Provides-Extra: testing
Requires-Dist: filelock; extra == "testing"
Provides-Extra: psutil
Requires-Dist: psutil>=3.0; extra == "psutil"
Provides-Extra: setproctitle
Requires-Dist: setproctitle; extra == "setproctitle"

===== pytest-xdist-3.4.0/README.rst =====

============
pytest-xdist
============

.. image:: http://img.shields.io/pypi/v/pytest-xdist.svg
    :alt: PyPI version
    :target: https://pypi.python.org/pypi/pytest-xdist

.. image:: https://img.shields.io/conda/vn/conda-forge/pytest-xdist.svg
    :target: https://anaconda.org/conda-forge/pytest-xdist

.. image:: https://img.shields.io/pypi/pyversions/pytest-xdist.svg
    :alt: Python versions
    :target: https://pypi.python.org/pypi/pytest-xdist

.. image:: https://github.com/pytest-dev/pytest-xdist/workflows/test/badge.svg
    :target: https://github.com/pytest-dev/pytest-xdist/actions

.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
    :target: https://github.com/ambv/black

The `pytest-xdist`_ plugin extends pytest with new test execution modes, the most used being distributing tests across multiple CPUs to speed up test execution::

    pytest -n auto

With this call, pytest will spawn a number of worker processes equal to the number of available CPUs, and distribute the tests randomly across them.

Documentation
=============

Documentation is available at `Read The Docs <https://pytest-xdist.readthedocs.io/en/latest/>`__.

===== pytest-xdist-3.4.0/RELEASING.rst =====

======================
Releasing pytest-xdist
======================

This document describes the steps to make a new ``pytest-xdist`` release.

Version
-------

``master`` should always be green and a potential release candidate. ``pytest-xdist`` follows semantic versioning, so given that the current version is ``X.Y.Z``, to find the next version number one needs to look at the ``changelog`` folder:

- If there is any file named ``*.feature``, then we must make a new **minor** release: the next release will be ``X.Y+1.0``.

- Otherwise it is just a **bug fix** release: ``X.Y.Z+1``.

Steps
-----

To publish a new release ``X.Y.Z``, the steps are as follows:

#. Create a new branch named ``release-X.Y.Z`` from the latest ``master``.
#. Install ``tox`` in a virtualenv::

     $ pip install tox

#. Update the necessary files with::

     $ tox -e release -- X.Y.Z

#. Commit and push the branch to ``upstream`` and open a PR.

#. Once the PR is **green** and **approved**, start the ``deploy`` workflow manually from the branch ``release-VERSION``, passing ``VERSION`` as parameter.

#. Merge the release PR to ``master``.

===== pytest-xdist-3.4.0/changelog/_template.rst =====

{% for section in sections %}
{% set underline = "-" %}
{% if section %}
{{section}}
{{ underline * section|length }}{% set underline = "~" %}

{% endif %}
{% if sections[section] %}
{% for category, val in definitions.items() if category in sections[section] %}

{{ definitions[category]['name'] }}
{{ underline * definitions[category]['name']|length }}

{% if definitions[category]['showcontent'] %}
{% for text, values in sections[section][category]|dictsort(by='value') %}
- `{{ values[0] }} <https://github.com/pytest-dev/pytest-xdist/issues/{{ values[0] }}>`_: {{ text }}
{% endfor %}
{% else %}
- {{ sections[section][category]['']|sort|join(', ') }}
{% endif %}
{% if sections[section][category]|length == 0 %}
No significant changes.
{% else %}
{% endif %}
{% endfor %}
{% else %}
No significant changes.
{% endif %}
{% endfor %}

===== pytest-xdist-3.4.0/docs/.gitignore =====

_build/

===== pytest-xdist-3.4.0/docs/changelog.rst =====

=========
Changelog
=========

.. include:: ../CHANGELOG.rst

===== pytest-xdist-3.4.0/docs/conf.py =====

# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))

# -- Project information -----------------------------------------------------

project = "pytest-xdist"
copyright = "2022, holger krekel and contributors"
author = "holger krekel and contributors"

master_doc = "index"

# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    "sphinx_rtd_theme",
    "sphinx.ext.autodoc",
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]

# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_rtd_theme"

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']

===== pytest-xdist-3.4.0/docs/crash.rst =====

When tests crash
================

If a test crashes a worker, pytest-xdist will automatically restart that worker and report the test's failure. You can use the ``--max-worker-restart`` option to limit the number of worker restarts that are allowed, or disable restarting altogether using ``--max-worker-restart 0``.
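For example, a run that fails fast instead of restarting crashed workers (the worker count of four here is an arbitrary choice for illustration)::

    pytest -n 4 --max-worker-restart 0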
===== pytest-xdist-3.4.0/docs/distribution.rst =====

.. _parallelization:

Running tests across multiple CPUs
==================================

To send tests to multiple CPUs, use the ``-n`` (or ``--numprocesses``) option::

    pytest -n auto

This can lead to considerable speed-ups, especially if your test suite takes a noticeable amount of time.

With ``-n auto``, pytest-xdist will use as many processes as your computer has CPU cores.

Use ``-n logical`` to use the number of *logical* CPU cores rather than physical ones. This currently requires the ``psutil`` package to be installed; if it is not, pytest-xdist will fall back to ``-n auto`` behavior.

Pass a number, e.g. ``-n 8``, to specify the number of processes explicitly.

To specify a different meaning for ``-n auto`` and ``-n logical`` for your tests, you can:

* Set the environment variable ``PYTEST_XDIST_AUTO_NUM_WORKERS`` to the desired number of processes.

* Implement the ``pytest_xdist_auto_num_workers`` pytest hook (a ``pytest_xdist_auto_num_workers(config)`` function in e.g. ``conftest.py``) that returns the number of processes to use, as sketched below. The hook can use ``config.option.numprocesses`` to determine if the user asked for ``"auto"`` or ``"logical"``, and it can return ``None`` to fall back to the default.

If both the hook and environment variable are specified, the hook takes priority.
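For example, a minimal sketch of such a hook (the cap of four workers is an arbitrary value chosen for illustration):

.. code-block:: python

    # content of conftest.py
    def pytest_xdist_auto_num_workers(config):
        """Cap ``-n auto`` at 4 workers, but keep the default for ``-n logical``."""
        if config.option.numprocesses == "auto":
            return 4
        return None  # fall back to the default behavior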
Parallelization can be configured further with these options:

* ``--maxprocesses=maxprocesses``: limit the maximum number of workers to process the tests.

* ``--max-worker-restart``: maximum number of workers that can be restarted when crashed (set to zero to disable this feature).

The test distribution algorithm is configured with the ``--dist`` command-line option:

.. _distribution modes:

* ``--dist load`` **(default)**: Sends pending tests to any worker that is available, without any guaranteed order. Scheduling can be fine-tuned with the ``--maxschedchunk`` option, see the output of ``pytest --help``.

* ``--dist loadscope``: Tests are grouped by **module** for *test functions* and by **class** for *test methods*. Groups are distributed to available workers as whole units. This guarantees that all tests in a group run in the same process. This can be useful if you have expensive module-level or class-level fixtures. Grouping by class takes priority over grouping by module.

* ``--dist loadfile``: Tests are grouped by their containing file. Groups are distributed to available workers as whole units. This guarantees that all tests in a file run in the same worker.

* ``--dist loadgroup``: Tests are grouped by the ``xdist_group`` mark. Groups are distributed to available workers as whole units. This guarantees that all tests with the same ``xdist_group`` name run in the same worker.

  .. code-block:: python

      @pytest.mark.xdist_group(name="group1")
      def test1():
          pass


      class TestA:
          @pytest.mark.xdist_group("group1")
          def test2():
              pass

  This will make sure ``test1`` and ``TestA::test2`` will run in the same worker. Tests without the ``xdist_group`` mark are distributed normally as in the ``--dist=load`` mode.

* ``--dist worksteal``: Initially, tests are distributed evenly among all available workers. When a worker completes most of its assigned tests and doesn't have enough tests to continue (currently, every worker needs at least two tests in its queue), an attempt is made to reassign ("steal") a portion of tests from some other worker's queue. The results should be similar to the ``load`` method, but ``worksteal`` should handle tests with significantly differing duration better, and, at the same time, it should provide similar or better reuse of fixtures.

* ``--dist no``: The normal pytest execution mode, runs one test at a time (no distribution at all).

===== pytest-xdist-3.4.0/docs/how-it-works.rst =====

How it works?
=============

``xdist`` works by spawning one or more **workers**, which are controlled by the **controller**. Each **worker** is responsible for performing a full test collection and afterwards running tests as dictated by the **controller**.

The execution flow is:

1. **controller** spawns one or more **workers** at the beginning of the test session. The communication between **controller** and **worker** nodes makes use of `execnet <https://execnet.readthedocs.io/en/latest/>`__ and its gateways. The actual interpreters executing the code for the **workers** might be remote or local.

2. Each **worker** itself is a mini pytest runner. **workers** at this point perform a full test collection, sending the collected test-ids back to the **controller**, which does not perform any collection itself.

3. The **controller** receives the result of the collection from all nodes. At this point the **controller** performs some sanity checks to ensure that all **workers** collected the same tests (including order), bailing out otherwise. If all is well, it converts the list of test-ids into a list of simple indexes, where each index corresponds to the position of that test in the original collection list. This works because all nodes have the same collection list, and saves bandwidth because the **controller** can now tell one of the workers to just *execute test index 3* instead of passing the full test id.

4. If **dist-mode** is **each**: the **controller** just sends the full list of test indexes to each node at this moment.
5. If **dist-mode** is **load**: the **controller** takes around 25% of the tests and sends them one by one to each **worker** in a round robin fashion. The rest of the tests will be distributed later as **workers** finish tests (see below).

6. Note that the ``pytest_xdist_make_scheduler`` hook can be used to implement custom test distribution logic; a sketch follows after this list.

7. **workers** re-implement ``pytest_runtestloop``: pytest's default implementation basically loops over all collected items in the ``session`` object and executes the ``pytest_runtest_protocol`` for each test item, but in xdist **workers** sit idly waiting for the **controller** to send tests for execution. As tests are received by **workers**, ``pytest_runtest_protocol`` is executed for each test. Here it is worth noting an implementation detail: **workers** always must keep at least one test item on their queue due to how the ``pytest_runtest_protocol(item, nextitem)`` hook is defined: in order to pass the ``nextitem`` to the hook, the worker must wait for more instructions from the controller before executing that remaining test. If it receives more tests, then it can safely call ``pytest_runtest_protocol`` because it knows what the ``nextitem`` parameter will be. If it receives a "shutdown" signal, then it can execute the hook passing ``nextitem`` as ``None``.

8. As tests are started and completed at the **workers**, the results are sent back to the **controller**, which then just forwards the results to the appropriate pytest hooks: ``pytest_runtest_logstart`` and ``pytest_runtest_logreport``. This way other plugins (for example ``junitxml``) can work normally. The **controller** (when in dist-mode **load**) decides to send more tests to a node when a test completes, using some heuristics such as test durations and how many tests each **worker** still has to run.

9. When the **controller** has no more pending tests it will send a "shutdown" signal to all **workers**, which will then run their remaining tests to completion and shut down. At this point the **controller** will sit waiting for **workers** to shut down, still processing events such as ``pytest_runtest_logreport``.
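As a minimal sketch of the ``pytest_xdist_make_scheduler`` hook mentioned in step 6 (this assumes the built-in scheduler classes are importable from ``xdist.scheduler``, which holds for recent releases; a real custom scheduler would implement the same protocol itself):

.. code-block:: python

    # content of conftest.py
    from xdist.scheduler import LoadScheduling


    def pytest_xdist_make_scheduler(config, log):
        # Return any object implementing the scheduler protocol; here we
        # simply hand back the default load scheduler unchanged.
        return LoadScheduling(config, log)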
FAQ
---

**Question**: Why does each worker do its own collection, as opposed to having the controller collect once and distribute from that collection to the workers?

If collection was performed by the controller then it would have to serialize collected items to send them through the wire, as workers live in another process. The problem is that test items are not easy (perhaps impossible) to serialize, as they contain references to the test functions, fixture managers, config objects, etc. Even if one manages to serialize it, it seems it would be very hard to get it right and easy to break by any small change in pytest.

===== pytest-xdist-3.4.0/docs/how-to.rst =====

How-tos
-------

This section showcases how to accomplish some specialized tasks with ``pytest-xdist``.

Identifying the worker process during a test
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

*New in version 1.15.*

If you need to determine the identity of a worker process in a test or fixture, you may use the ``worker_id`` fixture to do so:

.. code-block:: python

    @pytest.fixture()
    def user_account(worker_id):
        """use a different account in each xdist worker"""
        return "account_%s" % worker_id

When ``xdist`` is disabled (running with ``-n0`` for example), then ``worker_id`` will return ``"master"``.

Worker processes also have the following environment variables defined:

.. envvar:: PYTEST_XDIST_WORKER

    The name of the worker, e.g., ``"gw2"``.

.. envvar:: PYTEST_XDIST_WORKER_COUNT

    The total number of workers in this session, e.g., ``"4"`` when ``-n 4`` is given in the command-line.

The information about the worker_id in a test is stored in the ``TestReport`` as well, under the ``worker_id`` attribute.

Since version 2.0, the following functions are also available in the ``xdist`` module:

.. autofunction:: xdist.is_xdist_worker
.. autofunction:: xdist.is_xdist_controller
.. autofunction:: xdist.is_xdist_master
.. autofunction:: xdist.get_xdist_worker_id

Identifying workers from the system environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

*New in version 2.4*

If the `setproctitle`_ package is installed, ``pytest-xdist`` will use it to update the process title (command line) on its workers to show their current state. The titles used are ``[pytest-xdist running] file.py/node::id`` and ``[pytest-xdist idle]``, visible in standard tools like ``ps`` and ``top`` on Linux, Mac OS X and BSD systems. For Windows, please follow `setproctitle`_'s pointer regarding the Process Explorer tool.

This is intended purely as a UX enhancement, e.g. to track down issues with long-running or CPU intensive tests. Errors in changing the title are ignored silently. Please try not to rely on the title format or title changes in external scripts.

.. _`setproctitle`: https://pypi.org/project/setproctitle/

Uniquely identifying the current test run
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

*New in version 1.32.*

If you need to globally distinguish one test run from others in your workers, you can use the ``testrun_uid`` fixture. For instance, let's say you wanted to create a separate database for each test run:

.. code-block:: python

    import pytest
    from posix_ipc import Semaphore, O_CREAT


    @pytest.fixture(scope="session", autouse=True)
    def create_unique_database(testrun_uid):
        """create a unique database for this particular test run"""
        database_url = f"psql://myapp-{testrun_uid}"

        with Semaphore(f"/{testrun_uid}-lock", flags=O_CREAT, initial_value=1):
            if not database_exists(database_url):
                create_database(database_url)


    @pytest.fixture()
    def db(testrun_uid):
        """retrieve unique database"""
        database_url = f"psql://myapp-{testrun_uid}"
        return database_get_instance(database_url)

Additionally, during a test run, the following environment variable is defined:

.. envvar:: PYTEST_XDIST_TESTRUNUID

    The unique id of the test run.

Accessing ``sys.argv`` from the controller node in workers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To access the ``sys.argv`` passed to the command-line of the controller node, use ``request.config.workerinput["mainargv"]``.
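For example, a sketch of a hypothetical helper fixture (the name ``main_argv`` and the fallback behavior are illustrative choices, not part of pytest-xdist):

.. code-block:: python

    import sys

    import pytest


    @pytest.fixture()
    def main_argv(request):
        # ``workerinput`` only exists on worker nodes, i.e. when running
        # under xdist; fall back to the local ``sys.argv`` otherwise.
        workerinput = getattr(request.config, "workerinput", None)
        if workerinput is not None:
            return workerinput["mainargv"]
        return sys.argv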
Specifying test exec environments in an ini file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can use pytest's ini file configuration to avoid typing common options. You can for example make running with three subprocesses your default like this:

.. code-block:: ini

    [pytest]
    addopts = -n3

You can also add default environments like this:

.. code-block:: ini

    [pytest]
    addopts = --tx ssh=myhost//python=python3.9 --tx ssh=myhost//python=python3.6

and then just type::

    pytest --dist=each

to run tests in each of the environments.

Specifying "rsync" dirs in an ini-file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a ``tox.ini`` or ``setup.cfg`` file in your root project directory you may specify directories to include or to exclude in synchronisation:

.. code-block:: ini

    [pytest]
    rsyncdirs = . mypkg helperpkg
    rsyncignore = .hg

These directory specifications are relative to the directory where the configuration file was found.

.. _`pytest-xdist`: http://pypi.python.org/pypi/pytest-xdist
.. _`pytest-xdist repository`: https://github.com/pytest-dev/pytest-xdist
.. _`pytest`: http://pytest.org

Making session-scoped fixtures execute only once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

``pytest-xdist`` is designed so that each worker process will perform its own collection and execute a subset of all tests. This means that tests in different processes requesting a high-level scoped fixture (for example ``session``) will execute the fixture code more than once, which breaks expectations and might be undesired in certain situations.

While ``pytest-xdist`` does not have builtin support for ensuring a session-scoped fixture is executed exactly once, this can be achieved by using a lock file for inter-process communication.

The example below needs to execute the fixture ``session_data`` only once (because it is resource intensive, or needs to execute only once to define configuration options, etc), so it makes use of a `FileLock <https://py-filelock.readthedocs.io/>`_ to produce the fixture data only once when the first process requests the fixture, while the other processes will then read the data from a file.

Here is the code:

.. code-block:: python

    import json

    import pytest
    from filelock import FileLock


    @pytest.fixture(scope="session")
    def session_data(tmp_path_factory, worker_id):
        if worker_id == "master":
            # not executing with multiple workers, just produce the data and let
            # pytest's fixture caching do its job
            return produce_expensive_data()

        # get the temp directory shared by all workers
        root_tmp_dir = tmp_path_factory.getbasetemp().parent

        fn = root_tmp_dir / "data.json"
        with FileLock(str(fn) + ".lock"):
            if fn.is_file():
                data = json.loads(fn.read_text())
            else:
                data = produce_expensive_data()
                fn.write_text(json.dumps(data))
        return data

The example above can also be used in cases where a fixture needs to execute exactly once per test session, like initializing a database service and populating initial tables.

This technique might not work for every case, but should be a starting point for many situations where executing a high-scope fixture exactly once is important.

Creating one log file for each worker
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To create one log file for each worker with ``pytest-xdist``, you can leverage :envvar:`PYTEST_XDIST_WORKER` to generate a unique filename for each worker.

Example:

.. code-block:: python

    # content of conftest.py
    import logging
    import os


    def pytest_configure(config):
        worker_id = os.environ.get("PYTEST_XDIST_WORKER")
        if worker_id is not None:
            logging.basicConfig(
                format=config.getini("log_file_format"),
                filename=f"tests_{worker_id}.log",
                level=config.getini("log_file_level"),
            )

When running the tests with ``-n3``, for example, three files will be created in the current directory: ``tests_gw0.log``, ``tests_gw1.log`` and ``tests_gw2.log``.
===== pytest-xdist-3.4.0/docs/index.rst =====

pytest-xdist
============

The `pytest-xdist`_ plugin extends pytest with new test execution modes, the most used being distributing tests across multiple CPUs to speed up test execution::

    pytest -n auto

With this call, pytest will spawn a number of worker processes equal to the number of available CPUs, and distribute the tests randomly across them.

.. note::

    Due to how pytest-xdist is implemented, the ``-s/--capture=no`` option does not work.

Installation
------------

Install the plugin with::

    pip install pytest-xdist

To use ``psutil`` for detection of the number of CPUs available, install the ``psutil`` extra::

    pip install pytest-xdist[psutil]

Features
--------

* Test run :ref:`parallelization`: tests can be executed across multiple CPUs or hosts. This allows speeding up development or using the special resources of :ref:`remote machines`.

* ``--looponfail``: run your tests repeatedly in a subprocess. After each run pytest waits until a file in your project changes and then re-runs the previously failing tests. This is repeated until all tests pass, after which again a full run is performed (DEPRECATED).

* :ref:`Multi-Platform` coverage: you can specify different Python interpreters or different platforms and run tests in parallel on all of them.

  Before running tests remotely, ``pytest`` efficiently "rsyncs" your program source code to the remote place. You may specify different Python versions and interpreters. It does not install or synchronize dependencies, however.

  **Note**: this mode exists mostly for backward compatibility, as modern development relies on continuous integration for multi-platform testing.

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   distribution
   subprocess
   remote
   crash
   how-to
   how-it-works
   known-limitations
   changelog

===== pytest-xdist-3.4.0/docs/known-limitations.rst =====

Known limitations
=================

pytest-xdist has some limitations: some things that work in plain pytest cannot be supported when tests are distributed.

Order and amount of tests must be consistent
--------------------------------------------

It is not possible to have tests that differ in order or amount across workers.

This is especially true with ``pytest.mark.parametrize``, when values are produced with sets or other unordered iterables/generators.

Example:

.. code-block:: python

    import pytest


    @pytest.mark.parametrize("param", {"a", "b"})
    def test_pytest_parametrize_unordered(param):
        pass

In the example above, the fact that ``set`` objects are not necessarily ordered can cause different workers to collect the tests in different order, which will throw an error.

Workarounds
~~~~~~~~~~~

A solution to this is to guarantee that the parametrized values have the same order.

Some solutions:

* Convert your sequence to a ``list``.

  .. code-block:: python

      import pytest


      @pytest.mark.parametrize("param", ["a", "b"])
      def test_pytest_parametrize_unordered(param):
          pass

* Sort your sequence, guaranteeing order.
code-block:: python import pytest @pytest.mark.parametrize("param", sorted({"a", "b"})) def test_pytest_parametrize_unordered(param): pass Output (stdout and stderr) from workers --------------------------------------- The ``-s``/``--capture=no`` option is meant to disable pytest capture, so users can see stdout and stderr output in the terminal from tests and application code in real time. However, this option does not work with ``pytest-xdist`` because `execnet `__, the underlying library used for communication between master and workers, does not support transferring stdout/stderr from workers. Currently, there are no plans to support this in ``pytest-xdist``. Debugging ~~~~~~~~~ This also means that debugging using PDB (or any other debugger that wants to use standard I/O) will not work. The ``--pdb`` option is disabled when distributing tests with ``pytest-xdist`` for this reason. It is generally best to use ``pytest-xdist`` to find failing tests and then debug them without distribution; however, if you need to debug from within a worker process (for example, to address failures that only happen when running tests concurrently), remote debuggers (for example, `python-remote-pdb `__ or `python-web-pdb `__) have been reported to work for this purpose. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/docs/remote.rst0000644000175100001770000000544114523717146016577 0ustar00runnerdocker .. _`Multi-Platform`: .. _`remote machines`: Sending tests to remote SSH accounts ==================================== .. deprecated:: 3.0 .. warning:: The ``rsync`` feature is deprecated because its implementation is faulty in terms of reproducing the development environment in the remote worker, and there is no clear solution moving forward. For that reason, ``rsync`` is scheduled to be removed in release 4.0, to let the team focus on a smaller set of features. Note that SSH and socket server are not planned for removal, as they are part of the ``execnet`` feature set. Suppose you have a package ``mypkg`` which contains some tests that you can successfully run locally, and an ssh-reachable machine ``myhost``. Then you can ad-hoc distribute your tests by typing:: pytest -d --rsyncdir mypkg --tx ssh=myhost mypkg/tests/unit/test_something.py This will synchronize your :code:`mypkg` package directory to a remote ssh account and then locally collect tests and send them to remote places for execution. You can specify multiple :code:`--rsyncdir` directories to be sent to the remote side. .. note:: For pytest to collect and send tests correctly, you not only need to make sure all code and test directories are rsynced, but also that any test (sub)directory has an :code:`__init__.py` file, because internally pytest references tests as a fully qualified python module path. **You will otherwise get strange errors** during setup of the remote side. You can specify multiple :code:`--rsyncignore` glob patterns to be ignored when files are sent to the remote side. There are also internal ignores: :code:`.*, *.pyc, *.pyo, *~`. These you cannot override using the rsyncignore command-line or ini-file option(s). Sending tests to remote Socket Servers -------------------------------------- Download the single-module `socketserver.py`_ Python program and run it like this:: python socketserver.py It will tell you that it starts listening on the default port.
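If you want to double-check that the server is reachable from the machine that will run pytest, a one-liner using only the standard library can help. This is just a sketch; it assumes the server address ``192.168.1.102:8888`` used in the example below::

    python -c "import socket; socket.create_connection(('192.168.1.102', 8888), timeout=5); print('server reachable')"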
You can now, on your home machine, specify this new socket host with something like this:: pytest -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg Running tests on many platforms at once --------------------------------------- The basic command to run tests on multiple platforms is:: pytest --dist=each --tx=spec1 --tx=spec2 If you specify a Windows host, an OSX host and a Linux environment, this command will send each test to all platforms, and report back failures from all platforms at once. The specification strings use the `xspec syntax`_. .. _`xspec syntax`: https://codespeak.net/execnet/basics.html#xspec .. _`execnet`: https://codespeak.net/execnet .. _`socketserver.py`: https://raw.githubusercontent.com/pytest-dev/execnet/master/execnet/script/socketserver.py ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/docs/requirements.txt0000644000175100001770000000003014523717146020023 0ustar00runnerdockersphinx sphinx-rtd-theme ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/docs/subprocess.rst0000644000175100001770000000100214523717146017461 0ustar00runnerdockerRunning tests in a Python subprocess ==================================== To instantiate a ``python3.9`` subprocess and send tests to it, you may type:: pytest -d --tx popen//python=python3.9 This will start a subprocess which is run with the ``python3.9`` Python interpreter, found in your system binary lookup path. If you prefix the ``--tx`` option value like this:: --tx 3*popen//python=python3.9 then three subprocesses will be created and tests will be load-balanced across these three processes. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1699716733.6955261 pytest-xdist-3.4.0/example/0000755000175100001770000000000014523717176015254 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/example/boxed.txt0000644000175100001770000000645414523717146017124 0ustar00runnerdocker.. warning:: Since 1.19.0, the actual implementation of the ``--boxed`` option has been moved to a separate plugin, `pytest-forked `_, which can be installed independently. The ``--boxed`` command-line option is deprecated and will be removed in pytest-xdist 3.0.0; use ``--forked`` from pytest-forked instead. If your testing involves C or C++ libraries you might have to deal with crashing processes. The xdist-plugin provides the ``--boxed`` option to run each test in a controlled subprocess. Here is a basic example:: # content of test_module.py import pytest import os import time # run test function 50 times with different argument @pytest.mark.parametrize("arg", range(50)) def test_func(arg): time.sleep(0.05) # each test takes a while if arg % 19 == 0: os.kill(os.getpid(), 15) If you run this with:: $ pytest -n1 =========================== test session starts ============================ platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev8 plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov collecting ... collected 50 items test_module.py f..................f..................f...........
================================= FAILURES ================================= _______________________________ test_func[0] _______________________________ /home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15 ______________________________ test_func[19] _______________________________ /home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15 ______________________________ test_func[38] _______________________________ /home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15 =================== 3 failed, 47 passed in 3.41 seconds ==================== You'll see that a couple of tests are reported as crashing, indicated by lower-case ``f`` and the respective failure summary. You can also use the xdist-provided parallelization feature to speed up your testing:: $ pytest -n3 =========================== test session starts ============================ platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev8 plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov gw0 I / gw1 I / gw2 I gw0 [50] / gw1 [50] / gw2 [50] scheduling tests via LoadScheduling ..f...............f..................f............ ================================= FAILURES ================================= _______________________________ test_func[0] _______________________________ [gw0] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python /home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15 ______________________________ test_func[19] _______________________________ [gw2] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python /home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15 ______________________________ test_func[38] _______________________________ [gw2] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python /home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15 =================== 3 failed, 47 passed in 2.03 seconds ==================== ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1699716733.6955261 pytest-xdist-3.4.0/example/loadscope/0000755000175100001770000000000014523717176017225 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1699716733.6955261 pytest-xdist-3.4.0/example/loadscope/epsilon/0000755000175100001770000000000014523717176020676 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/example/loadscope/epsilon/__init__.py0000644000175100001770000000074514523717146023012 0ustar00runnerdockerdef epsilon1(arg1, arg2=1000): """Do epsilon1 Usage: >>> epsilon1(10, 20) 40 >>> epsilon1(30) 1040 """ return arg1 + arg2 + 10 def epsilon2(arg1, arg2=1000): """Do epsilon2 Usage: >>> epsilon2(10, 20) -20 >>> epsilon2(30) -980 """ return arg1 - arg2 - 10 def epsilon3(arg1, arg2=1000): """Do epsilon3 Usage: >>> epsilon3(10, 20) 200 >>> epsilon3(30) 30000 """ return arg1 * arg2 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/example/loadscope/requirements.txt0000644000175100001770000000002314523717146022501 0ustar00runnerdockeripdb pytest ../../ ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1699716733.6955261 pytest-xdist-3.4.0/example/loadscope/test/0000755000175100001770000000000014523717176020204 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 
mtime=1699716710.0 pytest-xdist-3.4.0/example/loadscope/test/test_alpha.py0000644000175100001770000000101314523717146022672 0ustar00runnerdockerfrom time import sleep def test_alpha0(): sleep(5) assert True def test_alpha1(): sleep(5) assert True def test_alpha2(): sleep(5) assert True def test_alpha3(): sleep(5) assert True def test_alpha4(): sleep(5) assert True def test_alpha5(): sleep(5) assert True def test_alpha6(): sleep(5) assert True def test_alpha7(): sleep(5) assert True def test_alpha8(): sleep(5) assert True def test_alpha9(): sleep(5) assert True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/example/loadscope/test/test_beta.py0000644000175100001770000000100114523717146022515 0ustar00runnerdockerfrom time import sleep def test_beta0(): sleep(5) assert True def test_beta1(): sleep(5) assert True def test_beta2(): sleep(5) assert True def test_beta3(): sleep(5) assert True def test_beta4(): sleep(5) assert True def test_beta5(): sleep(5) assert True def test_beta6(): sleep(5) assert True def test_beta7(): sleep(5) assert True def test_beta8(): sleep(5) assert True def test_beta9(): sleep(5) assert True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/example/loadscope/test/test_delta.py0000644000175100001770000000257314523717146022712 0ustar00runnerdockerfrom time import sleep from unittest import TestCase class Delta1(TestCase): def test_delta0(self): sleep(5) assert True def test_delta1(self): sleep(5) assert True def test_delta2(self): sleep(5) assert True def test_delta3(self): sleep(5) assert True def test_delta4(self): sleep(5) assert True def test_delta5(self): sleep(5) assert True def test_delta6(self): sleep(5) assert True def test_delta7(self): sleep(5) assert True def test_delta8(self): sleep(5) assert True def test_delta9(self): sleep(5) assert True class Delta2(TestCase): def test_delta0(self): sleep(5) assert True def test_delta1(self): sleep(5) assert True def test_delta2(self): sleep(5) assert True def test_delta3(self): sleep(5) assert True def test_delta4(self): sleep(5) assert True def test_delta5(self): sleep(5) assert True def test_delta6(self): sleep(5) assert True def test_delta7(self): sleep(5) assert True def test_delta8(self): sleep(5) assert True def test_delta9(self): sleep(5) assert True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/example/loadscope/test/test_gamma.py0000644000175100001770000000101314523717146022667 0ustar00runnerdockerfrom time import sleep def test_gamma0(): sleep(5) assert True def test_gamma1(): sleep(5) assert True def test_gamma2(): sleep(5) assert True def test_gamma3(): sleep(5) assert True def test_gamma4(): sleep(5) assert True def test_gamma5(): sleep(5) assert True def test_gamma6(): sleep(5) assert True def test_gamma7(): sleep(5) assert True def test_gamma8(): sleep(5) assert True def test_gamma9(): sleep(5) assert True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/example/loadscope/tox.ini0000644000175100001770000000055514523717146020542 0ustar00runnerdocker[tox] envlist = test setupdir = {toxinidir}/../../ [testenv:test] basepython = python3 passenv = http_proxy https_proxy deps = -rrequirements.txt changedir = {envtmpdir} commands = pytest -s -v \ --doctest-modules \ --junitxml=tests.xml \ --dist=loadscope \ --tx=8*popen \ {toxinidir}/test \ 
{toxinidir}/epsilon ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/pyproject.toml0000644000175100001770000000141514523717146016533 0ustar00runnerdocker[build-system] requires = [ "setuptools>=45.0", "setuptools-scm[toml]>=6.2.3", "wheel", ] build-backend = "setuptools.build_meta" [tool.setuptools_scm] write_to = "src/xdist/_version.py" [tool.towncrier] package = "xdist" filename = "CHANGELOG.rst" directory = "changelog/" title_format = "pytest-xdist {version} ({project_date})" template = "changelog/_template.rst" [tool.towncrier.fragment.removal] name = "Removals" [tool.towncrier.fragment.deprecation] name = "Deprecations" [tool.towncrier.fragment.feature] name = "Features" [tool.towncrier.fragment.bugfix] name = "Bug Fixes" [tool.towncrier.fragment.vendor] name = "Vendored Libraries" [tool.towncrier.fragment.doc] name = "Improved Documentation" [tool.towncrier.fragment.trivial] name = "Trivial Changes" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1699716733.6995263 pytest-xdist-3.4.0/setup.cfg0000644000175100001770000000412014523717176015437 0ustar00runnerdocker[metadata] name = pytest-xdist description = pytest xdist plugin for distributed testing, most importantly across multiple CPUs long_description = file: README.rst long_description_content_type = text/x-rst license = MIT author = holger krekel and contributors author_email = pytest-dev@python.org,holger@merlinux.eu url = https://github.com/pytest-dev/pytest-xdist platforms = linux osx win32 classifiers = Development Status :: 5 - Production/Stable Framework :: Pytest Intended Audience :: Developers License :: OSI Approved :: MIT License Operating System :: POSIX Operating System :: Microsoft :: Windows Operating System :: MacOS :: MacOS X Topic :: Software Development :: Testing Topic :: Software Development :: Quality Assurance Topic :: Utilities Programming Language :: Python Programming Language :: Python :: 3 Programming Language :: Python :: 3 :: Only Programming Language :: Python :: 3.7 Programming Language :: Python :: 3.8 Programming Language :: Python :: 3.9 Programming Language :: Python :: 3.10 Programming Language :: Python :: 3.11 Programming Language :: Python :: 3.12 license_file = LICENSE project_urls = Documentation=https://pytest-xdist.readthedocs.io/en/latest Changelog=https://pytest-xdist.readthedocs.io/en/latest/changelog.html Source=https://github.com/pytest-dev/pytest-xdist Tracker=https://github.com/pytest-dev/pytest-xdist/issues [options] packages = find: package_dir = =src zip_safe = False python_requires = >=3.7 install_requires = execnet>=1.1 pytest>=6.2.0 [options.packages.find] where = src [options.entry_points] pytest11 = xdist = xdist.plugin xdist.looponfail = xdist.looponfail [options.extras_require] testing = filelock psutil = psutil>=3.0 setproctitle = setproctitle [flake8] ignore = E501, W503, E203 max-line-length = 100 [mypy] mypy_path = src disallow_any_generics = True ignore_missing_imports = True no_implicit_optional = True show_error_codes = True strict_equality = True warn_redundant_casts = True warn_return_any = True warn_unreachable = True warn_unused_configs = True [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1699716733.6915262 pytest-xdist-3.4.0/src/0000755000175100001770000000000014523717176014410 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 
mtime=1699716733.6955261 pytest-xdist-3.4.0/src/pytest_xdist.egg-info/0000755000175100001770000000000014523717176020645 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716733.0 pytest-xdist-3.4.0/src/pytest_xdist.egg-info/PKG-INFO0000644000175100001770000000602614523717175021745 0ustar00runnerdockerMetadata-Version: 2.1 Name: pytest-xdist Version: 3.4.0 Summary: pytest xdist plugin for distributed testing, most importantly across multiple CPUs Home-page: https://github.com/pytest-dev/pytest-xdist Author: holger krekel and contributors Author-email: pytest-dev@python.org,holger@merlinux.eu License: MIT Project-URL: Documentation, https://pytest-xdist.readthedocs.io/en/latest Project-URL: Changelog, https://pytest-xdist.readthedocs.io/en/latest/changelog.html Project-URL: Source, https://github.com/pytest-dev/pytest-xdist Project-URL: Tracker, https://github.com/pytest-dev/pytest-xdist/issues Platform: linux Platform: osx Platform: win32 Classifier: Development Status :: 5 - Production/Stable Classifier: Framework :: Pytest Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: POSIX Classifier: Operating System :: Microsoft :: Windows Classifier: Operating System :: MacOS :: MacOS X Classifier: Topic :: Software Development :: Testing Classifier: Topic :: Software Development :: Quality Assurance Classifier: Topic :: Utilities Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Requires-Python: >=3.7 Description-Content-Type: text/x-rst License-File: LICENSE Requires-Dist: execnet>=1.1 Requires-Dist: pytest>=6.2.0 Provides-Extra: testing Requires-Dist: filelock; extra == "testing" Provides-Extra: psutil Requires-Dist: psutil>=3.0; extra == "psutil" Provides-Extra: setproctitle Requires-Dist: setproctitle; extra == "setproctitle" ============ pytest-xdist ============ .. image:: http://img.shields.io/pypi/v/pytest-xdist.svg :alt: PyPI version :target: https://pypi.python.org/pypi/pytest-xdist .. image:: https://img.shields.io/conda/vn/conda-forge/pytest-xdist.svg :target: https://anaconda.org/conda-forge/pytest-xdist .. image:: https://img.shields.io/pypi/pyversions/pytest-xdist.svg :alt: Python versions :target: https://pypi.python.org/pypi/pytest-xdist .. image:: https://github.com/pytest-dev/pytest-xdist/workflows/test/badge.svg :target: https://github.com/pytest-dev/pytest-xdist/actions .. image:: https://img.shields.io/badge/code%20style-black-000000.svg :target: https://github.com/ambv/black The `pytest-xdist`_ plugin extends pytest with new test execution modes, the most used being distributing tests across multiple CPUs to speed up test execution:: pytest -n auto With this call, pytest will spawn a number of workers processes equal to the number of available CPUs, and distribute the tests randomly across them. Documentation ============= Documentation is available at `Read The Docs `__. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716733.0 pytest-xdist-3.4.0/src/pytest_xdist.egg-info/SOURCES.txt0000644000175100001770000000302014523717175022523 0ustar00runnerdocker.readthedocs.yaml CHANGELOG.rst LICENSE MANIFEST.in README.rst RELEASING.rst pyproject.toml setup.cfg tox.ini changelog/_template.rst docs/.gitignore docs/changelog.rst docs/conf.py docs/crash.rst docs/distribution.rst docs/how-it-works.rst docs/how-to.rst docs/index.rst docs/known-limitations.rst docs/remote.rst docs/requirements.txt docs/subprocess.rst example/boxed.txt example/loadscope/requirements.txt example/loadscope/tox.ini example/loadscope/epsilon/__init__.py example/loadscope/test/test_alpha.py example/loadscope/test/test_beta.py example/loadscope/test/test_delta.py example/loadscope/test/test_gamma.py src/pytest_xdist.egg-info/PKG-INFO src/pytest_xdist.egg-info/SOURCES.txt src/pytest_xdist.egg-info/dependency_links.txt src/pytest_xdist.egg-info/entry_points.txt src/pytest_xdist.egg-info/not-zip-safe src/pytest_xdist.egg-info/requires.txt src/pytest_xdist.egg-info/top_level.txt src/xdist/__init__.py src/xdist/_path.py src/xdist/_version.py src/xdist/dsession.py src/xdist/looponfail.py src/xdist/newhooks.py src/xdist/plugin.py src/xdist/remote.py src/xdist/report.py src/xdist/workermanage.py src/xdist/scheduler/__init__.py src/xdist/scheduler/each.py src/xdist/scheduler/load.py src/xdist/scheduler/loadfile.py src/xdist/scheduler/loadgroup.py src/xdist/scheduler/loadscope.py src/xdist/scheduler/worksteal.py testing/acceptance_test.py testing/conftest.py testing/test_dsession.py testing/test_looponfail.py testing/test_newhooks.py testing/test_plugin.py testing/test_remote.py testing/test_workermanage.py testing/util.py././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716733.0 pytest-xdist-3.4.0/src/pytest_xdist.egg-info/dependency_links.txt0000644000175100001770000000000114523717175024712 0ustar00runnerdocker ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716733.0 pytest-xdist-3.4.0/src/pytest_xdist.egg-info/entry_points.txt0000644000175100001770000000010414523717175024135 0ustar00runnerdocker[pytest11] xdist = xdist.plugin xdist.looponfail = xdist.looponfail ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716733.0 pytest-xdist-3.4.0/src/pytest_xdist.egg-info/not-zip-safe0000644000175100001770000000000114523717175023072 0ustar00runnerdocker ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716733.0 pytest-xdist-3.4.0/src/pytest_xdist.egg-info/requires.txt0000644000175100001770000000014214523717175023241 0ustar00runnerdockerexecnet>=1.1 pytest>=6.2.0 [psutil] psutil>=3.0 [setproctitle] setproctitle [testing] filelock ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716733.0 pytest-xdist-3.4.0/src/pytest_xdist.egg-info/top_level.txt0000644000175100001770000000000614523717175023372 0ustar00runnerdockerxdist ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1699716733.6955261 pytest-xdist-3.4.0/src/xdist/0000755000175100001770000000000014523717176015543 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/__init__.py0000644000175100001770000000046114523717146017652 0ustar00runnerdockerfrom xdist.plugin import ( is_xdist_worker, is_xdist_master, get_xdist_worker_id, 
is_xdist_controller, ) from xdist._version import version as __version__ __all__ = [ "__version__", "is_xdist_worker", "is_xdist_master", "is_xdist_controller", "get_xdist_worker_id", ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/_path.py0000644000175100001770000000117714523717146017213 0ustar00runnerdockerimport os from itertools import chain from pathlib import Path from typing import Callable, Iterator def visit_path( path: Path, *, filter: Callable[[Path], bool], recurse: Callable[[Path], bool] ) -> Iterator[Path]: """ Implements the interface of ``py.path.local.visit()`` for Path objects, to simplify porting the code over from ``py.path.local``. """ for dirpath, dirnames, filenames in os.walk(path): dirnames[:] = [x for x in dirnames if recurse(Path(dirpath, x))] for name in chain(dirnames, filenames): p = Path(dirpath, name) if filter(p): yield p ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716733.0 pytest-xdist-3.4.0/src/xdist/_version.py0000644000175100001770000000063314523717175017742 0ustar00runnerdocker# file generated by setuptools_scm # don't change, don't track in version control TYPE_CHECKING = False if TYPE_CHECKING: from typing import Tuple, Union VERSION_TUPLE = Tuple[Union[int, str], ...] else: VERSION_TUPLE = object version: str __version__: str __version_tuple__: VERSION_TUPLE version_tuple: VERSION_TUPLE __version__ = version = '3.4.0' __version_tuple__ = version_tuple = (3, 4, 0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/dsession.py0000644000175100001770000004740014523717146017746 0ustar00runnerdockerfrom __future__ import annotations import sys from enum import Enum, auto from typing import Sequence import pytest from xdist.remote import Producer from xdist.workermanage import NodeManager from xdist.scheduler import ( EachScheduling, LoadScheduling, LoadScopeScheduling, LoadFileScheduling, LoadGroupScheduling, WorkStealingScheduling, ) from queue import Empty, Queue class Interrupted(KeyboardInterrupt): """signals an immediate interruption.""" class DSession: """A pytest plugin which runs a distributed test session At the beginning of the test session this creates a NodeManager instance which creates and starts all nodes. Nodes then emit events processed in the pytest_runtestloop hook using the worker_* methods. Once a node is started it will automatically start running the pytest mainloop with some custom hooks. This means a node automatically starts collecting tests. Once tests are collected it will wait for instructions. 
""" def __init__(self, config): self.config = config self.log = Producer("dsession", enabled=config.option.debug) self.nodemanager = None self.sched = None self.shuttingdown = False self.countfailures = 0 self.maxfail = config.getvalue("maxfail") self.queue = Queue() self._session = None self._failed_collection_errors = {} self._active_nodes = set() self._failed_nodes_count = 0 self._max_worker_restart = get_default_max_worker_restart(self.config) # summary message to print at the end of the session self._summary_report = None self.terminal = config.pluginmanager.getplugin("terminalreporter") if self.terminal: self.trdist = TerminalDistReporter(config) config.pluginmanager.register(self.trdist, "terminaldistreporter") @property def session_finished(self): """Return True if the distributed session has finished This means all nodes have executed all test items. This is used by pytest_runtestloop to break out of its loop. """ return bool(self.shuttingdown and not self._active_nodes) def report_line(self, line): if self.terminal and self.config.option.verbose >= 0: self.terminal.write_line(line) @pytest.hookimpl(trylast=True) def pytest_sessionstart(self, session): """Creates and starts the nodes. The nodes are setup to put their events onto self.queue. As soon as nodes start they will emit the worker_workerready event. """ self.nodemanager = NodeManager(self.config) nodes = self.nodemanager.setup_nodes(putevent=self.queue.put) self._active_nodes.update(nodes) self._session = session @pytest.hookimpl def pytest_sessionfinish(self, session): """Shutdown all nodes.""" nm = getattr(self, "nodemanager", None) # if not fully initialized if nm is not None: nm.teardown_nodes() self._session = None @pytest.hookimpl def pytest_collection(self): # prohibit collection of test items in controller process return True @pytest.hookimpl(trylast=True) def pytest_xdist_make_scheduler(self, config, log): dist = config.getvalue("dist") schedulers = { "each": EachScheduling, "load": LoadScheduling, "loadscope": LoadScopeScheduling, "loadfile": LoadFileScheduling, "loadgroup": LoadGroupScheduling, "worksteal": WorkStealingScheduling, } return schedulers[dist](config, log) @pytest.hookimpl def pytest_runtestloop(self): self.sched = self.config.hook.pytest_xdist_make_scheduler( config=self.config, log=self.log ) assert self.sched is not None self.shouldstop = False pending_exception = None while not self.session_finished: self.loop_once() if self.shouldstop: self.triggershutdown() pending_exception = Interrupted(str(self.shouldstop)) if pending_exception: raise pending_exception return True def loop_once(self): """Process one callback from one of the workers.""" while 1: if not self._active_nodes: # If everything has died stop looping self.triggershutdown() raise RuntimeError("Unexpectedly no active workers available") try: eventcall = self.queue.get(timeout=2.0) break except Empty: continue callname, kwargs = eventcall assert callname, kwargs method = "worker_" + callname call = getattr(self, method) self.log("calling method", method, kwargs) call(**kwargs) if self.sched.tests_finished: self.triggershutdown() # # callbacks for processing events from workers # def worker_workerready(self, node, workerinfo): """Emitted when a node first starts up. This adds the node to the scheduler, nodes continue with collection without any further input. 
""" node.workerinfo = workerinfo node.workerinfo["id"] = node.gateway.id node.workerinfo["spec"] = node.gateway.spec self.config.hook.pytest_testnodeready(node=node) if self.shuttingdown: node.shutdown() else: self.sched.add_node(node) def worker_workerfinished(self, node): """Emitted when node executes its pytest_sessionfinish hook. Removes the node from the scheduler. The node might not be in the scheduler if it had not emitted workerready before shutdown was triggered. """ self.config.hook.pytest_testnodedown(node=node, error=None) if node.workeroutput["exitstatus"] == 2: # keyboard-interrupt self.shouldstop = f"{node} received keyboard-interrupt" self.worker_errordown(node, "keyboard-interrupt") return if node in self.sched.nodes: crashitem = self.sched.remove_node(node) assert not crashitem, (crashitem, node) self._active_nodes.remove(node) def worker_internal_error(self, node, formatted_error): """ pytest_internalerror() was called on the worker. pytest_internalerror() arguments are an excinfo and an excrepr, which can't be serialized, so we go with a poor man's solution of raising an exception here ourselves using the formatted message. """ self._active_nodes.remove(node) try: assert False, formatted_error except AssertionError: from _pytest._code import ExceptionInfo excinfo = ExceptionInfo.from_current() excrepr = excinfo.getrepr() self.config.hook.pytest_internalerror(excrepr=excrepr, excinfo=excinfo) def worker_errordown(self, node, error): """Emitted by the WorkerController when a node dies.""" self.config.hook.pytest_testnodedown(node=node, error=error) try: crashitem = self.sched.remove_node(node) except KeyError: pass else: if crashitem: self.handle_crashitem(crashitem, node) self._failed_nodes_count += 1 maximum_reached = ( self._max_worker_restart is not None and self._failed_nodes_count > self._max_worker_restart ) if maximum_reached: if self._max_worker_restart == 0: msg = "worker {} crashed and worker restarting disabled".format( node.gateway.id ) else: msg = "maximum crashed workers reached: %d" % self._max_worker_restart self._summary_report = msg self.report_line("\n" + msg) self.triggershutdown() else: self.report_line("\nreplacing crashed worker %s" % node.gateway.id) self.shuttingdown = False self._clone_node(node) self._active_nodes.remove(node) @pytest.hookimpl def pytest_terminal_summary(self, terminalreporter): if self.config.option.verbose >= 0 and self._summary_report: terminalreporter.write_sep("=", f"xdist: {self._summary_report}") def worker_collectionfinish(self, node, ids): """worker has finished test collection. This adds the collection for this node to the scheduler. If the scheduler indicates collection is finished (i.e. all initial nodes have submitted their collections), then tells the scheduler to schedule the collected items. When initiating scheduling the first time it logs which scheduler is in use. 
""" if self.shuttingdown: return self.config.hook.pytest_xdist_node_collection_finished(node=node, ids=ids) # tell session which items were effectively collected otherwise # the controller node will finish the session with EXIT_NOTESTSCOLLECTED self._session.testscollected = len(ids) self.sched.add_node_collection(node, ids) if self.terminal: self.trdist.setstatus( node.gateway.spec, WorkerStatus.CollectionDone, tests_collected=len(ids) ) if self.sched.collection_is_completed: if self.terminal and not self.sched.has_pending: self.trdist.ensure_show_status() self.terminal.write_line("") if self.config.option.verbose > 0: self.terminal.write_line( f"scheduling tests via {self.sched.__class__.__name__}" ) self.sched.schedule() def worker_logstart(self, node, nodeid, location): """Emitted when a node calls the pytest_runtest_logstart hook.""" self.config.hook.pytest_runtest_logstart(nodeid=nodeid, location=location) def worker_logfinish(self, node, nodeid, location): """Emitted when a node calls the pytest_runtest_logfinish hook.""" self.config.hook.pytest_runtest_logfinish(nodeid=nodeid, location=location) def worker_testreport(self, node, rep): """Emitted when a node calls the pytest_runtest_logreport hook.""" rep.node = node self.config.hook.pytest_runtest_logreport(report=rep) self._handlefailures(rep) def worker_runtest_protocol_complete(self, node, item_index, duration): """ Emitted when a node fires the 'runtest_protocol_complete' event, signalling that a test has completed the runtestprotocol and should be removed from the pending list in the scheduler. """ self.sched.mark_test_complete(node, item_index, duration) def worker_unscheduled(self, node, indices): """ Emitted when a node fires the 'unscheduled' event, signalling that some tests have been removed from the worker's queue and should be sent to some worker again. This should happen only in response to 'steal' command, so schedulers not using 'steal' command don't have to implement it. """ self.sched.remove_pending_tests_from_node(node, indices) def worker_collectreport(self, node, rep): """Emitted when a node calls the pytest_collectreport hook. Because we only need the report when there's a failure/skip, as optimization we only expect to receive failed/skipped reports from workers (#330). """ assert not rep.passed self._failed_worker_collectreport(node, rep) def worker_warning_captured(self, warning_message, when, item): """Emitted when a node calls the pytest_warning_captured hook (deprecated in 6.0).""" # This hook as been removed in pytest 7.1, and we can remove support once we only # support pytest >=7.1. kwargs = dict(warning_message=warning_message, when=when, item=item) self.config.hook.pytest_warning_captured.call_historic(kwargs=kwargs) def worker_warning_recorded(self, warning_message, when, nodeid, location): """Emitted when a node calls the pytest_warning_recorded hook.""" kwargs = dict( warning_message=warning_message, when=when, nodeid=nodeid, location=location ) self.config.hook.pytest_warning_recorded.call_historic(kwargs=kwargs) def _clone_node(self, node): """Return new node based on an existing one. This is normally for when a node dies, this will copy the spec of the existing node and create a new one with a new id. The new node will have been setup so it will start calling the "worker_*" hooks and do work soon. 
""" spec = node.gateway.spec spec.id = None self.nodemanager.group.allocate_id(spec) node = self.nodemanager.setup_node(spec, self.queue.put) self._active_nodes.add(node) return node def _failed_worker_collectreport(self, node, rep): # Check we haven't already seen this report (from # another worker). if rep.longrepr not in self._failed_collection_errors: self._failed_collection_errors[rep.longrepr] = True self.config.hook.pytest_collectreport(report=rep) self._handlefailures(rep) def _handlefailures(self, rep): if rep.failed: self.countfailures += 1 if ( self.maxfail and self.countfailures >= self.maxfail and not self.shouldstop ): self.shouldstop = f"stopping after {self.countfailures} failures" def triggershutdown(self): if not self.shuttingdown: self.log("triggering shutdown") self.shuttingdown = True for node in self.sched.nodes: node.shutdown() def handle_crashitem(self, nodeid, worker): # XXX get more reporting info by recording pytest_runtest_logstart? # XXX count no of failures and retry N times runner = self.config.pluginmanager.getplugin("runner") fspath = nodeid.split("::")[0] msg = f"worker {worker.gateway.id!r} crashed while running {nodeid!r}" rep = runner.TestReport( nodeid, (fspath, None, fspath), (), "failed", msg, "???" ) rep.node = worker self.config.hook.pytest_handlecrashitem( crashitem=nodeid, report=rep, sched=self.sched, ) self.config.hook.pytest_runtest_logreport(report=rep) class WorkerStatus(Enum): """Status of each worker during creation/collection.""" # Worker spec has just been created. Created = auto() # Worker has been initialized. Initialized = auto() # Worker is now ready for collection. ReadyForCollection = auto() # Worker has finished collection. CollectionDone = auto() class TerminalDistReporter: def __init__(self, config) -> None: self.config = config self.tr = config.pluginmanager.getplugin("terminalreporter") self._status: dict[str, tuple[WorkerStatus, int]] = {} self._lastlen = 0 self._isatty = getattr(self.tr, "isatty", self.tr.hasmarkup) def write_line(self, msg: str) -> None: self.tr.write_line(msg) def ensure_show_status(self) -> None: if not self._isatty: self.write_line(self.getstatus()) def setstatus( self, spec, status: WorkerStatus, *, tests_collected: int, show: bool = True ) -> None: self._status[spec.id] = (status, tests_collected) if show and self._isatty: self.rewrite(self.getstatus()) def getstatus(self) -> str: if self.config.option.verbose >= 0: line = get_workers_status_line(list(self._status.values())) if line: return line return "bringing up nodes..." 
def rewrite(self, line, newline=False): pline = line + " " * max(self._lastlen - len(line), 0) if newline: self._lastlen = 0 pline += "\n" else: self._lastlen = len(line) self.tr.rewrite(pline, bold=True) @pytest.hookimpl def pytest_xdist_setupnodes(self, specs) -> None: self._specs = specs for spec in specs: self.setstatus(spec, WorkerStatus.Created, tests_collected=0, show=False) self.setstatus(spec, WorkerStatus.Created, tests_collected=0, show=True) self.ensure_show_status() @pytest.hookimpl def pytest_xdist_newgateway(self, gateway) -> None: if self.config.option.verbose > 0: rinfo = gateway._rinfo() different_interpreter = rinfo.executable != sys.executable if different_interpreter: version = "%s.%s.%s" % rinfo.version_info[:3] self.rewrite( f"[{gateway.id}] {rinfo.platform} Python {version} cwd: {rinfo.cwd}", newline=True, ) self.setstatus(gateway.spec, WorkerStatus.Initialized, tests_collected=0) @pytest.hookimpl def pytest_testnodeready(self, node) -> None: if self.config.option.verbose > 0: d = node.workerinfo different_interpreter = d.get("executable") != sys.executable if different_interpreter: version = d["version"].replace("\n", " -- ") self.rewrite(f"[{d['id']}] Python {version}", newline=True) self.setstatus( node.gateway.spec, WorkerStatus.ReadyForCollection, tests_collected=0 ) @pytest.hookimpl def pytest_testnodedown(self, node, error) -> None: if not error: return self.write_line(f"[{node.gateway.id}] node down: {error}") def get_default_max_worker_restart(config): """gets the default value of --max-worker-restart option if it is not provided. Use a reasonable default to avoid workers from restarting endlessly due to crashing collections (#226). """ result = config.option.maxworkerrestart if result is not None: result = int(result) elif config.option.numprocesses: # if --max-worker-restart was not provided, use a reasonable default (#226) result = config.option.numprocesses * 4 return result def get_workers_status_line( status_and_items: Sequence[tuple[WorkerStatus, int]] ) -> str: """ Return the line to display during worker setup/collection based on the status of the workers and number of tests collected for each. """ statuses = [s for s, c in status_and_items] total_workers = len(statuses) workers_noun = "worker" if total_workers == 1 else "workers" if status_and_items and all(s == WorkerStatus.CollectionDone for s in statuses): # All workers collect the same number of items, so we grab # the total number of items from the first worker. 
first = status_and_items[0] status, tests_collected = first tests_noun = "item" if tests_collected == 1 else "items" return f"{total_workers} {workers_noun} [{tests_collected} {tests_noun}]" if WorkerStatus.CollectionDone in statuses: done = sum(1 for s, c in status_and_items if c > 0) return f"collecting: {done}/{total_workers} {workers_noun}" if WorkerStatus.ReadyForCollection in statuses: ready = statuses.count(WorkerStatus.ReadyForCollection) return f"ready: {ready}/{total_workers} {workers_noun}" if WorkerStatus.Initialized in statuses: initialized = statuses.count(WorkerStatus.Initialized) return f"initialized: {initialized}/{total_workers} {workers_noun}" if WorkerStatus.Created in statuses: created = statuses.count(WorkerStatus.Created) return f"created: {created}/{total_workers} {workers_noun}" return "" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/looponfail.py0000644000175100001770000002221514523717146020256 0ustar00runnerdocker""" Implement -f aka looponfailing for pytest. NOTE that we try to avoid loading and depending on application modules within the controlling process (the one that starts repeatedly test processes) otherwise changes to source code can crash the controlling process which should best never happen. """ import os from pathlib import Path from typing import Dict, Sequence import pytest import sys import time import execnet from _pytest._io import TerminalWriter from xdist._path import visit_path @pytest.hookimpl def pytest_addoption(parser): group = parser.getgroup("xdist", "distributed and subprocess testing") group._addoption( "-f", "--looponfail", action="store_true", dest="looponfail", default=False, help="run tests in subprocess, wait for modified files " "and re-run failing test set until all pass.", ) @pytest.hookimpl def pytest_cmdline_main(config): if config.getoption("looponfail"): usepdb = config.getoption("usepdb", False) # a core option if usepdb: raise pytest.UsageError("--pdb is incompatible with --looponfail.") looponfail_main(config) return 2 # looponfail only can get stop with ctrl-C anyway def looponfail_main(config: "pytest.Config") -> None: remotecontrol = RemoteControl(config) config_roots = config.getini("looponfailroots") if not config_roots: config_roots = [Path.cwd()] rootdirs = [Path(root) for root in config_roots] statrecorder = StatRecorder(rootdirs) try: while 1: remotecontrol.loop_once() if not remotecontrol.failures and remotecontrol.wasfailing: # the last failures passed, let's immediately rerun all continue repr_pytest_looponfailinfo( failreports=remotecontrol.failures, rootdirs=rootdirs ) statrecorder.waitonchange(checkinterval=2.0) except KeyboardInterrupt: print() class RemoteControl: def __init__(self, config): self.config = config self.failures = [] def trace(self, *args): if self.config.option.debug: msg = " ".join(str(x) for x in args) print("RemoteControl:", msg) def initgateway(self): return execnet.makegateway("popen") def setup(self, out=None): if out is None: out = TerminalWriter() if hasattr(self, "gateway"): raise ValueError("already have gateway %r" % self.gateway) self.trace("setting up worker session") self.gateway = self.initgateway() self.channel = channel = self.gateway.remote_exec( init_worker_session, args=self.config.args, option_dict=vars(self.config.option), ) remote_outchannel = channel.receive() def write(s): out._file.write(s) out._file.flush() remote_outchannel.setcallback(write) def ensure_teardown(self): if hasattr(self, 
"channel"): if not self.channel.isclosed(): self.trace("closing", self.channel) self.channel.close() del self.channel if hasattr(self, "gateway"): self.trace("exiting", self.gateway) self.gateway.exit() del self.gateway def runsession(self): try: self.trace("sending", self.failures) self.channel.send(self.failures) try: return self.channel.receive() except self.channel.RemoteError: e = sys.exc_info()[1] self.trace("ERROR", e) raise finally: self.ensure_teardown() def loop_once(self): self.setup() self.wasfailing = self.failures and len(self.failures) result = self.runsession() failures, reports, collection_failed = result if collection_failed: pass # "Collection failed, keeping previous failure set" else: uniq_failures = [] for failure in failures: if failure not in uniq_failures: uniq_failures.append(failure) self.failures = uniq_failures def repr_pytest_looponfailinfo(failreports, rootdirs): tr = TerminalWriter() if failreports: tr.sep("#", "LOOPONFAILING", bold=True) for report in failreports: if report: tr.line(report, red=True) tr.sep("#", "waiting for changes", bold=True) for rootdir in rootdirs: tr.line(f"### Watching: {rootdir}", bold=True) def init_worker_session(channel, args, option_dict): import os import sys outchannel = channel.gateway.newchannel() sys.stdout = sys.stderr = outchannel.makefile("w") channel.send(outchannel) # prune sys.path to not contain relative paths newpaths = [] for p in sys.path: if p: if not os.path.isabs(p): p = os.path.abspath(p) newpaths.append(p) sys.path[:] = newpaths # fullwidth, hasmarkup = channel.receive() from _pytest.config import Config config = Config.fromdictargs(option_dict, list(args)) config.args = args from xdist.looponfail import WorkerFailSession WorkerFailSession(config, channel).main() class WorkerFailSession: def __init__(self, config, channel): self.config = config self.channel = channel self.recorded_failures = [] self.collection_failed = False config.pluginmanager.register(self) config.option.looponfail = False config.option.usepdb = False def DEBUG(self, *args): if self.config.option.debug: print(" ".join(map(str, args))) @pytest.hookimpl def pytest_collection(self, session): self.session = session self.trails = self.current_command hook = self.session.ihook try: items = session.perform_collect(self.trails or None) except pytest.UsageError: items = session.perform_collect(None) hook.pytest_collection_modifyitems( session=session, config=session.config, items=items ) hook.pytest_collection_finish(session=session) return True @pytest.hookimpl def pytest_runtest_logreport(self, report): if report.failed: self.recorded_failures.append(report) @pytest.hookimpl def pytest_collectreport(self, report): if report.failed: self.recorded_failures.append(report) self.collection_failed = True def main(self): self.DEBUG("WORKER: received configuration, waiting for command trails") try: command = self.channel.receive() except KeyboardInterrupt: return # in the worker we can't do much about this self.DEBUG("received", command) self.current_command = command self.config.hook.pytest_cmdline_main(config=self.config) trails, failreports = [], [] for rep in self.recorded_failures: trails.append(rep.nodeid) loc = rep.longrepr loc = str(getattr(loc, "reprcrash", loc)) failreports.append(loc) self.channel.send((trails, failreports, self.collection_failed)) class StatRecorder: def __init__(self, rootdirlist: Sequence[Path]) -> None: self.rootdirlist = rootdirlist self.statcache: Dict[Path, os.stat_result] = {} self.check() # snapshot state def 
fil(self, p: Path) -> bool: return p.is_file() and not p.name.startswith(".") and p.suffix != ".pyc" def rec(self, p: Path) -> bool: return not p.name.startswith(".") and p.exists() def waitonchange(self, checkinterval=1.0): while 1: changed = self.check() if changed: return time.sleep(checkinterval) def check(self, removepycfiles: bool = True) -> bool: # noqa, too complex changed = False newstat: Dict[Path, os.stat_result] = {} for rootdir in self.rootdirlist: for path in visit_path(rootdir, filter=self.fil, recurse=self.rec): oldstat = self.statcache.pop(path, None) try: curstat = path.stat() except OSError: if oldstat: changed = True else: newstat[path] = curstat if oldstat is not None: if ( oldstat.st_mtime != curstat.st_mtime or oldstat.st_size != curstat.st_size ): changed = True print("# MODIFIED", path) if removepycfiles and path.suffix == ".py": pycfile = path.with_suffix(".pyc") if pycfile.is_file(): os.unlink(pycfile) else: changed = True if self.statcache: changed = True self.statcache = newstat return changed ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/newhooks.py0000644000175100001770000000515314523717146017753 0ustar00runnerdocker""" xdist hooks. Additionally, pytest-xdist will also decorate a few other hooks with the worker instance that executed the hook originally: ``pytest_runtest_logreport``: ``rep`` parameter has a ``node`` attribute. You can use this hooks just as you would use normal pytest hooks, but some care must be taken in plugins in case ``xdist`` is not installed. Please see: http://pytest.org/en/latest/writing_plugins.html#optionally-using-hooks-from-3rd-party-plugins """ import pytest @pytest.hookspec() def pytest_xdist_setupnodes(config, specs): """called before any remote node is set up.""" @pytest.hookspec() def pytest_xdist_newgateway(gateway): """called on new raw gateway creation.""" @pytest.hookspec( warn_on_impl=DeprecationWarning( "rsync feature is deprecated and will be removed in pytest-xdist 4.0" ) ) def pytest_xdist_rsyncstart(source, gateways): """called before rsyncing a directory to remote gateways takes place.""" @pytest.hookspec( warn_on_impl=DeprecationWarning( "rsync feature is deprecated and will be removed in pytest-xdist 4.0" ) ) def pytest_xdist_rsyncfinish(source, gateways): """called after rsyncing a directory to remote gateways takes place.""" @pytest.hookspec(firstresult=True) def pytest_xdist_getremotemodule(): """called when creating remote node""" @pytest.hookspec() def pytest_configure_node(node): """configure node information before it gets instantiated.""" @pytest.hookspec() def pytest_testnodeready(node): """Test Node is ready to operate.""" @pytest.hookspec() def pytest_testnodedown(node, error): """Test Node is down.""" @pytest.hookspec() def pytest_xdist_node_collection_finished(node, ids): """called by the controller node when a worker node finishes collecting.""" @pytest.hookspec(firstresult=True) def pytest_xdist_make_scheduler(config, log): """return a node scheduler implementation""" @pytest.hookspec(firstresult=True) def pytest_xdist_auto_num_workers(config): """ Return the number of workers to spawn when ``--numprocesses=auto`` is given in the command-line. .. versionadded:: 2.1 """ @pytest.hookspec(firstresult=True) def pytest_handlecrashitem(crashitem, report, sched): """ Handle a crashitem, modifying the report if necessary. The scheduler is provided as a parameter to reschedule the test if desired with `sched.mark_test_pending`. 
def pytest_handlecrashitem(crashitem, report, sched): if should_rerun(crashitem): sched.mark_test_pending(crashitem) report.outcome = "rerun" .. versionadded:: 2.2.1 """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/plugin.py0000644000175100001770000002645114523717146017420 0ustar00runnerdockerimport os import uuid import sys import warnings import pytest PYTEST_GTE_7 = hasattr(pytest, "version_tuple") and pytest.version_tuple >= (7, 0) # type: ignore[attr-defined] _sys_path = list(sys.path) # freeze a copy of sys.path at interpreter startup @pytest.hookimpl def pytest_xdist_auto_num_workers(config): env_var = os.environ.get("PYTEST_XDIST_AUTO_NUM_WORKERS") if env_var: try: return int(env_var) except ValueError: warnings.warn( "PYTEST_XDIST_AUTO_NUM_WORKERS is not a number: {env_var!r}. Ignoring it." ) try: import psutil except ImportError: pass else: use_logical = config.option.numprocesses == "logical" count = psutil.cpu_count(logical=use_logical) or psutil.cpu_count() if count: return count try: from os import sched_getaffinity def cpu_count(): return len(sched_getaffinity(0)) except ImportError: if os.environ.get("TRAVIS") == "true": # workaround https://bitbucket.org/pypy/pypy/issues/2375 return 2 try: from os import cpu_count except ImportError: from multiprocessing import cpu_count try: n = cpu_count() except NotImplementedError: return 1 return n if n else 1 def parse_numprocesses(s): if s in ("auto", "logical"): return s elif s is not None: return int(s) @pytest.hookimpl def pytest_addoption(parser): group = parser.getgroup("xdist", "distributed and subprocess testing") group._addoption( "-n", "--numprocesses", dest="numprocesses", metavar="numprocesses", action="store", type=parse_numprocesses, help="Shortcut for '--dist=load --tx=NUM*popen'. With 'auto', attempt " "to detect physical CPU count. With 'logical', detect logical CPU " "count. If physical CPU count cannot be found, falls back to logical " "count. This will be 0 when used with --pdb.", ) group.addoption( "--maxprocesses", dest="maxprocesses", metavar="maxprocesses", action="store", type=int, help="limit the maximum number of workers to process the tests when using --numprocesses=auto", ) group.addoption( "--max-worker-restart", action="store", default=None, dest="maxworkerrestart", help="maximum number of workers that can be restarted " "when crashed (set to zero to disable this feature)", ) group.addoption( "--dist", metavar="distmode", action="store", choices=[ "each", "load", "loadscope", "loadfile", "loadgroup", "worksteal", "no", ], dest="dist", default="no", help=( "set mode for distributing tests to exec environments.\n\n" "each: send each test to all available environments.\n\n" "load: load balance by sending any pending test to any" " available environment.\n\n" "loadscope: load balance by sending pending groups of tests in" " the same scope to any available environment.\n\n" "loadfile: load balance by sending test grouped by file" " to any available environment.\n\n" "loadgroup: like load, but sends tests marked with 'xdist_group' to the same worker.\n\n" "worksteal: split the test suite between available environments," " then rebalance when any worker runs out of tests.\n\n" "(default) no: run tests inprocess, don't distribute." ), ) group.addoption( "--tx", dest="tx", action="append", default=[], metavar="xspec", help=( "add a test execution environment. 
some examples: " "--tx popen//python=python2.5 --tx socket=192.168.1.102:8888 " "--tx ssh=user@codespeak.net//chdir=testcache" ), ) group._addoption( "-d", action="store_true", dest="distload", default=False, help="load-balance tests. shortcut for '--dist=load'", ) group.addoption( "--rsyncdir", action="append", default=[], metavar="DIR", help="add directory for rsyncing to remote tx nodes.", ) group.addoption( "--rsyncignore", action="append", default=[], metavar="GLOB", help="add expression for ignores when rsyncing to remote tx nodes.", ) group.addoption( "--testrunuid", action="store", help=( "provide an identifier shared amongst all workers as the value of " "the 'testrun_uid' fixture,\n\n," "if not provided, 'testrun_uid' is filled with a new unique string " "on every test run." ), ) group.addoption( "--maxschedchunk", action="store", type=int, help=( "Maximum number of tests scheduled in one step for --dist=load. " "Setting it to 1 will force pytest to send tests to workers one by " "one - might be useful for a small number of slow tests. " "Larger numbers will allow the scheduler to submit consecutive " "chunks of tests to workers - allows reusing fixtures. " "Due to implementation reasons, at least 2 tests are scheduled per " "worker at the start. Only later tests can be scheduled one by one. " "Unlimited if not set." ), ) parser.addini( "rsyncdirs", "list of (relative) paths to be rsynced for remote distributed testing.", type="paths" if PYTEST_GTE_7 else "pathlist", ) parser.addini( "rsyncignore", "list of (relative) glob-style paths to be ignored for rsyncing.", type="paths" if PYTEST_GTE_7 else "pathlist", ) parser.addini( "looponfailroots", type="paths" if PYTEST_GTE_7 else "pathlist", help="directories to check for changes. Default: current directory.", ) # ------------------------------------------------------------------------- # distributed testing hooks # ------------------------------------------------------------------------- @pytest.hookimpl def pytest_addhooks(pluginmanager): from xdist import newhooks pluginmanager.add_hookspecs(newhooks) # ------------------------------------------------------------------------- # distributed testing initialization # ------------------------------------------------------------------------- @pytest.hookimpl(trylast=True) def pytest_configure(config): config_line = ( "xdist_group: specify group for tests should run in same session." "in relation to one another. Provided by pytest-xdist." ) config.addinivalue_line("markers", config_line) # Skip this plugin entirely when only doing collection. if config.getvalue("collectonly"): return # Create the distributed session in case we have a valid distribution # mode and test environments. if config.getoption("dist") != "no" and config.getoption("tx"): from xdist.dsession import DSession session = DSession(config) config.pluginmanager.register(session, "dsession") tr = config.pluginmanager.getplugin("terminalreporter") if tr: tr.showfspath = False # Deprecation warnings for deprecated command-line/configuration options. if config.getoption("looponfail", None) or config.getini("looponfailroots"): warning = DeprecationWarning( "The --looponfail command line argument and looponfailroots config variable are deprecated.\n" "The loop-on-fail feature will be removed in pytest-xdist 4.0." 
        )
        config.issue_config_time_warning(warning, 2)
    if config.getoption("rsyncdir", None) or config.getini("rsyncdirs"):
        warning = DeprecationWarning(
            "The --rsyncdir command line argument and rsyncdirs config variable are deprecated.\n"
            "The rsync feature will be removed in pytest-xdist 4.0."
        )
        config.issue_config_time_warning(warning, 2)


@pytest.hookimpl(tryfirst=True)
def pytest_cmdline_main(config):
    usepdb = config.getoption("usepdb", False)  # a core option
    if config.option.numprocesses in ("auto", "logical"):
        if usepdb:
            config.option.numprocesses = 0
            config.option.dist = "no"
        else:
            auto_num_cpus = config.hook.pytest_xdist_auto_num_workers(config=config)
            config.option.numprocesses = auto_num_cpus
    if config.option.numprocesses:
        if config.option.dist == "no":
            config.option.dist = "load"
        numprocesses = config.option.numprocesses
        if config.option.maxprocesses:
            numprocesses = min(numprocesses, config.option.maxprocesses)
        config.option.tx = ["popen"] * numprocesses
    if config.option.distload:
        config.option.dist = "load"
    val = config.getvalue
    if not val("collectonly") and val("dist") != "no" and usepdb:
        raise pytest.UsageError(
            "--pdb is incompatible with distributing tests; try using -n0 or -nauto."
        )  # noqa: E501


# -------------------------------------------------------------------------
# fixtures and API to easily know the role of current node
# -------------------------------------------------------------------------


def is_xdist_worker(request_or_session) -> bool:
    """Return `True` if this is an xdist worker, `False` otherwise.

    :param request_or_session: the `pytest` `request` or `session` object
    """
    return hasattr(request_or_session.config, "workerinput")


def is_xdist_controller(request_or_session) -> bool:
    """Return `True` if this is the xdist controller, `False` otherwise.

    Note: this method also returns `False` when distribution has not been
    activated at all.

    :param request_or_session: the `pytest` `request` or `session` object
    """
    return (
        not is_xdist_worker(request_or_session)
        and request_or_session.config.option.dist != "no"
    )


# ALIAS: TODO, deprecate (#592)
is_xdist_master = is_xdist_controller


def get_xdist_worker_id(request_or_session):
    """Return the id of the current worker ('gw0', 'gw1', etc) or 'master'
    if running on the controller node.

    If not distributing tests (for example passing `-n0` or not passing `-n` at all)
    also return 'master'.

    :param request_or_session: the `pytest` `request` or `session` object
    """
    if hasattr(request_or_session.config, "workerinput"):
        return request_or_session.config.workerinput["workerid"]
    else:
        # TODO: remove "master", ideally for a None
        return "master"


@pytest.fixture(scope="session")
def worker_id(request):
    """Return the id of the current worker ('gw0', 'gw1', etc) or 'master'
    if running on the controller node.
    """
    # TODO: remove "master", ideally for a None
    return get_xdist_worker_id(request)


@pytest.fixture(scope="session")
def testrun_uid(request):
    """Return the unique id of the current test run."""
    if hasattr(request.config, "workerinput"):
        return request.config.workerinput["testrunuid"]
    else:
        return uuid.uuid4().hex
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/remote.py0000644000175100001770000002750414523717146017415 0ustar00runnerdocker
"""
This module is executed in remote subprocesses and helps to
control a remote testing session and relay back information.
It assumes that 'py' is importable and does not have dependencies on
the rest of the xdist code.
This means that the xdist plugin need not be installed in remote environments.
"""

import contextlib
import sys
import os
import time
from typing import Any

import pytest
from execnet.gateway_base import dumps, DumpError

from _pytest.config import _prepareconfig, Config

try:
    from setproctitle import setproctitle
except ImportError:

    def setproctitle(title):
        pass


class Producer:
    """
    Simplified implementation of the same interface as py.log, for backward
    compatibility since we dropped the dependency on pylib.

    Note: this is defined here because this module can't depend on xdist,
    so the dependency needs to point the other way around.
    """

    def __init__(self, name: str, *, enabled: bool = True):
        self.name = name
        self.enabled = enabled

    def __repr__(self) -> str:
        return f"{type(self).__name__}({self.name!r}, enabled={self.enabled})"

    def __call__(self, *a: Any, **k: Any) -> None:
        if self.enabled:
            print(f"[{self.name}]", *a, **k, file=sys.stderr)

    def __getattr__(self, name: str) -> "Producer":
        return type(self)(name, enabled=self.enabled)


def worker_title(title):
    try:
        setproctitle(title)
    except Exception:
        # changing the process name is very optional, no errors please
        pass


class WorkerInteractor:
    SHUTDOWN_MARK = object()
    QUEUE_REPLACED_MARK = object()

    def __init__(self, config, channel):
        self.config = config
        self.workerid = config.workerinput.get("workerid", "?")
        self.testrunuid = config.workerinput["testrunuid"]
        self.log = Producer(f"worker-{self.workerid}", enabled=config.option.debug)
        self.channel = channel
        self.torun = self._make_queue()
        self.nextitem_index = None
        config.pluginmanager.register(self)

    def _make_queue(self):
        return self.channel.gateway.execmodel.queue.Queue()

    def _get_next_item_index(self):
        """Get the next item from the test queue.

        Handles the case when the queue is replaced concurrently in another
        thread.
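        For illustration, ``steal()`` below swaps in a fresh queue and then
        marks the old one::

            old_queue, self.torun = self.torun, self._make_queue()
            ...
            old_queue.put(self.QUEUE_REPLACED_MARK)

        so a reader blocked on the old queue wakes up, discards the mark, and
        retries ``get()`` on the rebound ``self.torun``.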
""" result = self.torun.get() while result is self.QUEUE_REPLACED_MARK: result = self.torun.get() return result def sendevent(self, name, **kwargs): self.log("sending", name, kwargs) self.channel.send((name, kwargs)) @pytest.hookimpl def pytest_internalerror(self, excrepr): formatted_error = str(excrepr) for line in formatted_error.split("\n"): self.log("IERROR>", line) interactor.sendevent("internal_error", formatted_error=formatted_error) @pytest.hookimpl def pytest_sessionstart(self, session): self.session = session workerinfo = getinfodict() self.sendevent("workerready", workerinfo=workerinfo) @pytest.hookimpl(hookwrapper=True) def pytest_sessionfinish(self, exitstatus): # in pytest 5.0+, exitstatus is an IntEnum object self.config.workeroutput["exitstatus"] = int(exitstatus) yield self.sendevent("workerfinished", workeroutput=self.config.workeroutput) @pytest.hookimpl def pytest_collection(self, session): self.sendevent("collectionstart") def handle_command(self, command): if command is self.SHUTDOWN_MARK: self.torun.put(self.SHUTDOWN_MARK) return name, kwargs = command self.log("received command", name, kwargs) if name == "runtests": for i in kwargs["indices"]: self.torun.put(i) elif name == "runtests_all": for i in range(len(self.session.items)): self.torun.put(i) elif name == "shutdown": self.torun.put(self.SHUTDOWN_MARK) elif name == "steal": self.steal(kwargs["indices"]) def steal(self, indices): indices = set(indices) stolen = [] old_queue, self.torun = self.torun, self._make_queue() def old_queue_get_nowait_noraise(): with contextlib.suppress(self.channel.gateway.execmodel.queue.Empty): return old_queue.get_nowait() for i in iter(old_queue_get_nowait_noraise, None): if i in indices: stolen.append(i) else: self.torun.put(i) self.sendevent("unscheduled", indices=stolen) old_queue.put(self.QUEUE_REPLACED_MARK) @pytest.hookimpl def pytest_runtestloop(self, session): self.log("entering main loop") self.channel.setcallback(self.handle_command, endmarker=self.SHUTDOWN_MARK) self.nextitem_index = self._get_next_item_index() while self.nextitem_index is not self.SHUTDOWN_MARK: self.run_one_test() return True def run_one_test(self): self.item_index = self.nextitem_index self.nextitem_index = self._get_next_item_index() items = self.session.items item = items[self.item_index] if self.nextitem_index is self.SHUTDOWN_MARK: nextitem = None else: nextitem = items[self.nextitem_index] worker_title("[pytest-xdist running] %s" % item.nodeid) start = time.time() self.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) duration = time.time() - start worker_title("[pytest-xdist idle]") self.sendevent( "runtest_protocol_complete", item_index=self.item_index, duration=duration ) def pytest_collection_modifyitems(self, session, config, items): # add the group name to nodeid as suffix if --dist=loadgroup if config.getvalue("loadgroup"): for item in items: mark = item.get_closest_marker("xdist_group") if not mark: continue gname = ( mark.args[0] if len(mark.args) > 0 else mark.kwargs.get("name", "default") ) item._nodeid = f"{item.nodeid}@{gname}" @pytest.hookimpl def pytest_collection_finish(self, session): try: topdir = str(self.config.rootpath) except AttributeError: # pytest <= 6.1.0 topdir = str(self.config.rootdir) self.sendevent( "collectionfinish", topdir=topdir, ids=[item.nodeid for item in session.items], ) @pytest.hookimpl def pytest_runtest_logstart(self, nodeid, location): self.sendevent("logstart", nodeid=nodeid, location=location) @pytest.hookimpl def 
pytest_runtest_logfinish(self, nodeid, location): self.sendevent("logfinish", nodeid=nodeid, location=location) @pytest.hookimpl def pytest_runtest_logreport(self, report): data = self.config.hook.pytest_report_to_serializable( config=self.config, report=report ) data["item_index"] = self.item_index data["worker_id"] = self.workerid data["testrun_uid"] = self.testrunuid assert self.session.items[self.item_index].nodeid == report.nodeid self.sendevent("testreport", data=data) @pytest.hookimpl def pytest_collectreport(self, report): # send only reports that have not passed to controller as optimization (#330) if not report.passed: data = self.config.hook.pytest_report_to_serializable( config=self.config, report=report ) self.sendevent("collectreport", data=data) @pytest.hookimpl def pytest_warning_recorded(self, warning_message, when, nodeid, location): self.sendevent( "warning_recorded", warning_message_data=serialize_warning_message(warning_message), when=when, nodeid=nodeid, location=location, ) def serialize_warning_message(warning_message): if isinstance(warning_message.message, Warning): message_module = type(warning_message.message).__module__ message_class_name = type(warning_message.message).__name__ message_str = str(warning_message.message) # check now if we can serialize the warning arguments (#349) # if not, we will just use the exception message on the controller node try: dumps(warning_message.message.args) except DumpError: message_args = None else: message_args = warning_message.message.args else: message_str = warning_message.message message_module = None message_class_name = None message_args = None if warning_message.category: category_module = warning_message.category.__module__ category_class_name = warning_message.category.__name__ else: category_module = None category_class_name = None result = { "message_str": message_str, "message_module": message_module, "message_class_name": message_class_name, "message_args": message_args, "category_module": category_module, "category_class_name": category_class_name, } # access private _WARNING_DETAILS because the attributes vary between Python versions for attr_name in warning_message._WARNING_DETAILS: if attr_name in ("message", "category"): continue attr = getattr(warning_message, attr_name) # Check if we can serialize the warning detail, marking `None` otherwise # Note that we need to define the attr (even as `None`) to allow deserializing try: dumps(attr) except DumpError: result[attr_name] = repr(attr) else: result[attr_name] = attr return result def getinfodict(): import platform return dict( version=sys.version, version_info=tuple(sys.version_info), sysplatform=sys.platform, platform=platform.platform(), executable=sys.executable, cwd=os.getcwd(), ) def remote_initconfig(option_dict, args): option_dict["plugins"].append("no:terminal") return Config.fromdictargs(option_dict, args) def setup_config(config, basetemp): config.option.loadgroup = config.getvalue("dist") == "loadgroup" config.option.looponfail = False config.option.usepdb = False config.option.dist = "no" config.option.distload = False config.option.numprocesses = None config.option.maxprocesses = None config.option.basetemp = basetemp if __name__ == "__channelexec__": channel = channel # type: ignore[name-defined] # noqa: F821 workerinput, args, option_dict, change_sys_path = channel.receive() # type: ignore[name-defined] if change_sys_path is None: importpath = os.getcwd() sys.path.insert(0, importpath) os.environ["PYTHONPATH"] = ( importpath + os.pathsep + 
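            # also export the inserted path via the environment so that
            # subprocesses spawned by tests resolve the same modules as the
            # worker itself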
os.environ.get("PYTHONPATH", "") ) else: sys.path = change_sys_path os.environ["PYTEST_XDIST_TESTRUNUID"] = workerinput["testrunuid"] os.environ["PYTEST_XDIST_WORKER"] = workerinput["workerid"] os.environ["PYTEST_XDIST_WORKER_COUNT"] = str(workerinput["workercount"]) if hasattr(Config, "InvocationParams"): config = _prepareconfig(args, None) else: config = remote_initconfig(option_dict, args) config.args = args setup_config(config, option_dict.get("basetemp")) config._parser.prog = os.path.basename(workerinput["mainargv"][0]) config.workerinput = workerinput # type: ignore[attr-defined] config.workeroutput = {} # type: ignore[attr-defined] interactor = WorkerInteractor(config, channel) # type: ignore[name-defined] config.hook.pytest_cmdline_main(config=config) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/report.py0000644000175100001770000000146114523717146017427 0ustar00runnerdockerfrom difflib import unified_diff def report_collection_diff(from_collection, to_collection, from_id, to_id): """Report the collected test difference between two nodes. :returns: detailed message describing the difference between the given collections, or None if they are equal. """ if from_collection == to_collection: return None diff = unified_diff(from_collection, to_collection, fromfile=from_id, tofile=to_id) error_message = ( "Different tests were collected between {from_id} and {to_id}. " "The difference is:\n" "{diff}\n" "To see why this happens see Known limitations in documentation" ).format(from_id=from_id, to_id=to_id, diff="\n".join(diff)) msg = "\n".join(x.rstrip() for x in error_message.split("\n")) return msg ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1699716733.6995263 pytest-xdist-3.4.0/src/xdist/scheduler/0000755000175100001770000000000014523717176017521 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/scheduler/__init__.py0000644000175100001770000000057114523717146021632 0ustar00runnerdockerfrom xdist.scheduler.each import EachScheduling # noqa from xdist.scheduler.load import LoadScheduling # noqa from xdist.scheduler.loadfile import LoadFileScheduling # noqa from xdist.scheduler.loadscope import LoadScopeScheduling # noqa from xdist.scheduler.loadgroup import LoadGroupScheduling # noqa from xdist.scheduler.worksteal import WorkStealingScheduling # noqa ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/scheduler/each.py0000644000175100001770000001202514523717146020770 0ustar00runnerdockerfrom xdist.remote import Producer from xdist.workermanage import parse_spec_config from xdist.report import report_collection_diff class EachScheduling: """Implement scheduling of test items on all nodes If a node gets added after the test run is started then it is assumed to replace a node which got removed before it finished its collection. In this case it will only be used if a node with the same spec got removed earlier. Any nodes added after the run is started will only get items assigned if a node with a matching spec was removed before it finished all its pending items. The new node will then be assigned the remaining items from the removed node. 
""" def __init__(self, config, log=None): self.config = config self.numnodes = len(parse_spec_config(config)) self.node2collection = {} self.node2pending = {} self._started = [] self._removed2pending = {} if log is None: self.log = Producer("eachsched") else: self.log = log.eachsched self.collection_is_completed = False @property def nodes(self): """A list of all nodes in the scheduler.""" return list(self.node2pending.keys()) @property def tests_finished(self): if not self.collection_is_completed: return False if self._removed2pending: return False for pending in self.node2pending.values(): if len(pending) >= 2: return False return True @property def has_pending(self): """Return True if there are pending test items This indicates that collection has finished and nodes are still processing test items, so this can be thought of as "the scheduler is active". """ for pending in self.node2pending.values(): if pending: return True return False def add_node(self, node): assert node not in self.node2pending self.node2pending[node] = [] def add_node_collection(self, node, collection): """Add the collected test items from a node Collection is complete once all nodes have submitted their collection. In this case its pending list is set to an empty list. When the collection is already completed this submission is from a node which was restarted to replace a dead node. In this case we already assign the pending items here. In either case ``.schedule()`` will instruct the node to start running the required tests. """ assert node in self.node2pending if not self.collection_is_completed: self.node2collection[node] = list(collection) self.node2pending[node] = [] if len(self.node2collection) >= self.numnodes: self.collection_is_completed = True elif self._removed2pending: for deadnode in self._removed2pending: if deadnode.gateway.spec == node.gateway.spec: dead_collection = self.node2collection[deadnode] if collection != dead_collection: msg = report_collection_diff( dead_collection, collection, deadnode.gateway.id, node.gateway.id, ) self.log(msg) return pending = self._removed2pending.pop(deadnode) self.node2pending[node] = pending break def mark_test_complete(self, node, item_index, duration=0): self.node2pending[node].remove(item_index) def mark_test_pending(self, item): self.pending.insert( 0, self.collection.index(item), ) for node in self.node2pending: self.check_schedule(node) def remove_node(self, node): # KeyError if we didn't get an add_node() yet pending = self.node2pending.pop(node) if not pending: return crashitem = self.node2collection[node][pending.pop(0)] if pending: self._removed2pending[node] = pending return crashitem def schedule(self): """Schedule the test items on the nodes If the node's pending list is empty it is a new node which needs to run all the tests. If the pending list is already populated (by ``.add_node_collection()``) then it replaces a dead node and we only need to run those tests. 
""" assert self.collection_is_completed for node, pending in self.node2pending.items(): if node in self._started: continue if not pending: pending[:] = range(len(self.node2collection[node])) node.send_runtest_all() node.shutdown() else: node.send_runtest_some(pending) self._started.append(node) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/scheduler/load.py0000644000175100001770000002775514523717146021027 0ustar00runnerdockerfrom itertools import cycle from _pytest.runner import CollectReport from xdist.remote import Producer from xdist.workermanage import parse_spec_config from xdist.report import report_collection_diff class LoadScheduling: """Implement load scheduling across nodes. This distributes the tests collected across all nodes so each test is run just once. All nodes collect and submit the test suite and when all collections are received it is verified they are identical collections. Then the collection gets divided up in chunks and chunks get submitted to nodes. Whenever a node finishes an item, it calls ``.mark_test_complete()`` which will trigger the scheduler to assign more tests if the number of pending tests for the node falls below a low-watermark. When created, ``numnodes`` defines how many nodes are expected to submit a collection. This is used to know when all nodes have finished collection or how large the chunks need to be created. Attributes: :numnodes: The expected number of nodes taking part. The actual number of nodes will vary during the scheduler's lifetime as nodes are added by the DSession as they are brought up and removed either because of a dead node or normal shutdown. This number is primarily used to know when the initial collection is completed. :node2collection: Map of nodes and their test collection. All collections should always be identical. :node2pending: Map of nodes and the indices of their pending tests. The indices are an index into ``.pending`` (which is identical to their own collection stored in ``.node2collection``). :collection: The one collection once it is validated to be identical between all the nodes. It is initialised to None until ``.schedule()`` is called. :pending: List of indices of globally pending tests. These are tests which have not yet been allocated to a chunk for a node to process. :log: A py.log.Producer instance. :config: Config object, used for handling hooks. """ def __init__(self, config, log=None): self.numnodes = len(parse_spec_config(config)) self.node2collection = {} self.node2pending = {} self.pending = [] self.collection = None if log is None: self.log = Producer("loadsched") else: self.log = log.loadsched self.config = config self.maxschedchunk = self.config.getoption("maxschedchunk") @property def nodes(self): """A list of all nodes in the scheduler.""" return list(self.node2pending.keys()) @property def collection_is_completed(self): """Boolean indication initial test collection is complete. This is a boolean indicating all initial participating nodes have finished collection. The required number of initial nodes is defined by ``.numnodes``. 
""" return len(self.node2collection) >= self.numnodes @property def tests_finished(self): """Return True if all tests have been executed by the nodes.""" if not self.collection_is_completed: return False if self.pending: return False for pending in self.node2pending.values(): if len(pending) >= 2: return False return True @property def has_pending(self): """Return True if there are pending test items This indicates that collection has finished and nodes are still processing test items, so this can be thought of as "the scheduler is active". """ if self.pending: return True for pending in self.node2pending.values(): if pending: return True return False def add_node(self, node): """Add a new node to the scheduler. From now on the node will be allocated chunks of tests to execute. Called by the ``DSession.worker_workerready`` hook when it successfully bootstraps a new node. """ assert node not in self.node2pending self.node2pending[node] = [] def add_node_collection(self, node, collection): """Add the collected test items from a node The collection is stored in the ``.node2collection`` map. Called by the ``DSession.worker_collectionfinish`` hook. """ assert node in self.node2pending if self.collection_is_completed: # A new node has been added later, perhaps an original one died. # .schedule() should have # been called by now assert self.collection if collection != self.collection: other_node = next(iter(self.node2collection.keys())) msg = report_collection_diff( self.collection, collection, other_node.gateway.id, node.gateway.id ) self.log(msg) return self.node2collection[node] = list(collection) def mark_test_complete(self, node, item_index, duration=0): """Mark test item as completed by node The duration it took to execute the item is used as a hint to the scheduler. This is called by the ``DSession.worker_testreport`` hook. """ self.node2pending[node].remove(item_index) self.check_schedule(node, duration=duration) def mark_test_pending(self, item): self.pending.insert( 0, self.collection.index(item), ) for node in self.node2pending: self.check_schedule(node) def check_schedule(self, node, duration=0): """Maybe schedule new items on the node If there are any globally pending nodes left then this will check if the given node should be given any more tests. The ``duration`` of the last test is optionally used as a heuristic to influence how many tests the node is assigned. """ if node.shutting_down: return if self.pending: # how many nodes do we have? num_nodes = len(self.node2pending) # if our node goes below a heuristic minimum, fill it out to # heuristic maximum items_per_node_min = max(2, len(self.pending) // num_nodes // 4) items_per_node_max = max(2, len(self.pending) // num_nodes // 2) node_pending = self.node2pending[node] if len(node_pending) < items_per_node_min: if duration >= 0.1 and len(node_pending) >= 2: # seems the node is doing long-running tests # and has enough items to continue # so let's rather wait with sending new items return num_send = items_per_node_max - len(node_pending) # keep at least 2 tests pending even if --maxschedchunk=1 maxschedchunk = max(2 - len(node_pending), self.maxschedchunk) self._send_tests(node, min(num_send, maxschedchunk)) else: node.shutdown() self.log("num items waiting for node:", len(self.pending)) def remove_node(self, node): """Remove a node from the scheduler This should be called either when the node crashed or at shutdown time. In the former case any pending items assigned to the node will be re-scheduled. 
        Called by the ``DSession.worker_workerfinished`` and
        ``DSession.worker_errordown`` hooks.

        Return the item which was being executed while the node crashed or
        None if the node has no more pending items.
        """
        pending = self.node2pending.pop(node)
        if not pending:
            return

        # The node crashed, reassign pending items
        crashitem = self.collection[pending.pop(0)]
        self.pending.extend(pending)
        for node in self.node2pending:
            self.check_schedule(node)
        return crashitem

    def schedule(self):
        """Initiate distribution of the test collection.

        Initiate scheduling of the items across the nodes. If this gets called
        again later it behaves the same as calling ``.check_schedule()`` on
        all nodes so that newly added nodes will start to be used.

        This is called by the ``DSession.worker_collectionfinish`` hook
        if ``.collection_is_completed`` is True.
        """
        assert self.collection_is_completed

        # Initial distribution already happened, reschedule on all nodes
        if self.collection is not None:
            for node in self.nodes:
                self.check_schedule(node)
            return

        # XXX allow nodes to have different collections
        if not self._check_nodes_have_same_collection():
            self.log("**Different tests collected, aborting run**")
            return

        # Collections are identical, create the index of pending items.
        self.collection = list(self.node2collection.values())[0]
        self.pending[:] = range(len(self.collection))
        if not self.collection:
            return

        if self.maxschedchunk is None:
            self.maxschedchunk = len(self.collection)

        # Send a batch of tests to run. If we don't have at least two
        # tests per node, we have to send them all so that we can send
        # shutdown signals and get all nodes working.
        if len(self.pending) < 2 * len(self.nodes):
            # Distribute tests round-robin. Try to load all nodes if there are
            # enough tests. The other branch tries to send at least 2 tests
            # to each node - which is suboptimal when you have less than
            # 2 * len(nodes) tests.
            nodes = cycle(self.nodes)
            for i in range(len(self.pending)):
                self._send_tests(next(nodes), 1)
        else:
            # Send batches of consecutive tests. By default, pytest sorts tests
            # in order for optimal single-threaded execution, minimizing the
            # number of necessary fixture setup/teardown. Try to keep that
            # optimal order for every worker.

            # how many items per node do we have about?
            items_per_node = len(self.collection) // len(self.node2pending)
            # take a fraction of tests for initial distribution
            node_chunksize = min(items_per_node // 4, self.maxschedchunk)
            node_chunksize = max(node_chunksize, 2)
            # and initialize each node with a chunk of tests
            for node in self.nodes:
                self._send_tests(node, node_chunksize)

        if not self.pending:
            # initial distribution sent all tests, start node shutdown
            for node in self.nodes:
                node.shutdown()

    def _send_tests(self, node, num):
        tests_per_node = self.pending[:num]
        if tests_per_node:
            del self.pending[:num]
            self.node2pending[node].extend(tests_per_node)
            node.send_runtest_some(tests_per_node)

    def _check_nodes_have_same_collection(self):
        """Return True if all nodes have collected the same items.

        If collections differ, this method returns False while logging
        the collection differences and posting collection errors to
        pytest_collectreport hook.
""" node_collection_items = list(self.node2collection.items()) first_node, col = node_collection_items[0] same_collection = True for node, collection in node_collection_items[1:]: msg = report_collection_diff( col, collection, first_node.gateway.id, node.gateway.id ) if msg: same_collection = False self.log(msg) if self.config is not None: rep = CollectReport( node.gateway.id, "failed", longrepr=msg, result=[] ) self.config.hook.pytest_collectreport(report=rep) return same_collection ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/scheduler/loadfile.py0000644000175100001770000000417414523717146021655 0ustar00runnerdockerfrom .loadscope import LoadScopeScheduling from xdist.remote import Producer class LoadFileScheduling(LoadScopeScheduling): """Implement load scheduling across nodes, but grouping test test file. This distributes the tests collected across all nodes so each test is run just once. All nodes collect and submit the list of tests and when all collections are received it is verified they are identical collections. Then the collection gets divided up in work units, grouped by test file, and those work units get submitted to nodes. Whenever a node finishes an item, it calls ``.mark_test_complete()`` which will trigger the scheduler to assign more work units if the number of pending tests for the node falls below a low-watermark. When created, ``numnodes`` defines how many nodes are expected to submit a collection. This is used to know when all nodes have finished collection. This class behaves very much like LoadScopeScheduling, but with a file-level scope. """ def __init__(self, config, log=None): super().__init__(config, log) if log is None: self.log = Producer("loadfilesched") else: self.log = log.loadfilesched def _split_scope(self, nodeid): """Determine the scope (grouping) of a nodeid. There are usually 3 cases for a nodeid:: example/loadsuite/test/test_beta.py::test_beta0 example/loadsuite/test/test_delta.py::Delta1::test_delta0 example/loadsuite/epsilon/__init__.py::epsilon.epsilon #. Function in a test module. #. Method of a class in a test module. #. Doctest in a function in a package. This function will group tests with the scope determined by splitting the first ``::`` from the left. That is, test will be grouped in a single work unit when they reside in the same file. In the above example, scopes will be:: example/loadsuite/test/test_beta.py example/loadsuite/test/test_delta.py example/loadsuite/epsilon/__init__.py """ return nodeid.split("::", 1)[0] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/scheduler/loadgroup.py0000644000175100001770000000416714523717146022074 0ustar00runnerdockerfrom .loadscope import LoadScopeScheduling from xdist.remote import Producer class LoadGroupScheduling(LoadScopeScheduling): """Implement load scheduling across nodes, but grouping test by xdist_group mark. This class behaves very much like LoadScopeScheduling, but it groups tests by xdist_group mark instead of the module or class to which they belong to. """ def __init__(self, config, log=None): super().__init__(config, log) if log is None: self.log = Producer("loadgroupsched") else: self.log = log.loadgroupsched def _split_scope(self, nodeid): """Determine the scope (grouping) of a nodeid. 
        There are usually 3 cases for a nodeid::

            example/loadsuite/test/test_beta.py::test_beta0
            example/loadsuite/test/test_delta.py::Delta1::test_delta0
            example/loadsuite/epsilon/__init__.py::epsilon.epsilon

        #. Function in a test module.
        #. Method of a class in a test module.
        #. Doctest in a function in a package.

        With loadgroup, two cases are added::

            example/loadsuite/test/test_beta.py::test_beta0
            example/loadsuite/test/test_delta.py::Delta1::test_delta0
            example/loadsuite/epsilon/__init__.py::epsilon.epsilon
            example/loadsuite/test/test_gamma.py::test_beta0@gname
            example/loadsuite/test/test_delta.py::Gamma1::test_gamma0@gname

        This function will group tests with the scope determined by splitting
        the first ``@`` from the right. That is, tests will be grouped in a
        single work unit when they have the same group name. In the above
        example, scopes will be::

            example/loadsuite/test/test_beta.py::test_beta0
            example/loadsuite/test/test_delta.py::Delta1::test_delta0
            example/loadsuite/epsilon/__init__.py::epsilon.epsilon
            gname
            gname
        """
        if nodeid.rfind("@") > nodeid.rfind("]"):
            # check the index of ']' to avoid the case: parametrize mark value has '@'
            return nodeid.split("@")[-1]
        else:
            return nodeid
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/scheduler/loadscope.py0000644000175100001770000003357714523717146022050 0ustar00runnerdocker
from collections import OrderedDict

from _pytest.runner import CollectReport

from xdist.remote import Producer
from xdist.report import report_collection_diff
from xdist.workermanage import parse_spec_config


class LoadScopeScheduling:
    """Implement load scheduling across nodes, but grouping tests by scope.

    This distributes the tests collected across all nodes so each test is run
    just once. All nodes collect and submit the list of tests and when all
    collections are received it is verified they are identical collections.
    Then the collection gets divided up in work units, grouped by test scope,
    and those work units get submitted to nodes. Whenever a node finishes an
    item, it calls ``.mark_test_complete()`` which will trigger the scheduler
    to assign more work units if the number of pending tests for the node falls
    below a low-watermark.

    When created, ``numnodes`` defines how many nodes are expected to submit a
    collection. This is used to know when all nodes have finished collection.

    Attributes:

    :numnodes: The expected number of nodes taking part. The actual number of
       nodes will vary during the scheduler's lifetime as nodes are added by
       the DSession as they are brought up and removed either because of a dead
       node or normal shutdown. This number is primarily used to know when the
       initial collection is completed.

    :collection: The final list of tests collected by all nodes once it is
       validated to be identical between all the nodes. It is initialised to
       None until ``.schedule()`` is called.

    :workqueue: Ordered dictionary that maps all available scopes with their
       associated tests (nodeid). Nodeids are in turn associated with their
       completion status. One entry of the workqueue is called a work unit.
       In turn, a collection of work units is called a workload.

       ::

            workqueue = {
                '<full>/<path>/<to>/test_module.py': {
                    '<full>/<path>/<to>/test_module.py::test_case1': False,
                    '<full>/<path>/<to>/test_module.py::test_case2': False,
                    (...)
                },
                (...)
            }

    :assigned_work: Ordered dictionary that maps worker nodes with their
       assigned work units.

       ::

            assigned_work = {
                '<worker node A>': {
                    '<full>/<path>/<to>/test_module.py': {
                        '<full>/<path>/<to>/test_module.py::test_case1': False,
                        '<full>/<path>/<to>/test_module.py::test_case2': False,
                        (...)
                    },
                    (...)
                },
                (...)
            }

    :registered_collections: Ordered dictionary that maps worker nodes with
       their collection of tests gathered during test discovery.

       ::

            registered_collections = {
                '<worker node A>': [
                    '<full>/<path>/<to>/test_module.py::test_case1',
                    '<full>/<path>/<to>/test_module.py::test_case2',
                ],
                (...)
            }

    :log: A py.log.Producer instance.

    :config: Config object, used for handling hooks.
    """

    def __init__(self, config, log=None):
        self.numnodes = len(parse_spec_config(config))
        self.collection = None
        self.workqueue = OrderedDict()
        self.assigned_work = OrderedDict()
        self.registered_collections = OrderedDict()
        if log is None:
            self.log = Producer("loadscopesched")
        else:
            self.log = log.loadscopesched
        self.config = config

    @property
    def nodes(self):
        """A list of all active nodes in the scheduler."""
        return list(self.assigned_work.keys())

    @property
    def collection_is_completed(self):
        """Boolean indicating whether the initial test collection is complete.

        This is a boolean indicating all initial participating nodes have
        finished collection. The required number of initial nodes is defined
        by ``.numnodes``.
        """
        return len(self.registered_collections) >= self.numnodes

    @property
    def tests_finished(self):
        """Return True if all tests have been executed by the nodes."""
        if not self.collection_is_completed:
            return False
        if self.workqueue:
            return False
        for assigned_unit in self.assigned_work.values():
            if self._pending_of(assigned_unit) >= 2:
                return False
        return True

    @property
    def has_pending(self):
        """Return True if there are pending test items.

        This indicates that collection has finished and nodes are still
        processing test items, so this can be thought of as
        "the scheduler is active".
        """
        if self.workqueue:
            return True
        for assigned_unit in self.assigned_work.values():
            if self._pending_of(assigned_unit) > 0:
                return True
        return False

    def add_node(self, node):
        """Add a new node to the scheduler.

        From now on the node will be assigned work units to be executed.

        Called by the ``DSession.worker_workerready`` hook when it successfully
        bootstraps a new node.
        """
        assert node not in self.assigned_work
        self.assigned_work[node] = OrderedDict()

    def remove_node(self, node):
        """Remove a node from the scheduler.

        This should be called either when the node crashed or at shutdown
        time. In the former case any pending items assigned to the node will
        be re-scheduled.

        Called by the hooks:

        - ``DSession.worker_workerfinished``.
        - ``DSession.worker_errordown``.

        Return the item being executed while the node crashed or None if the
        node has no more pending items.
        """
        workload = self.assigned_work.pop(node)
        if not self._pending_of(workload):
            return None

        # The node crashed, identify the test that crashed
        for work_unit in workload.values():
            for nodeid, completed in work_unit.items():
                if not completed:
                    crashitem = nodeid
                    break
            else:
                continue
            break
        else:
            raise RuntimeError(
                "Unable to identify crashitem on a workload with pending items"
            )

        # Make the uncompleted work units available again
        self.workqueue.update(workload)

        for node in self.assigned_work:
            self._reschedule(node)

        return crashitem

    def add_node_collection(self, node, collection):
        """Add the collected test items from a node.

        The collection is stored in the ``.registered_collections``
        dictionary.

        Called by the hook:

        - ``DSession.worker_collectionfinish``.
        """
        # Check that add_node() was called on the node before
        assert node in self.assigned_work

        # A new node has been added later, perhaps an original one died.
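        # Its collection must match the already-validated one; otherwise the
        # mismatch is logged below and the submission is discarded.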
if self.collection_is_completed: # Assert that .schedule() should have been called by now assert self.collection # Check that the new collection matches the official collection if collection != self.collection: other_node = next(iter(self.registered_collections.keys())) msg = report_collection_diff( self.collection, collection, other_node.gateway.id, node.gateway.id ) self.log(msg) return self.registered_collections[node] = list(collection) def mark_test_complete(self, node, item_index, duration=0): """Mark test item as completed by node. Called by the hook: - ``DSession.worker_testreport``. """ nodeid = self.registered_collections[node][item_index] scope = self._split_scope(nodeid) self.assigned_work[node][scope][nodeid] = True self._reschedule(node) def mark_test_pending(self, item): raise NotImplementedError() def _assign_work_unit(self, node): """Assign a work unit to a node.""" assert self.workqueue # Grab a unit of work scope, work_unit = self.workqueue.popitem(last=False) # Keep track of the assigned work assigned_to_node = self.assigned_work.setdefault(node, default=OrderedDict()) assigned_to_node[scope] = work_unit # Ask the node to execute the workload worker_collection = self.registered_collections[node] nodeids_indexes = [ worker_collection.index(nodeid) for nodeid, completed in work_unit.items() if not completed ] node.send_runtest_some(nodeids_indexes) def _split_scope(self, nodeid): """Determine the scope (grouping) of a nodeid. There are usually 3 cases for a nodeid:: example/loadsuite/test/test_beta.py::test_beta0 example/loadsuite/test/test_delta.py::Delta1::test_delta0 example/loadsuite/epsilon/__init__.py::epsilon.epsilon #. Function in a test module. #. Method of a class in a test module. #. Doctest in a function in a package. This function will group tests with the scope determined by splitting the first ``::`` from the right. That is, classes will be grouped in a single work unit, and functions from a test module will be grouped by their module. In the above example, scopes will be:: example/loadsuite/test/test_beta.py example/loadsuite/test/test_delta.py::Delta1 example/loadsuite/epsilon/__init__.py """ return nodeid.rsplit("::", 1)[0] def _pending_of(self, workload): """Return the number of pending tests in a workload.""" pending = sum(list(scope.values()).count(False) for scope in workload.values()) return pending def _reschedule(self, node): """Maybe schedule new items on the node. If there are any globally pending work units left then this will check if the given node should be given any more tests. """ # Do not add more work to a node shutting down if node.shutting_down: return # Check that more work is available if not self.workqueue: node.shutdown() return self.log("Number of units waiting for node:", len(self.workqueue)) # Check that the node is almost depleted of work # 2: Heuristic of minimum tests to enqueue more work if self._pending_of(self.assigned_work[node]) > 2: return # Pop one unit of work and assign it self._assign_work_unit(node) def schedule(self): """Initiate distribution of the test collection. Initiate scheduling of the items across the nodes. If this gets called again later it behaves the same as calling ``._reschedule()`` on all nodes so that newly added nodes will start to be used. If ``.collection_is_completed`` is True, this is called by the hook: - ``DSession.worker_collectionfinish``. 
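        A sketch of the work units this step produces, for a hypothetical
        layout of one flat module and one test class::

            workqueue = {
                'test_a.py': {
                    'test_a.py::test_1': False,
                    'test_a.py::test_2': False,
                },
                'test_b.py::TestB': {
                    'test_b.py::TestB::test_3': False,
                },
            }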
""" assert self.collection_is_completed # Initial distribution already happened, reschedule on all nodes if self.collection is not None: for node in self.nodes: self._reschedule(node) return # Check that all nodes collected the same tests if not self._check_nodes_have_same_collection(): self.log("**Different tests collected, aborting run**") return # Collections are identical, create the final list of items self.collection = list(next(iter(self.registered_collections.values()))) if not self.collection: return # Determine chunks of work (scopes) for nodeid in self.collection: scope = self._split_scope(nodeid) work_unit = self.workqueue.setdefault(scope, default=OrderedDict()) work_unit[nodeid] = False # Avoid having more workers than work extra_nodes = len(self.nodes) - len(self.workqueue) if extra_nodes > 0: self.log(f"Shutting down {extra_nodes} nodes") for _ in range(extra_nodes): unused_node, assigned = self.assigned_work.popitem(last=True) self.log(f"Shutting down unused node {unused_node}") unused_node.shutdown() # Assign initial workload for node in self.nodes: self._assign_work_unit(node) # Ensure nodes start with at least two work units if possible (#277) for node in self.nodes: self._reschedule(node) # Initial distribution sent all tests, start node shutdown if not self.workqueue: for node in self.nodes: node.shutdown() def _check_nodes_have_same_collection(self): """Return True if all nodes have collected the same items. If collections differ, this method returns False while logging the collection differences and posting collection errors to pytest_collectreport hook. """ node_collection_items = list(self.registered_collections.items()) first_node, col = node_collection_items[0] same_collection = True for node, collection in node_collection_items[1:]: msg = report_collection_diff( col, collection, first_node.gateway.id, node.gateway.id ) if not msg: continue same_collection = False self.log(msg) if self.config is None: continue rep = CollectReport(node.gateway.id, "failed", longrepr=msg, result=[]) self.config.hook.pytest_collectreport(report=rep) return same_collection ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/scheduler/worksteal.py0000644000175100001770000002641714523717146022115 0ustar00runnerdockerfrom collections import namedtuple from _pytest.runner import CollectReport from xdist.remote import Producer from xdist.workermanage import parse_spec_config from xdist.report import report_collection_diff NodePending = namedtuple("NodePending", ["node", "pending"]) # Every worker needs at least 2 tests in queue - the current and the next one. MIN_PENDING = 2 class WorkStealingScheduling: """Implement work-stealing scheduling. Initially, tests are distributed evenly among all nodes. When some node completes most of its assigned tests (when only one pending test remains), an attempt is made to reassign ("steal") some tests from other nodes to this node. Attributes: :numnodes: The expected number of nodes taking part. The actual number of nodes will vary during the scheduler's lifetime as nodes are added by the DSession as they are brought up and removed either because of a dead node or normal shutdown. This number is primarily used to know when the initial collection is completed. :node2collection: Map of nodes and their test collection. All collections should always be identical. :node2pending: Map of nodes and the indices of their pending tests. 
The indices are an index into ``.pending`` (which is identical to their own collection stored in ``.node2collection``). :collection: The one collection once it is validated to be identical between all the nodes. It is initialised to None until ``.schedule()`` is called. :pending: List of indices of globally pending tests. These are tests which have not yet been allocated to a chunk for a node to process. :log: A py.log.Producer instance. :config: Config object, used for handling hooks. :steal_requested_from_node: The node to which the current "steal" request was sent. ``None`` if there is no request in progress. Only one request can be in progress at any time, the scheduler doesn't send multiple simultaneous requests. """ def __init__(self, config, log=None): self.numnodes = len(parse_spec_config(config)) self.node2collection = {} self.node2pending = {} self.pending = [] self.collection = None if log is None: self.log = Producer("workstealsched") else: self.log = log.workstealsched self.config = config self.steal_requested_from_node = None @property def nodes(self): """A list of all nodes in the scheduler.""" return list(self.node2pending.keys()) @property def collection_is_completed(self): """Boolean indication initial test collection is complete. This is a boolean indicating all initial participating nodes have finished collection. The required number of initial nodes is defined by ``.numnodes``. """ return len(self.node2collection) >= self.numnodes @property def tests_finished(self): """Return True if all tests have been executed by the nodes.""" if not self.collection_is_completed: return False if self.pending: return False if self.steal_requested_from_node is not None: return False for pending in self.node2pending.values(): if len(pending) >= MIN_PENDING: return False return True @property def has_pending(self): """Return True if there are pending test items This indicates that collection has finished and nodes are still processing test items, so this can be thought of as "the scheduler is active". """ if self.pending: return True for pending in self.node2pending.values(): if pending: return True return False def add_node(self, node): """Add a new node to the scheduler. From now on the node will be allocated chunks of tests to execute. Called by the ``DSession.worker_workerready`` hook when it successfully bootstraps a new node. """ assert node not in self.node2pending self.node2pending[node] = [] def add_node_collection(self, node, collection): """Add the collected test items from a node The collection is stored in the ``.node2collection`` map. Called by the ``DSession.worker_collectionfinish`` hook. """ assert node in self.node2pending if self.collection_is_completed: # A new node has been added later, perhaps an original one died. # .schedule() should have # been called by now assert self.collection if collection != self.collection: other_node = next(iter(self.node2collection.keys())) msg = report_collection_diff( self.collection, collection, other_node.gateway.id, node.gateway.id ) self.log(msg) return self.node2collection[node] = list(collection) def mark_test_complete(self, node, item_index, duration=None): """Mark test item as completed by node This is called by the ``DSession.worker_testreport`` hook. 
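        Note that, unlike ``LoadScheduling``, this scheduler ignores the
        ``duration`` hint and simply triggers a global rebalancing pass.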
""" self.node2pending[node].remove(item_index) self.check_schedule() def mark_test_pending(self, item): self.pending.insert( 0, self.collection.index(item), ) self.check_schedule() def remove_pending_tests_from_node(self, node, indices): """Node returned some test indices back in response to 'steal' command. This is called by ``DSession.worker_unscheduled``. """ assert node is self.steal_requested_from_node self.steal_requested_from_node = None indices_set = set(indices) self.node2pending[node] = [ i for i in self.node2pending[node] if i not in indices_set ] self.pending.extend(indices) self.check_schedule() def check_schedule(self): """Reschedule tests/perform load balancing.""" nodes_up = [ NodePending(node, pending) for node, pending in self.node2pending.items() if not node.shutting_down ] def get_idle_nodes(): return [node for node, pending in nodes_up if len(pending) < MIN_PENDING] idle_nodes = get_idle_nodes() if not idle_nodes: return if self.pending: # Distribute pending tests evenly among idle nodes for i, node in enumerate(idle_nodes): nodes_remaining = len(idle_nodes) - i num_send = len(self.pending) // nodes_remaining self._send_tests(node, num_send) idle_nodes = get_idle_nodes() # No need to steal anything if all nodes have enough work to continue if not idle_nodes: return # Only one active stealing request is allowed if self.steal_requested_from_node is not None: return # Find the node that has the longest test queue steal_from = max( nodes_up, key=lambda node_pending: len(node_pending.pending), default=None ) if steal_from is None: num_steal = 0 else: # Steal half of the test queue - but keep that node running too. # If the node has 2 or less tests queued, stealing will fail # anyway. max_steal = max(0, len(steal_from.pending) - MIN_PENDING) num_steal = min(len(steal_from.pending) // 2, max_steal) if num_steal == 0: # Can't get more work - shutdown idle nodes. This will force them # to run the last test now instead of waiting for more tests. for node in idle_nodes: node.shutdown() return steal_from.node.send_steal(steal_from.pending[-num_steal:]) self.steal_requested_from_node = steal_from.node def remove_node(self, node): """Remove a node from the scheduler This should be called either when the node crashed or at shutdown time. In the former case any pending items assigned to the node will be re-scheduled. Called by the ``DSession.worker_workerfinished`` and ``DSession.worker_errordown`` hooks. Return the item which was being executing while the node crashed or None if the node has no more pending items. """ pending = self.node2pending.pop(node) # If node was removed without completing its assigned tests - it crashed if pending: crashitem = self.collection[pending.pop(0)] else: crashitem = None self.pending.extend(pending) # Dead node won't respond to "steal" request if self.steal_requested_from_node is node: self.steal_requested_from_node = None self.check_schedule() return crashitem def schedule(self): """Initiate distribution of the test collection Initiate scheduling of the items across the nodes. If this gets called again later it behaves the same as calling ``.check_schedule()`` on all nodes so that newly added nodes will start to be used. This is called by the ``DSession.worker_collectionfinish`` hook if ``.collection_is_completed`` is True. 
""" assert self.collection_is_completed # Initial distribution already happened, reschedule on all nodes if self.collection is not None: self.check_schedule() return if not self._check_nodes_have_same_collection(): self.log("**Different tests collected, aborting run**") return # Collections are identical, create the index of pending items. self.collection = list(self.node2collection.values())[0] self.pending[:] = range(len(self.collection)) if not self.collection: return self.check_schedule() def _send_tests(self, node, num): tests_per_node = self.pending[:num] if tests_per_node: del self.pending[:num] self.node2pending[node].extend(tests_per_node) node.send_runtest_some(tests_per_node) def _check_nodes_have_same_collection(self): """Return True if all nodes have collected the same items. If collections differ, this method returns False while logging the collection differences and posting collection errors to pytest_collectreport hook. """ node_collection_items = list(self.node2collection.items()) first_node, col = node_collection_items[0] same_collection = True for node, collection in node_collection_items[1:]: msg = report_collection_diff( col, collection, first_node.gateway.id, node.gateway.id ) if msg: same_collection = False self.log(msg) if self.config is not None: rep = CollectReport( node.gateway.id, "failed", longrepr=msg, result=[] ) self.config.hook.pytest_collectreport(report=rep) return same_collection ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/src/xdist/workermanage.py0000644000175100001770000004025014523717146020575 0ustar00runnerdockerimport fnmatch import os import re import sys import uuid from pathlib import Path from typing import List, Union, Sequence, Optional, Any, Tuple, Set import pytest import execnet import xdist.remote from xdist.remote import Producer from xdist.plugin import _sys_path def parse_spec_config(config): xspeclist = [] for xspec in config.getvalue("tx"): i = xspec.find("*") try: num = int(xspec[:i]) except ValueError: xspeclist.append(xspec) else: xspeclist.extend([xspec[i + 1 :]] * num) if not xspeclist: raise pytest.UsageError( "MISSING test execution (tx) nodes: please specify --tx" ) return xspeclist class NodeManager: EXIT_TIMEOUT = 10 DEFAULT_IGNORES = [".*", "*.pyc", "*.pyo", "*~"] def __init__(self, config, specs=None, defaultchdir="pyexecnetcache") -> None: self.config = config self.trace = self.config.trace.get("nodemanager") self.testrunuid = self.config.getoption("testrunuid") if self.testrunuid is None: self.testrunuid = uuid.uuid4().hex self.group = execnet.Group() if specs is None: specs = self._getxspecs() self.specs = [] for spec in specs: if not isinstance(spec, execnet.XSpec): spec = execnet.XSpec(spec) if not spec.chdir and not spec.popen: spec.chdir = defaultchdir self.group.allocate_id(spec) self.specs.append(spec) self.roots = self._getrsyncdirs() self.rsyncoptions = self._getrsyncoptions() self._rsynced_specs: Set[Tuple[Any, Any]] = set() def rsync_roots(self, gateway): """Rsync the set of roots to the node's gateway cwd.""" if self.roots: for root in self.roots: self.rsync(gateway, root, **self.rsyncoptions) def setup_nodes(self, putevent): self.config.hook.pytest_xdist_setupnodes(config=self.config, specs=self.specs) self.trace("setting up nodes") return [self.setup_node(spec, putevent) for spec in self.specs] def setup_node(self, spec, putevent): gw = self.group.makegateway(spec) self.config.hook.pytest_xdist_newgateway(gateway=gw) self.rsync_roots(gw) 
node = WorkerController(self, gw, self.config, putevent) gw.node = node # keep the node alive node.setup() self.trace("started node %r" % node) return node def teardown_nodes(self): self.group.terminate(self.EXIT_TIMEOUT) def _getxspecs(self): return [execnet.XSpec(x) for x in parse_spec_config(self.config)] def _getrsyncdirs(self) -> List[Path]: for spec in self.specs: if not spec.popen or spec.chdir: break else: return [] import pytest import _pytest def get_dir(p): """Return the directory path if p is a package or the path to the .py file otherwise.""" stripped = p.rstrip("co") if os.path.basename(stripped) == "__init__.py": return os.path.dirname(p) else: return stripped pytestpath = get_dir(pytest.__file__) pytestdir = get_dir(_pytest.__file__) config = self.config candidates = [pytestpath, pytestdir] candidates += config.option.rsyncdir rsyncroots = config.getini("rsyncdirs") if rsyncroots: candidates.extend(rsyncroots) roots = [] for root in candidates: root = Path(root).resolve() if not root.exists(): raise pytest.UsageError(f"rsyncdir doesn't exist: {root!r}") if root not in roots: roots.append(root) return roots def _getrsyncoptions(self): """Get options to be passed for rsync.""" ignores = list(self.DEFAULT_IGNORES) ignores += [str(path) for path in self.config.option.rsyncignore] ignores += [str(path) for path in self.config.getini("rsyncignore")] return { "ignores": ignores, "verbose": getattr(self.config.option, "verbose", 0), } def rsync(self, gateway, source, notify=None, verbose=False, ignores=None): """Perform rsync to remote hosts for node.""" # XXX This changes the calling behaviour of # pytest_xdist_rsyncstart and pytest_xdist_rsyncfinish to # be called once per rsync target. rsync = HostRSync(source, verbose=verbose, ignores=ignores) spec = gateway.spec if spec.popen and not spec.chdir: # XXX This assumes that sources are python-packages # and that adding the basedir does not hurt. 
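            # local popen workers share the filesystem with the controller,
            # so a real rsync is unnecessary: just make the source's parent
            # directory importable on the remote side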
gateway.remote_exec( """ import sys ; sys.path.insert(0, %r) """ % os.path.dirname(str(source)) ).waitclose() return if (spec, source) in self._rsynced_specs: return def finished(): if notify: notify("rsyncrootready", spec, source) rsync.add_target_host(gateway, finished=finished) self._rsynced_specs.add((spec, source)) self.config.hook.pytest_xdist_rsyncstart(source=source, gateways=[gateway]) rsync.send() self.config.hook.pytest_xdist_rsyncfinish(source=source, gateways=[gateway]) class HostRSync(execnet.RSync): """RSyncer that filters out common files""" PathLike = Union[str, "os.PathLike[str]"] def __init__( self, sourcedir: PathLike, *, ignores: Optional[Sequence[PathLike]] = None, **kwargs: object ) -> None: if ignores is None: ignores = [] self._ignores = [re.compile(fnmatch.translate(os.fspath(x))) for x in ignores] super().__init__(sourcedir=Path(sourcedir), **kwargs) def filter(self, path: PathLike) -> bool: path = Path(path) for cre in self._ignores: if cre.match(path.name) or cre.match(str(path)): return False else: return True def add_target_host(self, gateway, finished=None): remotepath = os.path.basename(self._sourcedir) super().add_target(gateway, remotepath, finishedcallback=finished, delete=True) def _report_send_file(self, gateway, modified_rel_path): if self._verbose > 0: path = os.path.basename(self._sourcedir) + "/" + modified_rel_path remotepath = gateway.spec.chdir print(f"{gateway.spec}:{remotepath} <= {path}") def make_reltoroot(roots: Sequence[Path], args: List[str]) -> List[str]: # XXX introduce/use public API for splitting pytest args splitcode = "::" result = [] for arg in args: parts = arg.split(splitcode) fspath = Path(parts[0]) try: exists = fspath.exists() except OSError: exists = False if not exists: result.append(arg) continue for root in roots: x: Optional[Path] try: x = fspath.relative_to(root) except ValueError: x = None if x or fspath == root: parts[0] = root.name + "/" + str(x) break else: raise ValueError(f"arg {arg} not relative to an rsync root") result.append(splitcode.join(parts)) return result class WorkerController: ENDMARK = -1 class RemoteHook: @pytest.hookimpl(trylast=True) def pytest_xdist_getremotemodule(self): return xdist.remote def __init__(self, nodemanager, gateway, config, putevent): config.pluginmanager.register(self.RemoteHook()) self.nodemanager = nodemanager self.putevent = putevent self.gateway = gateway self.config = config self.workerinput = { "workerid": gateway.id, "workercount": len(nodemanager.specs), "testrunuid": nodemanager.testrunuid, "mainargv": sys.argv, } self._down = False self._shutdown_sent = False self.log = Producer(f"workerctl-{gateway.id}", enabled=config.option.debug) def __repr__(self): return f"<{self.__class__.__name__} {self.gateway.id}>" @property def shutting_down(self): return self._down or self._shutdown_sent def setup(self): self.log("setting up worker session") spec = self.gateway.spec if hasattr(self.config, "invocation_params"): args = [str(x) for x in self.config.invocation_params.args or ()] option_dict = {} else: args = self.config.args option_dict = vars(self.config.option) if not spec.popen or spec.chdir: args = make_reltoroot(self.nodemanager.roots, args) if spec.popen: name = "popen-%s" % self.gateway.id if hasattr(self.config, "_tmp_path_factory"): basetemp = self.config._tmp_path_factory.getbasetemp() option_dict["basetemp"] = str(basetemp / name) self.config.hook.pytest_configure_node(node=self) remote_module = self.config.hook.pytest_xdist_getremotemodule() self.channel = 
self.gateway.remote_exec(remote_module) # change sys.path only for remote workers # restore sys.path from a frozen copy for local workers change_sys_path = _sys_path if self.gateway.spec.popen else None self.channel.send((self.workerinput, args, option_dict, change_sys_path)) if self.putevent: self.channel.setcallback(self.process_from_remote, endmarker=self.ENDMARK) def ensure_teardown(self): if hasattr(self, "channel"): if not self.channel.isclosed(): self.log("closing", self.channel) self.channel.close() # del self.channel if hasattr(self, "gateway"): self.log("exiting", self.gateway) self.gateway.exit() # del self.gateway def send_runtest_some(self, indices): self.sendcommand("runtests", indices=indices) def send_runtest_all(self): self.sendcommand("runtests_all") def send_steal(self, indices): self.sendcommand("steal", indices=indices) def shutdown(self): if not self._down: try: self.sendcommand("shutdown") except OSError: pass self._shutdown_sent = True def sendcommand(self, name, **kwargs): """send a named parametrized command to the other side.""" self.log(f"sending command {name}(**{kwargs})") self.channel.send((name, kwargs)) def notify_inproc(self, eventname, **kwargs): self.log(f"queuing {eventname}(**{kwargs})") self.putevent((eventname, kwargs)) def process_from_remote(self, eventcall): # noqa too complex """this gets called for each object we receive from the other side and if the channel closes. Note that channel callbacks run in the receiver thread of execnet gateways - we need to avoid raising exceptions or doing heavy work. """ try: if eventcall == self.ENDMARK: err = self.channel._getremoteerror() if not self._down: if not err or isinstance(err, EOFError): err = "Not properly terminated" # lost connection? self.notify_inproc("errordown", node=self, error=err) self._down = True return eventname, kwargs = eventcall if eventname in ("collectionstart",): self.log(f"ignoring {eventname}({kwargs})") elif eventname == "workerready": self.notify_inproc(eventname, node=self, **kwargs) elif eventname == "internal_error": self.notify_inproc(eventname, node=self, **kwargs) elif eventname == "workerfinished": self._down = True self.workeroutput = kwargs["workeroutput"] self.notify_inproc("workerfinished", node=self) elif eventname in ("logstart", "logfinish"): self.notify_inproc(eventname, node=self, **kwargs) elif eventname in ("testreport", "collectreport", "teardownreport"): item_index = kwargs.pop("item_index", None) rep = self.config.hook.pytest_report_from_serializable( config=self.config, data=kwargs["data"] ) if item_index is not None: rep.item_index = item_index self.notify_inproc(eventname, node=self, rep=rep) elif eventname == "collectionfinish": self.notify_inproc(eventname, node=self, ids=kwargs["ids"]) elif eventname == "runtest_protocol_complete": self.notify_inproc(eventname, node=self, **kwargs) elif eventname == "unscheduled": self.notify_inproc(eventname, node=self, **kwargs) elif eventname == "logwarning": self.notify_inproc( eventname, message=kwargs["message"], code=kwargs["code"], nodeid=kwargs["nodeid"], fslocation=kwargs["nodeid"], ) elif eventname == "warning_captured": warning_message = unserialize_warning_message( kwargs["warning_message_data"] ) self.notify_inproc( eventname, warning_message=warning_message, when=kwargs["when"], item=kwargs["item"], ) elif eventname == "warning_recorded": warning_message = unserialize_warning_message( kwargs["warning_message_data"] ) self.notify_inproc( eventname, warning_message=warning_message, when=kwargs["when"], 
nodeid=kwargs["nodeid"], location=kwargs["location"], ) else: raise ValueError(f"unknown event: {eventname}") except KeyboardInterrupt: # should not land in receiver-thread raise except: # noqa from _pytest._code import ExceptionInfo excinfo = ExceptionInfo.from_current() print("!" * 20, excinfo) self.config.notify_exception(excinfo) self.shutdown() self.notify_inproc("errordown", node=self, error=excinfo) def unserialize_warning_message(data): import warnings import importlib if data["message_module"]: mod = importlib.import_module(data["message_module"]) cls = getattr(mod, data["message_class_name"]) message = None if data["message_args"] is not None: try: message = cls(*data["message_args"]) except TypeError: pass if message is None: # could not recreate the original warning instance; # create a generic Warning instance with the original # message at least message_text = "{mod}.{cls}: {msg}".format( mod=data["message_module"], cls=data["message_class_name"], msg=data["message_str"], ) message = Warning(message_text) else: message = data["message_str"] if data["category_module"]: mod = importlib.import_module(data["category_module"]) category = getattr(mod, data["category_class_name"]) else: category = None kwargs = {"message": message, "category": category} # access private _WARNING_DETAILS because the attributes vary between Python versions for attr_name in warnings.WarningMessage._WARNING_DETAILS: # type: ignore[attr-defined] if attr_name in ("message", "category"): continue kwargs[attr_name] = data[attr_name] return warnings.WarningMessage(**kwargs) # type: ignore[arg-type] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1699716733.6995263 pytest-xdist-3.4.0/testing/0000755000175100001770000000000014523717176015276 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/testing/acceptance_test.py0000644000175100001770000014636614523717146021012 0ustar00runnerdockerimport os import re import shutil from typing import Dict from typing import List from typing import Tuple import pytest import xdist class TestDistribution: def test_n1_pass(self, pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ def test_ok(): pass """ ) result = pytester.runpytest(p1, "-n1") assert result.ret == 0 result.stdout.fnmatch_lines(["*1 passed*"]) def test_n1_fail(self, pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ def test_fail(): assert 0 """ ) result = pytester.runpytest(p1, "-n1") assert result.ret == 1 result.stdout.fnmatch_lines(["*1 failed*"]) def test_n1_import_error(self, pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ import __import_of_missing_module def test_import(): pass """ ) result = pytester.runpytest(p1, "-n1") assert result.ret == 1 result.stdout.fnmatch_lines( ["E *Error: No module named *__import_of_missing_module*"] ) def test_n2_import_error(self, pytester: pytest.Pytester) -> None: """Check that we don't report the same import error multiple times in distributed mode.""" p1 = pytester.makepyfile( """ import __import_of_missing_module def test_import(): pass """ ) result1 = pytester.runpytest(p1, "-n2") result2 = pytester.runpytest(p1, "-n1") assert len(result1.stdout.lines) == len(result2.stdout.lines) def test_n1_skip(self, pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ def test_skip(): import pytest pytest.skip("myreason") """ ) result = pytester.runpytest(p1, "-n1") assert result.ret == 0 
result.stdout.fnmatch_lines(["*1 skipped*"]) def test_manytests_to_one_import_error(self, pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ import __import_of_missing_module def test_import(): pass """ ) result = pytester.runpytest(p1, "--tx=popen", "--tx=popen") assert result.ret in (1, 2) result.stdout.fnmatch_lines( ["E *Error: No module named *__import_of_missing_module*"] ) def test_manytests_to_one_popen(self, pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ import pytest def test_fail0(): assert 0 def test_fail1(): raise ValueError() def test_ok(): pass def test_skip(): pytest.skip("hello") """ ) result = pytester.runpytest(p1, "-v", "-d", "--tx=popen", "--tx=popen") result.stdout.fnmatch_lines( [ "created: 2/2 workers", "*2 failed, 1 passed, 1 skipped*", ] ) assert result.ret == 1 def test_exitfail_waits_for_workers_to_finish( self, pytester: pytest.Pytester ) -> None: """The DSession waits for workers before exiting early on failure. When -x/--exitfail is set, the DSession wait for the workers to finish before raising an Interrupt exception. This prevents reports from the faiing test and other tests from being discarded. """ p1 = pytester.makepyfile( """ import time def test_fail1(): time.sleep(0.1) assert 0 def test_fail2(): time.sleep(0.2) def test_fail3(): time.sleep(0.3) assert 0 def test_fail4(): time.sleep(0.3) def test_fail5(): time.sleep(0.3) def test_fail6(): time.sleep(0.3) """ ) result = pytester.runpytest(p1, "-x", "-rA", "-v", "-n2") assert result.ret == 2 result.stdout.re_match_lines([".*Interrupted: stopping.*[12].*"]) m = re.search(r"== (\d+) failed, (\d+) passed in ", str(result.stdout)) assert m n_failed, n_passed = (int(s) for s in m.groups()) assert 1 <= n_failed <= 2 assert 1 <= n_passed <= 3 assert (n_passed + n_failed) < 6 def test_basetemp_in_subprocesses(self, pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ def test_send(tmp_path): from pathlib import Path assert tmp_path.relative_to(Path(%r)), tmp_path """ % str(pytester.path) ) result = pytester.runpytest_subprocess(p1, "-n1") assert result.ret == 0 result.stdout.fnmatch_lines(["*1 passed*"]) def test_dist_ini_specified(self, pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ import pytest def test_fail0(): assert 0 def test_fail1(): raise ValueError() def test_ok(): pass def test_skip(): pytest.skip("hello") """ ) pytester.makeini( """ [pytest] addopts = --tx=3*popen """ ) result = pytester.runpytest(p1, "-d", "-v") result.stdout.fnmatch_lines( [ "created: 3/3 workers", "*2 failed, 1 passed, 1 skipped*", ] ) assert result.ret == 1 def test_dist_tests_with_crash(self, pytester: pytest.Pytester) -> None: if not hasattr(os, "kill"): pytest.skip("no os.kill") p1 = pytester.makepyfile( """ import pytest def test_fail0(): assert 0 def test_fail1(): raise ValueError() def test_ok(): pass def test_skip(): pytest.skip("hello") def test_crash(): import time import os time.sleep(0.5) os.kill(os.getpid(), 15) """ ) result = pytester.runpytest(p1, "-v", "-d", "-n1") result.stdout.fnmatch_lines( [ "*Python*", "*PASS**test_ok*", "*node*down*", "*3 failed, 1 passed, 1 skipped*", ] ) assert result.ret == 1 def test_distribution_rsyncdirs_example( self, pytester: pytest.Pytester, monkeypatch ) -> None: # use a custom plugin that has a custom command-line option to ensure # this is propagated to workers (see #491) pytester.makepyfile( **{ "myplugin/src/foobarplugin.py": """ from __future__ import print_function import os import sys import pytest def 
pytest_addoption(parser): parser.addoption("--foobar", action="store", dest="foobar_opt") @pytest.hookimpl(tryfirst=True) def pytest_load_initial_conftests(early_config): opt = early_config.known_args_namespace.foobar_opt print("--foobar=%s active! [%s]" % (opt, os.getpid()), file=sys.stderr) """ } ) assert (pytester.path / "myplugin/src/foobarplugin.py").is_file() monkeypatch.setenv( "PYTHONPATH", str(pytester.path / "myplugin/src"), prepend=os.pathsep ) source = pytester.mkdir("source") dest = pytester.mkdir("dest") subdir = source / "example_pkg" subdir.mkdir() subdir.joinpath("__init__.py").touch() p = subdir / "test_one.py" p.write_text("def test_5():\n assert not __file__.startswith(%r)" % str(p)) result = pytester.runpytest_subprocess( "-v", "-d", "-s", "-pfoobarplugin", "--foobar=123", "--dist=load", "--rsyncdir=%(subdir)s" % locals(), "--tx=popen//chdir=%(dest)s" % locals(), p, ) assert result.ret == 0 result.stdout.fnmatch_lines( [ "*1 passed*", ] ) result.stderr.fnmatch_lines(["--foobar=123 active! *"]) assert dest.joinpath(subdir.name).is_dir() def test_data_exchange(self, pytester: pytest.Pytester) -> None: pytester.makeconftest( """ # This hook only called on the controlling process. def pytest_configure_node(node): node.workerinput['a'] = 42 node.workerinput['b'] = 7 def pytest_configure(config): # this attribute is only set on workers if hasattr(config, 'workerinput'): a = config.workerinput['a'] b = config.workerinput['b'] r = a + b config.workeroutput['r'] = r # This hook only called on the controlling process. def pytest_testnodedown(node, error): node.config.calc_result = node.workeroutput['r'] def pytest_terminal_summary(terminalreporter): if not hasattr(terminalreporter.config, 'workerinput'): calc_result = terminalreporter.config.calc_result terminalreporter._tw.sep('-', 'calculated result is %s' % calc_result) """ ) p1 = pytester.makepyfile("def test_func(): pass") result = pytester.runpytest("-v", p1, "-d", "--tx=popen") result.stdout.fnmatch_lines( [ "created: 1/1 worker", "*calculated result is 49*", "*1 passed*", ] ) assert result.ret == 0 def test_keyboardinterrupt_hooks_issue79(self, pytester: pytest.Pytester) -> None: pytester.makepyfile( __init__="", test_one=""" def test_hello(): raise KeyboardInterrupt() """, ) pytester.makeconftest( """ def pytest_sessionfinish(session): # on the worker if hasattr(session.config, 'workeroutput'): session.config.workeroutput['s2'] = 42 # on the controller def pytest_testnodedown(node, error): assert node.workeroutput['s2'] == 42 print ("s2call-finished") """ ) args = ["-n1", "--debug"] result = pytester.runpytest_subprocess(*args) s = result.stdout.str() assert result.ret == 2 assert "s2call" in s assert "Interrupted" in s def test_keyboard_interrupt_dist(self, pytester: pytest.Pytester) -> None: # xxx could be refined to check for return code pytester.makepyfile( """ def test_sleep(): import time time.sleep(10) """ ) child = pytester.spawn_pytest("-n1 -v", expect_timeout=30.0) child.expect(".*test_sleep.*") child.kill(2) # keyboard interrupt child.expect(".*KeyboardInterrupt.*") # child.expect(".*seconds.*") child.close() # assert ret == 2 def test_dist_with_collectonly(self, pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ def test_ok(): pass """ ) result = pytester.runpytest(p1, "-n1", "--collect-only") assert result.ret == 0 result.stdout.fnmatch_lines(["*collected 1 item*"]) class TestDistEach: def test_simple(self, pytester: pytest.Pytester) -> None: pytester.makepyfile( """ def test_hello(): pass 
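            # (added note: with --dist=each every collected test runs once on
            # every configured --tx node, so the two popen workers above are
            # expected to report "*2 pass*" for this single test)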
""" ) result = pytester.runpytest_subprocess("--debug", "--dist=each", "--tx=2*popen") assert not result.ret result.stdout.fnmatch_lines(["*2 pass*"]) @pytest.mark.xfail( run=False, reason="other python versions might not have pytest installed" ) def test_simple_diffoutput(self, pytester: pytest.Pytester) -> None: interpreters = [] for name in ("python2.5", "python2.6"): interp = shutil.which(name) if interp is None: pytest.skip("%s not found" % name) interpreters.append(interp) pytester.makepyfile( __init__="", test_one=""" import sys def test_hello(): print("%s...%s" % sys.version_info[:2]) assert 0 """, ) args = ["--dist=each", "-v"] args += ["--tx", "popen//python=%s" % interpreters[0]] args += ["--tx", "popen//python=%s" % interpreters[1]] result = pytester.runpytest(*args) s = result.stdout.str() assert "2...5" in s assert "2...6" in s class TestTerminalReporting: @pytest.mark.parametrize("verbosity", ["", "-q", "-v"]) def test_output_verbosity(self, pytester, verbosity: str) -> None: pytester.makepyfile( """ def test_ok(): pass """ ) args = ["-n1"] if verbosity: args.append(verbosity) result = pytester.runpytest(*args) out = result.stdout.str() if verbosity == "-v": assert "scheduling tests" in out assert "1 worker [1 item]" in out elif verbosity == "-q": assert "scheduling tests" not in out assert "gw" not in out assert "bringing up nodes..." in out else: assert "scheduling tests" not in out assert "1 worker [1 item]" in out def test_pass_skip_fail(self, pytester: pytest.Pytester) -> None: pytester.makepyfile( """ import pytest def test_ok(): pass def test_skip(): pytest.skip("xx") def test_func(): assert 0 """ ) result = pytester.runpytest("-n1", "-v") result.stdout.fnmatch_lines_random( [ "*PASS*test_pass_skip_fail.py*test_ok*", "*SKIP*test_pass_skip_fail.py*test_skip*", "*FAIL*test_pass_skip_fail.py*test_func*", ] ) result.stdout.fnmatch_lines( ["*def test_func():", "> assert 0", "E assert 0"] ) def test_fail_platinfo(self, pytester: pytest.Pytester) -> None: pytester.makepyfile( """ def test_func(): assert 0 """ ) result = pytester.runpytest("-n1", "-v") result.stdout.fnmatch_lines( [ "*FAIL*test_fail_platinfo.py*test_func*", "*0*Python*", "*def test_func():", "> assert 0", "E assert 0", ] ) def test_logfinish_hook(self, pytester: pytest.Pytester) -> None: """Ensure the pytest_runtest_logfinish hook is being properly handled""" pytester.makeconftest( """ def pytest_runtest_logfinish(): print('pytest_runtest_logfinish hook called') """ ) pytester.makepyfile( """ def test_func(): pass """ ) result = pytester.runpytest("-n1", "-s") result.stdout.fnmatch_lines(["*pytest_runtest_logfinish hook called*"]) def test_teardownfails_one_function(pytester: pytest.Pytester) -> None: p = pytester.makepyfile( """ def test_func(): pass def teardown_function(function): assert 0 """ ) result = pytester.runpytest(p, "-n1", "--tx=popen") result.stdout.fnmatch_lines( ["*def teardown_function(function):*", "*1 passed*1 error*"] ) @pytest.mark.xfail def test_terminate_on_hangingnode(pytester: pytest.Pytester) -> None: p = pytester.makeconftest( """ def pytest_sessionfinish(session): if session.nodeid == "my": # running on worker import time time.sleep(3) """ ) result = pytester.runpytest(p, "--dist=each", "--tx=popen//id=my") assert result.duration < 2.0 result.stdout.fnmatch_lines(["*killed*my*"]) @pytest.mark.xfail(reason="works if run outside test suite", run=False) def test_session_hooks(pytester: pytest.Pytester) -> None: pytester.makeconftest( """ import sys def pytest_sessionstart(session): 
sys.pytestsessionhooks = session def pytest_sessionfinish(session): if hasattr(session.config, 'workerinput'): name = "worker" else: name = "controller" with open(name, "w") as f: f.write("xy") # let's fail on the worker if name == "worker": raise ValueError(42) """ ) p = pytester.makepyfile( """ import sys def test_hello(): assert hasattr(sys, 'pytestsessionhooks') """ ) result = pytester.runpytest(p, "--dist=each", "--tx=popen") result.stdout.fnmatch_lines(["*ValueError*", "*1 passed*"]) assert not result.ret d = result.parseoutcomes() assert d["passed"] == 1 assert pytester.path.joinpath("worker").exists() assert pytester.path.joinpath("controller").exists() def test_session_testscollected(pytester: pytest.Pytester) -> None: """ Make sure controller node is updating the session object with the number of tests collected from the workers. """ pytester.makepyfile( test_foo=""" import pytest @pytest.mark.parametrize('i', range(3)) def test_ok(i): pass """ ) pytester.makeconftest( """ def pytest_sessionfinish(session): collected = getattr(session, 'testscollected', None) with open('testscollected', 'w') as f: f.write('collected = %s' % collected) """ ) result = pytester.inline_run("-n1") result.assertoutcome(passed=3) collected_file = pytester.path / "testscollected" assert collected_file.is_file() assert collected_file.read_text() == "collected = 3" def test_fixture_teardown_failure(pytester: pytest.Pytester) -> None: p = pytester.makepyfile( """ import pytest @pytest.fixture(scope="module") def myarg(request): yield 42 raise ValueError(42) def test_hello(myarg): pass """ ) result = pytester.runpytest_subprocess(p, "-n1") result.stdout.fnmatch_lines(["*ValueError*42*", "*1 passed*1 error*"]) assert result.ret def test_config_initialization( pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch, pytestconfig ) -> None: """Ensure workers and controller are initialized consistently. Integration test for #445""" pytester.makepyfile( **{ "dir_a/test_foo.py": """ def test_1(request): assert request.config.option.verbose == 2 """ } ) pytester.makefile( ".ini", myconfig=""" [pytest] testpaths=dir_a """, ) monkeypatch.setenv("PYTEST_ADDOPTS", "-v") result = pytester.runpytest("-n2", "-c", "myconfig.ini", "-v") result.stdout.fnmatch_lines(["dir_a/test_foo.py::test_1*", "*= 1 passed in *"]) assert result.ret == 0 @pytest.mark.parametrize("when", ["setup", "call", "teardown"]) def test_crashing_item(pytester, when) -> None: """Ensure crashing item is correctly reported during all testing stages""" code = dict(setup="", call="", teardown="") code[when] = "os._exit(1)" p = pytester.makepyfile( """ import os import pytest @pytest.fixture def fix(): {setup} yield {teardown} def test_crash(fix): {call} pass def test_ok(): pass """.format( **code ) ) passes = 2 if when == "teardown" else 1 result = pytester.runpytest("-n2", p) result.stdout.fnmatch_lines( ["*crashed*test_crash*", "*1 failed*%d passed*" % passes] ) def test_multiple_log_reports(pytester: pytest.Pytester) -> None: """ Ensure that pytest-xdist supports plugins that emit multiple logreports (#206). Inspired by pytest-rerunfailures. 
""" pytester.makeconftest( """ from _pytest.runner import runtestprotocol def pytest_runtest_protocol(item, nextitem): item.ihook.pytest_runtest_logstart(nodeid=item.nodeid, location=item.location) reports = runtestprotocol(item, nextitem=nextitem) for report in reports: item.ihook.pytest_runtest_logreport(report=report) return True """ ) pytester.makepyfile( """ def test(): pass """ ) result = pytester.runpytest("-n1") result.stdout.fnmatch_lines(["*2 passed*"]) def test_skipping(pytester: pytest.Pytester) -> None: p = pytester.makepyfile( """ import pytest def test_crash(): pytest.skip("hello") """ ) result = pytester.runpytest("-n1", "-rs", p) assert result.ret == 0 result.stdout.fnmatch_lines(["*hello*", "*1 skipped*"]) def test_fixture_scope_caching_issue503(pytester: pytest.Pytester) -> None: p1 = pytester.makepyfile( """ import pytest @pytest.fixture(scope='session') def fix(): assert fix.counter == 0, \ 'session fixture was invoked multiple times' fix.counter += 1 fix.counter = 0 def test_a(fix): pass def test_b(fix): pass """ ) result = pytester.runpytest(p1, "-v", "-n1") assert result.ret == 0 result.stdout.fnmatch_lines(["*2 passed*"]) def test_issue_594_random_parametrize(pytester: pytest.Pytester) -> None: """ Make sure that tests that are randomly parametrized display an appropriate error message, instead of silently skipping the entire test run. """ p1 = pytester.makepyfile( """ import pytest import random xs = list(range(10)) random.shuffle(xs) @pytest.mark.parametrize('x', xs) def test_foo(x): assert 1 """ ) result = pytester.runpytest(p1, "-v", "-n4") assert result.ret == 1 result.stdout.fnmatch_lines(["Different tests were collected between gw* and gw*"]) def test_tmpdir_disabled(pytester: pytest.Pytester) -> None: """Test xdist doesn't break if internal tmpdir plugin is disabled (#22).""" p1 = pytester.makepyfile( """ def test_ok(): pass """ ) result = pytester.runpytest(p1, "-n1", "-p", "no:tmpdir") assert result.ret == 0 result.stdout.fnmatch_lines("*1 passed*") @pytest.mark.parametrize("plugin", ["xdist.looponfail"]) def test_sub_plugins_disabled(pytester, plugin) -> None: """Test that xdist doesn't break if we disable any of its sub-plugins. (#32)""" p1 = pytester.makepyfile( """ def test_ok(): pass """ ) result = pytester.runpytest(p1, "-n1", "-p", f"no:{plugin}") assert result.ret == 0 result.stdout.fnmatch_lines("*1 passed*") class TestWarnings: @pytest.mark.parametrize("n", ["-n0", "-n1"]) def test_warnings(self, pytester, n) -> None: pytester.makepyfile( """ import warnings, py, pytest @pytest.mark.filterwarnings('ignore:config.warn has been deprecated') def test_func(request): warnings.warn(UserWarning('this is a warning')) """ ) result = pytester.runpytest(n) result.stdout.fnmatch_lines(["*this is a warning*", "*1 passed, 1 warning*"]) def test_warning_captured_deprecated_in_pytest_6( self, pytester: pytest.Pytester ) -> None: """ Do not trigger the deprecated pytest_warning_captured hook in pytest 6+ (#562) """ from _pytest import hookspec if not hasattr(hookspec, "pytest_warning_captured"): pytest.skip( f"pytest {pytest.__version__} does not have the pytest_warning_captured hook." 
) pytester.makeconftest( """ def pytest_warning_captured(warning_message): if warning_message == "my custom worker warning": assert False, ( "this hook should not be called from workers " "in this version: {}" ).format(warning_message) """ ) pytester.makepyfile( """ import warnings def test(): warnings.warn("my custom worker warning") """ ) result = pytester.runpytest("-n1", "-Wignore") result.stdout.fnmatch_lines(["*1 passed*"]) result.stdout.no_fnmatch_line("*this hook should not be called in this version") @pytest.mark.parametrize("n", ["-n0", "-n1"]) def test_custom_subclass(self, pytester, n) -> None: """Check that warning subclasses that don't honor the args attribute don't break pytest-xdist (#344) """ pytester.makepyfile( """ import warnings, py, pytest class MyWarning(UserWarning): def __init__(self, p1, p2): self.p1 = p1 self.p2 = p2 self.args = () def test_func(request): warnings.warn(MyWarning("foo", 1)) """ ) pytester.syspathinsert() result = pytester.runpytest(n) result.stdout.fnmatch_lines(["*MyWarning*", "*1 passed, 1 warning*"]) @pytest.mark.parametrize("n", ["-n0", "-n1"]) def test_unserializable_arguments(self, pytester, n) -> None: """Check that warnings with unserializable arguments are handled correctly (#349).""" pytester.makepyfile( """ import warnings, pytest def test_func(tmp_path): fn = tmp_path / 'foo.txt' fn.touch() with fn.open('r') as f: warnings.warn(UserWarning("foo", f)) """ ) pytester.syspathinsert() result = pytester.runpytest(n) result.stdout.fnmatch_lines(["*UserWarning*foo.txt*", "*1 passed, 1 warning*"]) @pytest.mark.parametrize("n", ["-n0", "-n1"]) def test_unserializable_warning_details(self, pytester, n) -> None: """Check that warnings with unserializable _WARNING_DETAILS are handled correctly (#379). """ pytester.makepyfile( """ import warnings, pytest import socket import gc def abuse_socket(): s = socket.socket() del s # Deliberately provoke a ResourceWarning for an unclosed socket. # The socket itself will end up attached as a value in # _WARNING_DETAIL. We need to test that it is not serialized # (it can't be, so the test will fail if we try to). 
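            # (added note: the live socket object lands in
            # WarningMessage.source, one of the _WARNING_DETAILS; xdist has
            # to drop such unserializable details instead of crashing the
            # worker, which the fnmatch assertions below verify.)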
@pytest.mark.filterwarnings('always') def test_func(tmp_path): abuse_socket() gc.collect() """ ) pytester.syspathinsert() result = pytester.runpytest(n) result.stdout.fnmatch_lines( ["*ResourceWarning*unclosed*", "*1 passed, 1 warning*"] ) class TestNodeFailure: def test_load_single(self, pytester: pytest.Pytester) -> None: f = pytester.makepyfile( """ import os def test_a(): os._exit(1) def test_b(): pass """ ) res = pytester.runpytest(f, "-n1") res.stdout.fnmatch_lines( [ "replacing crashed worker gw*", "worker*crashed while running*", "*1 failed*1 passed*", ] ) def test_load_multiple(self, pytester: pytest.Pytester) -> None: f = pytester.makepyfile( """ import os def test_a(): pass def test_b(): os._exit(1) def test_c(): pass def test_d(): pass """ ) res = pytester.runpytest(f, "-n2") res.stdout.fnmatch_lines( [ "replacing crashed worker gw*", "worker*crashed while running*", "*1 failed*3 passed*", ] ) def test_each_single(self, pytester: pytest.Pytester) -> None: f = pytester.makepyfile( """ import os def test_a(): os._exit(1) def test_b(): pass """ ) res = pytester.runpytest(f, "--dist=each", "--tx=popen") res.stdout.fnmatch_lines( [ "replacing crashed worker gw*", "worker*crashed while running*", "*1 failed*1 passed*", ] ) @pytest.mark.xfail(reason="#20: xdist race condition on node restart") def test_each_multiple(self, pytester: pytest.Pytester) -> None: f = pytester.makepyfile( """ import os def test_a(): os._exit(1) def test_b(): pass """ ) res = pytester.runpytest(f, "--dist=each", "--tx=2*popen") res.stdout.fnmatch_lines( [ "*Replacing crashed worker*", "*Worker*crashed while running*", "*2 failed*2 passed*", ] ) def test_max_worker_restart(self, pytester: pytest.Pytester) -> None: f = pytester.makepyfile( """ import os def test_a(): pass def test_b(): os._exit(1) def test_c(): os._exit(1) def test_d(): pass """ ) res = pytester.runpytest(f, "-n4", "--max-worker-restart=1") res.stdout.fnmatch_lines( [ "replacing crashed worker*", "maximum crashed workers reached: 1*", "worker*crashed while running*", "worker*crashed while running*", "*2 failed*2 passed*", ] ) def test_max_worker_restart_tests_queued(self, pytester: pytest.Pytester) -> None: f = pytester.makepyfile( """ import os, pytest @pytest.mark.parametrize('i', range(10)) def test(i): os._exit(1) """ ) res = pytester.runpytest(f, "-n2", "--max-worker-restart=3") res.stdout.fnmatch_lines( [ "replacing crashed worker*", "maximum crashed workers reached: 3*", "worker*crashed while running*", "worker*crashed while running*", "* xdist: maximum crashed workers reached: 3 *", "* 4 failed in *", ] ) assert "INTERNALERROR" not in res.stdout.str() def test_max_worker_restart_die(self, pytester: pytest.Pytester) -> None: f = pytester.makepyfile( """ import os os._exit(1) """ ) res = pytester.runpytest(f, "-n4", "--max-worker-restart=0") res.stdout.fnmatch_lines( [ "* xdist: worker gw* crashed and worker restarting disabled *", "* no tests ran in *", ] ) def test_disable_restart(self, pytester: pytest.Pytester) -> None: f = pytester.makepyfile( """ import os def test_a(): pass def test_b(): os._exit(1) def test_c(): pass """ ) res = pytester.runpytest(f, "-n4", "--max-worker-restart=0") res.stdout.fnmatch_lines( [ "worker gw* crashed and worker restarting disabled", "*worker*crashed while running*", "* xdist: worker gw* crashed and worker restarting disabled *", "* 1 failed, 2 passed in *", ] ) @pytest.mark.parametrize("n", [0, 2]) def test_worker_id_fixture(pytester, n) -> None: import glob f = pytester.makepyfile( """ import pytest 
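            # (added note: worker_id resolves to "gw0", "gw1", ... on xdist
            # workers and to "master" when running without workers (-n0);
            # the assertions after this file check exactly those values)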
@pytest.mark.parametrize("run_num", range(2)) def test_worker_id1(worker_id, run_num): with open("worker_id%s.txt" % run_num, "w") as f: f.write(worker_id) """ ) result = pytester.runpytest(f, "-n%d" % n) result.stdout.fnmatch_lines("* 2 passed in *") worker_ids = set() for fname in glob.glob(str(pytester.path / "*.txt")): with open(fname) as f: worker_ids.add(f.read().strip()) if n == 0: assert worker_ids == {"master"} else: assert worker_ids == {"gw0", "gw1"} @pytest.mark.parametrize("n", [0, 2]) def test_testrun_uid_fixture(pytester, n) -> None: import glob f = pytester.makepyfile( """ import pytest @pytest.mark.parametrize("run_num", range(2)) def test_testrun_uid1(testrun_uid, run_num): with open("testrun_uid%s.txt" % run_num, "w") as f: f.write(testrun_uid) """ ) result = pytester.runpytest(f, "-n%d" % n) result.stdout.fnmatch_lines("* 2 passed in *") testrun_uids = set() for fname in glob.glob(str(pytester.path / "*.txt")): with open(fname) as f: testrun_uids.add(f.read().strip()) assert len(testrun_uids) == 1 assert len(testrun_uids.pop()) == 32 @pytest.mark.parametrize("tb", ["auto", "long", "short", "no", "line", "native"]) def test_error_report_styles(pytester, tb) -> None: pytester.makepyfile( """ import pytest def test_error_report_styles(): raise RuntimeError('some failure happened') """ ) result = pytester.runpytest("-n1", "--tb=%s" % tb) if tb != "no": result.stdout.fnmatch_lines("*some failure happened*") result.assert_outcomes(failed=1) def test_color_yes_collection_on_non_atty(pytester, request) -> None: """skip collect progress report when working on non-terminals. Similar to pytest-dev/pytest#1397 """ tr = request.config.pluginmanager.getplugin("terminalreporter") if not hasattr(tr, "isatty"): pytest.skip("only valid for newer pytest versions") pytester.makepyfile( """ import pytest @pytest.mark.parametrize('i', range(10)) def test_this(i): assert 1 """ ) args = ["--color=yes", "-n2"] result = pytester.runpytest(*args) assert "test session starts" in result.stdout.str() assert "\x1b[1m" in result.stdout.str() assert "created: 2/2 workers" in result.stdout.str() assert "2 workers [10 items]" in result.stdout.str() assert "collecting:" not in result.stdout.str() def test_without_terminal_plugin(pytester, request) -> None: """ No output when terminal plugin is disabled """ pytester.makepyfile( """ def test_1(): pass """ ) result = pytester.runpytest("-p", "no:terminal", "-n2") assert result.stdout.str() == "" assert result.stderr.str() == "" assert result.ret == 0 def test_internal_error_with_maxfail(pytester: pytest.Pytester) -> None: """ Internal error when using --maxfail option (#62, #65). """ pytester.makepyfile( """ import pytest @pytest.fixture(params=['1', '2']) def crasher(): raise RuntimeError def test_aaa0(crasher): pass def test_aaa1(crasher): pass """ ) result = pytester.runpytest_subprocess("--maxfail=1", "-n1") result.stdout.re_match_lines([".* [12] errors? 
in .*"]) assert "INTERNALERROR" not in result.stderr.str() def test_internal_errors_propagate_to_controller(pytester: pytest.Pytester) -> None: pytester.makeconftest( """ def pytest_collection_modifyitems(): raise RuntimeError("Some runtime error") """ ) pytester.makepyfile("def test(): pass") result = pytester.runpytest("-n1") result.stdout.fnmatch_lines(["*RuntimeError: Some runtime error*"]) class TestLoadScope: def test_by_module(self, pytester: pytest.Pytester) -> None: test_file = """ import pytest @pytest.mark.parametrize('i', range(10)) def test(i): pass """ pytester.makepyfile(test_a=test_file, test_b=test_file) result = pytester.runpytest("-n2", "--dist=loadscope", "-v") assert get_workers_and_test_count_by_prefix( "test_a.py::test", result.outlines ) in ({"gw0": 10}, {"gw1": 10}) assert get_workers_and_test_count_by_prefix( "test_b.py::test", result.outlines ) in ({"gw0": 10}, {"gw1": 10}) def test_by_class(self, pytester: pytest.Pytester) -> None: pytester.makepyfile( test_a=""" import pytest class TestA: @pytest.mark.parametrize('i', range(10)) def test(self, i): pass class TestB: @pytest.mark.parametrize('i', range(10)) def test(self, i): pass """ ) result = pytester.runpytest("-n2", "--dist=loadscope", "-v") assert get_workers_and_test_count_by_prefix( "test_a.py::TestA", result.outlines ) in ({"gw0": 10}, {"gw1": 10}) assert get_workers_and_test_count_by_prefix( "test_a.py::TestB", result.outlines ) in ({"gw0": 10}, {"gw1": 10}) def test_module_single_start(self, pytester: pytest.Pytester) -> None: """Fix test suite never finishing in case all workers start with a single test (#277).""" test_file1 = """ import pytest def test(): pass """ test_file2 = """ import pytest def test_1(): pass def test_2(): pass """ pytester.makepyfile(test_a=test_file1, test_b=test_file1, test_c=test_file2) result = pytester.runpytest("-n2", "--dist=loadscope", "-v") a = get_workers_and_test_count_by_prefix("test_a.py::test", result.outlines) b = get_workers_and_test_count_by_prefix("test_b.py::test", result.outlines) c1 = get_workers_and_test_count_by_prefix("test_c.py::test_1", result.outlines) c2 = get_workers_and_test_count_by_prefix("test_c.py::test_2", result.outlines) assert a in ({"gw0": 1}, {"gw1": 1}) assert b in ({"gw0": 1}, {"gw1": 1}) assert a.items() != b.items() assert c1 == c2 class TestFileScope: def test_by_module(self, pytester: pytest.Pytester) -> None: test_file = """ import pytest class TestA: @pytest.mark.parametrize('i', range(10)) def test(self, i): pass class TestB: @pytest.mark.parametrize('i', range(10)) def test(self, i): pass """ pytester.makepyfile(test_a=test_file, test_b=test_file) result = pytester.runpytest("-n2", "--dist=loadfile", "-v") test_a_workers_and_test_count = get_workers_and_test_count_by_prefix( "test_a.py::TestA", result.outlines ) test_b_workers_and_test_count = get_workers_and_test_count_by_prefix( "test_b.py::TestB", result.outlines ) assert test_a_workers_and_test_count in ( {"gw0": 10}, {"gw1": 0}, ) or test_a_workers_and_test_count in ({"gw0": 0}, {"gw1": 10}) assert test_b_workers_and_test_count in ( {"gw0": 10}, {"gw1": 0}, ) or test_b_workers_and_test_count in ({"gw0": 0}, {"gw1": 10}) def test_by_class(self, pytester: pytest.Pytester) -> None: pytester.makepyfile( test_a=""" import pytest class TestA: @pytest.mark.parametrize('i', range(10)) def test(self, i): pass class TestB: @pytest.mark.parametrize('i', range(10)) def test(self, i): pass """ ) result = pytester.runpytest("-n2", "--dist=loadfile", "-v") test_a_workers_and_test_count = 
get_workers_and_test_count_by_prefix( "test_a.py::TestA", result.outlines ) test_b_workers_and_test_count = get_workers_and_test_count_by_prefix( "test_a.py::TestB", result.outlines ) assert test_a_workers_and_test_count in ( {"gw0": 10}, {"gw1": 0}, ) or test_a_workers_and_test_count in ({"gw0": 0}, {"gw1": 10}) assert test_b_workers_and_test_count in ( {"gw0": 10}, {"gw1": 0}, ) or test_b_workers_and_test_count in ({"gw0": 0}, {"gw1": 10}) def test_module_single_start(self, pytester: pytest.Pytester) -> None: """Fix test suite never finishing in case all workers start with a single test (#277).""" test_file1 = """ import pytest def test(): pass """ test_file2 = """ import pytest def test_1(): pass def test_2(): pass """ pytester.makepyfile(test_a=test_file1, test_b=test_file1, test_c=test_file2) result = pytester.runpytest("-n2", "--dist=loadfile", "-v") a = get_workers_and_test_count_by_prefix("test_a.py::test", result.outlines) b = get_workers_and_test_count_by_prefix("test_b.py::test", result.outlines) c1 = get_workers_and_test_count_by_prefix("test_c.py::test_1", result.outlines) c2 = get_workers_and_test_count_by_prefix("test_c.py::test_2", result.outlines) assert a in ({"gw0": 1}, {"gw1": 1}) assert b in ({"gw0": 1}, {"gw1": 1}) assert a.items() != b.items() assert c1 == c2 class TestGroupScope: def test_by_module(self, testdir): test_file = """ import pytest class TestA: @pytest.mark.xdist_group(name="xdist_group") @pytest.mark.parametrize('i', range(5)) def test(self, i): pass """ testdir.makepyfile(test_a=test_file, test_b=test_file) result = testdir.runpytest("-n2", "--dist=loadgroup", "-v") test_a_workers_and_test_count = get_workers_and_test_count_by_prefix( "test_a.py::TestA", result.outlines ) test_b_workers_and_test_count = get_workers_and_test_count_by_prefix( "test_b.py::TestA", result.outlines ) assert test_a_workers_and_test_count in ( {"gw0": 5}, {"gw1": 0}, ) or test_a_workers_and_test_count in ({"gw0": 0}, {"gw1": 5}) assert test_b_workers_and_test_count in ( {"gw0": 5}, {"gw1": 0}, ) or test_b_workers_and_test_count in ({"gw0": 0}, {"gw1": 5}) assert ( test_a_workers_and_test_count.items() == test_b_workers_and_test_count.items() ) def test_by_class(self, testdir): testdir.makepyfile( test_a=""" import pytest class TestA: @pytest.mark.xdist_group(name="xdist_group") @pytest.mark.parametrize('i', range(10)) def test(self, i): pass class TestB: @pytest.mark.xdist_group(name="xdist_group") @pytest.mark.parametrize('i', range(10)) def test(self, i): pass """ ) result = testdir.runpytest("-n2", "--dist=loadgroup", "-v") test_a_workers_and_test_count = get_workers_and_test_count_by_prefix( "test_a.py::TestA", result.outlines ) test_b_workers_and_test_count = get_workers_and_test_count_by_prefix( "test_a.py::TestB", result.outlines ) assert test_a_workers_and_test_count in ( {"gw0": 10}, {"gw1": 0}, ) or test_a_workers_and_test_count in ({"gw0": 0}, {"gw1": 10}) assert test_b_workers_and_test_count in ( {"gw0": 10}, {"gw1": 0}, ) or test_b_workers_and_test_count in ({"gw0": 0}, {"gw1": 10}) assert ( test_a_workers_and_test_count.items() == test_b_workers_and_test_count.items() ) def test_module_single_start(self, testdir): test_file1 = """ import pytest @pytest.mark.xdist_group(name="xdist_group") def test(): pass """ test_file2 = """ import pytest def test_1(): pass @pytest.mark.xdist_group(name="xdist_group") def test_2(): pass """ testdir.makepyfile(test_a=test_file1, test_b=test_file1, test_c=test_file2) result = testdir.runpytest("-n2", "--dist=loadgroup", "-v") a = 
get_workers_and_test_count_by_prefix("test_a.py::test", result.outlines) b = get_workers_and_test_count_by_prefix("test_b.py::test", result.outlines) c = get_workers_and_test_count_by_prefix("test_c.py::test_2", result.outlines) assert a.keys() == b.keys() and b.keys() == c.keys() def test_with_two_group_names(self, testdir): test_file = """ import pytest @pytest.mark.xdist_group(name="group1") def test_1(): pass @pytest.mark.xdist_group("group2") def test_2(): pass """ testdir.makepyfile(test_a=test_file, test_b=test_file) result = testdir.runpytest("-n2", "--dist=loadgroup", "-v") a_1 = get_workers_and_test_count_by_prefix("test_a.py::test_1", result.outlines) a_2 = get_workers_and_test_count_by_prefix("test_a.py::test_2", result.outlines) b_1 = get_workers_and_test_count_by_prefix("test_b.py::test_1", result.outlines) b_2 = get_workers_and_test_count_by_prefix("test_b.py::test_2", result.outlines) assert a_1.keys() == b_1.keys() and a_2.keys() == b_2.keys() class TestLocking: _test_content = """ class TestClassName%s(object): @classmethod def setup_class(cls): FILE_LOCK.acquire() @classmethod def teardown_class(cls): FILE_LOCK.release() def test_a(self): pass def test_b(self): pass def test_c(self): pass """ test_file1 = """ import filelock FILE_LOCK = filelock.FileLock("test.lock") """ + ( (_test_content * 4) % ("A", "B", "C", "D") ) @pytest.mark.parametrize("scope", ["each", "load", "loadscope", "loadfile", "no"]) def test_single_file(self, pytester, scope) -> None: pytester.makepyfile(test_a=self.test_file1) result = pytester.runpytest("-n2", "--dist=%s" % scope, "-v") result.assert_outcomes(passed=(12 if scope != "each" else 12 * 2)) @pytest.mark.parametrize("scope", ["each", "load", "loadscope", "loadfile", "no"]) def test_multi_file(self, pytester, scope) -> None: pytester.makepyfile( test_a=self.test_file1, test_b=self.test_file1, test_c=self.test_file1, test_d=self.test_file1, ) result = pytester.runpytest("-n2", "--dist=%s" % scope, "-v") result.assert_outcomes(passed=(48 if scope != "each" else 48 * 2)) def parse_tests_and_workers_from_output(lines: List[str]) -> List[Tuple[str, str, str]]: result = [] for line in lines: # example match: "[gw0] PASSED test_a.py::test[7]" m = re.match( r""" \[(gw\d)\] # worker \s* (?:\[\s*\d+%\])? # progress indicator \s(.*?) 
# status string ("PASSED") \s(.*::.*) # nodeid """, line.strip(), re.VERBOSE, ) if m: worker, status, nodeid = m.groups() result.append((worker, status, nodeid)) return result def get_workers_and_test_count_by_prefix( prefix: str, lines: List[str], expected_status: str = "PASSED" ) -> Dict[str, int]: result: Dict[str, int] = {} for worker, status, nodeid in parse_tests_and_workers_from_output(lines): if expected_status == status and nodeid.startswith(prefix): result[worker] = result.get(worker, 0) + 1 return result class TestAPI: @pytest.fixture def fake_request(self): class FakeOption: def __init__(self): self.dist = "load" class FakeConfig: def __init__(self): self.workerinput = {"workerid": "gw5"} self.option = FakeOption() class FakeRequest: def __init__(self): self.config = FakeConfig() return FakeRequest() def test_is_xdist_worker(self, fake_request) -> None: assert xdist.is_xdist_worker(fake_request) del fake_request.config.workerinput assert not xdist.is_xdist_worker(fake_request) def test_is_xdist_controller(self, fake_request) -> None: assert not xdist.is_xdist_master(fake_request) assert not xdist.is_xdist_controller(fake_request) del fake_request.config.workerinput assert xdist.is_xdist_master(fake_request) assert xdist.is_xdist_controller(fake_request) fake_request.config.option.dist = "no" assert not xdist.is_xdist_master(fake_request) assert not xdist.is_xdist_controller(fake_request) def test_get_xdist_worker_id(self, fake_request) -> None: assert xdist.get_xdist_worker_id(fake_request) == "gw5" del fake_request.config.workerinput assert xdist.get_xdist_worker_id(fake_request) == "master" def test_collection_crash(testdir): p1 = testdir.makepyfile( """ assert 0 """ ) result = testdir.runpytest(p1, "-n1") assert result.ret == 1 result.stdout.fnmatch_lines( [ "created: 1/1 worker", "1 worker [[]0 items[]]", "*_ ERROR collecting test_collection_crash.py _*", "E assert 0", "*= 1 error in *", ] ) def test_dist_in_addopts(testdir): """Users can set a default distribution in the configuration file (#789).""" testdir.makepyfile( """ def test(): pass """ ) testdir.makeini( """ [pytest] addopts = --dist loadscope """ ) result = testdir.runpytest() assert result.ret == 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/testing/conftest.py0000644000175100001770000000261614523717146017477 0ustar00runnerdockerimport execnet import pytest import shutil from typing import List pytest_plugins = "pytester" @pytest.fixture(autouse=True) def _divert_atexit(request, monkeypatch: pytest.MonkeyPatch): import atexit finalizers = [] def fake_register(func, *args, **kwargs): finalizers.append((func, args, kwargs)) monkeypatch.setattr(atexit, "register", fake_register) yield while finalizers: func, args, kwargs = finalizers.pop() func(*args, **kwargs) def pytest_addoption(parser) -> None: parser.addoption( "--gx", action="append", dest="gspecs", help="add a global test environment, XSpec-syntax. 
", ) @pytest.fixture def specssh(request) -> str: return getspecssh(request.config) # configuration information for tests def getgspecs(config) -> List[execnet.XSpec]: return [execnet.XSpec(spec) for spec in config.getvalueorskip("gspecs")] def getspecssh(config) -> str: # type: ignore[return] xspecs = getgspecs(config) for spec in xspecs: if spec.ssh: if not shutil.which("ssh"): pytest.skip("command not found: ssh") return str(spec) pytest.skip("need '--gx ssh=...'") def getsocketspec(config) -> execnet.XSpec: xspecs = getgspecs(config) for spec in xspecs: if spec.socket: return spec pytest.skip("need '--gx socket=...'") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/testing/test_dsession.py0000644000175100001770000005177214523717146020547 0ustar00runnerdockerfrom __future__ import annotations from xdist.dsession import ( DSession, get_default_max_worker_restart, get_workers_status_line, WorkerStatus, ) from xdist.report import report_collection_diff from xdist.scheduler import EachScheduling, LoadScheduling, WorkStealingScheduling from typing import Sequence import pytest import execnet class MockGateway: def __init__(self) -> None: self._count = 0 self.id = str(self._count) self._count += 1 class MockNode: def __init__(self) -> None: self.sent = [] # type: ignore[var-annotated] self.stolen = [] # type: ignore[var-annotated] self.gateway = MockGateway() self._shutdown = False def send_runtest_some(self, indices) -> None: self.sent.extend(indices) def send_runtest_all(self) -> None: self.sent.append("ALL") def send_steal(self, indices) -> None: self.stolen.extend(indices) def shutdown(self) -> None: self._shutdown = True @property def shutting_down(self) -> bool: return self._shutdown class TestEachScheduling: def test_schedule_load_simple(self, pytester: pytest.Pytester) -> None: node1 = MockNode() node2 = MockNode() config = pytester.parseconfig("--tx=2*popen") sched = EachScheduling(config) sched.add_node(node1) sched.add_node(node2) collection = ["a.py::test_1"] assert not sched.collection_is_completed sched.add_node_collection(node1, collection) assert not sched.collection_is_completed sched.add_node_collection(node2, collection) assert sched.collection_is_completed assert sched.node2collection[node1] == collection assert sched.node2collection[node2] == collection sched.schedule() assert sched.tests_finished assert node1.sent == ["ALL"] assert node2.sent == ["ALL"] sched.mark_test_complete(node1, 0) assert sched.tests_finished sched.mark_test_complete(node2, 0) assert sched.tests_finished def test_schedule_remove_node(self, pytester: pytest.Pytester) -> None: node1 = MockNode() config = pytester.parseconfig("--tx=popen") sched = EachScheduling(config) sched.add_node(node1) collection = ["a.py::test_1"] assert not sched.collection_is_completed sched.add_node_collection(node1, collection) assert sched.collection_is_completed assert sched.node2collection[node1] == collection sched.schedule() assert sched.tests_finished crashitem = sched.remove_node(node1) assert crashitem assert sched.tests_finished assert not sched.nodes class TestLoadScheduling: def test_schedule_load_simple(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfig("--tx=2*popen") sched = LoadScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2 = sched.nodes collection = ["a.py::test_1", "a.py::test_2"] assert not sched.collection_is_completed sched.add_node_collection(node1, collection) assert not 
sched.collection_is_completed sched.add_node_collection(node2, collection) assert sched.collection_is_completed assert sched.node2collection[node1] == collection assert sched.node2collection[node2] == collection sched.schedule() assert not sched.pending assert sched.tests_finished assert len(node1.sent) == 1 assert len(node2.sent) == 1 assert node1.sent == [0] assert node2.sent == [1] sched.mark_test_complete(node1, node1.sent[0]) assert sched.tests_finished def test_schedule_batch_size(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfig("--tx=2*popen") sched = LoadScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2 = sched.nodes col = ["xyz"] * 6 sched.add_node_collection(node1, col) sched.add_node_collection(node2, col) sched.schedule() # assert not sched.tests_finished sent1 = node1.sent sent2 = node2.sent assert sent1 == [0, 1] assert sent2 == [2, 3] assert sched.pending == [4, 5] assert sched.node2pending[node1] == sent1 assert sched.node2pending[node2] == sent2 assert len(sched.pending) == 2 sched.mark_test_complete(node1, 0) assert node1.sent == [0, 1, 4] assert sched.pending == [5] assert node2.sent == [2, 3] sched.mark_test_complete(node1, 1) assert node1.sent == [0, 1, 4, 5] assert not sched.pending def test_schedule_maxchunk_none(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfig("--tx=2*popen") sched = LoadScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2 = sched.nodes col = [f"test{i}" for i in range(16)] sched.add_node_collection(node1, col) sched.add_node_collection(node2, col) sched.schedule() assert node1.sent == [0, 1] assert node2.sent == [2, 3] assert sched.pending == list(range(4, 16)) assert sched.node2pending[node1] == node1.sent assert sched.node2pending[node2] == node2.sent sched.mark_test_complete(node1, 0) assert node1.sent == [0, 1, 4, 5] assert sched.pending == list(range(6, 16)) sched.mark_test_complete(node1, 1) assert node1.sent == [0, 1, 4, 5] assert sched.pending == list(range(6, 16)) for i in range(7, 16): sched.mark_test_complete(node1, i - 3) assert node1.sent == [0, 1] + list(range(4, i)) assert node2.sent == [2, 3] assert sched.pending == list(range(i, 16)) def test_schedule_maxchunk_1(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfig("--tx=2*popen", "--maxschedchunk=1") sched = LoadScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2 = sched.nodes col = [f"test{i}" for i in range(16)] sched.add_node_collection(node1, col) sched.add_node_collection(node2, col) sched.schedule() assert node1.sent == [0, 1] assert node2.sent == [2, 3] assert sched.pending == list(range(4, 16)) assert sched.node2pending[node1] == node1.sent assert sched.node2pending[node2] == node2.sent for complete_index, first_pending in enumerate(range(5, 16)): sched.mark_test_complete(node1, node1.sent[complete_index]) assert node1.sent == [0, 1] + list(range(4, first_pending)) assert node2.sent == [2, 3] assert sched.pending == list(range(first_pending, 16)) def test_schedule_fewer_tests_than_nodes(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfig("--tx=3*popen") sched = LoadScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2, node3 = sched.nodes col = ["xyz"] * 2 sched.add_node_collection(node1, col) sched.add_node_collection(node2, col) sched.add_node_collection(node3, col) assert sched.collection_is_completed sched.schedule() # assert 
not sched.tests_finished assert node1.sent == [0] assert node2.sent == [1] assert node3.sent == [] assert not sched.pending def test_schedule_fewer_than_two_tests_per_node( self, pytester: pytest.Pytester ) -> None: config = pytester.parseconfig("--tx=3*popen") sched = LoadScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2, node3 = sched.nodes col = ["xyz"] * 5 sched.add_node_collection(node1, col) sched.add_node_collection(node2, col) sched.add_node_collection(node3, col) assert sched.collection_is_completed sched.schedule() # assert not sched.tests_finished assert node1.sent == [0, 3] assert node2.sent == [1, 4] assert node3.sent == [2] assert not sched.pending def test_add_remove_node(self, pytester: pytest.Pytester) -> None: node = MockNode() config = pytester.parseconfig("--tx=popen") sched = LoadScheduling(config) sched.add_node(node) collection = ["test_file.py::test_func"] sched.add_node_collection(node, collection) assert sched.collection_is_completed sched.schedule() assert not sched.pending crashitem = sched.remove_node(node) assert crashitem == collection[0] def test_different_tests_collected(self, pytester: pytest.Pytester) -> None: """ Test that LoadScheduling is reporting collection errors when different test ids are collected by workers. """ class CollectHook: """ Dummy hook that stores collection reports. """ def __init__(self): self.reports = [] def pytest_collectreport(self, report): self.reports.append(report) collect_hook = CollectHook() config = pytester.parseconfig("--tx=2*popen") config.pluginmanager.register(collect_hook, "collect_hook") node1 = MockNode() node2 = MockNode() sched = LoadScheduling(config) sched.add_node(node1) sched.add_node(node2) sched.add_node_collection(node1, ["a.py::test_1"]) sched.add_node_collection(node2, ["a.py::test_2"]) sched.schedule() assert len(collect_hook.reports) == 1 rep = collect_hook.reports[0] assert "Different tests were collected between" in rep.longrepr class TestWorkStealingScheduling: def test_ideal_case(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfig("--tx=2*popen") sched = WorkStealingScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2 = sched.nodes collection = [f"test_workstealing.py::test_{i}" for i in range(16)] assert not sched.collection_is_completed sched.add_node_collection(node1, collection) assert not sched.collection_is_completed sched.add_node_collection(node2, collection) assert sched.collection_is_completed assert sched.node2collection[node1] == collection assert sched.node2collection[node2] == collection sched.schedule() assert not sched.pending assert not sched.tests_finished assert node1.sent == list(range(0, 8)) assert node2.sent == list(range(8, 16)) for i in range(8): sched.mark_test_complete(node1, node1.sent[i]) sched.mark_test_complete(node2, node2.sent[i]) assert sched.tests_finished assert node1.stolen == [] assert node2.stolen == [] def test_stealing(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfig("--tx=2*popen") sched = WorkStealingScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2 = sched.nodes collection = [f"test_workstealing.py::test_{i}" for i in range(16)] sched.add_node_collection(node1, collection) sched.add_node_collection(node2, collection) assert sched.collection_is_completed sched.schedule() assert node1.sent == list(range(0, 8)) assert node2.sent == list(range(8, 16)) for i in range(8): 
sched.mark_test_complete(node1, node1.sent[i]) assert node2.stolen == list(range(12, 16)) sched.remove_pending_tests_from_node(node2, node2.stolen) for i in range(4): sched.mark_test_complete(node2, node2.sent[i]) assert node1.stolen == [14, 15] sched.remove_pending_tests_from_node(node1, node1.stolen) sched.mark_test_complete(node1, 12) sched.mark_test_complete(node2, 14) assert node2.stolen == list(range(12, 16)) assert node1.stolen == [14, 15] assert sched.tests_finished def test_steal_on_add_node(self, pytester: pytest.Pytester) -> None: node = MockNode() config = pytester.parseconfig("--tx=popen") sched = WorkStealingScheduling(config) sched.add_node(node) collection = [f"test_workstealing.py::test_{i}" for i in range(5)] sched.add_node_collection(node, collection) assert sched.collection_is_completed sched.schedule() assert not sched.pending sched.mark_test_complete(node, 0) node2 = MockNode() sched.add_node(node2) sched.add_node_collection(node2, collection) assert sched.collection_is_completed sched.schedule() assert node.stolen == [3, 4] sched.remove_pending_tests_from_node(node, node.stolen) sched.mark_test_complete(node, 1) sched.mark_test_complete(node2, 3) assert sched.tests_finished assert node2.stolen == [] def test_schedule_fewer_tests_than_nodes(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfig("--tx=3*popen") sched = WorkStealingScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2, node3 = sched.nodes col = ["xyz"] * 2 sched.add_node_collection(node1, col) sched.add_node_collection(node2, col) sched.add_node_collection(node3, col) sched.schedule() assert node1.sent == [] assert node1.stolen == [] assert node2.sent == [0] assert node2.stolen == [] assert node3.sent == [1] assert node3.stolen == [] assert not sched.pending assert sched.tests_finished def test_schedule_fewer_than_two_tests_per_node( self, pytester: pytest.Pytester ) -> None: config = pytester.parseconfig("--tx=3*popen") sched = WorkStealingScheduling(config) sched.add_node(MockNode()) sched.add_node(MockNode()) sched.add_node(MockNode()) node1, node2, node3 = sched.nodes col = ["xyz"] * 5 sched.add_node_collection(node1, col) sched.add_node_collection(node2, col) sched.add_node_collection(node3, col) sched.schedule() assert node1.sent == [0] assert node2.sent == [1, 2] assert node3.sent == [3, 4] assert not sched.pending assert not sched.tests_finished sched.mark_test_complete(node1, node1.sent[0]) sched.mark_test_complete(node2, node2.sent[0]) sched.mark_test_complete(node3, node3.sent[0]) sched.mark_test_complete(node3, node3.sent[1]) assert sched.tests_finished assert node1.stolen == [] assert node2.stolen == [] assert node3.stolen == [] def test_add_remove_node(self, pytester: pytest.Pytester) -> None: node = MockNode() config = pytester.parseconfig("--tx=popen") sched = WorkStealingScheduling(config) sched.add_node(node) collection = ["test_file.py::test_func"] sched.add_node_collection(node, collection) assert sched.collection_is_completed sched.schedule() assert not sched.pending crashitem = sched.remove_node(node) assert crashitem == collection[0] def test_different_tests_collected(self, pytester: pytest.Pytester) -> None: class CollectHook: def __init__(self): self.reports = [] def pytest_collectreport(self, report): self.reports.append(report) collect_hook = CollectHook() config = pytester.parseconfig("--tx=2*popen") config.pluginmanager.register(collect_hook, "collect_hook") node1 = MockNode() node2 = MockNode() 
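# (minimal sketch of the MockNode helper these scheduler tests rely on; the
# real implementation is defined earlier in this file and may differ in
# detail) a mock node just records what the scheduler asks of it:
#
#     class MockNode:
#         def __init__(self) -> None:
#             self.sent = []    # indices received via send_runtest_some()
#             self.stolen = []  # indices reclaimed via send_steal()
#
#         def send_runtest_some(self, indices) -> None:
#             self.sent.extend(indices)
#
#         def send_steal(self, indices) -> None:
#             self.stolen.extend(indices)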
sched = WorkStealingScheduling(config) sched.add_node(node1) sched.add_node(node2) sched.add_node_collection(node1, ["a.py::test_1"]) sched.add_node_collection(node2, ["a.py::test_2"]) sched.schedule() assert len(collect_hook.reports) == 1 rep = collect_hook.reports[0] assert "Different tests were collected between" in rep.longrepr class TestDistReporter: @pytest.mark.xfail def test_rsync_printing(self, pytester: pytest.Pytester, linecomp) -> None: config = pytester.parseconfig() from _pytest.terminal import TerminalReporter rep = TerminalReporter(config, file=linecomp.stringio) config.pluginmanager.register(rep, "terminalreporter") dsession = DSession(config) class gw1: id = "X1" spec = execnet.XSpec("popen") class gw2: id = "X2" spec = execnet.XSpec("popen") # class rinfo: # version_info = (2, 5, 1, 'final', 0) # executable = "hello" # platform = "xyz" # cwd = "qwe" # dsession.pytest_xdist_newgateway(gw1, rinfo) # linecomp.assert_contains_lines([ # "*X1*popen*xyz*2.5*" # ]) dsession.pytest_xdist_rsyncstart(source="hello", gateways=[gw1, gw2]) # type: ignore[attr-defined] linecomp.assert_contains_lines(["[X1,X2] rsyncing: hello"]) def test_report_collection_diff_equal() -> None: """Test reporting of equal collections.""" from_collection = to_collection = ["aaa", "bbb", "ccc"] assert report_collection_diff(from_collection, to_collection, 1, 2) is None def test_default_max_worker_restart() -> None: class config: class option: maxworkerrestart: str | None = None numprocesses: int = 0 assert get_default_max_worker_restart(config) is None config.option.numprocesses = 2 assert get_default_max_worker_restart(config) == 8 config.option.maxworkerrestart = "1" assert get_default_max_worker_restart(config) == 1 config.option.maxworkerrestart = "0" assert get_default_max_worker_restart(config) == 0 def test_report_collection_diff_different() -> None: """Test reporting of different collections.""" from_collection = ["aaa", "bbb", "ccc", "YYY"] to_collection = ["aZa", "bbb", "XXX", "ccc"] error_message = ( "Different tests were collected between 1 and 2. 
The difference is:\n" "--- 1\n" "\n" "+++ 2\n" "\n" "@@ -1,4 +1,4 @@\n" "\n" "-aaa\n" "+aZa\n" " bbb\n" "+XXX\n" " ccc\n" "-YYY\n" "To see why this happens see Known limitations in documentation" ) msg = report_collection_diff(from_collection, to_collection, "1", "2") assert msg == error_message @pytest.mark.xfail(reason="duplicate test ids not supported yet") def test_pytest_issue419(pytester: pytest.Pytester) -> None: pytester.makepyfile( """ import pytest @pytest.mark.parametrize('birth_year', [1988, 1988, ]) def test_2011_table(birth_year): pass """ ) reprec = pytester.inline_run("-n1") reprec.assertoutcome(passed=2) assert 0 Created = WorkerStatus.Created Initialized = WorkerStatus.Initialized ReadyForCollection = WorkerStatus.ReadyForCollection CollectionDone = WorkerStatus.CollectionDone @pytest.mark.parametrize( "status_and_items, expected", [ ( [], "", ), ( [(Created, 0)], "created: 1/1 worker", ), ( [(Created, 0), (Created, 0)], "created: 2/2 workers", ), ( [(Initialized, 0), (Created, 0)], "initialized: 1/2 workers", ), ( [(Initialized, 0), (Initialized, 0)], "initialized: 2/2 workers", ), ( [(ReadyForCollection, 0), (Created, 0)], "ready: 1/2 workers", ), ( [(ReadyForCollection, 0), (ReadyForCollection, 0)], "ready: 2/2 workers", ), ( [(CollectionDone, 12), (Created, 0)], "collecting: 1/2 workers", ), ( [(CollectionDone, 12), (CollectionDone, 12)], "2 workers [12 items]", ), ( [(CollectionDone, 1), (CollectionDone, 1)], "2 workers [1 item]", ), ( [(CollectionDone, 1)], "1 worker [1 item]", ), # Different number of tests collected will raise an error and should not happen in practice, # but we test for it anyway. ( [(CollectionDone, 1), (CollectionDone, 12)], "2 workers [1 item]", ), ], ) def test_get_workers_status_line( status_and_items: Sequence[tuple[WorkerStatus, int]], expected: str ) -> None: assert get_workers_status_line(status_and_items) == expected ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/testing/test_looponfail.py0000644000175100001770000002610314523717146021050 0ustar00runnerdockerimport unittest.mock from typing import List import pytest import shutil import textwrap from pathlib import Path from xdist.looponfail import RemoteControl from xdist.looponfail import StatRecorder PYTEST_GTE_7 = hasattr(pytest, "version_tuple") and pytest.version_tuple >= (7, 0) # type: ignore[attr-defined] class TestStatRecorder: def test_filechange(self, tmp_path: Path) -> None: tmp = tmp_path hello = tmp / "hello.py" hello.touch() sd = StatRecorder([tmp]) changed = sd.check() assert not changed hello.write_text("world") changed = sd.check() assert changed hello.with_suffix(".pyc").write_text("hello") changed = sd.check() assert not changed p = tmp / "new.py" p.touch() changed = sd.check() assert changed p.unlink() changed = sd.check() assert changed tmp.joinpath("a", "b").mkdir(parents=True) tmp.joinpath("a", "b", "c.py").touch() changed = sd.check() assert changed tmp.joinpath("a", "c.txt").touch() changed = sd.check() assert changed changed = sd.check() assert not changed shutil.rmtree(str(tmp.joinpath("a"))) changed = sd.check() assert changed def test_dirchange(self, tmp_path: Path) -> None: tmp = tmp_path tmp.joinpath("dir").mkdir() tmp.joinpath("dir", "hello.py").touch() sd = StatRecorder([tmp]) assert not sd.fil(tmp / "dir") def test_filechange_deletion_race(self, tmp_path: Path) -> None: tmp = tmp_path sd = StatRecorder([tmp]) changed = sd.check() assert not changed p = tmp.joinpath("new.py") p.touch() 
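# (illustrative sketch, not part of the original test) StatRecorder.check()
# presumably walks the watched directories and stat()s every file it finds,
# roughly:
#
#     for dirname, dirnames, filenames in os.walk(str(root)):
#         for name in filenames:
#             os.stat(os.path.join(dirname, name))  # file may already be gone
#
# A file deleted between the os.walk() listing and the os.stat() call must
# still be reported as a change; that is the race the mock below provokes.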
changed = sd.check() assert changed p.unlink() # make check()'s visit() call return our just removed # path as if we were in a race condition dirname = str(tmp) dirnames: List[str] = [] filenames = [str(p)] with unittest.mock.patch( "os.walk", return_value=[(dirname, dirnames, filenames)], autospec=True ): changed = sd.check() assert changed def test_pycremoval(self, tmp_path: Path) -> None: tmp = tmp_path hello = tmp / "hello.py" hello.touch() sd = StatRecorder([tmp]) changed = sd.check() assert not changed pycfile = hello.with_suffix(".pyc") pycfile.touch() hello.write_text("world") changed = sd.check() assert changed assert not pycfile.exists() def test_waitonchange( self, tmp_path: Path, monkeypatch: pytest.MonkeyPatch ) -> None: tmp = tmp_path sd = StatRecorder([tmp]) ret_values = [True, False] monkeypatch.setattr(StatRecorder, "check", lambda self: ret_values.pop()) sd.waitonchange(checkinterval=0.2) assert not ret_values class TestRemoteControl: def test_nofailures(self, pytester: pytest.Pytester) -> None: item = pytester.getitem("def test_func(): pass\n") control = RemoteControl(item.config) control.setup() topdir, failures = control.runsession()[:2] assert not failures def test_failures_somewhere(self, pytester: pytest.Pytester) -> None: item = pytester.getitem("def test_func():\n assert 0\n") control = RemoteControl(item.config) control.setup() failures = control.runsession() assert failures control.setup() item_path = item.path if PYTEST_GTE_7 else Path(str(item.fspath)) # type: ignore[attr-defined] item_path.write_text("def test_func():\n assert 1\n") removepyc(item_path) topdir, failures = control.runsession()[:2] assert not failures def test_failure_change(self, pytester: pytest.Pytester) -> None: modcol = pytester.getitem( textwrap.dedent( """ def test_func(): assert 0 """ ) ) control = RemoteControl(modcol.config) control.loop_once() assert control.failures if PYTEST_GTE_7: modcol_path = modcol.path # type:ignore[attr-defined] else: modcol_path = Path(str(modcol.fspath)) modcol_path.write_text( textwrap.dedent( """ def test_func(): assert 1 def test_new(): assert 0 """ ) ) removepyc(modcol_path) control.loop_once() assert not control.failures control.loop_once() assert control.failures assert str(control.failures).find("test_new") != -1 def test_failure_subdir_no_init( self, pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch ) -> None: modcol = pytester.getitem( textwrap.dedent( """ def test_func(): assert 0 """ ) ) if PYTEST_GTE_7: parent = modcol.path.parent.parent # type: ignore[attr-defined] else: parent = Path(modcol.fspath.dirpath().dirpath()) monkeypatch.chdir(parent) modcol.config.args = [ str(Path(x).relative_to(parent)) for x in modcol.config.args ] control = RemoteControl(modcol.config) control.loop_once() assert control.failures control.loop_once() assert control.failures class TestLooponFailing: def test_looponfail_from_fail_to_ok(self, pytester: pytest.Pytester) -> None: modcol = pytester.getmodulecol( textwrap.dedent( """ def test_one(): x = 0 assert x == 1 def test_two(): assert 1 """ ) ) remotecontrol = RemoteControl(modcol.config) remotecontrol.loop_once() assert len(remotecontrol.failures) == 1 modcol_path = modcol.path if PYTEST_GTE_7 else Path(modcol.fspath) modcol_path.write_text( textwrap.dedent( """ def test_one(): assert 1 def test_two(): assert 1 """ ) ) removepyc(modcol_path) remotecontrol.loop_once() assert not remotecontrol.failures def test_looponfail_from_one_to_two_tests(self, pytester: pytest.Pytester) -> None: modcol = 
pytester.getmodulecol( textwrap.dedent( """ def test_one(): assert 0 """ ) ) remotecontrol = RemoteControl(modcol.config) remotecontrol.loop_once() assert len(remotecontrol.failures) == 1 assert "test_one" in remotecontrol.failures[0] modcol_path = modcol.path if PYTEST_GTE_7 else Path(modcol.fspath) modcol_path.write_text( textwrap.dedent( """ def test_one(): assert 1 # passes now def test_two(): assert 0 # new and fails """ ) ) removepyc(modcol_path) remotecontrol.loop_once() assert len(remotecontrol.failures) == 0 remotecontrol.loop_once() assert len(remotecontrol.failures) == 1 assert "test_one" not in remotecontrol.failures[0] assert "test_two" in remotecontrol.failures[0] @pytest.mark.xfail(reason="broken by pytest 3.1+", strict=True) def test_looponfail_removed_test(self, pytester: pytest.Pytester) -> None: modcol = pytester.getmodulecol( textwrap.dedent( """ def test_one(): assert 0 def test_two(): assert 0 """ ) ) remotecontrol = RemoteControl(modcol.config) remotecontrol.loop_once() assert len(remotecontrol.failures) == 2 modcol.path.write_text( textwrap.dedent( """ def test_xxx(): # renamed test assert 0 def test_two(): assert 1 # pass now """ ) ) removepyc(modcol.path) remotecontrol.loop_once() assert len(remotecontrol.failures) == 0 remotecontrol.loop_once() assert len(remotecontrol.failures) == 1 def test_looponfail_multiple_errors( self, pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch ) -> None: modcol = pytester.getmodulecol( textwrap.dedent( """ def test_one(): assert 0 """ ) ) remotecontrol = RemoteControl(modcol.config) orig_runsession = remotecontrol.runsession def runsession_dups(): # twisted.trial test cases may report multiple errors. failures, reports, collection_failed = orig_runsession() print(failures) return failures * 2, reports, collection_failed monkeypatch.setattr(remotecontrol, "runsession", runsession_dups) remotecontrol.loop_once() assert len(remotecontrol.failures) == 1 class TestFunctional: def test_fail_to_ok(self, pytester: pytest.Pytester) -> None: p = pytester.makepyfile( textwrap.dedent( """ def test_one(): x = 0 assert x == 1 """ ) ) # p = pytester.mkdir("sub").join(p1.basename) # p1.move(p) child = pytester.spawn_pytest("-f %s --traceconfig" % p, expect_timeout=30.0) child.expect("def test_one") child.expect("x == 1") child.expect("1 failed") child.expect("### LOOPONFAILING ####") child.expect("waiting for changes") p.write_text( textwrap.dedent( """ def test_one(): x = 1 assert x == 1 """ ), ) child.expect(".*1 passed.*") child.kill(15) def test_xfail_passes(self, pytester: pytest.Pytester) -> None: p = pytester.makepyfile( textwrap.dedent( """ import pytest @pytest.mark.xfail def test_one(): pass """ ) ) child = pytester.spawn_pytest("-f %s" % p, expect_timeout=30.0) child.expect("1 xpass") # child.expect("### LOOPONFAILING ####") child.expect("waiting for changes") child.kill(15) def removepyc(path: Path) -> None: # XXX damn those pyc files pyc = path.with_suffix(".pyc") if pyc.exists(): pyc.unlink() c = path.parent / "__pycache__" if c.exists(): shutil.rmtree(c) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/testing/test_newhooks.py0000644000175100001770000001055614523717146020550 0ustar00runnerdockerimport pytest class TestHooks: @pytest.fixture(autouse=True) def create_test_file(self, pytester: pytest.Pytester) -> None: pytester.makepyfile( """ import os def test_a(): pass def test_b(): pass def test_c(): pass """ ) def test_runtest_logreport(self, pytester: 
pytest.Pytester) -> None: """Test that log reports from pytest_runtest_logreport when running with xdist contain "node", "nodeid", "worker_id", and "testrun_uid" attributes. (#8) """ pytester.makeconftest( """ def pytest_runtest_logreport(report): if hasattr(report, 'node'): if report.when == "call": workerid = report.node.workerinput['workerid'] testrunuid = report.node.workerinput['testrunuid'] if workerid != report.worker_id: print("HOOK: Worker id mismatch: %s %s" % (workerid, report.worker_id)) elif testrunuid != report.testrun_uid: print("HOOK: Testrun uid mismatch: %s %s" % (testrunuid, report.testrun_uid)) else: print("HOOK: %s %s %s" % (report.nodeid, report.worker_id, report.testrun_uid)) """ ) res = pytester.runpytest("-n1", "-s") res.stdout.fnmatch_lines( [ "*HOOK: test_runtest_logreport.py::test_a gw0 *", "*HOOK: test_runtest_logreport.py::test_b gw0 *", "*HOOK: test_runtest_logreport.py::test_c gw0 *", "*3 passed*", ] ) def test_node_collection_finished(self, pytester: pytest.Pytester) -> None: """Test pytest_xdist_node_collection_finished hook (#8).""" pytester.makeconftest( """ def pytest_xdist_node_collection_finished(node, ids): workerid = node.workerinput['workerid'] stripped_ids = [x.split('::')[1] for x in ids] print("HOOK: %s %s" % (workerid, ', '.join(stripped_ids))) """ ) res = pytester.runpytest("-n2", "-s") res.stdout.fnmatch_lines_random( ["*HOOK: gw0 test_a, test_b, test_c", "*HOOK: gw1 test_a, test_b, test_c"] ) res.stdout.fnmatch_lines(["*3 passed*"]) class TestCrashItem: @pytest.fixture(autouse=True) def create_test_file(self, pytester: pytest.Pytester) -> None: pytester.makepyfile( """ import os def test_a(): pass def test_b(): os._exit(1) def test_c(): pass def test_d(): pass """ ) def test_handlecrashitem(self, pytester: pytest.Pytester) -> None: """Test pytest_handlecrashitem hook.""" pytester.makeconftest( """ test_runs = 0 def pytest_handlecrashitem(crashitem, report, sched): global test_runs if test_runs == 0: sched.mark_test_pending(crashitem) test_runs = 1 else: print("HOOK: pytest_handlecrashitem") """ ) res = pytester.runpytest("-n2", "-s") res.stdout.fnmatch_lines_random(["*HOOK: pytest_handlecrashitem"]) res.stdout.fnmatch_lines(["*3 passed*"]) def test_handlecrashitem_one(self, pytester: pytest.Pytester) -> None: """Test pytest_handlecrashitem hook with just one test.""" pytester.makeconftest( """ test_runs = 0 def pytest_handlecrashitem(crashitem, report, sched): global test_runs if test_runs == 0: sched.mark_test_pending(crashitem) test_runs = 1 else: print("HOOK: pytest_handlecrashitem") """ ) res = pytester.runpytest("-n1", "-s", "-k", "test_b") res.stdout.fnmatch_lines_random(["*HOOK: pytest_handlecrashitem"]) res.stdout.fnmatch_lines( [ "FAILED test_handlecrashitem_one.py::test_b", "FAILED test_handlecrashitem_one.py::test_b", ] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/testing/test_plugin.py0000644000175100001770000002530214523717146020204 0ustar00runnerdockerfrom contextlib import suppress from pathlib import Path import sys import os import execnet from xdist.workermanage import NodeManager import pytest @pytest.fixture def monkeypatch_3_cpus(monkeypatch: pytest.MonkeyPatch): """Make pytest-xdist believe the system has 3 CPUs""" # block import monkeypatch.setitem(sys.modules, "psutil", None) # type: ignore monkeypatch.delattr(os, "sched_getaffinity", raising=False) monkeypatch.setattr(os, "cpu_count", lambda: 3) def test_dist_incompatibility_messages(pytester: 
pytest.Pytester) -> None: result = pytester.runpytest("--pdb", "--looponfail") assert result.ret != 0 result = pytester.runpytest("--pdb", "-n", "3") assert result.ret != 0 assert "incompatible" in result.stderr.str() result = pytester.runpytest("--pdb", "-d", "--tx", "popen") assert result.ret != 0 assert "incompatible" in result.stderr.str() def test_dist_options(pytester: pytest.Pytester) -> None: from xdist.plugin import pytest_cmdline_main as check_options config = pytester.parseconfigure("-n 2") check_options(config) assert config.option.dist == "load" assert config.option.tx == ["popen"] * 2 config = pytester.parseconfigure("--numprocesses", "2") check_options(config) assert config.option.dist == "load" assert config.option.tx == ["popen"] * 2 config = pytester.parseconfigure("--numprocesses", "3", "--maxprocesses", "2") check_options(config) assert config.option.dist == "load" assert config.option.tx == ["popen"] * 2 config = pytester.parseconfigure("-d") check_options(config) assert config.option.dist == "load" def test_auto_detect_cpus( pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch ) -> None: from xdist.plugin import pytest_cmdline_main as check_options monkeypatch.delenv("PYTEST_XDIST_AUTO_NUM_WORKERS", raising=False) with suppress(ImportError): import psutil monkeypatch.setattr(psutil, "cpu_count", lambda logical=True: None) if hasattr(os, "sched_getaffinity"): monkeypatch.setattr(os, "sched_getaffinity", lambda _pid: set(range(99))) elif hasattr(os, "cpu_count"): monkeypatch.setattr(os, "cpu_count", lambda: 99) else: import multiprocessing monkeypatch.setattr(multiprocessing, "cpu_count", lambda: 99) config = pytester.parseconfigure("-n2") assert config.getoption("numprocesses") == 2 config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 99 config = pytester.parseconfigure("-nauto", "--pdb") check_options(config) assert config.getoption("usepdb") assert config.getoption("numprocesses") == 0 assert config.getoption("dist") == "no" config = pytester.parseconfigure("-nlogical", "--pdb") check_options(config) assert config.getoption("usepdb") assert config.getoption("numprocesses") == 0 assert config.getoption("dist") == "no" monkeypatch.delattr(os, "sched_getaffinity", raising=False) monkeypatch.setenv("TRAVIS", "true") config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 2 def test_auto_detect_cpus_psutil( pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch ) -> None: from xdist.plugin import pytest_cmdline_main as check_options psutil = pytest.importorskip("psutil") monkeypatch.delenv("PYTEST_XDIST_AUTO_NUM_WORKERS", raising=False) monkeypatch.setattr(psutil, "cpu_count", lambda logical=True: 84 if logical else 42) config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 42 config = pytester.parseconfigure("-nlogical") check_options(config) assert config.getoption("numprocesses") == 84 def test_auto_detect_cpus_os( pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch, monkeypatch_3_cpus ) -> None: from xdist.plugin import pytest_cmdline_main as check_options monkeypatch.delenv("PYTEST_XDIST_AUTO_NUM_WORKERS", raising=False) config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 3 config = pytester.parseconfigure("-nlogical") check_options(config) assert config.getoption("numprocesses") == 3 def test_hook_auto_num_workers( pytester: 
pytest.Pytester, monkeypatch: pytest.MonkeyPatch ) -> None: from xdist.plugin import pytest_cmdline_main as check_options pytester.makeconftest( """ def pytest_xdist_auto_num_workers(): return 42 """ ) config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 42 config = pytester.parseconfigure("-nlogical") check_options(config) assert config.getoption("numprocesses") == 42 def test_hook_auto_num_workers_arg( pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch ) -> None: # config.option.numprocesses is a pytest feature, # but we document it so let's test it. from xdist.plugin import pytest_cmdline_main as check_options pytester.makeconftest( """ def pytest_xdist_auto_num_workers(config): if config.option.numprocesses == 'auto': return 42 if config.option.numprocesses == 'logical': return 8 """ ) config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 42 config = pytester.parseconfigure("-nlogical") check_options(config) assert config.getoption("numprocesses") == 8 def test_hook_auto_num_workers_none( pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch, monkeypatch_3_cpus ) -> None: # Returning None from a hook to skip it is pytest behavior, # but we document it so let's test it. from xdist.plugin import pytest_cmdline_main as check_options monkeypatch.delenv("PYTEST_XDIST_AUTO_NUM_WORKERS", raising=False) pytester.makeconftest( """ def pytest_xdist_auto_num_workers(): return None """ ) config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 3 monkeypatch.setenv("PYTEST_XDIST_AUTO_NUM_WORKERS", "5") config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 5 def test_envvar_auto_num_workers( pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch ) -> None: from xdist.plugin import pytest_cmdline_main as check_options monkeypatch.setenv("PYTEST_XDIST_AUTO_NUM_WORKERS", "7") config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 7 config = pytester.parseconfigure("-nlogical") check_options(config) assert config.getoption("numprocesses") == 7 def test_envvar_auto_num_workers_warn( pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch, monkeypatch_3_cpus ) -> None: from xdist.plugin import pytest_cmdline_main as check_options monkeypatch.setenv("PYTEST_XDIST_AUTO_NUM_WORKERS", "fourscore") config = pytester.parseconfigure("-nauto") with pytest.warns(UserWarning): check_options(config) assert config.getoption("numprocesses") == 3 def test_auto_num_workers_hook_overrides_envvar( pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch, monkeypatch_3_cpus ) -> None: from xdist.plugin import pytest_cmdline_main as check_options monkeypatch.setenv("PYTEST_XDIST_AUTO_NUM_WORKERS", "987") pytester.makeconftest( """ def pytest_xdist_auto_num_workers(): return 2 """ ) config = pytester.parseconfigure("-nauto") check_options(config) assert config.getoption("numprocesses") == 2 config = pytester.parseconfigure("-nlogical") check_options(config) assert config.getoption("numprocesses") == 2 def test_dsession_with_collect_only(pytester: pytest.Pytester) -> None: from xdist.plugin import pytest_cmdline_main as check_options from xdist.plugin import pytest_configure as configure config = pytester.parseconfigure("-n1") check_options(config) configure(config) assert config.pluginmanager.hasplugin("dsession") config = 
pytester.parseconfigure("-n1", "--collect-only") check_options(config) configure(config) assert not config.pluginmanager.hasplugin("dsession") def test_testrunuid_provided(pytester: pytest.Pytester) -> None: config = pytester.parseconfigure("--testrunuid", "test123", "--tx=popen") nm = NodeManager(config) assert nm.testrunuid == "test123" def test_testrunuid_generated(pytester: pytest.Pytester) -> None: config = pytester.parseconfigure("--tx=popen") nm = NodeManager(config) assert len(nm.testrunuid) == 32 class TestDistOptions: def test_getxspecs(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfigure("--tx=popen", "--tx", "ssh=xyz") nodemanager = NodeManager(config) xspecs = nodemanager._getxspecs() assert len(xspecs) == 2 print(xspecs) assert xspecs[0].popen assert xspecs[1].ssh == "xyz" def test_xspecs_multiplied(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfigure("--tx=3*popen") xspecs = NodeManager(config)._getxspecs() assert len(xspecs) == 3 assert xspecs[1].popen def test_getrsyncdirs(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfigure("--rsyncdir=" + str(pytester.path)) nm = NodeManager(config, specs=[execnet.XSpec("popen")]) assert not nm._getrsyncdirs() nm = NodeManager(config, specs=[execnet.XSpec("popen//chdir=qwe")]) assert nm.roots assert pytester.path in nm.roots def test_getrsyncignore(self, pytester: pytest.Pytester) -> None: config = pytester.parseconfigure("--rsyncignore=fo*") nm = NodeManager(config, specs=[execnet.XSpec("popen//chdir=qwe")]) assert "fo*" in nm.rsyncoptions["ignores"] def test_getrsyncdirs_with_conftest(self, pytester: pytest.Pytester) -> None: p = Path.cwd() for bn in ("x", "y", "z"): p.joinpath(bn).mkdir() pytester.makeini( """ [pytest] rsyncdirs= x """ ) config = pytester.parseconfigure(pytester.path, "--rsyncdir=y", "--rsyncdir=z") nm = NodeManager(config, specs=[execnet.XSpec("popen//chdir=xyz")]) roots = nm._getrsyncdirs() # assert len(roots) == 3 + 1 # pylib assert Path("y").resolve() in roots assert Path("z").resolve() in roots assert pytester.path.joinpath("x") in roots ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/testing/test_remote.py0000644000175100001770000003066314523717146020207 0ustar00runnerdockerimport pprint import pytest import sys import uuid from xdist.workermanage import WorkerController import execnet import marshal from queue import Queue WAIT_TIMEOUT = 10.0 def check_marshallable(d): try: marshal.dumps(d) except ValueError: pprint.pprint(d) raise ValueError("not marshallable") class EventCall: def __init__(self, eventcall): self.name, self.kwargs = eventcall def __str__(self): return f"" class WorkerSetup: def __init__(self, request, pytester: pytest.Pytester) -> None: self.request = request self.pytester = pytester self.use_callback = False self.events = Queue() # type: ignore[var-annotated] def setup(self) -> None: self.pytester.chdir() # import os ; os.environ['EXECNET_DEBUG'] = "2" self.gateway = execnet.makegateway() self.config = config = self.pytester.parseconfigure() putevent = self.events.put if self.use_callback else None class DummyMananger: testrunuid = uuid.uuid4().hex specs = [0, 1] self.slp = WorkerController(DummyMananger, self.gateway, config, putevent) self.request.addfinalizer(self.slp.ensure_teardown) self.slp.setup() def popevent(self, name=None): while 1: if self.use_callback: data = self.events.get(timeout=WAIT_TIMEOUT) else: data = 
self.slp.channel.receive(timeout=WAIT_TIMEOUT) ev = EventCall(data) if name is None or ev.name == name: return ev print(f"skipping {ev}") def sendcommand(self, name, **kwargs): self.slp.sendcommand(name, **kwargs) @pytest.fixture def worker(request, pytester: pytest.Pytester) -> WorkerSetup: return WorkerSetup(request, pytester) @pytest.mark.xfail(reason="#59") def test_remoteinitconfig(pytester: pytest.Pytester) -> None: from xdist.remote import remote_initconfig config1 = pytester.parseconfig() config2 = remote_initconfig(config1.option.__dict__, config1.args) assert config2.option.__dict__ == config1.option.__dict__ assert config2.pluginmanager.getplugin("terminal") in (-1, None) class TestWorkerInteractor: @pytest.fixture def unserialize_report(self, pytestconfig): def unserialize(data): return pytestconfig.hook.pytest_report_from_serializable( config=pytestconfig, data=data ) return unserialize def test_basic_collect_and_runtests( self, worker: WorkerSetup, unserialize_report ) -> None: worker.pytester.makepyfile( """ def test_func(): pass """ ) worker.setup() ev = worker.popevent() assert ev.name == "workerready" ev = worker.popevent() assert ev.name == "collectionstart" assert not ev.kwargs ev = worker.popevent("collectionfinish") assert ev.kwargs["topdir"] == str(worker.pytester.path) ids = ev.kwargs["ids"] assert len(ids) == 1 worker.sendcommand("runtests", indices=list(range(len(ids)))) worker.sendcommand("shutdown") ev = worker.popevent("logstart") assert ev.kwargs["nodeid"].endswith("test_func") assert len(ev.kwargs["location"]) == 3 ev = worker.popevent("testreport") # setup ev = worker.popevent("testreport") assert ev.name == "testreport" rep = unserialize_report(ev.kwargs["data"]) assert rep.nodeid.endswith("::test_func") assert rep.passed assert rep.when == "call" ev = worker.popevent("workerfinished") assert "workeroutput" in ev.kwargs def test_remote_collect_skip(self, worker: WorkerSetup, unserialize_report) -> None: worker.pytester.makepyfile( """ import pytest pytest.skip("hello", allow_module_level=True) """ ) worker.setup() ev = worker.popevent("collectionstart") assert not ev.kwargs ev = worker.popevent() assert ev.name == "collectreport" rep = unserialize_report(ev.kwargs["data"]) assert rep.skipped assert rep.longrepr[2] == "Skipped: hello" ev = worker.popevent("collectionfinish") assert not ev.kwargs["ids"] def test_remote_collect_fail(self, worker: WorkerSetup, unserialize_report) -> None: worker.pytester.makepyfile("""aasd qwe""") worker.setup() ev = worker.popevent("collectionstart") assert not ev.kwargs ev = worker.popevent() assert ev.name == "collectreport" rep = unserialize_report(ev.kwargs["data"]) assert rep.failed ev = worker.popevent("collectionfinish") assert not ev.kwargs["ids"] def test_runtests_all(self, worker: WorkerSetup, unserialize_report) -> None: worker.pytester.makepyfile( """ def test_func(): pass def test_func2(): pass """ ) worker.setup() ev = worker.popevent() assert ev.name == "workerready" ev = worker.popevent() assert ev.name == "collectionstart" assert not ev.kwargs ev = worker.popevent("collectionfinish") ids = ev.kwargs["ids"] assert len(ids) == 2 worker.sendcommand("runtests_all") worker.sendcommand("shutdown") for func in "::test_func", "::test_func2": for i in range(3): # setup/call/teardown ev = worker.popevent("testreport") assert ev.name == "testreport" rep = unserialize_report(ev.kwargs["data"]) assert rep.nodeid.endswith(func) ev = worker.popevent("workerfinished") assert "workeroutput" in ev.kwargs def 
test_happy_run_events_converted( self, pytester: pytest.Pytester, worker: WorkerSetup ) -> None: pytest.xfail("implement a simple test for event production") assert not worker.use_callback # type: ignore[unreachable] worker.pytester.makepyfile( """ def test_func(): pass """ ) worker.setup() hookrec = pytester.getreportrecorder(worker.config) for data in worker.slp.channel: worker.slp.process_from_remote(data) worker.slp.process_from_remote(worker.slp.ENDMARK) pprint.pprint(hookrec.hookrecorder.calls) hookrec.hookrecorder.contains( [ ("pytest_collectstart", "collector.fspath == aaa"), ("pytest_pycollect_makeitem", "name == 'test_func'"), ("pytest_collectreport", "report.collector.fspath == aaa"), ("pytest_collectstart", "collector.fspath == bbb"), ("pytest_pycollect_makeitem", "name == 'test_func'"), ("pytest_collectreport", "report.collector.fspath == bbb"), ] ) def test_process_from_remote_error_handling( self, worker: WorkerSetup, capsys: pytest.CaptureFixture[str] ) -> None: worker.use_callback = True worker.setup() worker.slp.process_from_remote(("<nonono>", ())) out, err = capsys.readouterr() assert "INTERNALERROR> ValueError: unknown event: <nonono>" in out ev = worker.popevent() assert ev.name == "errordown" def test_steal_work(self, worker: WorkerSetup, unserialize_report) -> None: worker.pytester.makepyfile( """ import time def test_func(): time.sleep(1) def test_func2(): pass def test_func3(): pass def test_func4(): pass """ ) worker.setup() ev = worker.popevent("collectionfinish") ids = ev.kwargs["ids"] assert len(ids) == 4 worker.sendcommand("runtests_all") # wait for test_func setup ev = worker.popevent("testreport") rep = unserialize_report(ev.kwargs["data"]) assert rep.nodeid.endswith("::test_func") assert rep.when == "setup" worker.sendcommand("steal", indices=[1, 2]) ev = worker.popevent("unscheduled") assert ev.kwargs["indices"] == [2] reports = [ ("test_func", "call"), ("test_func", "teardown"), ("test_func2", "setup"), ("test_func2", "call"), ("test_func2", "teardown"), ] for func, when in reports: ev = worker.popevent("testreport") rep = unserialize_report(ev.kwargs["data"]) assert rep.nodeid.endswith(f"::{func}") assert rep.when == when worker.sendcommand("shutdown") for when in ["setup", "call", "teardown"]: ev = worker.popevent("testreport") rep = unserialize_report(ev.kwargs["data"]) assert rep.nodeid.endswith("::test_func4") assert rep.when == when ev = worker.popevent("workerfinished") assert "workeroutput" in ev.kwargs def test_steal_empty_queue(self, worker: WorkerSetup, unserialize_report) -> None: worker.pytester.makepyfile( """ def test_func(): pass def test_func2(): pass """ ) worker.setup() ev = worker.popevent("collectionfinish") ids = ev.kwargs["ids"] assert len(ids) == 2 worker.sendcommand("runtests_all") for when in ["setup", "call", "teardown"]: ev = worker.popevent("testreport") rep = unserialize_report(ev.kwargs["data"]) assert rep.nodeid.endswith("::test_func") assert rep.when == when worker.sendcommand("steal", indices=[0, 1]) ev = worker.popevent("unscheduled") assert ev.kwargs["indices"] == [] worker.sendcommand("shutdown") for when in ["setup", "call", "teardown"]: ev = worker.popevent("testreport") rep = unserialize_report(ev.kwargs["data"]) assert rep.nodeid.endswith("::test_func2") assert rep.when == when ev = worker.popevent("workerfinished") assert "workeroutput" in ev.kwargs def test_remote_env_vars(pytester: pytest.Pytester) -> None: pytester.makepyfile( """ import os def test(): assert len(os.environ['PYTEST_XDIST_TESTRUNUID']) == 32 assert 
os.environ['PYTEST_XDIST_WORKER'] in ('gw0', 'gw1') assert os.environ['PYTEST_XDIST_WORKER_COUNT'] == '2' """ ) result = pytester.runpytest("-n2", "--max-worker-restart=0") assert result.ret == 0 def test_remote_inner_argv(pytester: pytest.Pytester) -> None: """Test/document the behavior due to execnet using `python -c`.""" pytester.makepyfile( """ import sys def test_argv(): assert sys.argv == ["-c"] """ ) result = pytester.runpytest("-n1") assert result.ret == 0 def test_remote_mainargv(pytester: pytest.Pytester) -> None: outer_argv = sys.argv pytester.makepyfile( """ def test_mainargv(request): assert request.config.workerinput["mainargv"] == {!r} """.format( outer_argv ) ) result = pytester.runpytest("-n1") assert result.ret == 0 def test_remote_usage_prog(pytester: pytest.Pytester, request) -> None: if not hasattr(request.config._parser, "prog"): pytest.skip("prog not available in config parser") pytester.makeconftest( """ import pytest config_parser = None @pytest.fixture def get_config_parser(): return config_parser def pytest_configure(config): global config_parser config_parser = config._parser """ ) pytester.makepyfile( """ import sys def test(get_config_parser, request): get_config_parser._getparser().error("my_usage_error") """ ) result = pytester.runpytest_subprocess("-n1") assert result.ret == 1 result.stdout.fnmatch_lines(["*usage: *", "*error: my_usage_error"]) def test_remote_sys_path(pytester: pytest.Pytester) -> None: """Work around sys.path differences due to execnet using `python -c`.""" pytester.makepyfile( """ import sys def test_sys_path(): assert "" not in sys.path """ ) result = pytester.runpytest("-n1") assert result.ret == 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/testing/test_workermanage.py0000644000175100001770000003271514523717146021376 0ustar00runnerdockerimport execnet import pytest import shutil import textwrap import warnings from pathlib import Path from util import generate_warning from xdist import workermanage from xdist._path import visit_path from xdist.remote import serialize_warning_message from xdist.workermanage import HostRSync, NodeManager, unserialize_warning_message pytest_plugins = "pytester" @pytest.fixture def hookrecorder(request, config, pytester: pytest.Pytester): hookrecorder = pytester.make_hook_recorder(config.pluginmanager) return hookrecorder @pytest.fixture def config(pytester: pytest.Pytester): return pytester.parseconfig() @pytest.fixture def source(tmp_path: Path) -> Path: source = tmp_path / "source" source.mkdir() return source @pytest.fixture def dest(tmp_path: Path) -> Path: dest = tmp_path / "dest" dest.mkdir() return dest @pytest.fixture def workercontroller(monkeypatch: pytest.MonkeyPatch): class MockController: def __init__(self, *args): pass def setup(self): pass monkeypatch.setattr(workermanage, "WorkerController", MockController) return MockController class TestNodeManagerPopen: def test_popen_no_default_chdir(self, config) -> None: gm = NodeManager(config, ["popen"]) assert gm.specs[0].chdir is None def test_default_chdir(self, config) -> None: specs = ["ssh=noco", "socket=xyz"] for spec in NodeManager(config, specs).specs: assert spec.chdir == "pyexecnetcache" for spec in NodeManager(config, specs, defaultchdir="abc").specs: assert spec.chdir == "abc" def test_popen_makegateway_events( self, config, hookrecorder, workercontroller ) -> None: hm = NodeManager(config, ["popen"] * 2) hm.setup_nodes(None) call = 
hookrecorder.popcall("pytest_xdist_setupnodes") assert len(call.specs) == 2 call = hookrecorder.popcall("pytest_xdist_newgateway") assert call.gateway.spec == execnet.XSpec("popen") assert call.gateway.id == "gw0" call = hookrecorder.popcall("pytest_xdist_newgateway") assert call.gateway.id == "gw1" assert len(hm.group) == 2 hm.teardown_nodes() assert not len(hm.group) def test_popens_rsync( self, config, source: Path, dest: Path, workercontroller ) -> None: hm = NodeManager(config, ["popen"] * 2) hm.setup_nodes(None) assert len(hm.group) == 2 for gw in hm.group: class pseudoexec: args = [] # type: ignore[var-annotated] def __init__(self, *args): self.args.extend(args) def waitclose(self): pass gw.remote_exec = pseudoexec notifications = [] for gw in hm.group: hm.rsync(gw, source, notify=lambda *args: notifications.append(args)) assert not notifications hm.teardown_nodes() assert not len(hm.group) assert "sys.path.insert" in gw.remote_exec.args[0] def test_rsync_popen_with_path( self, config, source: Path, dest: Path, workercontroller ) -> None: hm = NodeManager(config, ["popen//chdir=%s" % dest] * 1) hm.setup_nodes(None) source.joinpath("dir1", "dir2").mkdir(parents=True) source.joinpath("dir1", "dir2", "hello").touch() notifications = [] for gw in hm.group: hm.rsync(gw, source, notify=lambda *args: notifications.append(args)) assert len(notifications) == 1 assert notifications[0] == ("rsyncrootready", hm.group["gw0"].spec, source) hm.teardown_nodes() dest = dest.joinpath(source.name) assert dest.joinpath("dir1").exists() assert dest.joinpath("dir1", "dir2").exists() assert dest.joinpath("dir1", "dir2", "hello").exists() def test_rsync_same_popen_twice( self, config, source: Path, dest: Path, hookrecorder, workercontroller, ) -> None: hm = NodeManager(config, ["popen//chdir=%s" % dest] * 2) hm.roots = [] hm.setup_nodes(None) source.joinpath("dir1", "dir2").mkdir(parents=True) source.joinpath("dir1", "dir2", "hello").touch() gw = hm.group[0] hm.rsync(gw, source) call = hookrecorder.popcall("pytest_xdist_rsyncstart") assert call.source == source assert len(call.gateways) == 1 assert call.gateways[0] in hm.group call = hookrecorder.popcall("pytest_xdist_rsyncfinish") class TestHRSync: def test_hrsync_filter(self, source: Path, dest: Path) -> None: source.joinpath("dir").mkdir() source.joinpath("dir", "file.txt").touch() source.joinpath(".svn").mkdir() source.joinpath(".svn", "entries").touch() source.joinpath(".somedotfile").mkdir() source.joinpath(".somedotfile", "moreentries").touch() source.joinpath("somedir").mkdir() source.joinpath("somedir", "editfile~").touch() syncer = HostRSync(source, ignores=NodeManager.DEFAULT_IGNORES) files = list(visit_path(source, recurse=syncer.filter, filter=syncer.filter)) names = {x.name for x in files} assert names == {"dir", "file.txt", "somedir"} def test_hrsync_one_host(self, source: Path, dest: Path) -> None: gw = execnet.makegateway("popen//chdir=%s" % dest) finished = [] rsync = HostRSync(source) rsync.add_target_host(gw, finished=lambda: finished.append(1)) source.joinpath("hello.py").write_text("world") rsync.send() gw.exit() assert dest.joinpath(source.name, "hello.py").exists() assert len(finished) == 1 class TestNodeManager: @pytest.mark.xfail(run=False) def test_rsync_roots_no_roots( self, pytester: pytest.Pytester, source: Path, dest: Path ) -> None: source.joinpath("dir1").mkdir() source.joinpath("dir1", "file1").write_text("hello") config = pytester.parseconfig(source) nodemanager = NodeManager(config, ["popen//chdir=%s" % dest]) # assert 
nodemanager.config.topdir == source == config.topdir nodemanager.makegateways() # type: ignore[attr-defined] nodemanager.rsync_roots() # type: ignore[call-arg] (p,) = nodemanager.gwmanager.multi_exec( # type: ignore[attr-defined] "import os ; channel.send(os.getcwd())" ).receive_each() p = Path(p) print("remote curdir", p) assert p == dest.joinpath(config.rootpath.name) assert p.joinpath("dir1").check() assert p.joinpath("dir1", "file1").check() def test_popen_rsync_subdir( self, pytester: pytest.Pytester, source: Path, dest: Path, workercontroller ) -> None: dir1 = source / "dir1" dir1.mkdir() dir2 = dir1 / "dir2" dir2.mkdir() dir2.joinpath("hello").touch() for rsyncroot in (dir1, source): shutil.rmtree(str(dest), ignore_errors=True) nodemanager = NodeManager( pytester.parseconfig( "--tx", "popen//chdir=%s" % dest, "--rsyncdir", rsyncroot, source ) ) nodemanager.setup_nodes(None) # calls .rsync_roots() if rsyncroot == source: dest = dest.joinpath("source") assert dest.joinpath("dir1").exists() assert dest.joinpath("dir1", "dir2").exists() assert dest.joinpath("dir1", "dir2", "hello").exists() nodemanager.teardown_nodes() @pytest.mark.parametrize( "flag, expects_report", [("-q", False), ("", False), ("-v", True)] ) def test_rsync_report( self, pytester: pytest.Pytester, source: Path, dest: Path, workercontroller, capsys: pytest.CaptureFixture[str], flag: str, expects_report: bool, ) -> None: dir1 = source / "dir1" dir1.mkdir() args = ["--tx", "popen//chdir=%s" % dest, "--rsyncdir", str(dir1), str(source)] if flag: args.append(flag) nodemanager = NodeManager(pytester.parseconfig(*args)) nodemanager.setup_nodes(None) # calls .rsync_roots() out, _ = capsys.readouterr() if expects_report: assert "<= pytest/__init__.py" in out else: assert "<= pytest/__init__.py" not in out def test_init_rsync_roots( self, pytester: pytest.Pytester, source: Path, dest: Path, workercontroller ) -> None: dir2 = source.joinpath("dir1", "dir2") dir2.mkdir(parents=True) source.joinpath("dir1", "somefile").mkdir() dir2.joinpath("hello").touch() source.joinpath("bogusdir").mkdir() source.joinpath("bogusdir", "file").touch() source.joinpath("tox.ini").write_text( textwrap.dedent( """ [pytest] rsyncdirs=dir1/dir2 """ ) ) config = pytester.parseconfig(source) nodemanager = NodeManager(config, ["popen//chdir=%s" % dest]) nodemanager.setup_nodes(None) # calls .rsync_roots() assert dest.joinpath("dir2").exists() assert not dest.joinpath("dir1").exists() assert not dest.joinpath("bogus").exists() def test_rsyncignore( self, pytester: pytest.Pytester, source: Path, dest: Path, workercontroller ) -> None: dir2 = source.joinpath("dir1", "dir2") dir2.mkdir(parents=True) source.joinpath("dir5", "dir6").mkdir(parents=True) source.joinpath("dir5", "dir6", "bogus").touch() source.joinpath("dir5", "file").touch() dir2.joinpath("hello").touch() source.joinpath("foo").mkdir() source.joinpath("foo", "bar").touch() source.joinpath("bar").mkdir() source.joinpath("bar", "foo").touch() source.joinpath("tox.ini").write_text( textwrap.dedent( """ [pytest] rsyncdirs = dir1 dir5 rsyncignore = dir1/dir2 dir5/dir6 foo* """ ) ) config = pytester.parseconfig(source) config.option.rsyncignore = ["bar"] nodemanager = NodeManager(config, ["popen//chdir=%s" % dest]) nodemanager.setup_nodes(None) # calls .rsync_roots() assert dest.joinpath("dir1").exists() assert not dest.joinpath("dir1", "dir2").exists() assert dest.joinpath("dir5", "file").exists() assert not dest.joinpath("dir6").exists() assert not dest.joinpath("foo").exists() assert not 
dest.joinpath("bar").exists() def test_optimise_popen( self, pytester: pytest.Pytester, source: Path, dest: Path, workercontroller ) -> None: specs = ["popen"] * 3 source.joinpath("conftest.py").write_text("rsyncdirs = ['a']") source.joinpath("a").mkdir() config = pytester.parseconfig(source) nodemanager = NodeManager(config, specs) nodemanager.setup_nodes(None) # calls .rysnc_roots() for gwspec in nodemanager.specs: assert gwspec._samefilesystem() assert not gwspec.chdir def test_ssh_setup_nodes(self, specssh: str, pytester: pytest.Pytester) -> None: pytester.makepyfile( __init__="", test_x=""" def test_one(): pass """, ) reprec = pytester.inline_run( "-d", "--rsyncdir=%s" % pytester.path, "--tx", specssh, pytester.path ) (rep,) = reprec.getreports("pytest_runtest_logreport") assert rep.passed class MyWarning(UserWarning): pass @pytest.mark.parametrize( "w_cls", [ UserWarning, MyWarning, "Imported", pytest.param( "Nested", marks=pytest.mark.xfail(reason="Nested warning classes are not supported."), ), ], ) def test_unserialize_warning_msg(w_cls): """Test that warning serialization process works well""" # Create a test warning message with pytest.warns(UserWarning) as w: if not isinstance(w_cls, str): warnings.warn("hello", w_cls) elif w_cls == "Imported": generate_warning() elif w_cls == "Nested": # dynamic creation class MyWarning2(UserWarning): pass warnings.warn("hello", MyWarning2) # Unpack assert len(w) == 1 w_msg = w[0] # Serialize and deserialize data = serialize_warning_message(w_msg) w_msg2 = unserialize_warning_message(data) # Compare the two objects all_keys = set(vars(w_msg).keys()).union(set(vars(w_msg2).keys())) for k in all_keys: v1 = getattr(w_msg, k) v2 = getattr(w_msg2, k) if k == "message": assert type(v1) is type(v2) assert v1.args == v2.args else: assert v1 == v2 class MyWarningUnknown(UserWarning): # Changing the __module__ attribute is only safe if class can be imported # from there __module__ = "unknown" def test_warning_serialization_tweaked_module(): """Test for GH#404""" # Create a test warning message with pytest.warns(UserWarning) as w: warnings.warn("hello", MyWarningUnknown) # Unpack assert len(w) == 1 w_msg = w[0] # Serialize and deserialize data = serialize_warning_message(w_msg) # __module__ cannot be found! 
with pytest.raises(ModuleNotFoundError): unserialize_warning_message(data) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/testing/util.py0000644000175100001770000000017314523717146016623 0ustar00runnerdockerimport warnings class MyWarning2(UserWarning): pass def generate_warning(): warnings.warn(MyWarning2("hello")) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1699716710.0 pytest-xdist-3.4.0/tox.ini0000644000175100001770000000242714523717146015136 0ustar00runnerdocker[tox] envlist= linting py{37,38,39,310,311,312}-pytestlatest py310-pytestmain py310-psutil py310-setproctitle isolated_build = true [testenv] extras = testing deps = pytestlatest: pytest pytestmain: git+https://github.com/pytest-dev/pytest.git commands= pytest {posargs} [testenv:py310-psutil] extras = testing psutil commands = pytest {posargs:-k psutil} [testenv:py310-setproctitle] extras = testing setproctitle deps = pytest commands = pytest {posargs} [testenv:linting] skip_install = True usedevelop = True passenv = PRE_COMMIT_HOME deps = pre-commit commands = pre-commit run --all-files --show-diff-on-failure [testenv:release] changedir= description = do a release, required posarg of the version number basepython = python3.10 skipsdist = True usedevelop = True passenv = * deps = towncrier commands = towncrier build --version {posargs} --yes [testenv:docs] basepython = python3.10 usedevelop = True deps = sphinx sphinx_rtd_theme commands = sphinx-build -W --keep-going -b html docs docs/_build/html {posargs:} [pytest] # pytest-services also defines a worker_id fixture, disable # it so they don't conflict with each other (#611). addopts = -ra -p no:pytest-services testpaths = testing [flake8] max-line-length = 120 ignore = E203,W503
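# (usage sketch) the plugin-disable switch from the [pytest] section above is
# not tox-specific; it can be passed to any pytest invocation, e.g. from
# Python:
#
#     import pytest
#     pytest.main(["-ra", "-p", "no:pytest-services", "testing"])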