pax_global_header00006660000000000000000000000064145520611420014512gustar00rootroot0000000000000052 comment=d7afe0d6570c24d8d9ecc4443be9a765a0b85c2c pytest-check-2.3.0/000077500000000000000000000000001455206114200141175ustar00rootroot00000000000000pytest-check-2.3.0/.coveragerc000066400000000000000000000001051455206114200162340ustar00rootroot00000000000000[run] branch = True [paths] source = src .tox/*/site-packages pytest-check-2.3.0/.github/000077500000000000000000000000001455206114200154575ustar00rootroot00000000000000pytest-check-2.3.0/.github/FUNDING.yml000066400000000000000000000000751455206114200172760ustar00rootroot00000000000000# These are supported funding model platforms github: okken pytest-check-2.3.0/.github/workflows/000077500000000000000000000000001455206114200175145ustar00rootroot00000000000000pytest-check-2.3.0/.github/workflows/main.yml000066400000000000000000000010751455206114200211660ustar00rootroot00000000000000name: Python package on: push: branches: [main] pull_request: branches: [main] env: FORCE_COLOR: "1" TOX_TESTENV_PASSENV: FORCE_COLOR jobs: build: runs-on: ubuntu-latest strategy: matrix: python: ["3.7", "3.8", "3.9", "3.10", "3.11"] steps: - uses: actions/checkout@v3 - name: Setup Python uses: actions/setup-python@v4 with: python-version: ${{ matrix.python }} - name: Install Tox and any other packages run: pip install tox - name: Run Tox run: tox -e py pytest-check-2.3.0/.github/workflows/publish-to-pypi.yml000066400000000000000000000015531455206114200233100ustar00rootroot00000000000000name: Publish to PyPI and TestPyPI on: push jobs: build-n-publish: name: Build and publish to PyPI and TestPyPI if: startsWith(github.ref, 'refs/tags') runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Set up Python 3.11 uses: actions/setup-python@v4 with: python-version: "3.11" - name: Install pypa/build run: python -m pip install build --user - name: Build a binary wheel and a source tarball run: python -m build --sdist --wheel --outdir dist/ - name: Publish to Test PyPI uses: pypa/gh-action-pypi-publish@release/v1 with: password: ${{ secrets.TEST_PYPI_API_TOKEN }} repository_url: https://test.pypi.org/legacy/ - name: Publish to PyPI uses: pypa/gh-action-pypi-publish@release/v1 with: password: ${{ secrets.PYPI_API_TOKEN }}pytest-check-2.3.0/.gitignore000066400000000000000000000002031455206114200161020ustar00rootroot00000000000000*.egg-info *.pyc .tox dist/ __pycache__ .coverage .idea/ cov_html/ venv/ README.html build/ htmlcov/ *.swp .python-version .vscode/pytest-check-2.3.0/LICENSE.txt000066400000000000000000000020671455206114200157470ustar00rootroot00000000000000The MIT License (MIT) Copyright (c) 2015, Brian Okken Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

pytest-check-2.3.0/README.md

# pytest-check

A pytest plugin that allows multiple failures per test.

----

Normally, a test function will fail and stop running with the first failed `assert`. That's totally fine for many kinds of software tests. However, there are times when you'd like to check more than one thing, and you'd really like to know the results of each check, even if one of them fails.

`pytest-check` allows multiple failed "checks" per test function, so you can see the whole picture of what's going wrong.

## Installation

From PyPI:

```
$ pip install pytest-check
```

From conda (conda-forge):

```
$ conda install -c conda-forge pytest-check
```

## Example

Quick example of where you might want multiple checks:

```python
import httpx
from pytest_check import check

def test_httpx_get():
    r = httpx.get('https://www.example.org/')
    # bail if bad status code
    assert r.status_code == 200
    # but if we get to here
    # then check everything else without stopping
    with check:
        assert r.is_redirect is False
    with check:
        assert r.encoding == 'utf-8'
    with check:
        assert 'Example Domain' in r.text
```

## Import vs fixture

The example above used import: `from pytest_check import check`.
You can also grab `check` as a fixture with no import:

```python
def test_httpx_get(check):
    r = httpx.get('https://www.example.org/')
    ...
    with check:
        assert r.is_redirect == False
    ...
```

## Validation functions

`check` also has helper functions for common checks.
These methods do NOT need to be inside a `with check:` block.

- **check.equal** - *a == b*
- **check.not_equal** - *a != b*
- **check.is_** - *a is b*
- **check.is_not** - *a is not b*
- **check.is_true** - *bool(x) is True*
- **check.is_false** - *bool(x) is False*
- **check.is_none** - *x is None*
- **check.is_not_none** - *x is not None*
- **check.is_in** - *a in b*
- **check.is_not_in** - *a not in b*
- **check.is_instance** - *isinstance(a, b)*
- **check.is_not_instance** - *not isinstance(a, b)*
- **check.almost_equal** - *a == pytest.approx(b, rel, abs)*, see: [pytest.approx](https://docs.pytest.org/en/latest/reference.html#pytest-approx)
- **check.not_almost_equal** - *a != pytest.approx(b, rel, abs)*, see: [pytest.approx](https://docs.pytest.org/en/latest/reference.html#pytest-approx)
- **check.greater** - *a > b*
- **check.greater_equal** - *a >= b*
- **check.less** - *a < b*
- **check.less_equal** - *a <= b*
- **check.between(b, a, c, ge=False, le=False)** - *a < b < c*
- **check.between_equal(b, a, c)** - *a <= b <= c*
- **check.raises** - *func raises given exception*, similar to [pytest.raises](https://docs.pytest.org/en/latest/reference/reference.html#pytest-raises)
- **check.fail(msg)** - *Log a failure*

The httpx example can be rewritten with helper functions:

```python
def test_httpx_get_with_helpers():
    r = httpx.get('https://www.example.org/')
    assert r.status_code == 200
    check.is_false(r.is_redirect)
    check.equal(r.encoding, 'utf-8')
    check.is_in('Example Domain', r.text)
```

Which you use is personal preference.
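Each helper also accepts an optional message as its last argument; the message is printed alongside the failed check in the report. A minimal sketch, adapted from `examples/test_example_message.py` in this repo:

```python
from pytest_check import check

def test_message():
    a, b = 1, 2
    # the comment string is shown with the FAILURE entry for this check
    check.equal(a, b, f"comment about a={a} != b={b}")
```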
## Defining your own check functions

The `@check.check_func` decorator lets you turn any test helper that contains an assert statement into a non-blocking check function.

```python
from pytest_check import check

@check.check_func
def is_four(a):
    assert a == 4

def test_all_four():
    is_four(1)
    is_four(2)
    is_four(3)
    is_four(4)
```

## Using raises as a context manager

`raises` is used as a context manager, much like `pytest.raises`. The main difference is that a failure to raise the expected exception won't stop the execution of the test method.

```python
from pytest_check import check

def test_raises():
    with check.raises(AssertionError):
        x = 3
        assert 1 < x < 4
```
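`raises` can also be called with a callable instead of being used as a context manager; any remaining positional and keyword arguments are forwarded to that callable, and an optional `msg=` keyword is used as the failure message if the expected exception isn't raised. A minimal sketch based on the docstring in `src/pytest_check/check_raises.py` (the `divide` helper is made up for illustration):

```python
from pytest_check import raises

def divide(a, b):
    return a / b

def test_raises_with_callable():
    # passes: divide(1, 0) raises ZeroDivisionError
    raises(ZeroDivisionError, divide, 1, 0)
```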
## Pseudo-tracebacks

With `check`, tests can have multiple failures per test. This would possibly make for extensive output if we include the full traceback for every failure. To make the output a little more concise, `pytest-check` implements a shorter version, which we call pseudo-tracebacks.

For example, take this test:

```python
def test_example():
    a = 1
    b = 2
    c = [2, 4, 6]
    check.greater(a, b)
    check.less_equal(b, a)
    check.is_in(a, c, "Is 1 in the list")
    check.is_not_in(b, c, "make sure 2 isn't in list")
```

This will result in:

```
=================================== FAILURES ===================================
_________________________________ test_example _________________________________
FAILURE: assert 1 > 2
  test_check.py, line 14, in test_example() -> check.greater(a, b)
FAILURE: assert 2 <= 1
  test_check.py, line 15, in test_example() -> check.less_equal(b, a)
FAILURE: Is 1 in the list
assert 1 in [2, 4, 6]
  test_check.py, line 16, in test_example() -> check.is_in(a, c, "Is 1 in the list")
FAILURE: make sure 2 isn't in list
assert 2 not in [2, 4, 6]
  test_check.py, line 17, in test_example() -> check.is_not_in(b, c, "make sure 2 isn't in list")
------------------------------------------------------------
Failed Checks: 4
=========================== 1 failed in 0.11 seconds ===========================
```

## Red output

The failures will also be red, unless you turn that off with pytest's `--color=no`.

## No output

You can turn off the failure reports with pytest's `--tb=no`.

## Stop on Fail (maxfail behavior)

Setting `-x` or `--maxfail=1` will cause this plugin to abort testing after the first failed check.

Setting `--maxfail=2` or greater will turn off any handling of maxfail within this plugin and the behavior is controlled by pytest. In other words, the `maxfail` count is counting tests, not checks. The exception is the case of `1`, where we want to stop on the very first failed check.

## any_failures()

Use `any_failures()` to see if there are any failures. One use case is to make a block of checks conditional on not failing in a previous set of checks:

```python
from pytest_check import check

def test_with_groups_of_checks():
    # always check these
    check.equal(1, 1)
    check.equal(2, 3)
    if not check.any_failures():
        # only check these if the above passed
        check.equal(1, 2)
        check.equal(2, 2)
```

## Speedups

If you have lots of check failures, your tests may not run as fast as you want. There are a few ways to speed things up.

* `--check-max-tb=5` - Only the first 5 failures per test will include pseudo-tracebacks (the rest go without them).
    * The example shows `5` but any number can be used.
    * pytest-check uses custom traceback code I'm calling a pseudo-traceback.
        * This is visually shorter than normal assert tracebacks.
        * Internally, it uses introspection, which can be slow.
        * Allowing a limited number of pseudo-tracebacks speeds things up quite a bit.
    * Default is 1.
    * Set a large number, e.g. 1000, if you want pseudo-tracebacks for all failures.
* `--check-max-report=10` - limit reported failures per test.
    * The example shows `10` but any number can be used.
    * The test will still have the total number of failures reported.
    * Default is no maximum.
* `--check-max-fail=20` - Stop the test after this many check failures.
    * This is useful if your code under test is slow-ish and you want to bail early.
    * Default is no maximum.
* Any of these can be used on their own, or combined.
* Recommendation:
    * Leave the default, equivalent to `--check-max-tb=1`.
    * If excessive output is annoying, set `--check-max-report=10` or some tolerable number.

## Local speedups

The flags above are global settings, and apply to every test in the test run. Locally, you can set these values per test.

From `examples/test_example_speedup_funcs.py`:

```python
def test_max_tb():
    check.set_max_tb(2)
    for i in range(1, 11):
        check.equal(i, 100)

def test_max_report():
    check.set_max_report(5)
    for i in range(1, 11):
        check.equal(i, 100)

def test_max_fail():
    check.set_max_fail(5)
    for i in range(1, 11):
        check.equal(i, 100)
```

## Contributing

Contributions are very welcome. Tests can be run with [tox](https://tox.readthedocs.io/en/latest/).
Test coverage is now 100%. Please make sure to keep it at 100%.
If you have an awesome pull request and need help with getting coverage back up, let me know.

## License

Distributed under the terms of the [MIT](http://opensource.org/licenses/MIT) license, "pytest-check" is free and open source software.

## Issues

If you encounter any problems, please [file an issue](https://github.com/okken/pytest-check/issues) along with a detailed description.

## Changelog

See [changelog.md](https://github.com/okken/pytest-check/blob/main/changelog.md)

pytest-check-2.3.0/changelog.md

# Changelog

All notable changes to this project will be documented in this file.

## [2.3.0] - 2024-Jan-17

### Added

- `between_equal(b, a, c)` - as a shortcut for `between(b, a, c, ge=True, le=True)`
- `fail(msg)` - indicate test failure, but don't stop testing

## [2.2.5] - 2024-Jan-17

- fix [155](https://github.com/okken/pytest-check/issues/155)
    - Summaries from 2.2.3 are cool, but don't work with xdist
    - Turn off summaries while xdist is running
    - I hope I'm not seeing a pattern here

## [2.2.4] - 2024-Jan-8

- fix [153](https://github.com/okken/pytest-check/issues/153)
    - Summaries from 2.2.3 are cool, but don't work for pytest < 7.3
    - Remove them for earlier pytest
- Add tox test run for pytest 7.0.0
- Change dependencies to require pytest 7.0.0

## [2.2.3] - 2023-Dec-31

- Check failure info now shows up in summaries.
    - fix [133](https://github.com/okken/pytest-check/issues/133)
    - thanks [hirotoKirimaru](https://github.com/hirotoKirimaru)

## [2.2.2] - 2023-Sept-22

### Fixed

- Fix [137](https://github.com/okken/pytest-check/issues/137)

## [2.2.1] - 2023-Aug-11

### Changed

- Increase Python range to include 3.7.0. Thanks [EliahKagan](https://github.com/EliahKagan)

## [2.2.0] - 2023-July-14

### Added

- pseudo traceback additions
    - `-l` or `--showlocals` shows locals
    - `__tracebackhide__ = True` is honored.
    - if `assert` or other exception is involved, the exception is included as part of the traceback.

### Changed

- pseudo traceback changes
    - The red color is used more selectively.
        - this is intended to help readability
    - Other minor formatting changes.
- Please let me know if you have any issues with the changes ## [2.1.5] - 2023-June-6 ### Fixed - Fix [127](https://github.com/okken/pytest-check/issues/127) IndexError when running a check in a thread - Thanks [fperrin](https://github.com/fperrin) ## [2.1.4] - 2023-Feb-13 ### Added - include tests an examples in sdist - [pr 121](https://github.com/okken/pytest-check/pull/121) ## [2.1.3] - 2023-Feb-9 ### Added - publish-to-pypi.yml workflow ## [2.1.2] - 2023-Jan-13 ### Changed - README.md - clean up documentation for `--check-max-tb`. Thanks alblasco. - Minor internal cleanup - removed some debug code. ## [2.1.1] - 2023-Jan-12 ### Modified - `check.call_on_fail(func)` - ***Experimental*** - *Experimental feature - may change with any release, even a minor release* - Name changed from `check.set_failure_callback(func)`. - Also, I warned you I could change that at any time. - No tomatoes thrown, please. - It's better, right? Thanks to Hynek for the suggestion. ## [2.1.0] - 2023-Jan-10 ### Added - `check.set_failure_callback(func)` - ***Experimental*** - *Experimental feature - may change with any release, even a minor release* - Function to allow a callback function to be registered. - This "func" function will be called for each failed check, and needs to accept a string parameter. - Examples: - `print` : `check.set_failure_callback(print)` - allows stdout printing while test is running (provided `-s` is used). - `logging.error` : `check.set_failure_callback(logging.error)` - failure reports to logging - See `examples/test_example_logging.py` for runnable examples ## [2.0.0] - 2023-Jan-8 ### Added - With the following change, the default is now pretty darn fast, and most people will not need to modify any settings to get most of the speed improvements. - `--check-max-tb=` - sets the max number of pseudo-traceback reports per test function. - Default is 1. - After this, error is still reported - The error reports continue, they just won't have a traceback. - If you set `--check-max-report`, then the reports stop at that number, with or without tracebacks.a - `check.set_max_tb(int)` - overrides `--check-max-tb` for the test function it's called from. Value is reset after one test function. ### Deprecated - `check.set_no_tb` and `--set-no-tb` will be removed in a future release. (probably soon) - `check.set_no_tb` is deprecated. - For now, it internally calls `set_max_tb(0)`. See discussion below. - `--check-no-tb` is deprecated. - It's also short lived. - Since `--check-max-tb` is more useful, the default for `--check-max-tb` is 1, which is already pretty fast. And `--check-max-tb=0` has the same effect as `--check-no-tb`. ### Changed - [PR 109](https://github.com/okken/pytest-check/pull/109). Update README.md with conda install instructions. Thanks Abel Aoun. ### Reason for major version bump The default behavior has changed. Since `--check-max-tb=1` is the default, the default behavior is now: - Show traceback for the first failure per test. (controlled by `--check-max-tb`, default is 1) - Show error report for the remaining failures in a test. (controlled by `--check-max-report`, default is None) Old default behavior was the same except all tracebacks would be reported. My logic here: - The actual error message, like `check 1 == 3`, is more useful than the traceback. - The first traceback is still useful to point you to the test file, line number, of first failure. - Other failures are probably close by. 
If this logic doesn't hold for you, you can always set `--check-max-tb=1000` or some other large number. ## [1.3.0] - 2022-Dec-2 - Most changes are related to speedup improvements. - use `--check-no-tb --check-max-report=10` to get super zippy. - `check.between()` added just for fun ### Changed - Rewrote `check.equal()` and other helper functions to not use assert. - Check functions report things like `FAILURE: check 1 < 2` now. - It used to say `FAILURE: assert 1 < 2`, but that's not really true anymore. ### Added - `--check-no-tb` - turn off tracebacks - `--check-max-report` - limit reported failures per test function - `--check-max-fail` - limit failures per test function - `check.set_no_tb()` - turn off tracebacks for one test function - `check.set_max_report()` - limit reports for one test function - `check.set_max_fail()` - limit failures for one test function - `check.between(b, a, c)` - check that a < b < c ## [1.2.1] - 2022-Nov-30 ### Changed - Remove colorama as a dependency, and simplify how we do "red". - Change the formatting of context manager checks such that if a msg is included, we see both the message and the assert failure. Previoulsy, the message would replace the assert failure. ## [1.2.0] - 2022-Nov-25 ### Added - Add `any_failures()`. Thank you [@alblasco](https://github.com/alblasco) ## [1.1.3] - 2022-Nov-22 ### Fixed - While using the `check` fixture, allow `check.check` as a context manager. - this is unnecessary, the `check` fixture now works as a context manager. - but apparently some people have been using `with check.check:` - so this fix makes that work. ## [1.1.2] - 2022-Nov-21 ### Added - Allow `raises` from `check` object. - Now `with check.raises(Exception):` works. ## [1.1.1] - 2022-Nov-21 ### Changed - README update ## [1.1.0] - 2022-Nov-21 ### Added - Red output in pseudo-traceback ### Changed - Refactored code into more modules - All functionality now possible through `check` fixture or `from pytest_check import check`. - The `check` object from either of these methods can be used as either a context manager or to access the check helper functions. - Lots of rewrite in README ### Deprecated - API change. - Access to check helper functions via `import pytest_check as check` is deprecated. - It still works, but will be removed in a future version. - However, there is no deprecation warning for this. ## [1.0.10] - 2022-09-29 ### Fixed - [issue 80](https://github.com/okken/pytest-check/issues/80) --tb=no should remove tracebacks from xml output ## [1.0.9] - 2022-08-24 ### Fixed - [issue 55](https://github.com/okken/pytest-check/issues/55) a problem with multiple drive letters. - code is using `os.path.relpath()`, but sometimes it can fail. - when it does fail, fall back to `os.path.abspath()`. - thanks [rloutrel](https://github.com/rloutrel) and [kr1zo](https://github.com/kr1zo) for your patience and persistence, and for the solution. ## [1.0.8] - 2022-08-24 ### Fixed - Support check failures not blowing up if called outside of a test. - should also fix [85](https://github.com/okken/pytest-check/issues/85) - thanks [zerocen](https://github.com/zerocen) for the excellent toy example that helped me to get this fixed quickly. ### Changed - changelog.md format. :) - tox.ini cleanup - refactoring tests to use examples directory instead of embedding example tests in test code. - this is easier to understand and debug issues. 
- also provides extra documentation through examples directory ## [1.0.7] - 2022-08-21 ### Fixed - Handle cases when context is None ## [1.0.6] - 2022-08-19 ### Changed - Update pyproject.toml to use newer flit - Changed plugin name from pytest_check to pytest-check. ## [1.0.5] - 2022-03-29 ### Added - Add `raises` ## [1.0.4] - 2021-09-12 ### Changed - Require Python >= 3.6 - Remove old Manifest.in file. ## [1.0.3] - 2021-09-12 ### Fixed - Fix #64, modifications to maxfail behavior. ## [1.0.2] - 2021-08-12 ### Added - Add `excinfo` to call object in `pytest_runtest_makereport()`. - Intended to help with some report systems. ## [1.0.1] - 2020-12-27 ### Changed - Remove Beta Classifier - Status is now "Development Status :: 5 - Production/Stable" ## [1.0.0] - 2020-12-27 ### Changed - Jump to 1.0 version, API is fairly stable. ## [0.4.1] - 2020-12-27 ### Fixed - Fix #43/#44 tests with failing checks and failing asserts now report all issues. ## [0.4.0] - 2020-12-14 ### Added - added `is_()` and `is_not()` ### Changed - Requires pytest 6.0 or above. (Removed some cruft to support pytest 5.x) pytest-check-2.3.0/examples/000077500000000000000000000000001455206114200157355ustar00rootroot00000000000000pytest-check-2.3.0/examples/test_example_alt_names.py000066400000000000000000000010641455206114200230250ustar00rootroot00000000000000""" By default, pytest looks for functions that START with test_ However, it's possible to also look for alternate names, such as functions ENDING with _test. To run this example, - use: pytest python_functionspytest -o "python_functions=test_* *_test" test_example_alt_names.py` The purpose of this example is to make sure that `check` works with alternately named tests. """ from pytest_check import check def ends_with_test(): helper_func() def test_default_naming(): pass def helper_func(): with check: assert 1 == 0 pytest-check-2.3.0/examples/test_example_any_failures.py000066400000000000000000000010271455206114200235420ustar00rootroot00000000000000""" `any_failures()` can be used to determine if more checks are needed. """ from pytest_check import check def test_any_failures_false(): check.equal(1, 1) check.equal(2, 2) if not check.any_failures(): check.equal(1, 2) check.equal(1, 3) check.equal(1, 4) def test_any_failures_true(): check.equal(1, 1) check.equal(2, 3) if not check.any_failures(): check.equal(1, 2) check.equal(2, 2) check.equal(1, 3) check.equal(1, 4) check.equal(1, 5) pytest-check-2.3.0/examples/test_example_check_and_assert.py000066400000000000000000000002731455206114200243430ustar00rootroot00000000000000""" A test file with both `check` and `assert`, both failing """ from pytest_check import check def test_fail_check(): check.equal(1, 2) def test_fail_assert(): assert 1 == 2 pytest-check-2.3.0/examples/test_example_check_check.py000066400000000000000000000004151455206114200232730ustar00rootroot00000000000000""" "with check.check:" - bad "with check:" - good However, we want both to work. """ def test_check(check): with check: assert True def test_check_check(check): "Deprecated, but should still work for now" with check.check: assert True pytest-check-2.3.0/examples/test_example_check_func_decorator.py000066400000000000000000000010071455206114200252110ustar00rootroot00000000000000""" Make sure the @check.check_func decorator works. 
""" from pytest_check import check @check.check_func def is_five(a): assert a == 5 def test_pass(): is_five(5) def test_pass_return_val_of_check_helper(): assert is_five(5) is True @check.check_func def is_four(a): assert a == 4 def test_all_four(): is_four(1) is_four(2) should_be_False = is_four(3) should_be_True = is_four(4) print(f"should_be_True={should_be_True}") print(f"should_be_False={should_be_False}") pytest-check-2.3.0/examples/test_example_check_not_in_test.py000066400000000000000000000007431455206114200245470ustar00rootroot00000000000000""" The `check` context manager is intended to be used by tests and helper functions. But we need to make sure it doesn't blow up if called elsewhere. This test file should result in an error test result. """ from pytest_check import check def not_in_a_test(): helper_func() def helper_func(): with check: assert 1 == 0 # called at import time. # not a good practice # but should result in a test error report not_in_a_test() def test_something(): pass pytest-check-2.3.0/examples/test_example_context_functions.py000066400000000000000000000013241455206114200246350ustar00rootroot00000000000000""" It is no longer recommended to do this: import pytest_check def test_old_style(): pytest_check.equal(1, 1) or this: import pytest_check as check def test_old_style(): check.equal(1, 1) However, it used to work. It still works. But this method is deprecated and may be removed in future versions. Please migrate to this: from pytest_check import check def test_pass(): check.equal(1, 1) """ import pytest_check from pytest_check import check def test_old_style(): pytest_check.equal(1, 1) def test_pass(): check.equal(1, 1) with check: assert 1 == 1 def test_pass_check(check): check.equal(1, 1) with check: assert 1 == 1 pytest-check-2.3.0/examples/test_example_context_manager_fail.py000066400000000000000000000010111455206114200252230ustar00rootroot00000000000000""" Everything should fail in this file. This test is useful for testing: - messages - stop on fail (-x) - pseudo-tracebacks - lack of tracebacks (--tb=no) """ from pytest_check import check def test_3_failed_checks(): with check: assert 1 == 0 with check: assert 1 > 2 with check: assert 1 < 5 < 4 def test_messages(): with check("first fail"): assert 1 == 0 with check("second fail"): assert 1 > 2 with check("third fail"): assert 1 < 5 < 4 pytest-check-2.3.0/examples/test_example_context_manager_pass.py000066400000000000000000000005051455206114200252650ustar00rootroot00000000000000""" Context manater with one or multiple blocks. """ from pytest_check import check def test_one_with_block(): with check: x = 3 assert 1 < x < 4 def test_multiple_with_blocks(): x = 3 with check: assert 1 < x with check: assert x < 4 with check: assert x == 3 pytest-check-2.3.0/examples/test_example_fail_func.py000066400000000000000000000002241455206114200230050ustar00rootroot00000000000000import pytest_check as check def test_one_failure(): check.fail('one') def test_two_failures(): check.fail('one') check.fail('two') pytest-check-2.3.0/examples/test_example_fail_in_fixture.py000066400000000000000000000005131455206114200242270ustar00rootroot00000000000000""" Failures in fixture should result in Error, not Fail. 
""" import pytest from pytest_check import check @pytest.fixture() def a_fixture(): check.equal(1, 2) def test_setup_failure(a_fixture): pass @pytest.fixture() def b_fixture(): yield check.equal(1, 2) def test_teardown_failure(b_fixture): pass pytest-check-2.3.0/examples/test_example_fail_in_thread.py000066400000000000000000000007001455206114200240060ustar00rootroot00000000000000from concurrent.futures.thread import ThreadPoolExecutor from threading import Thread import pytest_check as check def force_fail(comparison): check.equal(1 + 1, comparison, f"1 + 1 is 2, not {comparison}") def test_threadpool(): with ThreadPoolExecutor() as executor: task = executor.submit(force_fail, 3) task.result() def test_threading(): t = Thread(target=force_fail, args=(4, )) t.start() t.join() pytest-check-2.3.0/examples/test_example_functions_fail.py000066400000000000000000000031211455206114200240610ustar00rootroot00000000000000""" Failing versions of all of the check helper functions. """ from pytest_check import check def test_equal(): check.equal(1, 2) def test_not_equal(): check.not_equal(1, 1) def test_is(): x = ["foo"] y = ["foo"] check.is_(x, y) def test_is_not(): x = ["foo"] y = x check.is_not(x, y) def test_is_true(): check.is_true(False) def test_is_false(): check.is_false(True) def test_is_none(): a = 1 check.is_none(a) def test_is_not_none(): a = None check.is_not_none(a) def test_is_in(): check.is_in(4, [1, 2, 3]) def test_is_not_in(): check.is_not_in(2, [1, 2, 3]) def test_is_instance(): check.is_instance(1, str) def test_is_not_instance(): check.is_not_instance(1, int) def test_almost_equal(): check.almost_equal(1, 2) check.almost_equal(1, 2.1, abs=0.1) check.almost_equal(1, 3, rel=1) def test_not_almost_equal(): check.not_almost_equal(1, 1) check.not_almost_equal(1, 1.1, abs=0.1) check.not_almost_equal(1, 2, rel=1) def test_greater(): check.greater(1, 2) check.greater(1, 1) def test_greater_equal(): check.greater_equal(1, 2) def test_less(): check.less(2, 1) check.less(1, 1) def test_less_equal(): check.less_equal(2, 1) def test_between(): check.between(0, 0, 20) def test_between_ge(): check.between(20, 0, 20, ge=True) def test_between_le(): check.between(0, 0, 20, le=True) def test_between_ge_le(): check.between(21, 0, 20, ge=True, le=True) def test_between_equal(): check.between_equal(21, 0, 20, ge=True, le=True) pytest-check-2.3.0/examples/test_example_functions_pass.py000066400000000000000000000034721455206114200241250ustar00rootroot00000000000000""" Passing versions of all of the check helper functions. 
""" from pytest_check import check def test_equal(): check.equal(1, 1) def test_not_equal(): check.not_equal(1, 2) def test_is(): x = ["foo"] y = x check.is_(x, y) def test_is_not(): x = ["foo"] y = ["foo"] check.is_not(x, y) def test_is_true(): check.is_true(True) def test_is_false(): check.is_false(False) def test_is_none(): a = None check.is_none(a) def test_is_not_none(): a = 1 check.is_not_none(a) def test_is_in(): check.is_in(2, [1, 2, 3]) def test_is_not_in(): check.is_not_in(4, [1, 2, 3]) def test_is_instance(): check.is_instance(1, int) def test_is_not_instance(): check.is_not_instance(1, str) def test_almost_equal(): check.almost_equal(1, 1) check.almost_equal(1, 1.1, abs=0.2) check.almost_equal(2, 1, rel=1) def test_not_almost_equal(): check.not_almost_equal(1, 2) check.not_almost_equal(1, 2.1, abs=0.1) check.not_almost_equal(3, 1, rel=1) def test_greater(): check.greater(2, 1) def test_greater_equal(): check.greater_equal(2, 1) check.greater_equal(1, 1) def test_less(): check.less(1, 2) def test_less_equal(): check.less_equal(1, 2) check.less_equal(1, 1) def test_between(): check.between(10, 0, 20) def test_between_ge(): check.between(10, 0, 20, ge=True) check.between(0, 0, 20, ge=True) def test_between_le(): check.between(10, 0, 20, le=True) check.between(20, 0, 20, le=True) def test_between_ge_le(): check.between(0, 0, 20, ge=True, le=True) check.between(10, 0, 20, ge=True, le=True) check.between(20, 0, 20, ge=True, le=True) def test_between_equal(): check.between_equal(0, 0, 20) check.between_equal(10, 0, 20) check.between_equal(20, 0, 20) pytest-check-2.3.0/examples/test_example_helpers.py000066400000000000000000000005461455206114200225300ustar00rootroot00000000000000""" Make sure checks can be run in helper functions and that a walk though of functions from test_something() to the point of failure is reported. """ from pytest_check import check def test_func(): helper1() def helper1(): helper2() def helper2(): with check("first"): assert 1 == 0 with check("second"): assert 1 > 2 pytest-check-2.3.0/examples/test_example_httpx.py000066400000000000000000000014041455206114200222270ustar00rootroot00000000000000""" This example is used in the README.md To run this example, first `pip install httpx` This example is NOT tested by the test suite. 
""" import httpx from pytest_check import check def test_httpx_get(): r = httpx.get("https://www.example.org/") # bail if bad status code assert r.status_code == 200 # but if we get here # no need to stop on any failure with check: assert r.is_redirect is False with check: assert r.encoding == "utf-8" with check: assert "Example Domain" in r.text def test_httpx_get_with_helpers(): r = httpx.get("https://www.example.org/") assert r.status_code == 200 check.is_false(r.is_redirect) check.equal(r.encoding, "utf-8") check.is_in("Example Domain", r.text) pytest-check-2.3.0/examples/test_example_locals.py000066400000000000000000000003611455206114200223360ustar00rootroot00000000000000""" Make sure locals can be viewed with -l or --showlocals """ from pytest_check import check def test_ctx(): a = 1 with check: b = 2 assert a == b def test_check_func(): a = 1 b = 2 check.equal(a, b) pytest-check-2.3.0/examples/test_example_logging.py000066400000000000000000000022271455206114200225120ustar00rootroot00000000000000import logging from pytest_check import check log = logging.getLogger(__name__) def test_with_logging(): """ Try this with: > pytest --log-format="%(levelname)s - %(message)s" --check-max-tb=4 or: > pytest --log-cli-level=DEBUG \ --log-format="%(levelname)s - %(message)s" --check-max-tb=4 \ --show-capture=no --no-summary """ def log_failure(message): log.error(message) check.call_on_fail(log_failure) log.debug('debug message') check.equal(1, 2, "after debug") log.info('info message') check.equal(1, 2, "after info") log.warning('warning message') check.equal(1, 2, "after warning") log.error('error message') check.equal(1, 2, "after error") def test_with_print(): """ Try this with: > pytest examples/test_example_logging.py::test_with_print --check-max-tb=2 or: > pytest examples/test_example_logging.py::test_with_print -s \ --check-max-tb=2 --no-summary """ check.call_on_fail(print) print('first message') check.equal(1, 2, "after first") print('second message') check.equal(1, 2, "after second") print('last message') pytest-check-2.3.0/examples/test_example_maxfail.py000066400000000000000000000014011455206114200224760ustar00rootroot00000000000000""" Running: pytest --maxfail=1 test_example_maxfail.py Should cause pytest to stop after the first failed check in test_a Running: pytest --maxfail=2 test_example_maxfail.py Should run a and b, but not c Running: pytest --maxfail=3 test_example_maxfail.py Should run a, b, and c This is because pytest-check has a special case for maxfail==1. If maxfail==1, stop on the first failed check. If > 1, run until 2 failed "tests", not "checks". """ from pytest_check import check def test_a(): "Failing test: 3 failed checks" with check: assert False, "one" with check: assert False, "two" with check: assert False, "three" def test_b(): "Failing test" assert False def test_c(): "Passing test" assert True pytest-check-2.3.0/examples/test_example_message.py000066400000000000000000000002711455206114200225050ustar00rootroot00000000000000from pytest_check import check def test_baseline(): a, b = 1, 2 check.equal(a, b) def test_message(): a, b = 1, 2 check.equal(a, b, f"comment about a={a} != b={b}") pytest-check-2.3.0/examples/test_example_mix_checks_and_assertions.py000066400000000000000000000006351455206114200262760ustar00rootroot00000000000000""" Mixing assertions and checks can be confusing. With normal test run, a failing assertion should take precedence over a failing check. With -x/--maxfail=1, the first failed check or assertion should stop the test. 
""" from pytest_check import check def test_failures(): assert 0 == 0 check.equal(1, 1) check.equal(0, 1) # failure assert 1 == 2 # failure check.equal(2, 3) # failure pytest-check-2.3.0/examples/test_example_multiple_failures.py000066400000000000000000000001661455206114200246110ustar00rootroot00000000000000from pytest_check import check def test_multiple_failures(): for i in range(1, 11): check.equal(i, 100) pytest-check-2.3.0/examples/test_example_pass_in_thread.py000066400000000000000000000006001455206114200240400ustar00rootroot00000000000000from concurrent.futures.thread import ThreadPoolExecutor from threading import Thread import pytest_check as check def always_pass(): check.equal(1 + 1, 2) def test_threadpool(): with ThreadPoolExecutor() as executor: task = executor.submit(always_pass) task.result() def test_threading(): t = Thread(target=always_pass) t.start() t.join() pytest-check-2.3.0/examples/test_example_raises.py000066400000000000000000000007471455206114200223570ustar00rootroot00000000000000from pytest_check import check, raises def test_raises_top_level_fail(): with raises(AssertionError): x = 3 assert 1 < x < 4 def test_raises_top_level_pass(): with raises(AssertionError): x = 4 assert 1 < x < 3 def test_raises_check_level_fail(): with check.raises(AssertionError): x = 3 assert 1 < x < 4 def test_raises_check_level_pass(): with check.raises(AssertionError): x = 4 assert 1 < x < 3 pytest-check-2.3.0/examples/test_example_simple.py000066400000000000000000000003201455206114200223450ustar00rootroot00000000000000""" Just a simple test to show passing and failing checks. """ from pytest_check import check def test_pass(): with check: assert 1 == 1 def test_fail(): with check: assert 1 == 2 pytest-check-2.3.0/examples/test_example_speedup_funcs.py000066400000000000000000000006451455206114200237310ustar00rootroot00000000000000from pytest_check import check def test_baseline(): for i in range(1, 11): check.equal(i, 100) def test_max_report(): check.set_max_report(5) for i in range(1, 11): check.equal(i, 100) def test_max_fail(): check.set_max_fail(5) for i in range(1, 11): check.equal(i, 100) def test_max_tb(): check.set_max_tb(2) for i in range(1, 11): check.equal(i, 100) pytest-check-2.3.0/examples/test_example_stop_on_fail.py000066400000000000000000000011341455206114200235340ustar00rootroot00000000000000""" An example useful for playing with stop on fail. -x or --maxfail=1 should result in one failed check and one failed test. --maxfail=2 should run both tests and catch all 4 check failures This is because --maxfail=1/-x stops on first failure, check or assert. Using --maxfail=2 or more counts failing test functions, not check failures. """ from pytest_check import check class TestStopOnFail: def test_1(self): check.equal(1, 1) check.equal(1, 2) check.equal(1, 3) def test_2(self): check.equal(1, 1) check.equal(1, 2) check.equal(1, 3) pytest-check-2.3.0/examples/test_example_summary.py000066400000000000000000000005301455206114200225540ustar00rootroot00000000000000from pytest_check import check def test_assert_no_msg(): a, b = 1, 2 assert a == b def test_assert_msg(): a, b = 1, 2 assert a == b, f"comment about a={a} != b={b}" def test_check_no_msg(): a, b = 1, 2 check.equal(a, b) def test_check_msg(): a, b = 1, 2 check.equal(a, b, f"comment about a={a} != b={b}") pytest-check-2.3.0/examples/test_example_tb_style.py000066400000000000000000000005461455206114200227130ustar00rootroot00000000000000""" Useful for making sure `tb=no` turns off pseudo-traceback. 
""" from pytest_check import check def run_helper2(): with check("first fail"): assert 1 == 0 with check("second fail"): assert 1 > 2 with check("third fail"): assert 1 < 5 < 4 def run_helper1(): run_helper2() def test_failures(): run_helper1() pytest-check-2.3.0/examples/test_example_traceback.py000066400000000000000000000021161455206114200230000ustar00rootroot00000000000000""" An example to show hwo tracebacks work with hlper functions. We've got. 1. test -> helper -> helper -> check function 2. test -> helper -> helper -> check context manager -> assert 3. test -> check context manager -> helper -> helper -> assert The 3rd option has the worst tb right now, as it doesn't show the helper functions. Takeaway: Keep the context manager close to the assert. Possible todo item: Maybe the pseudo-tb could inspect the incoming exception more. """ from pytest_check import check # check.equal in helper def helper_func(): helper2_func() def helper2_func(): check.equal(1, 2, "custom") def test_tb_func(check): helper_func() # ctx in helper def helper_ctx(): helper2_ctx() def helper2_ctx(): with check("check message"): assert 1 == 2, "assert message" def test_tb_ctx(check): helper_ctx() # ctx in test func, assert in helper def helper_assert(): helper2_assert() def helper2_assert(): assert 1 == 2, "assert message" def test_tb_ctx_assert(check): with check("check message"): helper_assert() pytest-check-2.3.0/examples/test_example_tracebackhide.py000066400000000000000000000004421455206114200236320ustar00rootroot00000000000000""" helper1() should not be included in traceback """ from pytest_check import check def test_func(): helper1() def helper1(): __tracebackhide__ = True helper2() def helper2(): with check("first"): assert 1 == 0 with check("second"): assert 1 > 2 pytest-check-2.3.0/examples/test_example_xfail.py000066400000000000000000000007661455206114200221750ustar00rootroot00000000000000""" Ensure that xfail, both strict and not strict, behaves correctly for check passes/failures. """ import pytest from pytest_check import check @pytest.mark.xfail() def test_xfail(): "Should xfail" check.equal(1, 2) @pytest.mark.xfail(strict=True) def test_xfail_strict(): "Should xfail" check.equal(1, 2) @pytest.mark.xfail() def test_xfail_pass(): "Should xpass" check.equal(1, 1) @pytest.mark.xfail(strict=True) def test_xfail_pass_strict(): check.equal(1, 1) pytest-check-2.3.0/pyproject.toml000066400000000000000000000012651455206114200170370ustar00rootroot00000000000000[project] name = "pytest-check" authors = [{name = "Brian Okken"}] readme = "README.md" license = {file = "LICENSE.txt"} description="A pytest plugin that allows multiple failures per test." 
version = "2.3.0" requires-python = ">=3.7" classifiers = [ "License :: OSI Approved :: MIT License", "Framework :: Pytest" , ] dependencies = ["pytest>=7.0.0"] [project.urls] Home = "https://github.com/okken/pytest-check" [project.entry-points.pytest11] check = "pytest_check.plugin" [build-system] requires = ["flit_core >=3.2,<4"] build-backend = "flit_core.buildapi" [tool.flit.module] name = "pytest_check" [tool.flit.sdist] include = ["changelog.md", "examples", "tests", "tox.ini"] pytest-check-2.3.0/src/000077500000000000000000000000001455206114200147065ustar00rootroot00000000000000pytest-check-2.3.0/src/pytest_check/000077500000000000000000000000001455206114200173735ustar00rootroot00000000000000pytest-check-2.3.0/src/pytest_check/__init__.py000066400000000000000000000026321455206114200215070ustar00rootroot00000000000000import pytest # make sure assert rewriting happens pytest.register_assert_rewrite("pytest_check.check_functions") # allow for top level helper function access: # import pytest_check # pytest_check.equal(1, 1) from pytest_check.check_functions import * # noqa: F401, F402, F403, E402 # allow to know if any_failures due to any previous check from pytest_check.check_log import any_failures # noqa: F401, F402, F403, E402 # allow top level raises: # from pytest_check import raises # with raises(Exception): # raise Exception # with raises(AssertionError): # assert 0 from pytest_check.check_raises import raises # noqa: F401, F402, F403, E402 # allow for with blocks and assert: # from pytest_check import check # with check: # assert 1 == 2 from pytest_check.context_manager import check # noqa: F401, F402, F403, E402 # allow check.raises() setattr(check, "raises", raises) # allow check.any_failures() setattr(check, "any_failures", any_failures) # allow check.check as a context manager. # weird, but some people are doing it. 
# decprecate this eventually setattr(check, "check", check) # allow for helper functions to be part of check context # manager and check fixture: # from pytest_check import check # def test_(): # check.equal(1, 1) # with check: # assert 1 == 2 for func in check_functions.__all__: # noqa: F405 setattr(check, func, getattr(check_functions, func)) # noqa: F405 pytest-check-2.3.0/src/pytest_check/check_functions.py000066400000000000000000000122631455206114200231160ustar00rootroot00000000000000import functools import pytest from .check_log import log_failure __all__ = [ "assert_equal", "equal", "not_equal", "is_", "is_not", "is_true", "is_false", "is_none", "is_not_none", "is_in", "is_not_in", "is_instance", "is_not_instance", "almost_equal", "not_almost_equal", "greater", "greater_equal", "less", "less_equal", "between", "between_equal", "check_func", "fail", ] def check_func(func): @functools.wraps(func) def wrapper(*args, **kwds): __tracebackhide__ = True try: func(*args, **kwds) return True except AssertionError as e: log_failure(e) return False return wrapper def assert_equal(a, b, msg=""): # pragma: no cover assert a == b, msg def equal(a, b, msg=""): __tracebackhide__ = True if a == b: return True else: log_failure(f"check {a} == {b}", msg) return False def not_equal(a, b, msg=""): __tracebackhide__ = True if a != b: return True else: log_failure(f"check {a} != {b}", msg) return False def is_(a, b, msg=""): __tracebackhide__ = True if a is b: return True else: log_failure(f"check {a} is {b}", msg) return False def is_not(a, b, msg=""): __tracebackhide__ = True if a is not b: return True else: log_failure(f"check {a} is not {b}", msg) return False def is_true(x, msg=""): __tracebackhide__ = True if bool(x): return True else: log_failure(f"check bool({x})", msg) return False def is_false(x, msg=""): __tracebackhide__ = True if not bool(x): return True else: log_failure(f"check not bool({x})", msg) return False def is_none(x, msg=""): __tracebackhide__ = True if x is None: return True else: log_failure(f"check {x} is None", msg) return False def is_not_none(x, msg=""): __tracebackhide__ = True if x is not None: return True else: log_failure(f"check {x} is not None", msg) return False def is_in(a, b, msg=""): __tracebackhide__ = True if a in b: return True else: log_failure(f"check {a} in {b}", msg) return False def is_not_in(a, b, msg=""): __tracebackhide__ = True if a not in b: return True else: log_failure(f"check {a} not in {b}", msg) return False def is_instance(a, b, msg=""): __tracebackhide__ = True if isinstance(a, b): return True else: log_failure(f"check isinstance({a}, {b})", msg) return False def is_not_instance(a, b, msg=""): __tracebackhide__ = True if not isinstance(a, b): return True else: log_failure(f"check not isinstance({a}, {b})", msg) return False def almost_equal(a, b, rel=None, abs=None, msg=""): """ For rel and abs tolerance, see: See https://docs.pytest.org/en/latest/builtin.html#pytest.approx """ __tracebackhide__ = True if a == pytest.approx(b, rel, abs): return True else: log_failure(f"check {a} == pytest.approx({b}, rel={rel}, abs={abs})", msg) return False def not_almost_equal(a, b, rel=None, abs=None, msg=""): """ For rel and abs tolerance, see: See https://docs.pytest.org/en/latest/builtin.html#pytest.approx """ __tracebackhide__ = True if a != pytest.approx(b, rel, abs): return True else: log_failure(f"check {a} != pytest.approx({b}, rel={rel}, abs={abs})", msg) return False def greater(a, b, msg=""): __tracebackhide__ = True if a > b: return True else: 
log_failure(f"check {a} > {b}", msg) return False def greater_equal(a, b, msg=""): __tracebackhide__ = True if a >= b: return True else: log_failure(f"check {a} >= {b}", msg) return False def less(a, b, msg=""): __tracebackhide__ = True if a < b: return True else: log_failure(f"check {a} < {b}", msg) return False def less_equal(a, b, msg=""): __tracebackhide__ = True if a <= b: return True else: log_failure(f"check {a} <= {b}", msg) return False def between(b, a, c, msg="", ge=False, le=False): __tracebackhide__ = True if ge and le: if a <= b <= c: return True else: log_failure(f"check {a} <= {b} <= {c}", msg) return False elif ge: if a <= b < c: return True else: log_failure(f"check {a} <= {b} < {c}", msg) return False elif le: if a < b <= c: return True else: log_failure(f"check {a} < {b} <= {c}", msg) return False else: if a < b < c: return True else: log_failure(f"check {a} < {b} < {c}", msg) return False def between_equal(b, a, c, msg=""): __tracebackhide__ = True return between(b, a, c, msg, ge=True, le=True) def fail(msg): __tracebackhide__ = True log_failure(msg) pytest-check-2.3.0/src/pytest_check/check_log.py000066400000000000000000000034541455206114200216710ustar00rootroot00000000000000from .pseudo_traceback import _build_pseudo_trace_str should_use_color = False COLOR_RED = "\x1b[31m" COLOR_RESET = "\x1b[0m" _failures = [] _stop_on_fail = False _default_max_fail = None _default_max_report = None _default_max_tb = 1 _max_fail = _default_max_fail _max_report = _default_max_report _max_tb = _default_max_tb _num_failures = 0 _fail_function = None _showlocals = False def clear_failures(): # get's called at the beginning of each test function global _failures, _num_failures global _max_fail, _max_report, _max_tb _failures = [] _num_failures = 0 _max_fail = _default_max_fail _max_report = _default_max_report _max_tb = _default_max_tb def any_failures() -> bool: return bool(get_failures()) def get_failures(): return _failures def log_failure(msg="", check_str="", tb=None): global _num_failures __tracebackhide__ = True _num_failures += 1 msg = str(msg).strip() if check_str: msg = f"{msg}: {check_str}" if (_max_report is None) or (_num_failures <= _max_report): if _num_failures <= _max_tb: pseudo_trace_str = _build_pseudo_trace_str(_showlocals, tb, should_use_color) msg = f"{msg}\n{pseudo_trace_str}" if should_use_color: msg = f"{COLOR_RED}FAILURE: {COLOR_RESET}{msg}" else: msg = f"FAILURE: {msg}" _failures.append(msg) if _fail_function: _fail_function(msg) if _max_fail and (_num_failures >= _max_fail): assert_msg = f"pytest-check max fail of {_num_failures} reached" assert _num_failures < _max_fail, assert_msg if _stop_on_fail: assert False, "Stopping on first failure" pytest-check-2.3.0/src/pytest_check/check_raises.py000066400000000000000000000073671455206114200224050ustar00rootroot00000000000000from .check_log import log_failure _stop_on_fail = False def raises(expected_exception, *args, **kwargs): """ Check that a given callable or context raises an error of a given type. Can be used as either a context manager: >>> with raises(AssertionError): >>> raise AssertionError or as a function: >>> def raises_assert(): >>> raise AssertionError >>> raises(AssertionError, raises_assert) `expected_exception` follows the same format rules as the second argument to `issubclass`, so multiple possible exception types can be used. When args[0] is callable, the remainder of args and all of kwargs except for any called `msg` are passed to args[0] as arguments. 
Note that because `raises` is implemented using a context manager, the usual control flow warnings apply: within the context, execution stops on the first error encountered *and does not resume after this error has been logged*. Therefore, the line you expect to raise an error must be the last line of the context: any subsequent lines won't be executed. Pull such lines out of the context if they don't raise errors, or use more calls to `raises` if they do. This function is modeled loosely after Pytest's own `raises`, except for the latter's `match`-ing logic. We should strive to keep the call signature of this `raises` as close as possible to the other `raises`. """ __tracebackhide__ = True if isinstance(expected_exception, type): excepted_exceptions = (expected_exception,) else: excepted_exceptions = expected_exception assert all( isinstance(exc, type) or issubclass(exc, BaseException) for exc in excepted_exceptions ) msg = kwargs.pop("msg", None) if not args: assert not kwargs, f"Unexpected kwargs for pytest_check.raises: {kwargs}" return CheckRaisesContext(expected_exception, msg=msg) else: func = args[0] assert callable(func) with CheckRaisesContext(expected_exception, msg=msg): func(*args[1:], **kwargs) class CheckRaisesContext: """ Helper context for `raises` that can be parameterized by error type. Note that CheckRaisesContext is instantiated whenever needed; it is not a global variable like `check`. Therefore, we don't need to curate `self.msg` in `__exit__` for this class like we do with CheckContextManager. """ def __init__(self, *expected_excs, msg=None): self.expected_excs = expected_excs self.msg = msg def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): __tracebackhide__ = True if exc_type is not None and issubclass(exc_type, self.expected_excs): # This is the case where an error has occured within the context # but it is the type we're expecting. Therefore we return True # to silence this error and proceed with execution outside the # context. return True if not _stop_on_fail: # Returning something falsey here will cause the context # manager to *not* suppress an exception not in # `expected_excs`, thus allowing the higher-level Pytest # context to handle it like any other unhandle exception during # test execution, including display and tracebacks. That is the # behavior we want when `_stop_on_fail` is True, so we let that # case fall through. If *not* `_stop_on_fail`, then we want to # log the error as a failed check but then continue execution # without raising an error, hence `return True`. log_failure(self.msg if self.msg else exc_val) return True pytest-check-2.3.0/src/pytest_check/context_manager.py000066400000000000000000000032051455206114200231230ustar00rootroot00000000000000import warnings import traceback from . import check_log from .check_log import log_failure _stop_on_fail = False # This class has grown into much more than just a context manager. # it's really the interface into the system. 
# TODO: maybe rename it # TODO: maybe pull in extra functionality here instead of in plugin.py class CheckContextManager: def __init__(self): self.msg = None def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): __tracebackhide__ = True if exc_type is not None and issubclass(exc_type, AssertionError): if _stop_on_fail: self.msg = None return else: fmt_tb = traceback.format_exception(exc_type, exc_val, exc_tb) if self.msg is not None: log_failure(f"{exc_val}, {self.msg}", tb=fmt_tb) else: log_failure(exc_val, tb=fmt_tb) self.msg = None return True self.msg = None def __call__(self, msg=None): self.msg = msg return self def set_no_tb(self): warnings.warn( "set_no_tb() is deprecated; use set_max_tb(0)", DeprecationWarning ) check_log._max_tb = 0 def set_max_fail(self, x): check_log._max_fail = x def set_max_report(self, x): check_log._max_report = x def set_max_tb(self, x): check_log._max_tb = x def call_on_fail(self, func): """Experimental feature - may change with any release""" check_log._fail_function = func check = CheckContextManager() pytest-check-2.3.0/src/pytest_check/plugin.py000066400000000000000000000110001455206114200212330ustar00rootroot00000000000000import sys import os import pytest from _pytest._code.code import ExceptionInfo from _pytest.skipping import xfailed_key from _pytest.reports import ExceptionChainRepr from _pytest._code.code import ExceptionRepr, ReprFileLocation from . import check_log, check_raises, context_manager, pseudo_traceback @pytest.hookimpl(hookwrapper=True, trylast=True) def pytest_runtest_makereport(item, call): outcome = yield report = outcome.get_result() num_failures = check_log._num_failures failures = check_log.get_failures() check_log.clear_failures() if failures: if item._store[xfailed_key]: report.outcome = "skipped" report.wasxfail = item._store[xfailed_key].reason else: summary = f"Failed Checks: {num_failures}" longrepr = ["\n".join(failures)] longrepr.append("-" * 60) longrepr.append(summary) if report.longrepr: longrepr.append("-" * 60) longrepr.append(report.longreprtext) report.longrepr = "\n".join(longrepr) else: report.longrepr = "\n".join(longrepr) report.outcome = "failed" try: raise AssertionError(report.longrepr) except AssertionError as e: excinfo = ExceptionInfo.from_current() if (pytest.version_tuple >= (7,3,0) and not os.getenv('PYTEST_XDIST_WORKER')): # Build a summary report with failure reason # Depends on internals of pytest, which changed in 7.3 # Also, doesn't work with xdist # # Example: Before 7.3: # =========== short test summary info =========== # FAILED test_example_simple.py::test_fail # Example after 7.3: # =========== short test summary info =========== # FAILED test_example_simple.py::test_fail - assert 1 == 2 # e_str = str(e) e_str = e_str.split('FAILURE: ')[1] # Remove redundant "Failure: " reprcrash = ReprFileLocation(item.nodeid, 0, e_str) reprtraceback = ExceptionRepr(reprcrash, excinfo) chain_repr = ExceptionChainRepr([(reprtraceback, reprcrash, str(e))]) report.longrepr = chain_repr else: # pragma: no cover # coverage is run on latest pytest # we'll have one test run on an older pytest just to make sure # it works. ... call.excinfo = excinfo def pytest_configure(config): # Add some red to the failure output, if stdout can accommodate it. 
isatty = sys.stdout.isatty() color = getattr(config.option, "color", None) check_log.should_use_color = (isatty and color == "auto") or (color == "yes") # If -x or --maxfail=1, then stop on the first failed check # Otherwise, let pytest stop on the maxfail-th test function failure maxfail = config.getvalue("maxfail") stop_on_fail = maxfail == 1 # TODO: perhaps centralize where we're storing stop_on_fail context_manager._stop_on_fail = stop_on_fail check_raises._stop_on_fail = stop_on_fail check_log._stop_on_fail = stop_on_fail # Allow for --tb=no to turn off check's pseudo tbs traceback_style = config.getoption("tbstyle", default=None) pseudo_traceback._traceback_style = traceback_style check_log._showlocals = config.getoption("showlocals", default=None) # grab options check_log._default_max_fail = config.getoption("--check-max-fail") check_log._default_max_report = config.getoption("--check-max-report") check_log._default_max_tb = config.getoption("--check-max-tb") # Allow for tests to grab "check" via fixture: # def test_a(check): # check.equal(a, b) @pytest.fixture(name="check") def check_fixture(): return context_manager.check # add some options def pytest_addoption(parser): parser.addoption( "--check-max-report", action="store", type=int, help="max failures to report", ) parser.addoption( "--check-max-fail", action="store", type=int, help="max failures per test", ) parser.addoption( "--check-max-tb", action="store", type=int, default=1, help="max pseudo-tracebacks per test", ) pytest-check-2.3.0/src/pytest_check/pseudo_traceback.py000066400000000000000000000067571455206114200232620ustar00rootroot00000000000000import inspect import os import re from pprint import pformat _traceback_style = "auto" def get_full_context(frame): (_, filename, line, funcname, contextlist) = frame[0:5] locals = frame.frame.f_locals tb_hide = locals.get("__tracebackhide__", False) try: filename = os.path.relpath(filename) except ValueError: # pragma: no cover # this is necessary if we're tracing to a different drive letter # such as C: to D: # # Turning off coverage for abspath, for now, # since that path requires testing with an odd setup. # But.... we'll keep looking for a way to test it. :) filename = os.path.abspath(filename) # pragma: no cover context = contextlist[0].strip() if contextlist else "" return (filename, line, funcname, context, locals, tb_hide) COLOR_RED = "\x1b[31m" COLOR_RESET = "\x1b[0m" def reformat_raw_traceback(lines, color): formatted = [] for line in lines: if 'Traceback (most recent call last)' in line: continue if 'AssertionError' in line: if color: line = f"{COLOR_RED}{line}{COLOR_RESET}" formatted.append(line) continue result = re.search(r'File "(.*)", line (.*), in (\w*)$\n\W*(.*)', line, flags=re.MULTILINE) if result: file_path, line_no, func_name, context = result.groups() file_name = os.path.basename(file_path) if color: file_name = f"{COLOR_RED}{file_name}{COLOR_RESET}" #formatted.append(f'{file_name}:{line_no} in {func_name}\n {context}') formatted.append(f'{file_name}:{line_no} in {func_name} -> {context}') else: # I don't have a test case to hit this clause yet # And I can't think of one. # But it feels weird to not have the if/else. 
# Thus the "no cover" formatted.append(line) # pragma: no cover return '\n'.join(formatted) def _build_pseudo_trace_str(showlocals, tb, color): """ built traceback styles for better error message only supports no """ if _traceback_style == "no": return "" skip_own_frames = 3 pseudo_trace = [] func = "" if tb: pseudo_trace.append(reformat_raw_traceback(tb, color)) context_stack = inspect.stack()[skip_own_frames:] while "test_" not in func and context_stack: full_context = get_full_context(context_stack.pop(0)) (file, line, func, context, locals, tb_hide) = full_context # we want to trace through user code, not 3rd party or builtin libs if "site-packages" in file: break # if called outside of a test, we might hit this if "" in func: break if tb_hide: continue if showlocals: for name, val in reversed(locals.items()): if not name.startswith('@py'): pseudo_trace.append("%-10s = %s" % (name, pformat(val, sort_dicts=False, compact=True))) file = os.path.basename(file) if color: file = f"{COLOR_RED}{file}{COLOR_RESET}" #line = f"{file}:{line} in {func}\n {context}" line = f"{file}:{line} in {func}() -> {context}" pseudo_trace.append(line) return "\n".join(reversed(pseudo_trace)) + "\n" pytest-check-2.3.0/tests/000077500000000000000000000000001455206114200152615ustar00rootroot00000000000000pytest-check-2.3.0/tests/__init__.py000066400000000000000000000000001455206114200173600ustar00rootroot00000000000000pytest-check-2.3.0/tests/conftest.py000066400000000000000000000000341455206114200174550ustar00rootroot00000000000000pytest_plugins = "pytester" pytest-check-2.3.0/tests/test_alt_names.py000066400000000000000000000011301455206114200206300ustar00rootroot00000000000000ini_contents = """ [pytest] python_functions = test_* *_test """ def test_alt_names(pytester): """ Should stop after first failed check """ pytester.copy_example("examples/test_example_alt_names.py") pytester.makeini(ini_contents) result = pytester.runpytest() result.stdout.fnmatch_lines( [ "*FAILURE: assert 1 == 0*", "*_alt_names.py:* in ends_with_test() -> helper_func()*", "*_alt_names.py:* in helper_func() -> *", "*Failed Checks: 1*", ], ) result.assert_outcomes(failed=1, passed=1) pytest-check-2.3.0/tests/test_any_failures.py000066400000000000000000000017321455206114200213560ustar00rootroot00000000000000from pytest_check import any_failures, check def test_any_failures_false(pytester): pytester.copy_example("examples/test_example_any_failures.py") result = pytester.runpytest("-k", "test_any_failures_false") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines( [ "*check 1 == 2*", "*check 1 == 3*", "*check 1 == 4*", "*Failed Checks: 3", ], ) def test_any_failure_true(pytester): pytester.copy_example("examples/test_example_any_failures.py") result = pytester.runpytest("-k", "test_any_failures_true") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines( [ "*check 2 == 3*", "*Failed Checks: 1", ], ) def test_top_level(): assert not any_failures() def test_from_imported_check(): assert not check.any_failures() def test_from_check_fixture(check): assert not check.any_failures() pytest-check-2.3.0/tests/test_check_and_assert.py000066400000000000000000000003451455206114200221540ustar00rootroot00000000000000def test_check_and_assert(pytester): pytester.copy_example("examples/test_example_check_and_assert.py") result = pytester.runpytest() result.assert_outcomes(failed=2) result.stdout.fnmatch_lines(["* 2 failed *"]) 
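# The sketch below is an editor addition, not part of the original suite
# (the inline test body and its name are hypothetical); it shows the same
# blocking-assert / non-blocking-check mix written inline with
# pytester.makepyfile instead of a copied example file.
def test_check_and_assert_inline_sketch(pytester):
    pytester.makepyfile(
        """
        from pytest_check import check

        def test_mixed_inline():
            check.equal(1, 2)   # logged as a failed check; execution continues
            assert 1 == 2       # blocking failure; the test stops here
            check.equal(2, 3)   # never reached
        """
    )
    result = pytester.runpytest()
    result.assert_outcomes(failed=1)
    result.stdout.fnmatch_lines(["*Failed Checks: 1*"])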
pytest-check-2.3.0/tests/test_check_check.py000066400000000000000000000002631455206114200211050ustar00rootroot00000000000000def test_check_check(pytester): pytester.copy_example("examples/test_example_check_check.py") result = pytester.runpytest() result.assert_outcomes(failed=0, passed=2) pytest-check-2.3.0/tests/test_check_context_manager.py000066400000000000000000000021171455206114200232060ustar00rootroot00000000000000def test_context_manager_pass(pytester): pytester.copy_example("examples/test_example_context_manager_pass.py") result = pytester.runpytest() result.assert_outcomes(passed=2) def test_context_manager_fail(pytester): pytester.copy_example("examples/test_example_context_manager_fail.py") result = pytester.runpytest("-k", "test_3_failed_checks") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines( [ "*FAILURE: assert 1 == 0*", "*FAILURE: assert 1 > 2*", "*FAILURE: assert 5 < 4*", "*Failed Checks: 3*", ], ) def test_context_manager_fail_with_msg(pytester): pytester.copy_example("examples/test_example_context_manager_fail.py") result = pytester.runpytest("-k", "test_messages") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines( [ "*FAILURE: assert 1 == 0, first fail*", "*FAILURE: assert 1 > 2, second fail*", "*FAILURE: assert 5 < 4, third fail*", "*Failed Checks: 3*", ], )pytest-check-2.3.0/tests/test_check_fixture.py000066400000000000000000000000651455206114200215160ustar00rootroot00000000000000def test_check_fixture(check): check.equal(1, 1) pytest-check-2.3.0/tests/test_check_func_decorator.py000066400000000000000000000013441455206114200230260ustar00rootroot00000000000000def test_passing_check_helper_functions(pytester): pytester.copy_example("examples/test_example_check_func_decorator.py") result = pytester.runpytest("-k", "test_pass") result.assert_outcomes(passed=2) def test_failing_check_helper_functions(pytester): pytester.copy_example("examples/test_example_check_func_decorator.py") result = pytester.runpytest("-s", "-k", "test_all_four") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines( [ "*should_be_True=True*", "*should_be_False=False*", "*FAILURE: assert 1 == 4*", "*FAILURE: assert 2 == 4*", "*FAILURE: assert 3 == 4*", "*Failed Checks: 3*", ], ) pytest-check-2.3.0/tests/test_fail_func.py000066400000000000000000000010641455206114200206210ustar00rootroot00000000000000def test_fail_func(pytester): pytester.copy_example("examples/test_example_fail_func.py") result = pytester.runpytest("--check-max-tb=2") result.assert_outcomes(failed=2) result.stdout.fnmatch_lines( [ "*FAILURE: one", "*test_one_failure() -> check.fail('one')", "Failed Checks: 1", "*FAILURE: one", "*test_two_failures() -> check.fail('one')", "*FAILURE: two", "*test_two_failures() -> check.fail('two')", "Failed Checks: 2", ], ) pytest-check-2.3.0/tests/test_fail_in_fixture.py000066400000000000000000000010301455206114200220330ustar00rootroot00000000000000def test_setup_failure(pytester): pytester.copy_example("examples/test_example_fail_in_fixture.py") result = pytester.runpytest("-k", "test_setup_failure") result.assert_outcomes(errors=1) result.stdout.fnmatch_lines(["* check.equal(1, 2)*"]) def test_teardown_failure(pytester): pytester.copy_example("examples/test_example_fail_in_fixture.py") result = pytester.runpytest("-k", "test_teardown_failure") result.assert_outcomes(passed=1, errors=1) result.stdout.fnmatch_lines(["* check.equal(1, 2)*"]) 
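# The sketch below is an editor addition, not part of the original suite
# (the fixture and test names are hypothetical); it inlines a fixture whose
# setup fails a check, which should be reported as an error rather than a
# failure, matching test_setup_failure above.
def test_setup_failure_inline_sketch(pytester):
    pytester.makepyfile(
        """
        import pytest
        from pytest_check import check

        @pytest.fixture()
        def failing_setup():
            check.equal(1, 2)
            yield

        def test_uses_failing_setup(failing_setup):
            pass
        """
    )
    result = pytester.runpytest()
    result.assert_outcomes(errors=1)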
pytest-check-2.3.0/tests/test_functions.py000066400000000000000000000006101455206114200206770ustar00rootroot00000000000000def test_passing_check_functions(pytester): pytester.copy_example("examples/test_example_functions_pass.py") result = pytester.runpytest() result.assert_outcomes(failed=0, passed=23) def test_failing_check_functions(pytester): pytester.copy_example("examples/test_example_functions_fail.py") result = pytester.runpytest() result.assert_outcomes(failed=23, passed=0) pytest-check-2.3.0/tests/test_helpers.py000066400000000000000000000020141455206114200203310ustar00rootroot00000000000000import sys import pytest # the output is different in prior versions. @pytest.mark.skipif(sys.version_info < (3, 10), reason="requires python3.10 or higher") def test_sequence_with_helper_funcs(pytester): """ Should show a sequence of calls """ pytester.copy_example("examples/test_example_helpers.py") result = pytester.runpytest("--check-max-tb=2") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines( [ "*FAILURE: assert 1 == 0, first", "*in test_func() -> helper1()", "*in helper1() -> helper2()", "*in helper2() -> with check(\"first\"):", "*in helper2 -> assert 1 == 0", "*AssertionError: assert 1 == 0", "*FAILURE: assert 1 > 2, second", "*in test_func() -> helper1()", "*in helper1() -> helper2()", "*in helper2() -> with check(\"second\"):", "*in helper2 -> assert 1 > 2", "*AssertionError: assert 1 > 2" ], ) pytest-check-2.3.0/tests/test_locals.py000066400000000000000000000011771455206114200201550ustar00rootroot00000000000000def test_locals_context_manager(pytester): pytester.copy_example("examples/test_example_locals.py") result = pytester.runpytest("test_example_locals.py::test_ctx", "-l") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines([ "*a *= 1*", "*b *= 2*" ]) def test_locals_check_function(pytester): pytester.copy_example("examples/test_example_locals.py") result = pytester.runpytest("test_example_locals.py::test_check_func", "--showlocals") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines([ "*a *= 1*", "*b *= 2*" ])pytest-check-2.3.0/tests/test_logging.py000066400000000000000000000025101455206114200203160ustar00rootroot00000000000000def test_log(testdir): testdir.makepyfile(""" import logging from pytest_check import check log = logging.getLogger(__name__) records = None # will fail and produce logs def test_logging(caplog): global records check.call_on_fail(log.error) log.error('one') check.equal(1, 2, "two") log.error('three') check.equal(1, 2, "four") log.error('five') records = caplog.records # consumes logs from previous test # should pass def test_log_content(): assert 'one' in records[0].message assert 'two' in records[1].message assert 'three' in records[2].message assert 'four' in records[3].message assert 'five' in records[4].message """) result = testdir.runpytest() result.assert_outcomes(failed=1, passed=1) def test_print(testdir): testdir.makepyfile( """ from pytest_check import check def test_with_print(): check.call_on_fail(print) print('one') check.equal(1, 2, "two") print('three') check.equal(1, 2, "four") print('five') """) result = testdir.runpytest() result.assert_outcomes(failed=1) result.stdout.fnmatch_lines(["*one*", "*two*", "*three*", "*four*", "*five*"]) pytest-check-2.3.0/tests/test_maxfail.py000066400000000000000000000020141455206114200203100ustar00rootroot00000000000000def test_maxfail_1_stops_on_first_check(pytester): """ Should stop after first failed check """ 
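    # Note: plugin.pytest_configure turns on stop-on-fail only when
    # maxfail == 1 (i.e. -x or --maxfail=1); larger --maxfail values count
    # failed test functions, not individual checks, which is the split the
    # three tests in this module exercise.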
pytester.copy_example("examples/test_example_maxfail.py") result = pytester.runpytest("--maxfail=1") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines(["*AssertionError: one*"]) def test_maxfail_2_stops_on_two_failed_tests(pytester): """ Should stop after 2 tests (not checks) """ pytester.copy_example("examples/test_example_maxfail.py") result = pytester.runpytest("--maxfail=2") result.assert_outcomes(failed=2, passed=0) result.stdout.fnmatch_lines( [ "*FAILURE: one", "*FAILURE: two", "*FAILURE: three", "*Failed Checks: 3*", ], ) def test_maxfail_3_runs_at_least_3_tests(pytester): """ Should not stop on checks. """ pytester.copy_example("examples/test_example_maxfail.py") result = pytester.runpytest("--maxfail=3") result.assert_outcomes(failed=2, passed=1) pytest-check-2.3.0/tests/test_message.py000066400000000000000000000007711455206114200203230ustar00rootroot00000000000000def test_baseline(pytester): pytester.copy_example("examples/test_example_message.py") result = pytester.runpytest("-k baseline") result.stdout.fnmatch_lines(["FAILURE: check 1 == 2"]) result.stdout.no_fnmatch_line("FAILURE: check 1 == 2: comment about a=1 != b=2") def test_message(pytester): pytester.copy_example("examples/test_example_message.py") result = pytester.runpytest("-k message") result.stdout.fnmatch_lines(["FAILURE: check 1 == 2: comment about a=1 != b=2"]) pytest-check-2.3.0/tests/test_not_in_test.py000066400000000000000000000016701455206114200212230ustar00rootroot00000000000000""" The error caused by our example is on purpose. However, the import system in some versions of Python (such as 3.7) don't like it, even when running as a test. Python 3.10 handles it fine. So that's where we'll test it. """ import sys import pytest @pytest.mark.skipif(sys.version_info < (3, 10), reason="requires python3.10 or higher") def test_check_not_in_a_test(pytester): """ Should error """ pytester.copy_example("examples/test_example_check_not_in_test.py") result = pytester.runpytest() result.assert_outcomes(errors=1, failed=0, passed=0) result.stdout.fnmatch_lines( [ "* ERROR at setup of test_something *", "*FAILURE: assert 1 == 0*", "*not_in_test.py:* in not_in_a_test() -> helper_func()*", "*Failed Checks: 1*", "* short test summary info *", "*ERROR test_example_check_not_in_test.py::test_something*", ], ) pytest-check-2.3.0/tests/test_raises.py000066400000000000000000000124451455206114200201660ustar00rootroot00000000000000import pytest from pytest_check import raises class BaseTestException(Exception): pass class _TestException(BaseTestException): pass class AnotherTestException(BaseTestException): pass BASE_IMPORTS_AND_EXCEPTIONS = """ from pytest_check import raises class BaseTestException(Exception): pass class _TestException(BaseTestException): pass class AnotherTestException(BaseTestException): pass """ def test_raises(): with raises(_TestException): raise _TestException def test_raises_with_assertion_error(): with raises(AssertionError): assert 0 def test_raises_with_multiple_errors(testdir): with raises((_TestException, AnotherTestException)): raise _TestException with raises((_TestException, AnotherTestException)): raise AnotherTestException testdir.makepyfile( BASE_IMPORTS_AND_EXCEPTIONS + """ def test_failures(): with raises((_TestException, AnotherTestException)): raise AssertionError """, ) result = testdir.runpytest() result.assert_outcomes(failed=1, passed=0) result.stdout.re_match_lines( [ "FAILURE: ", # Python < 3.10 reports error at `raise` but 3.10 reports at `with` r".*raise 
AssertionError.*" r"|.*with raises\(\(_TestException, AnotherTestException\)\):.*", ], consecutive=True, ) def test_raises_with_parents_and_children(testdir): with raises(BaseTestException): raise _TestException with raises((BaseTestException, _TestException)): raise BaseTestException with raises((BaseTestException, _TestException)): raise _TestException # Children shouldn't catch their parents, only vice versa. testdir.makepyfile( BASE_IMPORTS_AND_EXCEPTIONS + """ def test_failures(): with raises((_TestException, AnotherTestException)): raise BaseTestException """, ) result = testdir.runpytest() result.assert_outcomes(failed=1, passed=0) result.stdout.re_match_lines( [ "FAILURE: ", # Python < 3.10 reports error at `raise` but 3.10 reports at `with` r".*raise BaseTestException.*" r"|.*with raises\(\(_TestException, AnotherTestException\)\):.*", ], consecutive=True, ) @pytest.mark.parametrize( "run_flags,match_lines", [ ("--exitfirst", ["test_raises_stop_on_fail.py:19: ValueError"]), ("", ["*Failed Checks: 2*"]), ], ) def test_raises_stop_on_fail(run_flags, match_lines, testdir): """ Test multiple failures with and without `--exitfirst` With `--exitfirst`, first error is the only one reported, and without, multiple errors are accumulated. """ # test_failures below includes one passed check, two checked failures, and # a final passed check. `--exitfirst` should result in only the first # error reported, and subsequent errors and successes are ignored. Without # that flag, two failures should be counted and reported, and the last # success should be executed. testdir.makepyfile( BASE_IMPORTS_AND_EXCEPTIONS + """ def test_failures(): with raises(BaseTestException): raise BaseTestException with raises(BaseTestException): raise ValueError with raises(BaseTestException): raise ValueError with raises(BaseTestException): raise BaseTestException """, ) result = testdir.runpytest(run_flags) result.assert_outcomes(failed=1) result.stdout.fnmatch_lines(match_lines) def test_can_mix_assertions_and_checks(pytester): """ You can mix checks and asserts, but a failing assert stops test execution. """ pytester.copy_example("examples/test_example_mix_checks_and_assertions.py") result = pytester.runpytest() result.assert_outcomes(failed=1) result.stdout.fnmatch_lines( [ "*FAILURE:*", "*Failed Checks: 1*", "*assert 1 == 2*", ], ) def test_msg_kwarg_with_raises_context_manager(testdir): testdir.makepyfile( """ from pytest_check import raises def raise_valueerror(): raise ValueError def test(): with raises(AssertionError, msg="hello, world!"): raise_valueerror() """, ) result = testdir.runpytest() result.assert_outcomes(failed=1) result.stdout.fnmatch_lines(["FAILURE: hello, world!"]) def test_raises_function(testdir): def raise_error(): raise _TestException # Single exception raises(_TestException, raise_error) # Multiple exceptions raises((_TestException, AnotherTestException), raise_error) def assert_foo_equals_bar(foo, bar=None): assert foo == bar # Test args and kwargs are passed to callable raises(AssertionError, assert_foo_equals_bar, 1, bar=2) # Kwarg `msg` is special and can be found in failure output. 
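    # (`msg` is popped before the callable is invoked, and when set it
    # replaces the exception text in the logged "FAILURE: ..." line; see
    # the kwargs handling in raises() and CheckRaisesContext.__exit__.)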
testdir.makepyfile( """ from pytest_check import raises def raise_valueerror(): raise ValueError def test(): raises(AssertionError, raise_valueerror, msg="hello, world!") """, ) result = testdir.runpytest() result.assert_outcomes(failed=1) result.stdout.fnmatch_lines(["FAILURE: hello, world!"]) pytest-check-2.3.0/tests/test_red.py000066400000000000000000000016551455206114200174530ustar00rootroot00000000000000def test_red(pytester): """ Should have red in failure. """ pytester.copy_example("examples/test_example_simple.py") result = pytester.runpytest("--color=yes") result.assert_outcomes(failed=1, passed=1) result.stdout.fnmatch_lines( [ "*[31mFAILURE:*[0massert*", "*[31mtest_example_simple.py*[0m:14 in test_fail*", "*[31mAssertionError: assert 1 == 2*", "*[0m*" ], ) def test_no_red(pytester): """ Should NOT have red in failure. """ pytester.copy_example("examples/test_example_simple.py") result = pytester.runpytest("--color=no") result.assert_outcomes(failed=1, passed=1) result.stdout.fnmatch_lines( [ "*FAILURE: assert*", # no red before assert ], ) result.stdout.no_fnmatch_line("*[31m*") # no red anywhere result.stdout.no_fnmatch_line("*[0m*") # no reset anywhere pytest-check-2.3.0/tests/test_speedup_flags.py000066400000000000000000000047071455206114200215230ustar00rootroot00000000000000def test_baseline(pytester): pytester.copy_example("examples/test_example_multiple_failures.py") result = pytester.runpytest("--check-max-tb=10") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines( [ "*FAILURE: * 7 == 100", "*test_multiple_failures() -> check.equal(i, 100)", "*FAILURE: * 8 == 100", "*test_multiple_failures() -> check.equal(i, 100)", "*FAILURE: * 9 == 100", "*test_multiple_failures() -> check.equal(i, 100)", "Failed Checks: 10", ], ) def test_no_tb(pytester): pytester.copy_example("examples/test_example_multiple_failures.py") result = pytester.runpytest("--check-max-tb=0") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines( [ "*FAILURE: * 7 == 100", "*FAILURE: * 8 == 100", "*FAILURE: * 9 == 100", "Failed Checks: 10", ], ) result.stdout.no_fnmatch_line("*test_multiple_failures() -> check.equal(i, 100)") def test_max_report(pytester): pytester.copy_example("examples/test_example_multiple_failures.py") result = pytester.runpytest("--check-max-report=5") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines( [ "*FAILURE: * 1 == 100", "*FAILURE: * 2 == 100", "*FAILURE: * 3 == 100", "*FAILURE: * 4 == 100", "*FAILURE: * 5 == 100", "Failed Checks: 10", ], ) result.stdout.no_fnmatch_line("*FAILURE: * 6 == 100") def test_max_fail(pytester): pytester.copy_example("examples/test_example_multiple_failures.py") result = pytester.runpytest("--check-max-fail=5") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines( [ "*FAILURE: * 1 == 100", "*FAILURE: * 2 == 100", "*FAILURE: * 3 == 100", "*FAILURE: * 4 == 100", "*FAILURE: * 5 == 100", "Failed Checks: 5", "*AssertionError: pytest-check max fail of 5 reached", ], ) result.stdout.no_fnmatch_line("*FAILURE: * 6 == 100") def test_max_tb(pytester): pytester.copy_example("examples/test_example_multiple_failures.py") result = pytester.runpytest("--check-max-tb=2", "--show-capture=no") result.assert_outcomes(failed=1) num_tb = str(result.stdout).count("test_multiple_failures() -> check.equal(i, 100)") assert num_tb == 2 pytest-check-2.3.0/tests/test_speedup_functions.py000066400000000000000000000051271455206114200224340ustar00rootroot00000000000000import pytest def test_baseline(pytester): 
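    # These tests cover the same limits as the --check-max-report /
    # --check-max-fail / --check-max-tb flags in test_speedup_flags.py, but
    # presumably set from inside examples/test_example_speedup_funcs.py via
    # check.set_max_report(), check.set_max_fail() and check.set_max_tb()
    # (the example file itself is not shown here).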
pytester.copy_example("examples/test_example_speedup_funcs.py") result = pytester.runpytest("-k baseline", "--check-max-tb=10") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines( [ "*FAILURE: * 7 == 100", "*test_baseline() -> check.equal(i, 100)", "*FAILURE: * 8 == 100", "*test_baseline() -> check.equal(i, 100)", "*FAILURE: * 9 == 100", "*test_baseline() -> check.equal(i, 100)", "Failed Checks: 10", ], ) def test_max_report(pytester): pytester.copy_example("examples/test_example_speedup_funcs.py") result = pytester.runpytest("-k max_report") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines( [ "*FAILURE: * 1 == 100", "*FAILURE: * 2 == 100", "*FAILURE: * 3 == 100", "*FAILURE: * 4 == 100", "*FAILURE: * 5 == 100", "Failed Checks: 10", ], ) result.stdout.no_fnmatch_line("*FAILURE: * 6 == 100") def test_max_fail(pytester): pytester.copy_example("examples/test_example_speedup_funcs.py") result = pytester.runpytest("-k max_fail") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines( [ "*FAILURE: * 1 == 100", "*FAILURE: * 2 == 100", "*FAILURE: * 3 == 100", "*FAILURE: * 4 == 100", "*FAILURE: * 5 == 100", "Failed Checks: 5", "*AssertionError: pytest-check max fail of 5 reached", ], ) result.stdout.no_fnmatch_line("*FAILURE: * 6 == 100") def test_max_tb(pytester): pytester.copy_example("examples/test_example_speedup_funcs.py") result = pytester.runpytest("-k max_tb", "--show-capture=no") result.assert_outcomes(failed=1) num_tb = str(result.stdout).count("in test_max_tb() -> check.equal(i, 100)") assert num_tb == 2 def test_deprecated_no_tb(check): with pytest.deprecated_call(): check.set_no_tb() text_for_test_no_tb = """ def test_no_tb(check): check.set_no_tb() for i in range(1, 11): check.equal(i, 100) """ def test_no_tb(pytester): pytester.makepyfile(text_for_test_no_tb) result = pytester.runpytest("-k no_tb") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines( [ "*FAILURE: * 7 == 100", "*FAILURE: * 8 == 100", "*FAILURE: * 9 == 100", "Failed Checks: 10", ], ) result.stdout.no_fnmatch_line("*test_baseline() -> check.equal(i, 100)") pytest-check-2.3.0/tests/test_stop_on_fail.py000066400000000000000000000016341455206114200213520ustar00rootroot00000000000000def test_stop_on_fail(pytester): pytester.copy_example("examples/test_example_stop_on_fail.py") result = pytester.runpytest("-x") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines(["*> * check.equal(1, 2)*"]) def test_context_manager_stop_on_fail(pytester): pytester.copy_example("examples/test_example_context_manager_fail.py") result = pytester.runpytest("-x", "-k", "test_3_failed_checks") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines(["*assert 1 == 0*"]) def test_context_manager_stop_on_fail_with_msg(pytester): pytester.copy_example("examples/test_example_context_manager_fail.py") result = pytester.runpytest("-x", "-k", "test_messages") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines(["*first fail*"]) result.stdout.no_fnmatch_line("*second fail*") result.stdout.no_fnmatch_line("*third fail*") pytest-check-2.3.0/tests/test_summary.py000066400000000000000000000011461455206114200203710ustar00rootroot00000000000000import pytest require_pytest_7_3 = pytest.mark.skipif( pytest.version_tuple < (7, 3, 0), reason="summary message only supported on pytest7.3+") @require_pytest_7_3 def test_baseline(pytester): pytester.copy_example("examples/test_example_summary.py") result = pytester.runpytest("-k check_no_msg") result.stdout.fnmatch_lines(["*FAILED*-*check 1 
== 2*"]) @require_pytest_7_3 def test_message(pytester): pytester.copy_example("examples/test_example_summary.py") result = pytester.runpytest("-k check_msg") result.stdout.fnmatch_lines(["*FAILED*-*check 1 == 2*comment about*"]) pytest-check-2.3.0/tests/test_tb_style.py000066400000000000000000000013111455206114200205130ustar00rootroot00000000000000from pytest import LineMatcher def test_traceback_style_no(pytester): pytester.copy_example("examples/test_example_tb_style.py") result = pytester.runpytest("--junitxml=output.xml", "--tb=no") result.assert_outcomes(failed=1, passed=0) with open("output.xml") as f: lines = LineMatcher(f.readlines()) lines.no_fnmatch_line("*run_helper1()*") def test_traceback_style_default(pytester): pytester.copy_example("examples/test_example_tb_style.py") result = pytester.runpytest("--junitxml=output.xml") result.assert_outcomes(failed=1, passed=0) with open("output.xml") as f: lines = LineMatcher(f.readlines()) lines.fnmatch_lines("*run_helper1()*") pytest-check-2.3.0/tests/test_thread.py000066400000000000000000000007741455206114200201510ustar00rootroot00000000000000def test_failing_threaded_testcode(pytester): pytester.copy_example("examples/test_example_fail_in_thread.py") result = pytester.runpytest() result.assert_outcomes(failed=2, passed=0) result.stdout.fnmatch_lines(["*1 + 1 is 2, not 3*"]) result.stdout.fnmatch_lines(["*1 + 1 is 2, not 4*"]) def test_passing_threaded_testcode(pytester): pytester.copy_example("examples/test_example_pass_in_thread.py") result = pytester.runpytest() result.assert_outcomes(failed=0, passed=2) pytest-check-2.3.0/tests/test_tracebackhide.py000066400000000000000000000024771455206114200214550ustar00rootroot00000000000000import sys import pytest @pytest.mark.skipif(sys.version_info < (3, 10), reason="requires python3.10 or higher") def test_normal_pseudo_traceback(pytester): """ Should show a sequence of calls """ pytester.copy_example("examples/test_example_helpers.py") result = pytester.runpytest("--check-max-tb=2") result.assert_outcomes(failed=1, passed=0) result.stdout.fnmatch_lines( [ "*FAILURE: assert 1 == 0, first", "*in test_func() -> helper1()", "*in helper1() -> helper2()", "*in helper2 -> assert 1 == 0", "*AssertionError: assert 1 == 0", "*FAILURE: assert 1 > 2, second", "*in test_func() -> helper1()", "*in helper1() -> helper2()", "*in helper2 -> assert 1 > 2", "*AssertionError: assert 1 > 2", "*Failed Checks: 2*", ], ) @pytest.mark.skipif(sys.version_info < (3, 10), reason="requires python3.10 or higher") def test_tracebackhide(pytester): """ Should skip helper1, since it has __tracebackhide__ = True """ pytester.copy_example("examples/test_example_tracebackhide.py") result = pytester.runpytest("--check-max-tb=2") result.assert_outcomes(failed=1, passed=0) result.stdout.no_fnmatch_line("*in helper1() -> helper2()") pytest-check-2.3.0/tests/test_xfail.py000066400000000000000000000017721455206114200200040ustar00rootroot00000000000000def test_xfail(pytester): pytester.copy_example("examples/test_example_xfail.py") result = pytester.runpytest("test_example_xfail.py::test_xfail") result.assert_outcomes(xfailed=1) result.stdout.fnmatch_lines(["* 1 xfailed *"]) def test_xfail_strict(pytester): pytester.copy_example("examples/test_example_xfail.py") result = pytester.runpytest("test_example_xfail.py::test_xfail_strict") result.assert_outcomes(xfailed=1) result.stdout.fnmatch_lines(["* 1 xfailed *"]) def test_xpass(pytester): pytester.copy_example("examples/test_example_xfail.py") result = 
pytester.runpytest("test_example_xfail.py::test_xfail_pass") result.assert_outcomes(xpassed=1) result.stdout.fnmatch_lines(["* 1 xpassed *"]) def test_xpass_strict(pytester): pytester.copy_example("examples/test_example_xfail.py") result = pytester.runpytest("test_example_xfail.py::test_xfail_pass_strict") result.assert_outcomes(failed=1) result.stdout.fnmatch_lines(["* 1 failed *"]) pytest-check-2.3.0/tox.ini000066400000000000000000000015441455206114200154360ustar00rootroot00000000000000[tox] envlist = py37, py38, py39, py310, py311, py312, pytest_earliest, coverage, lint skip_missing_interpreters = true [testenv] commands = pytest {posargs} description = Run pytest package = wheel wheel_build_env = .pkg [testenv:coverage] deps = coverage basepython = python3.11 commands = coverage run --source={envsitepackagesdir}/pytest_check,tests -m pytest coverage report --fail-under=100 --show-missing description = Run pytest, with coverage [testenv:pytest_earliest] deps = pytest==7.0.0 basepython = python3.11 commands = pytest {posargs} description = Run earliest supported pytest [testenv:lint] skip_install = true deps = ruff basepython = python3.11 commands = ruff src tests examples description = Run ruff over src, tests, examples [pytest] addopts = --color=yes --strict-markers --strict-config -ra testpaths = tests
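# addopts notes: --strict-markers errors on unregistered markers,
# --strict-config turns warnings from parsing the [pytest] config section
# into errors, and -ra adds a short summary line for every non-passed outcome.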