===== pytest-golden-0.2.2/.github/workflows/ci.yml =====

name: CI

on:
  push:
  pull_request:
  schedule:
    - cron: '0 6 * * 6'

defaults:
  run:
    shell: bash

jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        include:
          - python: '^3.9'
            os: macos-latest
          - python: 3.8
            os: ubuntu-latest
          - python: 3.7
            os: windows-latest
          - python: 3.6
            os: ubuntu-latest
            versions: minimal
    runs-on: ${{matrix.os}}
    steps:
      - name: Download source
        uses: actions/checkout@v2
      - name: Install Python
        uses: actions/setup-python@v2
        with:
          python-version: ${{matrix.python}}
      - name: Pin to lowest versions
        if: matrix.versions == 'minimal'
        run: |
          sed -i -E 's/"(\^|>=)([0-9])/"==\2/' pyproject.toml
      - name: Setup virtualenv
        uses: syphar/restore-virtualenv@d0a933d92488e0505e012c3367e3f987a6276f5a
        with:
          requirement_files: pyproject.toml
      - name: Install packages
        run: |
          python -m pip install -U pip'>=19'; pip install -U wheel
          pip install -U --upgrade-strategy=eager . $(awk '/^$/ {p = 0} ! /${{runner.os}}/ { if (p) {print $1} } /dev-dependencies/ {p = 1}' pyproject.toml)
          pip install -U ./example
      - name: Test
        run: |
          .tools/ci.sh with_groups
      - name: Check formatting
        if: matrix.versions == null
        run: |
          git diff --exit-code

===== pytest-golden-0.2.2/.gitignore =====

/dist/
site/
.pytype/
.pytest_cache/
__pycache__/
*.egg-info/
.venv/
poetry.lock
.vscode/

===== pytest-golden-0.2.2/.tools/ci.sh =====

#!/bin/sh
set -e
cd "$(dirname "$0")/.."

with_groups() {
    echo "::group::$@"
    "$@" && echo "::endgroup::"
}

"$@" autoflake -i -r --remove-all-unused-imports --remove-unused-variables pytest_golden tests
"$@" isort -q pytest_golden tests
"$@" black -q pytest_golden tests
"$@" pytest -q
python -c 'import sys, os; sys.exit((3,8) <= sys.version_info < (3,10) and os.name == "posix")' || "$@" pytype pytest_golden
PYTHONPATH=$(pwd)/example "$@" pytest -q example

===== pytest-golden-0.2.2/.tools/copier-answers.yml =====

_commit: a2c083636
_src_path: gh:oprypin/py-library-template
copyright_date: 2020
min_python_version: '3.6'
mkdocs: false
project_description: Plugin for pytest that offloads expected outputs to data files
project_name: pytest-golden
pytest: true
python_distribution_name: pytest-golden
python_import_name: pytest_golden
pytype: true
repository_name: oprypin/pytest-golden

===== pytest-golden-0.2.2/.tools/hooks/pre-commit =====

#!/bin/sh
exec poetry run "$(dirname "$0")/../ci.sh"

===== pytest-golden-0.2.2/.tools/release.sh =====

#!/bin/bash
set -e -u -x
cd "$(dirname "$0")/.."
git diff --staged --quiet
git diff --quiet HEAD pyproject.toml
poetry version "$1"
poetry install
poetry build
git add pyproject.toml
git commit -m "v$1"
git tag -a -m "" "v$1"
poetry publish
echo git push origin master --tags

===== pytest-golden-0.2.2/LICENSE.md =====

MIT License

Copyright (c) 2020 Oleh Prypin

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

===== pytest-golden-0.2.2/README.md =====

# pytest-golden

**Plugin for [pytest] that offloads expected outputs to data files**

[![PyPI](https://img.shields.io/pypi/v/pytest-golden)](https://pypi.org/project/pytest-golden/)
[![GitHub](https://img.shields.io/github/license/oprypin/pytest-golden)](https://github.com/oprypin/pytest-golden/blob/master/LICENSE.md)
[![GitHub Workflow Status](https://img.shields.io/github/workflow/status/oprypin/pytest-golden/CI)](https://github.com/oprypin/pytest-golden/actions?query=event%3Apush+branch%3Amaster)

[pytest]: https://pytest.org/

## Usage, in short

(see also: [example/](example/))

[Install the pytest plugin](https://docs.pytest.org/en/latest/plugins.html):

```shell
pip install pytest-golden
```

Create a test file (e.g. *tests/test_foo.py*):

```python
@pytest.mark.golden_test("test_bar/*.yml")
def test_bar(golden):
    assert foo.bar(golden["input"]) == golden.out["output"]
```

The wildcard selects the "golden" files which serve as both the input and the expected output for the test. The test is essentially parameterized on these files.

Create one or more such YAML files (e.g. *tests/test_bar/basic.yml*):

```yaml
input: Abc
output: Nop
```

Run `pytest` to execute the test(s). Whenever the function under test changes, its result may change as well, and the test will stop passing. You can run `pytest --update-goldens` to automatically re-populate the output.

**See [detailed usage](#usage).**
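The plugin relies on pytest's assertion-pass hook to detect golden comparisons that are never actually asserted on, and it warns at startup if that hook is disabled. So it's recommended to enable it, for example in *pytest.ini* (any of pytest's config file formats works; this mirrors the setting used in this repository's own configuration and in the warning the plugin prints):

```ini
[pytest]
enable_assertion_pass_hook = true
```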
## The case for golden testing

Consider this normal situation when testing a function (e.g. a function to list all words in a sentence).

#### *foo.py*

```python
def find_words(text: str) -> list:
    return text.split()
```

#### *tests/test_foo.py*

```python
from foo import find_words


def test_find_words():
    assert find_words("If at first you don't succeed, try, try again.") == [
        "If", "at", "first", "you", "don't", "succeed,", "try,", "try", "again."
    ]
```

You wrote a basic test for that function, but it can be quite tedious to manually write out the expected output, especially if it is something bigger. Sometimes you might resort to writing a dummy test first and copying the actual output from the failure message. And there's nothing really wrong with that, because you'd still manually inspect whether the new output is good.

### With golden testing

But let's rewrite this test using "golden testing".

#### *tests/test_foo.py*

```python
from foo import find_words


def test_find_words(golden):
    golden = golden.open("test_find_words/test_basic.yml")
    assert find_words(golden["input"]) == golden.out["output"]
```

Here `golden["xxx"]` will be a value read directly from the associated file. Let's create that (YAML) file:

#### *tests/test_find_words/test_basic.yml*

```yaml
input: |-
  If at first you don't succeed,
  try, try again.
```

Unlike the input, `golden.out["yyy"]` works a little differently. Normally it will also be just an input for the test, taken from the file (and the assertion will be a completely normal [pytest][] assertion), but in a special "update" mode it will instead accept whatever the result is at runtime and put it back into the "golden" file.

Both updating and initially populating the file is done automatically with the command **`pytest --update-goldens`**:

#### *tests/test_find_words/test_basic.yml*

```yaml
input: |-
  If at first you don't succeed,
  try, try again.
output:
- If
- at
- first
- you
- don't
- succeed,
- try,
- try
- again.
```

Now, when running just `pytest`, the test will always assert that the result is exactly equal to the expected output. Which is just how unit tests work. Now you can add all of this into your source control system.

### Introducing a change

Let's say you're not happy that the punctuation gets clumped with the words, so you devise a different implementation for this function.

#### *foo.py*

```python
import re


def find_words(text: str) -> list:
    return re.findall(r"\w+", text)
```

You also want to add another test case for it:

#### *tests/test_find_words/test_quotation.yml*

```yaml
input: |-
  Dr. King said, 'I have a dream.'
output:
- Dr
- King
- said
- I
- have
- a
- dream
```

And let's just turn this into a *parameterized* golden test (one test generated per each file that matches the wildcard):

#### *tests/test_foo.py*

```python
import pytest

from foo import find_words


@pytest.mark.golden_test("test_find_words/*.yml")
def test_find_words(golden):
    assert find_words(golden["input"]) == golden.out["output"]
```

Now if we run `pytest -v`, we see that all is well with the new test, which gets picked up as `test_find_words[test_quotation.yml]`, but the code changes also made the previous test disagree! You get a normal failure message from *pytest* itself.

Normally in such situations you'd go back to the test file and edit the expected output (if you indeed expected it to change). But with this you can instead just run `pytest --update-goldens`, and you'll see that the "golden" file gets updated by itself (with no test failure). The resulting diff can then still be viewed in your source control system:

```diff
--- a/tests/test_find_words/test_basic.yml
+++ b/tests/test_find_words/test_basic.yml
@@ -5,8 +5,9 @@ output:
 - at
 - first
 - you
-- don't
-- succeed,
-- try,
+- don
+- t
+- succeed
 - try
-- again.
+- try
+- again
```

Now you (and potentially your code reviewers) get to decide whether this diff is acceptable, or whether more changes are needed. You can do another iteration on the code, and the test will get updated as you go; you never need to manually edit it -- just visually inspect the changes and check them in.

## Usage

### `golden` fixture

Add a `golden` parameter to your [pytest][] test function, and it will be passed a `GoldenTestFixtureFactory`.

### class `GoldenTestFixtureFactory`

#### `golden.open(path) -> GoldenTestFixture`

Call this method on the `golden` object to get an actual usable [fixture](#class-goldentestfixture).

The `path` argument is a path to a file, relative to the calling Python test file.

Teardown is done automatically when the test function finishes.

### `@pytest.mark.golden_test(*patterns: str)`

Use this decorator to:

1. avoid having to call `.open` and get a [proper fixture](#class-goldentestfixture) directly as the `golden` argument of your test function and
2. add parameterization to your "golden" test.

The `patterns` are one or more [glob patterns](https://docs.python.org/3/library/pathlib.html#pathlib.Path.glob), relative to the calling Python test file. One test will be created for each matched file.
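For example (the directory names here are purely illustrative), several patterns can feed the same test, and one test is generated per matched file across all of them:

```python
@pytest.mark.golden_test("test_bar/basic/*.yml", "test_bar/regressions/*.yml")
def test_bar(golden):
    assert foo.bar(golden["input"]) == golden.out["output"]
```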
### class `GoldenTestFixture`

#### `golden[input_key: str] -> Any`

Get a value from the associated YAML file, at the top-level key. May raise `KeyError`.

#### `golden.get(input_key: str) -> Optional[Any]`

Ditto, but returns `None` if the key is missing.

#### `golden.out[output_key: str] -> Any`

* In normal mode: Get a value from the associated YAML file, at the top-level key. May raise `KeyError`.
* If the `--update-goldens` flag is passed: Get a proxy object for the key, which, upon being compared for equality (and subsequently asserted on), marks that the "golden" file should get an updated value for this top-level key. Such updates get performed upon teardown of the fixture: the original file always gets rewritten once.

#### `golden.out.get(output_key: str) -> Optional[Any]`

Ditto, but when compared to `None`, marks the key as deleted from the file, rather than just having the value `None`.
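A sketch of the optional form (`compute_error` and the key `error` are hypothetical names, not part of the API):

```python
def test_maybe_error(golden):
    golden = golden.open("test_maybe_error/case.yml")
    assert compute_error(golden["input"]) == golden.out.get("error")
```

Under `pytest --update-goldens`, a `None` result removes the `error` key from the golden file entirely, rather than writing `error: null`.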
## How to...

### Make a custom type representable in YAML

We will make these types known to the underlying implementation -- [ruamel.yaml](https://yaml.readthedocs.io/), but let's use only the passthrough functions provided by the module `pytest_golden.yaml`.

It is best to apply this globally, in *conftest.py*.

```python
import pytest_golden.yaml

pytest_golden.yaml.register_class(MyClass)
```

(and see [details for `ruamel.yaml`](https://yaml.readthedocs.io/en/latest/dumpcls.html))

Alternate example if your class is equivalent to a single value:

```python
class MyClass:
    def __init__(self, value: str):
        self.value = value

pytest_golden.yaml.add_representer(MyClass, lambda dumper, data: dumper.represent_scalar("!MyClass", data.value))
pytest_golden.yaml.add_constructor('!MyClass', lambda loader, node: MyClass(node.value))
```

Or in the particular case of subclassing a standard type, you could just drop the tag altogether and rely on equality to the base type.

```python
class MyClass(str):
    pass

pytest_golden.yaml.add_representer(MyClass, lambda dumper, data: dumper.represent_str(data))
```

### Apply a default golden file for all tests in a module

Consider this test where we use `pytest_golden` only for storing the outputs:

NOTE: These `*.yml` files need to be manually created first, even if empty.

```python
def test_foo(golden):
    golden = golden.open("stuff/test_foo.yml")
    assert foo() == golden.out["output"]

def test_bar(golden):
    golden = golden.open("stuff/test_bar.yml")
    assert bar("a", "b") == golden.out["output"]
```

The test bodies are different (so applying a pattern via a `mark` is not applicable), but we still want to automatically assign the golden files without repeating ourselves. To do that, we can augment the `golden` fixture like this:

```python
@pytest.fixture
def my_golden(request, golden):
    return golden.open(f"stuff/{request.node.name}.yml")

def test_foo(my_golden):
    assert foo() == my_golden.out["output"]

def test_bar(my_golden):
    assert bar("a", "b") == my_golden.out["output"]
```

Here the name of the YAML file is based on the test name; previously the file names were manually kept in sync. So the two snippets are fully equivalent.

Note that you don't even need to come up with a separate name like `my_golden`; you can just overwrite the original `golden` fixture for the whole module. See [a real example of this](https://github.com/oprypin/mkdocs-gen-files/tree/233486840c8f8e5d3e86c1c0bf9032d758818406/tests).
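### Assert that a test raises an expected exception

A sketch based on the `may_raise` helper defined in `pytest_golden/plugin.py` (`parse` here is a hypothetical function under test):

```python
def test_parse_error(golden):
    golden = golden.open("test_parse_error/bad_input.yml")
    # If ValueError is raised, {"ValueError": "<message>"} is compared against
    # (or, with --update-goldens, written to) the top-level "exception" key;
    # if nothing is raised, that key is expected to be absent.
    with golden.may_raise(ValueError):
        assert parse(golden["input"]) == golden.out["output"]
```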
===== pytest-golden-0.2.2/example/foo.py =====

import re


def find_words(text: str) -> list:
    return re.findall(r"\w+", text)

===== pytest-golden-0.2.2/example/pyproject.toml =====

[tool.poetry]
name = "foo"
version = "0.1.0"
description = "example"
authors = ["Oleh Prypin <oleh@pryp.in>"]

[tool.poetry.dependencies]
python = "^3.6"

[tool.poetry.dev-dependencies]
pytest = "^6.1.2"
pytest-golden = "*"

[tool.pytest.ini_options]
enable_assertion_pass_hook = true

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

===== pytest-golden-0.2.2/example/tests/test_find_words/test_basic.yml =====

input: |-
  If at first you don't succeed,
  try, try again.
output:
- If
- at
- first
- you
- don
- t
- succeed
- try
- try
- again

===== pytest-golden-0.2.2/example/tests/test_find_words/test_quotation.yml =====

input: |-
  Dr. King said, "I have a dream."
output:
- Dr
- King
- said
- I
- have
- a
- dream

===== pytest-golden-0.2.2/example/tests/test_foo.py =====

import pytest

from foo import find_words


@pytest.mark.golden_test("test_find_words/*.yml")
def test_find_words(golden):
    assert find_words(golden["input"]) == golden.out["output"]

===== pytest-golden-0.2.2/pyproject.toml =====

[tool.poetry]
name = "pytest-golden"
version = "0.2.2"
description = "Plugin for pytest that offloads expected outputs to data files"
authors = ["Oleh Prypin <oleh@pryp.in>"]
license = "MIT"
repository = "https://github.com/oprypin/pytest-golden"
keywords = ["pytest", "pytest-plugin"]
classifiers = ["Framework :: Pytest"]
readme = "README.md"

[tool.poetry.plugins."pytest11"]
pytest-golden = "pytest_golden.plugin"

[tool.poetry.dependencies]
python = "^3.6"
pytest = ">=6.1.2"
"ruamel.yaml" = ">=0.16.12, <1.0"
atomicwrites = "^1.4.0"
dataclasses = {version = ">=0.7, <1.0", python = "<3.7"}
testfixtures = "^6.15.0"

[tool.poetry.dev-dependencies]
black = ">=20.8b1"
isort = "^5.6.4"
autoflake = "^1.4"
pytype = {version = ">=2021.04.15", markers = "python_version>='3.6' and python_version<'3.10' and sys_platform!='win32'"}  # Skip on Windows

[tool.black]
line-length = 100

[tool.isort]
line_length = 100
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
ensure_newline_before_comments = true

[tool.pytest.ini_options]
addopts = "--tb=native"
enable_assertion_pass_hook = true
norecursedirs = "example"

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

===== pytest-golden-0.2.2/pytest_golden/__init__.py =====

import pytest

pytest.register_assert_rewrite("pytest_golden.plugin")

===== pytest-golden-0.2.2/pytest_golden/plugin.py =====

import contextlib
import dataclasses
import inspect
import logging
import os
import pathlib
import warnings
from typing import Any, Callable, Collection, Dict, List, Sequence, Set, Tuple, Type, TypeVar, Union

import atomicwrites
import pytest

from . import yaml

T = TypeVar("T")


def pytest_addoption(parser):
    parser.addoption(
        "--update-goldens",
        action="store_true",
        default=False,
        help="reset golden master benchmarks",
    )


@pytest.fixture
def golden(request):
    path = None
    try:
        # Parametrized through the `golden_test` marker.
        path, func = request.param
    except AttributeError:
        # Used as a plain fixture; `golden.open()` must be called manually.
        func = request.function
    fixt = GoldenTestFixtureFactory(
        pathlib.Path(request.module.__file__),
        func,
        request.config.getoption("--update-goldens"),
        request.config.getini("enable_assertion_pass_hook"),
    )
    if path is not None:
        fixt = fixt.open(path)
    yield fixt
    fixt.teardown(request.node)


FIXTURE_NAME = golden.__name__
MARKER_NAME = "golden_test"


def pytest_configure(config):
    if not config.getini("enable_assertion_pass_hook"):
        warnings.warn(
            "Add 'enable_assertion_pass_hook=true' to pytest.ini for safer usage of pytest-golden.",
            GoldenTestUsageWarning,
        )
    config.addinivalue_line(
        "markers", MARKER_NAME + "(*file_patterns): parametrize the test on files matching these glob patterns"
    )


def _golden_test_marker(*file_patterns: str):
    return file_patterns


class UsageError(Exception):
    pass


class GoldenTestUsageWarning(Warning):
    pass


@dataclasses.dataclass
class GoldenTestFixtureFactory:
    name = FIXTURE_NAME

    path: pathlib.Path
    func: Callable
    update_goldens: bool
    assertions_enabled: bool

    _fixtures = ...  # type: List["GoldenTestFixture"]

    def __post_init__(self):
        self._fixtures = []

    def open(self, path: os.PathLike) -> "GoldenTestFixture":
        kwargs = dataclasses.asdict(self)
        kwargs["path"] = kwargs["path"].parent / path
        fixt = GoldenTestFixture(**kwargs)
        self._fixtures.append(fixt)
        return fixt

    def _add_record(self, r):
        for f in self._fixtures:
            f._add_record(r)

    def teardown(self, item):
        for f in self._fixtures:
            f.teardown(item)


@dataclasses.dataclass
class GoldenTestFixture(GoldenTestFixtureFactory):
    _used_fields = ...  # type: Set[str]
    _records = ...  # type: List[Union["_ComparisonRecord", "_AssertionRecord"]]
    _inputs = ...  # type: Dict[str, Any]

    def __post_init__(self):
        self._used_fields = set()
        # Keep inputs as a separate copy, so if an input gets mutated, it isn't written back.
        with open(self.path, encoding="utf-8") as f:
            self._inputs = yaml._safe.load(f) or {}
        if not isinstance(self._inputs, dict):
            raise UsageError(f"The YAML file '{self.path}' must contain a dict at the top level.")
        if self.update_goldens:
            self.out = GoldenOutputProxy(self)
            self._records = []
        else:
            with open(self.path, encoding="utf-8") as f:
                self.out = yaml._safe.load(f) or {}

    def __getitem__(self, key: str) -> Any:
        self._used_fields.add(key)
        return self._inputs[key]

    def get(self, key: str, default: T = None) -> Union[Any, T]:
        self._used_fields.add(key)
        return self._inputs.get(key, default)

    def _add_record(self, r):
        self._records.append(r)

    def teardown(self, item):
        if not self.update_goldens:
            return
        actual: Dict[str, Union[_AbsentValue, Any]] = {}
        approved_lines: Set[int] = set()
        to_warn: List[Tuple[str, _ComparisonRecord]] = []
        warn = lambda *args: to_warn.append(args)

        # Iterate newest-to-oldest, so that each passed assertion is seen
        # before the comparison(s) it approves on the same line.
        for record in reversed(self._records):
            if isinstance(record, _AssertionRecord):
                approved_lines.add(record.lineno)
            elif isinstance(record, _ComparisonRecord):
                comparison = record.comparison
                if record.location.lineno in approved_lines:
                    comparison.approve()
                if self.assertions_enabled and not comparison.approved:
                    warn(
                        f"Comparison to a golden output {record.key!r} outside of an assert is ignored:"
                        f"\n{comparison}",
                        record,
                    )
                    continue
                value = record.other
                if comparison.optional and value is None:
                    value = _AbsentValue()
                if comparison.key in actual and actual[comparison.key] != value:
                    warn(
                        f"Comparison to golden output {comparison.key!r} has gotten conflicting values: "
                        f"{record.other!r} vs {actual[comparison.key]!r}",
                        record,
                    )
                    continue
                actual[record.key] = value

        for msg, record in reversed(to_warn):
            warnings.warn_explicit(
                msg, GoldenTestUsageWarning, record.location.filename, record.location.lineno
            )

        yaml._prepare_for_output(actual)

        with open(self.path, encoding="utf-8") as f:
            outputs = yaml._rt.load(f) or {}
        for k, v in actual.items():
            if isinstance(v, _AbsentValue):
                outputs.pop(k, None)
            else:
                outputs[k] = v

        unused_fields = outputs.keys() - self._used_fields
        if unused_fields:
            f_code = self.func.__code__
            warnings.warn_explicit(
                f"Unused field(s) {', '.join(map(repr, sorted(unused_fields)))} in {item.name}",
                GoldenTestUsageWarning,
                f_code.co_filename,
                f_code.co_firstlineno,
            )

        with atomicwrites.atomic_write(self.path, mode="w", encoding="utf-8", overwrite=True) as f:
            yaml._rt.dump(outputs, f)

    @contextlib.contextmanager
    def may_raise(self, cls: Type[Exception], *, key: str = "exception"):
        try:
            yield
        except cls as e:
            assert self.out.get(key) == {type(e).__name__: str(e)}
        else:
            assert self.out.get(key) == None

    @contextlib.contextmanager
    def capture_logs(
        self,
        loggers: Union[str, Sequence[str]],
        level: int = logging.INFO,
        attributes: Sequence[str] = ("levelname", "getMessage"),
        *,
        key: str = "logs",
    ):
        import testfixtures

        with testfixtures.LogCapture(loggers, attributes=attributes, level=level) as capture:
            yield
        logs = [":".join(log) for log in capture.actual()] or None
        assert self.out.get(key) == logs


@dataclasses.dataclass
class GoldenOutputProxy:
    fixt: GoldenTestFixture

    def __getitem__(self, key: str) -> "GoldenOutput":
        self.fixt._used_fields.add(key)
        return GoldenOutput(self.fixt, key)

    def get(self, key: str) -> "GoldenOutput":
        self.fixt._used_fields.add(key)
        return GoldenOutput(self.fixt, key, optional=True)


@dataclasses.dataclass
class GoldenOutput:
    fixt: GoldenTestFixture
    key: str
    optional: bool = False

    @property
    def value(self):
        return self.fixt[self.key]

    def __eq__(self, other) -> "GoldenComparison":
        if isinstance(other, GoldenOutput):
            raise TypeError("Can't compare two golden output placeholders")
        return GoldenComparison(self.fixt, self.key, other, self.optional)

    def __ne__(self, other) -> "GoldenComparison":
        if isinstance(other, GoldenOutput):
            raise TypeError("Can't compare two golden output placeholders")
        warnings.warn(
            "Only '==' comparison should be used on a golden output",
            GoldenTestUsageWarning,
            stacklevel=2,
        )
        return GoldenComparison(self.fixt, self.key, other, self.optional, eq=False)

    def __str__(self) -> str:
        return f"{self.fixt.name}.out[{self.key!r}]"


@dataclasses.dataclass
class GoldenComparison:
    fixt: GoldenTestFixture
    key: str
    other: Any
    optional: bool
    eq: bool = True
    approved: bool = False

    def __bool__(self) -> bool:
        # Record this comparison only if it happens directly inside the test
        # function or one of the approved helpers (`may_raise`, `capture_logs`).
        stack = inspect.stack()
        approved = [
            inspect.unwrap(f).__code__
            for f in (self.fixt.func, GoldenTestFixture.may_raise, GoldenTestFixture.capture_logs)
        ]
        for info in stack:
            if info.frame.f_code in approved:
                self.fixt._add_record(_ComparisonRecord(self, inspect.getframeinfo(info.frame)))
                break
        return self.eq

    def __str__(self) -> str:
        op = "==" if self.eq else "!="
        return f"{self.other!r} {op} {self.fixt.name}.out[{self.key!r}]"

    def approve(self: T) -> T:
        if isinstance(self, GoldenComparison):
            self.approved = True
        return self


@dataclasses.dataclass
class _ComparisonRecord:
    comparison: "GoldenComparison"
    location: inspect.Traceback

    @property
    def key(self) -> str:
        return self.comparison.key

    @property
    def other(self):
        return self.comparison.other


@dataclasses.dataclass
class _AssertionRecord:
    lineno: int


@dataclasses.dataclass
class _AbsentValue:
    def __repr__(self):
        return "<absent>"


def pytest_generate_tests(metafunc):
    item = metafunc.definition
    marker = item.get_closest_marker(MARKER_NAME)
    if not marker:
        return
    patterns = _golden_test_marker(*marker.args, **marker.kwargs)

    f_code = metafunc.function.__code__

    def warn(msg):
        warnings.warn_explicit(
            f"{msg}: {metafunc.function}",
            GoldenTestUsageWarning,
            f_code.co_filename,
            f_code.co_firstlineno,
        )

    if FIXTURE_NAME not in metafunc.fixturenames:
        warn(f"Useless '{MARKER_NAME}' marker on a test without a '{FIXTURE_NAME}' fixture")
        return

    directory = pathlib.Path(metafunc.module.__file__).parent
    paths: Collection[pathlib.Path] = dict.fromkeys(
        path for pattern in patterns for path in directory.glob(pattern)
    )
    if not paths:
        warn(f"The patterns {patterns!r} didn't match anything")
        return

    # `::test_foo[foo/*.yaml]` -> `::test_foo[*.yaml]`
    rel_paths = [path.relative_to(directory) for path in paths]
    skip_parts = None
    if all(
        _removeprefix("test_", path.parts[0]) == _removeprefix("test_", item.originalname)
        for path in rel_paths
    ):
        skip_parts = 1
    ids = ("/".join(path.parts[skip_parts:]) for path in rel_paths)

    metafunc.parametrize(
        FIXTURE_NAME,
        ((path, metafunc.function) for path in paths),
        ids=ids,
        indirect=True,
    )


def _removeprefix(prefix: str, s: str):
    if s.startswith(prefix):
        s = s[len(prefix) :]
    return s


def pytest_assertion_pass(item, lineno, orig, expl):
    fixt = item.funcargs.get(FIXTURE_NAME)
    if isinstance(fixt, GoldenTestFixtureFactory) and fixt.update_goldens:
        fixt._add_record(_AssertionRecord(lineno))

===== pytest-golden-0.2.2/pytest_golden/yaml.py =====

from typing import Any, Callable, Type

import ruamel.yaml

UserType = Any
YamlType = Any

_safe = ruamel.yaml.YAML(typ="safe", pure=True)
_rt = ruamel.yaml.YAML(typ="rt", pure=True)


def register_class(cls: Type) -> None:
    _safe.register_class(cls)
    _rt.register_class(cls)

def add_representer(
    data_type: Type, representer: Callable[[ruamel.yaml.BaseRepresenter, UserType], YamlType]
) -> None:
    _safe.representer.add_representer(data_type, representer)
    _rt.representer.add_representer(data_type, representer)


def add_multi_representer(
    base_data_type: Type,
    multi_representer: Callable[[ruamel.yaml.BaseRepresenter, UserType], YamlType],
) -> None:
    _safe.representer.add_multi_representer(base_data_type, multi_representer)
    _rt.representer.add_multi_representer(base_data_type, multi_representer)


def add_constructor(
    tag: str, constructor: Callable[[ruamel.yaml.BaseConstructor, YamlType], UserType]
) -> None:
    _safe.constructor.add_constructor(tag, constructor)
    _rt.constructor.add_constructor(tag, constructor)


def add_multi_constructor(
    tag_prefix: str,
    multi_constructor: Callable[[ruamel.yaml.BaseConstructor, str, YamlType], UserType],
) -> None:
    _safe.constructor.add_multi_constructor(tag_prefix, multi_constructor)
    _rt.constructor.add_multi_constructor(tag_prefix, multi_constructor)


def _prepare_for_output(d: dict) -> None:
    ruamel.yaml.scalarstring.walk_tree(d)

===== pytest-golden-0.2.2/tests/conftest.py =====

pytest_plugins = ("pytester",)

===== pytest-golden-0.2.2/tests/full/test_empty.yml =====

test: |
  def test_empty(golden):
      pass
outcomes:
  passed: 1
outcomes_update:
  passed: 1

===== pytest-golden-0.2.2/tests/full/test_missing_file.yml =====

test: |
  def test_missing_file(golden):
      golden.open("missing.yml")
match_output:
- FileNotFoundError
outcomes:
  failed: 1
outcomes_update:
  failed: 1

===== pytest-golden-0.2.2/tests/full/test_missing_key.yml =====

test: |
  def test_missing_key(golden):
      golden = golden.open("gold.yml")
      golden["not_missing"]
      golden["missing"]
files:
  gold.yml: |
    not_missing: zzz
match_output:
- KeyError
outcomes:
  failed: 1
outcomes_update:
  failed: 1

===== pytest-golden-0.2.2/tests/full/test_multiline_at_root.yml =====

test: |
  def test_multiline_at_root(golden):
      golden = golden.open("gold.yml")
      assert golden["input"] == golden.out["output"]
files:
  gold.yml: |
    input: |
      a
      b
      c
outcomes:
  failed: 1
outcomes_update:
  passed: 1
updated_files:
  gold.yml: |
    input: |
      a
      b
      c
    output: |
      a
      b
      c

===== pytest-golden-0.2.2/tests/full/test_mutate.yml =====

test: |
  def test_mutate(golden):
      golden = golden.open("gold.yml")
      inp = golden["input"]
      inp["x"] = 5
      assert inp == golden.out["output"]
files:
  gold.yml: |
    input: {input: a}
outcomes:
  failed: 1
outcomes_update:
  passed: 1
updated_files:
  gold.yml: |
    input: {input: a}
    output:
      input: a
      x: 5

===== pytest-golden-0.2.2/tests/full/test_mutate_after.yml =====

test: |
  def test_mutate_after(golden):
      golden = golden.open("gold.yml")
      x = ["good"]
      assert x == golden.out["output"]
      x.append("bad")
files:
  gold.yml: |
    output:
    - good
outcomes:
  passed: 1
outcomes_update:
  passed: 1
updated_files:
  gold.yml: |
    output:
    - good
    - bad
===== pytest-golden-0.2.2/tests/full/test_no_matched_files.yml =====

test: |
  import pytest

  @pytest.mark.golden_test("missing.yml")
  def test_no_matched_files(golden):
      golden.get("foo")
files:
  notmissing.yml: ''
match_output:
- "The patterns ('missing.yml',) didn't match anything: *test_no_matched_files"
outcomes:
  failed: 1
  warnings: 1
outcomes_update:
  failed: 1
  warnings: 1

===== pytest-golden-0.2.2/tests/full/test_preserve_comment_at_beginning.yml =====

test: |
  def test_preserve_comment_at_beginning(golden):
      golden = golden.open("gold.yml")
      assert golden["input"] == golden.out["output"]
files:
  gold.yml: |
    # Foo bar
    input: |
      a
      b
outcomes:
  failed: 1
outcomes_update:
  passed: 1
updated_files:
  gold.yml: |
    # Foo bar
    input: |
      a
      b
    output: |
      a
      b

===== pytest-golden-0.2.2/tests/full/test_unused_field.yml =====

test: |
  def test_unused_field(golden):
      golden = golden.open("gold.yml")
      assert 5 == golden.out["used"]
files:
  gold.yml: |
    used: 5
    unused: 7
outcomes:
  passed: 1
outcomes_update:
  passed: 1
  warnings: 1

===== pytest-golden-0.2.2/tests/full/test_update.yml =====

test: |
  def test_update(golden):
      golden = golden.open("gold.yml")
      assert 5 == golden.out["output"]
files:
  gold.yml: |
    output: 7
outcomes:
  failed: 1
outcomes_update:
  passed: 1
updated_files:
  gold.yml: |
    output: 5

===== pytest-golden-0.2.2/tests/full/test_without_assertion_pass_hook.yml =====

test: |
  def test_without_assertion_pass_hook(golden):
      golden = golden.open("gold.yml")
      5 == golden.out["output"]
files:
  pytest.ini: ''
  gold.yml: |
    output: 7
outcomes:
  passed: 1
outcomes_update:
  passed: 1
warnings:
- Add 'enable_assertion_pass_hook=true' to pytest.ini for safer usage of pytest-golden.

===== pytest-golden-0.2.2/tests/test_plugin.py =====

import warnings

import pytest

from pytest_golden import plugin


@pytest.mark.golden_test("full/test_*.yml")
@pytest.mark.parametrize("upd", [False, True])
def test_full(testdir, golden, upd):
    # Each golden file in tests/full/ describes a whole pytest run: the test
    # source, any input files, and the expected outcomes both without and
    # with --update-goldens.
    assert golden.path.stem in golden["test"]

    testdir.makefile(".ini", pytest="[pytest]\nenable_assertion_pass_hook=true\n")
    testdir.makepyfile(golden["test"])
    files = golden.get("files") or {}
    for name, content in files.items():
        (testdir.tmpdir / name).write_text(content, encoding="utf-8")

    with pytest.warns(plugin.GoldenTestUsageWarning) as record:
        warnings.warn("OK", plugin.GoldenTestUsageWarning)
        result = testdir.runpytest(*(("--update-goldens",) if upd else ()))
    assert ([str(w.message) for w in record[1:]] or None) == golden.out.get("warnings")

    new_files = {}
    for k, v in files.items():
        content = (testdir.tmpdir / k).read_text(encoding="utf-8")
        if upd and content == v:
            continue
        new_files[k] = content

    updated_files_golden = golden.out.get("updated_files")
    if upd:
        assert (new_files or None) == updated_files_golden
    else:
        assert new_files == files

    outcomes = (golden.out["outcomes"], golden.out["outcomes_update"])
    assert result.parseoutcomes() == outcomes[upd]

    if golden.get("match_output"):
        result.stdout.fnmatch_lines(["*" + l + "*" for l in golden["match_output"]])