hypothesis-auto-1.1.4/LICENSE

MIT License

Copyright (c) 2019 Timothy Crosley

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

hypothesis-auto-1.1.4/README.md

[![hypothesis-auto - Fully Automatic Tests for Type Annotated Functions Using Hypothesis.](https://raw.github.com/timothycrosley/hypothesis-auto/master/art/logo_large.png)](https://timothycrosley.github.io/hypothesis-auto/)
_________________

[![PyPI version](https://badge.fury.io/py/hypothesis-auto.svg)](http://badge.fury.io/py/hypothesis-auto)
[![Build Status](https://travis-ci.org/timothycrosley/hypothesis-auto.svg?branch=master)](https://travis-ci.org/timothycrosley/hypothesis-auto)
[![codecov](https://codecov.io/gh/timothycrosley/hypothesis-auto/branch/master/graph/badge.svg)](https://codecov.io/gh/timothycrosley/hypothesis-auto)
[![Join the chat at https://gitter.im/timothycrosley/hypothesis-auto](https://badges.gitter.im/timothycrosley/hypothesis-auto.svg)](https://gitter.im/timothycrosley/hypothesis-auto?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![License](https://img.shields.io/github/license/mashape/apistatus.svg)](https://pypi.python.org/pypi/hypothesis-auto/)
[![Downloads](https://pepy.tech/badge/hypothesis-auto)](https://pepy.tech/project/hypothesis-auto)
_________________

[Read Latest Documentation](https://timothycrosley.github.io/hypothesis-auto/) - [Browse GitHub Code Repository](https://github.com/timothycrosley/hypothesis-auto/)
_________________

**hypothesis-auto** is an extension for the [Hypothesis](https://hypothesis.readthedocs.io/en/latest/) project that enables fully automatic tests for type annotated functions.

[![Hypothesis Pytest Auto Example](https://raw.github.com/timothycrosley/hypothesis-auto/master/art/demo.gif)](https://github.com/timothycrosley/hypothesis-auto/blob/master/art/demo.gif)

Key Features:

* **Type Annotation Powered**: Utilize your function's existing type annotations to build dozens of test cases automatically.
* **Low Barrier**: Start using property-based testing with the lowest possible barrier to entry. Just run `auto_test(FUNCTION)` to run dozens of tests.
* **pytest Compatible**: Like Hypothesis itself, hypothesis-auto has built-in compatibility with the popular [pytest](https://docs.pytest.org/en/latest/) testing framework. This means that you can turn your automatically generated tests into individual pytest test cases with one line.
* **Scales Up**: As you find yourself needing to customize your auto_test cases, you can easily use all the features of [Hypothesis](https://hypothesis.readthedocs.io/en/latest/), including custom strategies per parameter.

## Installation:

To get started, install `hypothesis-auto` into your project's virtual environment:

`pip3 install hypothesis-auto`

OR

`poetry add hypothesis-auto`

OR

`pipenv install hypothesis-auto`

## Usage Examples:

!!! warning
    In old usage examples you will see `_` prefixed parameters like `_auto_verify=`. This was done to avoid conflicting with existing function parameters.
    Based on community feedback, the project switched to `_` suffixes, such as `auto_verify_=`, to keep the likelihood of conflicts low while
    avoiding the connotation of private parameters.

### Framework independent usage

#### Basic `auto_test` usage:

```python3
from hypothesis_auto import auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


auto_test(add)  # 50 property-based scenarios are generated and run against add
auto_test(add, auto_runs_=1_000)  # Let's make that 1,000
```

#### Adding an allowed exception:

```python3
from hypothesis_auto import auto_test


def divide(number_1: int, number_2: int) -> int:
    return number_1 / number_2

auto_test(divide)

-> 1012 raise the_error_hypothesis_found
   1013
   1014 for attrib in dir(test):

 in divide(number_1, number_2)
      1 def divide(number_1: int, number_2: int) -> int:
----> 2     return number_1 / number_2
      3

0/0

ZeroDivisionError: division by zero


auto_test(divide, auto_allow_exceptions_=(ZeroDivisionError, ))
```

#### Using `auto_test` with a custom verification method:

```python3
from hypothesis_auto import Scenario, auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


def my_custom_verifier(scenario: Scenario):
    if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:
        assert scenario.result > scenario.kwargs["number_1"]
        assert scenario.result > scenario.kwargs["number_2"]
    elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:
        assert scenario.result < scenario.kwargs["number_1"]
        assert scenario.result < scenario.kwargs["number_2"]
    else:
        assert scenario.result >= min(scenario.kwargs.values())
        assert scenario.result <= max(scenario.kwargs.values())


auto_test(add, auto_verify_=my_custom_verifier)
```

Custom verification methods should take a single [Scenario](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/#scenario) and raise an exception to signify errors.

For the full set of parameters you can pass into `auto_test`, see its [API reference documentation](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/).
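#### Overriding strategies or pinning values for individual parameters:

Because any extra `*args` and `**kwargs` given to `auto_test` are forwarded to `hypothesis.strategies.builds` (and non-strategy values are wrapped with `hypothesis.strategies.just`), you can combine hand-picked strategies or fixed values with auto-generated ones. The snippet below is a minimal sketch of that behavior; the `min_value` bound and the pinned `10` are arbitrary illustrative choices:

```python3
from hypothesis.strategies import integers

from hypothesis_auto import auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


# Draw number_1 only from non-negative integers; number_2 is still
# auto-generated from its `int` annotation.
auto_test(add, number_1=integers(min_value=0))

# Pin number_2 to a fixed value; it is wrapped in a strategy internally.
auto_test(add, number_2=10)
```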
### pytest usage

#### Using `auto_pytest_magic` to auto-generate dozens of pytest test cases:

```python3
from hypothesis_auto import auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


auto_pytest_magic(add)
```

#### Using `auto_pytest` to run dozens of test cases within a temporary directory:

```python3
from hypothesis_auto import auto_pytest


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


@auto_pytest(add)
def test_add(test_case, tmpdir):
    tmpdir.mkdir().chdir()
    test_case()
```

#### Using `auto_pytest_magic` with a custom verification method:

```python3
from hypothesis_auto import Scenario, auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


def my_custom_verifier(scenario: Scenario):
    if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:
        assert scenario.result > scenario.kwargs["number_1"]
        assert scenario.result > scenario.kwargs["number_2"]
    elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:
        assert scenario.result < scenario.kwargs["number_1"]
        assert scenario.result < scenario.kwargs["number_2"]
    else:
        assert scenario.result >= min(scenario.kwargs.values())
        assert scenario.result <= max(scenario.kwargs.values())


auto_pytest_magic(add, auto_verify_=my_custom_verifier)
```

Custom verification methods should take a single [Scenario](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/#scenario) and raise an exception to signify errors.

For the full reference of the pytest integration API, see the [API reference documentation](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/pytest/).

## Why Create hypothesis-auto?

I wanted a no- to low-resistance way to start incorporating property-based tests across my projects. A solution that also encouraged the use of type hints was a win/win for me.

I hope you too find `hypothesis-auto` useful!

~Timothy Crosley

hypothesis-auto-1.1.4/hypothesis_auto/__init__.py

from hypothesis_auto.tester import (
    Scenario,
    auto_parameters,
    auto_test,
    auto_test_cases,
    auto_test_module,
)

try:
    from hypothesis_auto.pytest import auto_pytest, auto_pytest_magic
except ImportError:  # pragma: no cover
    pass

__version__ = "1.1.4"

hypothesis-auto-1.1.4/hypothesis_auto/pytest.py

import inspect
from typing import Any, Callable, Optional, Tuple, Union
from uuid import uuid4

import pytest

from hypothesis_auto.tester import Scenario, auto_test_cases


def auto_pytest(
    auto_function_: Callable,
    *args,
    auto_allow_exceptions_: Union[Tuple[BaseException], Tuple] = (),
    auto_runs_: int = 50,
    auto_verify_: Optional[Callable[[Scenario], Any]] = None,
    **kwargs,
) -> None:
    """A decorator that marks a parameterized pytest function, passing along a callable test case.
    The decorated function should take a `test_case` parameter.

    By default auto_pytest uses type annotations to automatically decide on strategies via the
    hypothesis builds strategy.
    You can override individual strategies by passing them in under the corresponding `*arg` or
    `**kwarg`, OR you can pass in specific values that must be used for certain parameters while
    letting others be auto-generated.

    All `*args` and `**kwargs` are automatically passed along to `hypothesis.strategies.builds`
    to enable this. Non-strategy values are automatically converted to strategies using
    `hypothesis.strategies.just`.

    The exceptions are the following options:

    - *auto_allow_exceptions_*: A tuple of exceptions that are acceptable for the function to
      raise and will not be considered a test error.
    - *auto_runs_*: Number of strategy combinations to run the given function against.
    - *auto_verify_*: An optional callback function that will be called to allow custom
      verification of the function's return value. The callback function should raise an
      AssertionError if the return value does not match expectations.

    Example:

        def my_function(number_1: int, number_2: int) -> int:
            return number_1 + number_2

        @auto_pytest(my_function)
        def test_auto_pytest(test_case):
            test_case()
    -----
    """
    return pytest.mark.parametrize(
        "test_case",
        auto_test_cases(
            auto_function_,
            *args,
            auto_allow_exceptions_=auto_allow_exceptions_,
            auto_limit_=auto_runs_,
            auto_verify_=auto_verify_,
            **kwargs,
        ),
    )


def auto_pytest_magic(
    auto_function_: Callable,
    *args,
    auto_allow_exceptions_: Union[Tuple[BaseException], Tuple] = (),
    auto_runs_: int = 50,
    auto_verify_: Optional[Callable[[Scenario], Any]] = None,
    **kwargs,
) -> None:
    """A convenience function that builds a new test function inside the calling module and
    passes test cases into it using the `auto_pytest` decorator. The least-effort and most
    magical way to integrate with pytest.

    By default auto_pytest_magic uses type annotations to automatically decide on strategies via
    the hypothesis builds strategy. You can override individual strategies by passing them in
    under the corresponding `*arg` or `**kwarg`, OR you can pass in specific values that must be
    used for certain parameters while letting others be auto-generated.

    All `*args` and `**kwargs` are automatically passed along to `hypothesis.strategies.builds`
    to enable this. Non-strategy values are automatically converted to strategies using
    `hypothesis.strategies.just`.

    The exceptions are the following options:

    - *auto_allow_exceptions_*: A tuple of exceptions that are acceptable for the function to
      raise and will not be considered a test error.
    - *auto_runs_*: Number of strategy combinations to run the given function against.
    - *auto_verify_*: An optional callback function that will be called to allow custom
      verification of the function's return value. The callback function should raise an
      AssertionError if the return value does not match expectations.
    Example:

        def my_function(number_1: int, number_2: int) -> int:
            return number_1 + number_2

        auto_pytest_magic(my_function)
    """
    called_from = inspect.stack()[1]
    module = inspect.getmodule(called_from[0])

    def test_function(test_case):
        test_case()

    uuid = str(uuid4()).replace("-", "")
    test_function.__name__ = f"test_auto_{auto_function_.__name__}_{uuid}"
    setattr(module, test_function.__name__, test_function)

    pytest.mark.parametrize(
        "test_case",
        auto_test_cases(
            auto_function_,
            *args,
            auto_allow_exceptions_=auto_allow_exceptions_,
            auto_limit_=auto_runs_,
            auto_verify_=auto_verify_,
            **kwargs,
        ),
    )(test_function)

hypothesis-auto-1.1.4/hypothesis_auto/tester.py

from inspect import isfunction, signature
from types import ModuleType
from typing import (
    Any,
    Callable,
    Dict,
    Generator,
    List,
    NamedTuple,
    Optional,
    Tuple,
    Union,
    get_type_hints,
)

from hypothesis.strategies import SearchStrategy, builds, just
from pydantic import BaseModel


class Parameters(NamedTuple):
    """Represents the parameters meant to be passed into a callable."""

    args: List[Any]
    kwargs: Dict[str, Any]


class TestCase(NamedTuple):
    """Represents an individual auto-generated test case. To run the test case simply call() it."""

    parameters: Parameters
    test_function: Callable

    def __call__(self) -> Any:
        """Calls the given test case, returning the called function's result on success or
        raising an exception on error.
        """
        return self.test_function(*self.parameters.args, **self.parameters.kwargs)


class Scenario(NamedTuple):
    """Represents the entirety of the scenario being tested:

    - *args*: The auto-generated `*args` being passed into the test function.
    - *kwargs*: The auto-generated `**kwargs` being passed into the test function.
    - *result*: The result returned from calling the test function.
    - *test_function*: The test_function that was called as part of the test scenario.
    """

    args: List[Any]
    kwargs: Dict[str, Any]
    result: Any
    test_function: Callable


def _test_function(
    auto_function_: Callable,
    auto_verify_: Optional[Callable[[Scenario], Any]] = None,
    auto_allow_exceptions_: Union[Tuple[BaseException], Tuple] = (),
) -> Callable:
    return_type = get_type_hints(auto_function_).get("return", None)
    return_model = None
    if return_type:

        class ReturnModel(BaseModel):
            __annotations__ = {"returns": return_type}

            class Config:
                arbitrary_types_allowed = True

        return_model = ReturnModel

    def test_function(*args, **kwargs) -> Any:
        try:
            result = auto_function_(*args, **kwargs)
        except auto_allow_exceptions_:  # type: ignore
            return
        if return_model:
            return_model(returns=result)
        if auto_verify_:
            auto_verify_(
                Scenario(
                    args=list(args), kwargs=kwargs, result=result, test_function=auto_function_
                )
            )
        return result

    return test_function


def auto_parameters(
    auto_function_: Callable, *args, auto_limit_: int = 50, **kwargs
) -> Generator[Parameters, None, None]:
    """Generates parameters from the given callable up to the specified limit
    (`auto_limit_` parameter).

    By default auto_parameters uses type annotations to automatically decide on strategies via the
    hypothesis builds strategy. You can override individual strategies by passing them in under
    the corresponding `*arg` or `**kwarg`, OR you can pass in specific values that must be used
    for certain parameters while letting others be auto-generated.

    All `*args` and `**kwargs` are automatically passed along to `hypothesis.strategies.builds`
    to enable this.
    Non-strategy values are automatically converted to strategies using
    `hypothesis.strategies.just`.

    The exception is the following option:

    - *auto_limit_*: Number of strategy combinations to run the given function against.
    """
    strategy_args = [arg if isinstance(arg, SearchStrategy) else just(arg) for arg in args]
    strategy_kwargs = {
        name: value if isinstance(value, SearchStrategy) else just(value)
        for name, value in kwargs.items()
    }

    def pass_along_variables(*args, **kwargs):
        return Parameters(args=args, kwargs=kwargs)

    pass_along_variables.__signature__ = signature(auto_function_)  # type: ignore
    pass_along_variables.__annotations__ = getattr(auto_function_, "__annotations__", {})
    strategy = builds(pass_along_variables, *strategy_args, **strategy_kwargs)

    for _ in range(auto_limit_):
        yield strategy.example()


def auto_test_cases(
    auto_function_: Callable,
    *args,
    auto_allow_exceptions_: Union[Tuple[BaseException], Tuple] = (),
    auto_limit_: int = 50,
    auto_verify_: Optional[Callable[[Scenario], Any]] = None,
    **kwargs
) -> Generator[TestCase, None, None]:
    """Generates test cases from the given callable up to the specified limit
    (`auto_limit_` parameter).

    By default auto_test_cases uses type annotations to automatically decide on strategies via the
    hypothesis builds strategy. You can override individual strategies by passing them in under
    the corresponding `*arg` or `**kwarg`, OR you can pass in specific values that must be used
    for certain parameters while letting others be auto-generated.

    All `*args` and `**kwargs` are automatically passed along to `hypothesis.strategies.builds`
    to enable this. Non-strategy values are automatically converted to strategies using
    `hypothesis.strategies.just`.

    The exceptions are the following options:

    - *auto_allow_exceptions_*: A tuple of exceptions that are acceptable for the function to
      raise and will not be considered a test error.
    - *auto_limit_*: Number of strategy combinations to run the given function against.
    - *auto_verify_*: An optional callback function that will be called to allow custom
      verification of the function's return value. The callback function should raise an
      AssertionError if the return value does not match expectations.
    """
    test_function = _test_function(
        auto_function_, auto_verify_=auto_verify_, auto_allow_exceptions_=auto_allow_exceptions_
    )
    for parameters in auto_parameters(auto_function_, *args, auto_limit_=auto_limit_, **kwargs):
        yield TestCase(parameters=parameters, test_function=test_function)


def auto_test(
    auto_function_: Callable,
    *args,
    auto_allow_exceptions_: Union[Tuple[BaseException], Tuple] = (),
    auto_runs_: int = 50,
    auto_verify_: Optional[Callable[[Scenario], Any]] = None,
    **kwargs
) -> None:
    """A simple utility function for hypothesis that enables fully automatic testing for a
    type-hinted callable, including return type verification.

    By default auto_test uses type annotations to automatically decide on strategies via the
    hypothesis builds strategy. You can override individual strategies by passing them in under
    the corresponding `*arg` or `**kwarg`, OR you can pass in specific values that must be used
    for certain parameters while letting others be auto-generated.

    All `*args` and `**kwargs` are automatically passed along to `hypothesis.strategies.builds`
    to enable this. Non-strategy values are automatically converted to strategies using
    `hypothesis.strategies.just`.

    The exceptions are the following options:

    - *auto_allow_exceptions_*: A tuple of exceptions that are acceptable for the function to
      raise and will not be considered a test error.
    - *auto_runs_*: Number of strategy combinations to run the given function against.
    - *auto_verify_*: An optional callback function that will be called to allow custom
      verification of the function's return value. The callback function should raise an
      AssertionError if the return value does not match expectations.
    """
    for test_case in auto_test_cases(
        auto_function_,
        *args,
        auto_allow_exceptions_=auto_allow_exceptions_,
        auto_limit_=auto_runs_,
        auto_verify_=auto_verify_,
        **kwargs
    ):
        test_case()


def auto_test_module(module: ModuleType) -> None:
    """Attempts to automatically test every public function within a module. For the brave only."""
    for attribute_name in dir(module):
        if not attribute_name.startswith("_"):
            attribute = getattr(module, attribute_name)
            if isfunction(attribute):
                auto_test(attribute)

hypothesis-auto-1.1.4/pyproject.toml

[tool.poetry]
name = "hypothesis-auto"
version = "1.1.4"
description = "Extends Hypothesis to add fully automatic testing of type annotated functions"
authors = ["Timothy Crosley <timothy.crosley@gmail.com>"]
license = "MIT"
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.6"
pydantic = ">=0.32.2,<2.0.0"
hypothesis = ">=4.36,<6.0.0"
pytest = { version = "^4.0.0", optional = true }

[tool.poetry.extras]
pytest = ["pytest"]

[tool.poetry.dev-dependencies]
vulture = "^1.0"
bandit = "^1.6"
safety = "^1.8"
isort = "^4.3"
flake8-bugbear = "^19.8"
black = {version = "^18.3-alpha.0", allow-prereleases = true}
mypy = "^0.730.0"
ipython = "^7.7"
pytest-cov = "^2.7"
pytest-mock = "^1.10"
pep8-naming = "^0.8.2"
portray = "^1.3.0"
cruft = "^1.1"
numpy = "^1.18.0"

[tool.portray]
modules = ["hypothesis_auto"]

[tool.portray.mkdocs.theme]
favicon = "art/logo.png"
logo = "art/logo.png"
name = "material"
palette = {primary = "teal", accent = "cyan"}

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

hypothesis-auto-1.1.4/setup.py

# -*- coding: utf-8 -*-
from setuptools import setup

packages = \
['hypothesis_auto']

package_data = \
{'': ['*']}

install_requires = \
['hypothesis>=4.36', 'pydantic>=0.32.2']

extras_require = \
{'pytest': ['pytest>=4.0.0,<5.0.0']}

setup_kwargs = {
    'name': 'hypothesis-auto',
    'version': '1.1.4',
    'description': 'Extends Hypothesis to add fully automatic testing of type annotated functions',
    'long_description': '[![hypothesis-auto - Fully Automatic Tests for Type Annotated Functions Using Hypothesis.](https://raw.github.com/timothycrosley/hypothesis-auto/master/art/logo_large.png)](https://timothycrosley.github.io/hypothesis-auto/)\n_________________\n\n[![PyPI version](https://badge.fury.io/py/hypothesis-auto.svg)](http://badge.fury.io/py/hypothesis-auto)\n[![Build Status](https://travis-ci.org/timothycrosley/hypothesis-auto.svg?branch=master)](https://travis-ci.org/timothycrosley/hypothesis-auto)\n[![codecov](https://codecov.io/gh/timothycrosley/hypothesis-auto/branch/master/graph/badge.svg)](https://codecov.io/gh/timothycrosley/hypothesis-auto)\n[![Join the chat at 
https://gitter.im/timothycrosley/hypothesis-auto](https://badges.gitter.im/timothycrosley/hypothesis-auto.svg)](https://gitter.im/timothycrosley/hypothesis-auto?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n[![License](https://img.shields.io/github/license/mashape/apistatus.svg)](https://pypi.python.org/pypi/hypothesis-auto/)\n[![Downloads](https://pepy.tech/badge/hypothesis-auto)](https://pepy.tech/project/hypothesis-auto)\n_________________\n\n[Read Latest Documentation](https://timothycrosley.github.io/hypothesis-auto/) - [Browse GitHub Code Repository](https://github.com/timothycrosley/hypothesis-auto/)\n_________________\n\n**hypothesis-auto** is an extension for the [Hypothesis](https://hypothesis.readthedocs.io/en/latest/) project that enables fully automatic tests for type annotated functions.\n\n[![Hypothesis Pytest Auto Example](https://raw.github.com/timothycrosley/hypothesis-auto/master/art/demo.gif)](https://github.com/timothycrosley/hypothesis-auto/blob/master/art/demo.gif)\n\nKey Features:\n\n* **Type Annotation Powered**: Utilize your function\'s existing type annotations to build dozens of test cases automatically.\n* **Low Barrier**: Start utilizing property-based testing in the lowest barrier way possible. Just run `auto_test(FUNCTION)` to run dozens of test.\n* **pytest Compatible**: Like Hypothesis itself, hypothesis-auto has built-in compatibility with the popular [pytest](https://docs.pytest.org/en/latest/) testing framework. This means that you can turn your automatically generated tests into individual pytest test cases with one line.\n* **Scales Up**: As you find your self needing to customize your auto_test cases, you can easily utilize all the features of [Hypothesis](https://hypothesis.readthedocs.io/en/latest/), including custom strategies per a parameter.\n\n## Installation:\n\nTo get started - install `hypothesis-auto` into your projects virtual environment:\n\n`pip3 install hypothesis-auto`\n\nOR\n\n`poetry add hypothesis-auto`\n\nOR\n\n`pipenv install hypothesis-auto`\n\n## Usage Examples:\n\n!!! warning\n In old usage examples you will see `_` prefixed parameters like `_auto_verify=`. 
This was done to avoid conflicting with existing function parameters.\n Based on community feedback the project switched to `_` suffixes, such as `auto_verify_=` to keep the likely hood of conflicting low while\n avoiding the connotation of private parameters.\n\n### Framework independent usage\n\n#### Basic `auto_test` usage:\n\n```python3\nfrom hypothesis_auto import auto_test\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\nauto_test(add) # 50 property based scenarios are generated and ran against add\nauto_test(add, auto_runs_=1_000) # Let\'s make that 1,000\n```\n\n#### Adding an allowed exception:\n\n```python3\nfrom hypothesis_auto import auto_test\n\n\ndef divide(number_1: int, number_2: int) -> int:\n return number_1 / number_2\n\nauto_test(divide)\n\n-> 1012 raise the_error_hypothesis_found\n 1013\n 1014 for attrib in dir(test):\n\n in divide(number_1, number_2)\n 1 def divide(number_1: int, number_2: int) -> int:\n----> 2 return number_1 / number_2\n 3\n\n0/0\n\nZeroDivisionError: division by zero\n\n\nauto_test(divide, auto_allow_exceptions_=(ZeroDivisionError, ))\n```\n\n#### Using `auto_test` with a custom verification method:\n\n```python3\nfrom hypothesis_auto import Scenario, auto_test\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\ndef my_custom_verifier(scenario: Scenario):\n if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:\n assert scenario.result > scenario.kwargs["number_1"]\n assert scenario.result > scenario.kwargs["number_2"]\n elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:\n assert scenario.result < scenario.kwargs["number_1"]\n assert scenario.result < scenario.kwargs["number_2"]\n else:\n assert scenario.result >= min(scenario.kwargs.values())\n assert scenario.result <= max(scenario.kwargs.values())\n\n\nauto_test(add, auto_verify_=my_custom_verifier)\n```\n\nCustom verification methods should take a single [Scenario](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/#scenario) and raise an exception to signify errors.\n\nFor the full set of parameters, you can pass into auto_test see its [API reference documentation](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/).\n\n### pytest usage\n\n#### Using `auto_pytest_magic` to auto-generate dozens of pytest test cases:\n\n```python3\nfrom hypothesis_auto import auto_pytest_magic\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\nauto_pytest_magic(add)\n```\n\n#### Using `auto_pytest` to run dozens of test case within a temporary directory:\n\n```python3\nfrom hypothesis_auto import auto_pytest\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\n@auto_pytest()\ndef test_add(test_case, tmpdir):\n tmpdir.mkdir().chdir()\n test_case()\n```\n\n#### Using `auto_pytest_magic` with a custom verification method:\n\n```python3\nfrom hypothesis_auto import Scenario, auto_pytest\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\ndef my_custom_verifier(scenario: Scenario):\n if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:\n assert scenario.result > scenario.kwargs["number_1"]\n assert scenario.result > scenario.kwargs["number_2"]\n elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:\n assert scenario.result < scenario.kwargs["number_1"]\n assert scenario.result < 
scenario.kwargs["number_2"]\n else:\n assert scenario.result >= min(scenario.kwargs.values())\n assert scenario.result <= max(scenario.kwargs.values())\n\n\nauto_pytest_magic(add, auto_verify_=my_custom_verifier)\n```\n\nCustom verification methods should take a single [Scenario](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/#scenario) and raise an exception to signify errors.\n\nFor the full reference of the pytest integration API see the [API reference documentation](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/pytest/).\n\n## Why Create hypothesis-auto?\n\nI wanted a no/low resistance way to start incorporating property-based tests across my projects. Such a solution that also encouraged the use of type hints was a win/win for me.\n\nI hope you too find `hypothesis-auto` useful!\n\n~Timothy Crosley\n', 'author': 'Timothy Crosley', 'author_email': 'timothy.crosley@gmail.com', 'maintainer': None, 'maintainer_email': None, 'url': None, 'packages': packages, 'package_data': package_data, 'install_requires': install_requires, 'extras_require': extras_require, 'python_requires': '>=3.6,<4.0', } setup(**setup_kwargs) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1580716884.0847313 hypothesis-auto-1.1.4/PKG-INFO0000644000000000000000000001747200000000000014121 0ustar0000000000000000Metadata-Version: 2.1 Name: hypothesis-auto Version: 1.1.4 Summary: Extends Hypothesis to add fully automatic testing of type annotated functions License: MIT Author: Timothy Crosley Author-email: timothy.crosley@gmail.com Requires-Python: >=3.6,<4.0 Classifier: License :: OSI Approved :: MIT License Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Provides-Extra: pytest Requires-Dist: hypothesis (>=4.36) Requires-Dist: pydantic (>=0.32.2) Requires-Dist: pytest (>=4.0.0,<5.0.0); extra == "pytest" Description-Content-Type: text/markdown [![hypothesis-auto - Fully Automatic Tests for Type Annotated Functions Using Hypothesis.](https://raw.github.com/timothycrosley/hypothesis-auto/master/art/logo_large.png)](https://timothycrosley.github.io/hypothesis-auto/) _________________ [![PyPI version](https://badge.fury.io/py/hypothesis-auto.svg)](http://badge.fury.io/py/hypothesis-auto) [![Build Status](https://travis-ci.org/timothycrosley/hypothesis-auto.svg?branch=master)](https://travis-ci.org/timothycrosley/hypothesis-auto) [![codecov](https://codecov.io/gh/timothycrosley/hypothesis-auto/branch/master/graph/badge.svg)](https://codecov.io/gh/timothycrosley/hypothesis-auto) [![Join the chat at https://gitter.im/timothycrosley/hypothesis-auto](https://badges.gitter.im/timothycrosley/hypothesis-auto.svg)](https://gitter.im/timothycrosley/hypothesis-auto?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![License](https://img.shields.io/github/license/mashape/apistatus.svg)](https://pypi.python.org/pypi/hypothesis-auto/) [![Downloads](https://pepy.tech/badge/hypothesis-auto)](https://pepy.tech/project/hypothesis-auto) _________________ [Read Latest Documentation](https://timothycrosley.github.io/hypothesis-auto/) - [Browse GitHub Code Repository](https://github.com/timothycrosley/hypothesis-auto/) _________________ **hypothesis-auto** is an extension for the [Hypothesis](https://hypothesis.readthedocs.io/en/latest/) project that enables fully 
automatic tests for type annotated functions. [![Hypothesis Pytest Auto Example](https://raw.github.com/timothycrosley/hypothesis-auto/master/art/demo.gif)](https://github.com/timothycrosley/hypothesis-auto/blob/master/art/demo.gif) Key Features: * **Type Annotation Powered**: Utilize your function's existing type annotations to build dozens of test cases automatically. * **Low Barrier**: Start utilizing property-based testing in the lowest barrier way possible. Just run `auto_test(FUNCTION)` to run dozens of test. * **pytest Compatible**: Like Hypothesis itself, hypothesis-auto has built-in compatibility with the popular [pytest](https://docs.pytest.org/en/latest/) testing framework. This means that you can turn your automatically generated tests into individual pytest test cases with one line. * **Scales Up**: As you find your self needing to customize your auto_test cases, you can easily utilize all the features of [Hypothesis](https://hypothesis.readthedocs.io/en/latest/), including custom strategies per a parameter. ## Installation: To get started - install `hypothesis-auto` into your projects virtual environment: `pip3 install hypothesis-auto` OR `poetry add hypothesis-auto` OR `pipenv install hypothesis-auto` ## Usage Examples: !!! warning In old usage examples you will see `_` prefixed parameters like `_auto_verify=`. This was done to avoid conflicting with existing function parameters. Based on community feedback the project switched to `_` suffixes, such as `auto_verify_=` to keep the likely hood of conflicting low while avoiding the connotation of private parameters. ### Framework independent usage #### Basic `auto_test` usage: ```python3 from hypothesis_auto import auto_test def add(number_1: int, number_2: int = 1) -> int: return number_1 + number_2 auto_test(add) # 50 property based scenarios are generated and ran against add auto_test(add, auto_runs_=1_000) # Let's make that 1,000 ``` #### Adding an allowed exception: ```python3 from hypothesis_auto import auto_test def divide(number_1: int, number_2: int) -> int: return number_1 / number_2 auto_test(divide) -> 1012 raise the_error_hypothesis_found 1013 1014 for attrib in dir(test): in divide(number_1, number_2) 1 def divide(number_1: int, number_2: int) -> int: ----> 2 return number_1 / number_2 3 0/0 ZeroDivisionError: division by zero auto_test(divide, auto_allow_exceptions_=(ZeroDivisionError, )) ``` #### Using `auto_test` with a custom verification method: ```python3 from hypothesis_auto import Scenario, auto_test def add(number_1: int, number_2: int = 1) -> int: return number_1 + number_2 def my_custom_verifier(scenario: Scenario): if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0: assert scenario.result > scenario.kwargs["number_1"] assert scenario.result > scenario.kwargs["number_2"] elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0: assert scenario.result < scenario.kwargs["number_1"] assert scenario.result < scenario.kwargs["number_2"] else: assert scenario.result >= min(scenario.kwargs.values()) assert scenario.result <= max(scenario.kwargs.values()) auto_test(add, auto_verify_=my_custom_verifier) ``` Custom verification methods should take a single [Scenario](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/#scenario) and raise an exception to signify errors. 
For the full set of parameters, you can pass into auto_test see its [API reference documentation](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/). ### pytest usage #### Using `auto_pytest_magic` to auto-generate dozens of pytest test cases: ```python3 from hypothesis_auto import auto_pytest_magic def add(number_1: int, number_2: int = 1) -> int: return number_1 + number_2 auto_pytest_magic(add) ``` #### Using `auto_pytest` to run dozens of test case within a temporary directory: ```python3 from hypothesis_auto import auto_pytest def add(number_1: int, number_2: int = 1) -> int: return number_1 + number_2 @auto_pytest() def test_add(test_case, tmpdir): tmpdir.mkdir().chdir() test_case() ``` #### Using `auto_pytest_magic` with a custom verification method: ```python3 from hypothesis_auto import Scenario, auto_pytest def add(number_1: int, number_2: int = 1) -> int: return number_1 + number_2 def my_custom_verifier(scenario: Scenario): if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0: assert scenario.result > scenario.kwargs["number_1"] assert scenario.result > scenario.kwargs["number_2"] elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0: assert scenario.result < scenario.kwargs["number_1"] assert scenario.result < scenario.kwargs["number_2"] else: assert scenario.result >= min(scenario.kwargs.values()) assert scenario.result <= max(scenario.kwargs.values()) auto_pytest_magic(add, auto_verify_=my_custom_verifier) ``` Custom verification methods should take a single [Scenario](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/#scenario) and raise an exception to signify errors. For the full reference of the pytest integration API see the [API reference documentation](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/pytest/). ## Why Create hypothesis-auto? I wanted a no/low resistance way to start incorporating property-based tests across my projects. Such a solution that also encouraged the use of type hints was a win/win for me. I hope you too find `hypothesis-auto` useful! ~Timothy Crosley